Dump all except tickets+attachments?

If anyone has any clever advice as to how to go about this,
I’d love to hear it.

RT 3.8.7 + security patches, PostgreSQL 8.1.23, RHEL5

We need to replicate our production RT instance’s data
to a development instance (yes, backward), but we don’t
want to carry the ~6GB of ticket and attachment
data (what else?) over to development.

Essentially, we want the same environment in development
as production, but with empty queues.

Advice?

You are now connected to database “rt3”.
rt3=# \dt
                 List of relations
 Schema |          Name           | Type  |  Owner
--------+-------------------------+-------+----------
 public | acl                     | table | postgres
 public | attachments             | table | postgres
 public | attributes              | table | postgres
 public | cachedgroupmembers      | table | postgres
 public | customfields            | table | postgres
 public | customfieldvalues       | table | postgres
 public | fm_articles             | table | rt_user
 public | fm_classes              | table | rt_user
 public | fm_objecttopics         | table | rt_user
 public | fm_topics               | table | rt_user
 public | groupmembers            | table | postgres
 public | groups                  | table | postgres
 public | links                   | table | postgres
 public | objectcustomfields      | table | postgres
 public | objectcustomfieldvalues | table | postgres
 public | principals              | table | postgres
 public | queues                  | table | postgres
 public | scripactions            | table | postgres
 public | scripconditions         | table | postgres
 public | scrips                  | table | postgres
 public | sessions                | table | postgres
 public | templates               | table | postgres
 public | tickets                 | table | postgres
 public | transactions            | table | postgres
 public | users                   | table | postgres
(25 rows)

Jeff,

If you are in a position to do it via SQL, then I would suggest you take this route.
By "via SQL" I mean you are able to run statements like INSERT INTO development_db.Users (SELECT * FROM production_db.Users), etc.
If you are ignoring Tickets and Attachments, I would also ignore the Transactions table content, unless you care about the change history for users/groups.
It's also possible to ignore the objectcustomfields and objectcustomfieldvalues content, unless you are using custom fields on users/groups/queues, which is not very common.
Also ignore the sessions table content.

Tables like Groups, GroupMembers and CachedGroupMembers are tricky and hard to get right via SQL. If you start with the Groups table using SELECT * FROM Groups WHERE Domain != 'RT::Ticket-Role', then you should be able to do GroupMembers with SELECT * FROM production_db.GroupMembers WHERE GroupId IN (SELECT Id FROM development_db.Groups).
As for CachedGroupMembers, it's far more tricky, so please look into it in more detail; but as a starting point I would suggest the same kind of SELECT as for GroupMembers.
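Sketched out, that copy order might look like the following. This is an untested sketch with one big assumption: vanilla PostgreSQL cannot INSERT across databases, so it assumes the production tables have been restored into a staging schema named "prod" inside the dev database (dblink would be an alternative). It only writes the SQL to a file for review:

```shell
#!/bin/sh
# Sketch of the copy order described above. Assumes production data
# was restored into a staging schema called "prod" in the dev DB.
OUT=/tmp/copy_non_ticket_data.sql
cat > "$OUT" <<'SQL'
-- Plain reference tables first
INSERT INTO users  SELECT * FROM prod.users;
INSERT INTO queues SELECT * FROM prod.queues;
-- Groups, minus the per-ticket role groups
INSERT INTO groups
    SELECT * FROM prod.groups WHERE domain != 'RT::Ticket-Role';
-- Memberships restricted to the groups we kept
INSERT INTO groupmembers
    SELECT * FROM prod.groupmembers
    WHERE groupid IN (SELECT id FROM groups);
-- Starting point for cachedgroupmembers (see the caveat above)
INSERT INTO cachedgroupmembers
    SELECT * FROM prod.cachedgroupmembers
    WHERE groupid IN (SELECT id FROM groups);
SQL
echo "wrote $OUT"
# Review the file, then load it with: psql -U postgres -f "$OUT" rt3
```

The remaining non-ticket tables (acl, customfields, templates, scrips, etc.) would be appended in the same style.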

Hope that helps,
Regards;
Roy

Visit our website today www.daisygroupplc.com

Registered Office: Daisy House, Lindred Road Business Park, Nelson, Lancashire BB9 5SR
Company Registration Number: 4145329 | VAT Number: 722471355
Daisy Communications Limited is a company registered in England and Wales.
-----Original Message-----

From: rt-users-bounces@lists.bestpractical.com [mailto:rt-users-
bounces@lists.bestpractical.com] On Behalf Of Jeff Blaine
Sent: 23 June 2011 16:33
To: rt-users@lists.bestpractical.com
Subject: [rt-users] Dump all except tickets+attachments?

[…]
2011 Training: http://bestpractical.com/services/training.html

Roy,

Thanks. What about “links”? Anyone with any other
ideas?

So basically, this, which excludes transactions, sessions,
tickets, attachments:

for t in acl attributes cachedgroupmembers customfields \
         customfieldvalues fm_articles fm_classes fm_objecttopics \
         fm_topics groupmembers groups links objectcustomfields \
         objectcustomfieldvalues principals queues scripactions \
         scripconditions scrips templates users
do
    pg_dump -U postgres --table $t rt3 > dump_${t}.sql
done

Then I suppose a ‘make initialize-database’ on the
development server and start loading the data, eh?
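As a cross-check, the table list in that loop can be derived from the full \dt listing by exclusion instead of being maintained by hand. A sketch (the inlined list mirrors the 25 tables shown earlier; in a live setup it would come from pg_tables, as the comment notes):

```shell
#!/bin/sh
# Build the dump list: every rt3 table except the four we exclude.
# In a live setup ALL_TABLES would come from:
#   psql -U postgres -Atc "SELECT tablename FROM pg_tables
#                          WHERE schemaname = 'public'" rt3
ALL_TABLES="acl attachments attributes cachedgroupmembers customfields
customfieldvalues fm_articles fm_classes fm_objecttopics fm_topics
groupmembers groups links objectcustomfields objectcustomfieldvalues
principals queues scripactions scripconditions scrips sessions
templates tickets transactions users"
EXCLUDE="tickets attachments transactions sessions"

DUMP_LIST=""
for t in $ALL_TABLES; do
    keep=yes
    for x in $EXCLUDE; do
        if [ "$t" = "$x" ]; then keep=no; fi
    done
    if [ "$keep" = yes ]; then DUMP_LIST="$DUMP_LIST $t"; fi
done
echo "$DUMP_LIST"
```

The resulting list is the 21 tables named in the loop above, and new tables added by future schema upgrades would be picked up automatically.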


Replying to my thread starter from a few months ago.

On 6/23/2011 11:32 AM, Jeff Blaine wrote:

If anyone has any clever advice as to how to go about this,
I’d love to hear it.

RT 3.8.7 + security patches, PostgreSQL 8.1.23, RHEL5

We need to replicate our production RT instance’s data
to a development instance (yes, backward), but we don’t
want to carry the ~6GB of ticket and attachment
data (what else?) over to development.

Essentially, we want the same environment in development
as production, but with empty queues.

Warning: Do not try this. It seemed to work fine, then
weird things started popping up. Instead, dump everything
from production, restore everything on the dev server,
then shred the tickets on the dev server.


I had a use case similar to this (backporting a production DB to a dev
environment, wanting to drop all tickets and related data but preserve
everything else). In my case I had tried to shred the tickets but ran
into an error to the effect that too many tickets were specified by
the shredder, resulting in a GET request URI too long for the server.
I concluded that shredding that many tickets from the UI wasn't going
to work and that there must be a better way. Is this right, or is
there some way of massaging a deployment to delete a largish number
of tickets? (In our case it was something like ~3500 tickets in the
four RTIR queues.)

Happy to post back with details if needed.

Darren Spruell
phatbuckett@gmail.com

into an error to the effect of too many tickets specified by the
shredder resulting in too long of a GET request URI length for the
server. I concluded that shredding that many tickets from the UI
wasn’t going to work and there must be a better way. Is this right, or

I lately had exactly the same problem, and the conclusion was:
either learn how to shred 'inside the database' (without the
Mason code), or restrict each shred call to a few hundred selected
tickets. The shredder seems to build one long URI (by concatenating
the call and all the ticket numbers), so hitting the standard
URI length limits seems unavoidable.

We simply ignored the old tickets, hoping for a solution
to pop up somewhere, before the next cycle. Alas we see
4.* now and have 3.* running, and still no idea …

Would it be possible to 'translate' the shredder's algorithm
from Mason code directly to 'some SQL dialect', or at least
to translate 'shredding one ticket completely' into SQL?
(Or is that impossible, because the contents and links of
a ticket cannot be analyzed that way, and really need
Perl code to follow and decide what to shred?)
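The failure mode described here (one enormous URI carrying every ticket id) suggests batching: many short calls instead of one long one. A sketch of just the batching logic, assuming an id list and a batch size; the actual shredder invocation is left as a comment, and the query form shown in it is hypothetical:

```shell
#!/bin/sh
# Split a long ticket-id list into batches of at most 100, so no
# single shredder call has to carry thousands of ids at once.
TICKET_IDS=$(seq 1 350)   # stand-in for the real id list
BATCH_SIZE=100

batch=""
count=0
batches=0
flush() {
    batches=$((batches + 1))
    # The shredder would be invoked here with just this batch, e.g.
    # (hypothetical query form): --plugin "Tickets=query,id IN ($batch)"
    batch=""
    count=0
}
for id in $TICKET_IDS; do
    batch="$batch $id"
    count=$((count + 1))
    [ $count -eq $BATCH_SIZE ] && flush
done
[ $count -gt 0 ] && flush
echo "would issue $batches shredder calls"
```

With 350 ids and a batch size of 100, this issues three full batches and one partial one.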

Stucki

Christoph von Stuckrad * * |nickname |Mail stucki@mi.fu-berlin.de
Freie Universitaet Berlin |/_*|‘stucki’ |Tel(Mo.,Mi.):+49 30 838-75 459|
Mathematik & Informatik EDV |\ *|if online| (Di,Do,Fr):+49 30 77 39 6600|
Takustr. 9 / 14195 Berlin * * |on IRCnet|Fax(home): +49 30 77 39 6601/


Try using /opt/rt4/sbin/rt-shredder rather than the web UI.
There are also some config options you may need to tune so that it
can delete thousands of tickets at a time.

-kevin

Hello,

WARNING:

Only for quickly shredding all tickets from a testing environment on 3.8+.
Use these tools in production very carefully, and always test such
destructive actions in a test environment first.

Solution with the validator:

DELETE FROM Tickets;
./sbin/rt-validator -c --resolve

Solution with rt-delete-tickets-mysql:

UPDATE Tickets SET Status = 'deleted';
./sbin/rt-delete-tickets-mysql
./sbin/rt-validator -c

That’s it.

On Thu, Aug 18, 2011 at 4:02 PM, Chr. von Stuckrad <real-stucki@mi.fu-berlin.de> wrote:

On Wed, 17 Aug 2011, Darren Spruell wrote:

[…]

Best regards, Ruslan.

RT Training Sessions (http://bestpractical.com/services/training.html)

  • Chicago, IL, USA September 26 & 27, 2011
  • San Francisco, CA, USA October 18 & 19, 2011
  • Washington DC, USA October 31 & November 1, 2011
  • Melbourne VIC, Australia November 28 & 29, 2011
  • Barcelona, Spain November 28 & 29, 2011

I added this before I followed up to this thread with
“Don’t try this (dump partial DB and transfer)”:


Shred ALL TICKETS

WARNING WARNING WARNING: If for some reason you want to reset your
entire RT instance’s TICKETS AND TICKET DATA ONLY (and keep Scrips,
Custom Fields, etc), you could do something like the following. This was
useful for me when I wanted to take our production RT instance and
duplicate it onto a development box but not have the huge database full
of tickets and ticket-related data.

Bourne shell syntax is shown below:

cd /tmp
while :
do
    date
    SHREDDED=`rt-shredder --plugin "Tickets=query,id > 0;limit,100" \
        --force --sqldump foo.sql 2>&1 | grep RT::Ticket | wc -l`
    echo "Shredded roughly $SHREDDED tickets."
    sleep 3        # let the system get a breath
    rm -f foo.sql  # we don't care about restoring what we shredded in this case
    if [ $SHREDDED -eq 0 ]; then
        break
    fi
done

On 8/18/2011 8:02 AM, Chr. von Stuckrad wrote:

[…]

and duplicate it onto a development box but not have the huge
database full of tickets and ticket-related data.

Exactly what we wanted to do then.

Bourne shell syntax is shown below:

SHREDDED=`rt-shredder --plugin “Tickets=query,id > 0;limit,100”
… -------------------------------------------------^^^^^^^^^^^

And THIS ^^^^ seems to be the exact difference from what I tried then:
you made it into a loop of 100 tickets each, until empty …
Needing the duplicate 'tomorrow morning', I never tried it that way,
but thanks for showing that it 'should have worked' :-))

Would you consider restricting the 'query' to one specific queue,
and cleaning that queue completely, to be safe enough to apply to a
production RT? (We did split an RT into two completely independent
hosts. Everything is already logically independent; both are working
with copies of the complete former state, changing only 'their half'
of the data. Only the leftover halves still need to be shredded: the
same problem again, but 'more dangerous' on a working instance
which must not be damaged.)
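For the queue-restricted variant asked about here, the loop shown earlier in the thread would only need a different query string. A sketch (the queue name "OldQueue" is hypothetical, and the TicketSQL syntax should be double-checked against your RT version):

```shell
#!/bin/sh
# Build a queue-restricted shredder plugin spec so that only one
# queue's tickets can ever match. "OldQueue" is a hypothetical name.
QUEUE="OldQueue"
PLUGIN="Tickets=query,Queue = '$QUEUE';limit,100"
# Then run in the same loop as before:
# rt-shredder --plugin "$PLUGIN" --force --sqldump backup.sql
echo "$PLUGIN"
```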

Stucki

Christoph von Stuckrad * * |nickname |Mail stucki@mi.fu-berlin.de
Freie Universitaet Berlin |/_*|‘stucki’ |Tel(Mo.,Mi.):+49 30 838-75 459|
Mathematik & Informatik EDV |\ *|if online| (Di,Do,Fr):+49 30 77 39 6600|
Takustr. 9 / 14195 Berlin * * |on IRCnet|Fax(home): +49 30 77 39 6601/

A bunch of people wrote:

[…]

This has been an informative thread, thanks for the responses.

Darren Spruell
phatbuckett@gmail.com