ShowCustomFields extremely slow

After upgrade to v4.0.17 from 3.x loading tickets is extremely slow…
can anyone give me any clues please…?

Profiler shows this for a load of a single ticket:

/Ticket/Display.html BEGINS {{{
/autohandler {{{
/Elements/SetupSessionCookie {{{
/Elements/SetupSessionCookie }}} 0.2310
/Ticket/autohandler {{{
/Ticket/Display.html {{{
/Ticket/Elements/FindTransactions {{{
/Ticket/Elements/FindTransactions }}} 0.6903
/Ticket/Elements/FindAttachments {{{
/Ticket/Elements/FindAttachments }}} 0.0013
/Ticket/Elements/LoadTextAttachments {{{
/Ticket/Elements/LoadTextAttachments }}} 0.0013
/Elements/Header {{{
/NoAuth/css/aileron/main.css {{{
/NoAuth/css/aileron/main.css }}} 0.0012
/NoAuth/css/base/main.css {{{
/NoAuth/css/base/main.css }}} 0.0012
/NoAuth/css/base/yui-fonts.css {{{
/NoAuth/css/base/yui-fonts.css }}} 0.0010
/NoAuth/css/base/jquery-ui.css {{{
/NoAuth/css/base/jquery-ui.css }}} 0.0010
/NoAuth/css/base/jquery-ui.custom.modified.css {{{
/NoAuth/css/base/jquery-ui.custom.modified.css }}} 0.0014
/NoAuth/css/base/jquery-ui-timepicker-addon.css {{{
/NoAuth/css/base/jquery-ui-timepicker-addon.css }}} 0.0010
/NoAuth/css/base/superfish.css {{{
/NoAuth/css/base/superfish.css }}} 0.0010
/NoAuth/css/base/superfish-navbar.css {{{
/NoAuth/css/base/superfish-navbar.css }}} 0.0010
/NoAuth/css/base/superfish-vertical.css {{{
/NoAuth/css/base/superfish-vertical.css }}} 0.0010
/NoAuth/css/base/tablesorter.css {{{
/NoAuth/css/base/tablesorter.css }}} 0.0010
/NoAuth/css/base/farbtastic.css {{{
/NoAuth/css/base/farbtastic.css }}} 0.0011
/NoAuth/css/base/collection.css {{{
/NoAuth/css/base/collection.css }}} 0.0010
/NoAuth/css/base/forms.css {{{
/NoAuth/css/base/forms.css }}} 0.0010
/NoAuth/css/base/misc.css {{{
/NoAuth/css/base/misc.css }}} 0.0010
/NoAuth/css/base/nav.css {{{
/NoAuth/css/base/nav.css }}} 0.0010
/NoAuth/css/base/ticket-form.css {{{
/NoAuth/css/base/ticket-form.css }}} 0.0010
/NoAuth/css/base/rights-editor.css {{{
/NoAuth/css/base/rights-editor.css }}} 0.0010
/NoAuth/css/base/theme-editor.css {{{
/NoAuth/css/base/theme-editor.css }}} 0.0010
/NoAuth/css/base/ticket.css {{{
/NoAuth/css/base/ticket.css }}} 0.0010
/NoAuth/css/base/admin.css {{{
/NoAuth/css/base/admin.css }}} 0.0010
/NoAuth/css/base/articles.css {{{
/NoAuth/css/base/articles.css }}} 0.0010
/NoAuth/css/base/portlets.css {{{
/NoAuth/css/base/portlets.css }}} 0.0010
/NoAuth/css/base/tools.css {{{
/NoAuth/css/base/tools.css }}} 0.0010
/NoAuth/css/base/login.css {{{
/NoAuth/css/base/login.css }}} 0.0010
/NoAuth/css/base/history-folding.css {{{
/NoAuth/css/base/history-folding.css }}} 0.0010
/NoAuth/css/aileron/base.css {{{
/NoAuth/css/aileron/base.css }}} 0.0009
/NoAuth/css/aileron/layout.css {{{
/NoAuth/css/aileron/layout.css }}} 0.0010
/NoAuth/css/aileron/nav.css {{{
/NoAuth/css/aileron/nav.css }}} 0.0010
/NoAuth/css/aileron/boxes.css {{{
/NoAuth/css/aileron/boxes.css }}} 0.0010
/NoAuth/css/aileron/forms.css {{{
/NoAuth/css/aileron/forms.css }}} 0.0009
/NoAuth/css/aileron/login.css {{{
/NoAuth/css/aileron/login.css }}} 0.0009
/NoAuth/css/aileron/ticket-lists.css {{{
/NoAuth/css/aileron/ticket-lists.css }}} 0.0009
/NoAuth/css/aileron/ticket-search.css {{{
/NoAuth/css/aileron/ticket-search.css }}} 0.0009
/NoAuth/css/aileron/ticket.css {{{
/NoAuth/css/aileron/ticket.css }}} 0.0009
/NoAuth/css/aileron/misc.css {{{
/NoAuth/css/aileron/misc.css }}} 0.0009
/Elements/Framekiller {{{
/Elements/Framekiller }}} 0.0008
/Elements/HeaderJavascript {{{
/NoAuth/js/jquery-1.4.2.min.js {{{
/NoAuth/js/jquery-1.4.2.min.js }}} 0.0010
/NoAuth/js/jquery_noconflict.js {{{
/NoAuth/js/jquery_noconflict.js }}} 0.0009
/NoAuth/js/jquery-ui-1.8.4.custom.min.js {{{
/NoAuth/js/jquery-ui-1.8.4.custom.min.js }}} 0.0009
/NoAuth/js/jquery-ui-timepicker-addon.js {{{
/NoAuth/js/jquery-ui-timepicker-addon.js }}} 0.0009
/NoAuth/js/jquery-ui-patch-datepicker.js {{{
/NoAuth/js/jquery-ui-patch-datepicker.js }}} 0.0009
/NoAuth/js/jquery.cookie.js {{{
/NoAuth/js/jquery.cookie.js }}} 0.0009
/NoAuth/js/titlebox-state.js {{{
/NoAuth/js/titlebox-state.js }}} 0.0009
/NoAuth/js/util.js {{{
/NoAuth/js/util.js }}} 0.0009
/NoAuth/js/userautocomplete.js {{{
/NoAuth/js/userautocomplete.js }}} 0.0009
/NoAuth/js/jquery.event.hover-1.0.js {{{
/NoAuth/js/jquery.event.hover-1.0.js }}} 0.0009
/NoAuth/js/superfish.js {{{
/NoAuth/js/superfish.js }}} 0.0009
/NoAuth/js/supersubs.js {{{
/NoAuth/js/supersubs.js }}} 0.0009
/NoAuth/js/jquery.supposition.js {{{
/NoAuth/js/jquery.supposition.js }}} 0.0009
/NoAuth/js/history-folding.js {{{
/NoAuth/js/history-folding.js }}} 0.0009
/NoAuth/js/late.js {{{
/NoAuth/js/late.js }}} 0.0009
/Elements/HeaderJavascript }}} 0.4800
/NoAuth/css/aileron/InHeader {{{
/NoAuth/css/aileron/InHeader }}} 0.0009
/Elements/Logo {{{
/Elements/Logo }}} 0.0013
/Elements/PersonalQuickbar {{{
/Elements/PersonalQuickbar }}} 0.0023
/Elements/Header }}} 1.5502
/Elements/Tabs {{{
/Elements/SimpleSearch {{{
/Elements/SimpleSearch }}} 0.0025
/Elements/CreateTicket {{{
/Elements/SelectNewTicketQueue {{{
/Elements/SelectQueue {{{
/Elements/SelectQueue }}} 0.0014
/Elements/SelectNewTicketQueue }}} 0.4578
/l_unsafe {{{
/l_unsafe }}} 0.0011
/Elements/CreateTicket }}} 0.4610
/Dashboards/Elements/ListOfDashboards {{{
/Dashboards/Elements/DashboardsForObject {{{
/Dashboards/Elements/DashboardsForObject }}} 0.2292
/Dashboards/Elements/DashboardsForObject {{{
/Dashboards/Elements/DashboardsForObject }}} 0.3084
/Dashboards/Elements/ListOfDashboards }}} 0.8113
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0009
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0008
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0008
/Search/Elements/SearchesForObject {{{
/Search/Elements/SearchesForObject }}} 0.2402
/Ticket/Elements/Bookmark {{{
/Ticket/Elements/Bookmark }}} 0.4588
/Elements/PageLayout {{{
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu }}} 0.0010
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu }}} 0.0012
/Elements/Menu }}} 0.0025
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu }}} 0.0013
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu }}} 0.0013
/Elements/Menu {{{
/Elements/Menu }}} 0.0013
/Elements/Menu {{{
/Elements/Menu }}} 0.0013
/Elements/Menu {{{
/Elements/Menu }}} 0.0013
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu }}} 0.0014
/Elements/Menu {{{
/Elements/Menu }}} 0.0014
/Elements/Menu {{{
/Elements/Menu }}} 0.0017
/Elements/Menu }}} 0.0064
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu }}} 0.0014
/Elements/Menu {{{
/Elements/Menu }}} 0.0014
/Elements/Menu }}} 0.0041
/Elements/Menu {{{
/Elements/Menu }}} 0.0014
/Elements/Menu }}} 0.0192
/Elements/Menu }}} 0.0220
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu }}} 0.0021
/Elements/Menu }}} 0.0037
/Elements/Menu }}} 0.0048
/Elements/Menu }}} 0.0320
/Elements/Menu {{{
/Elements/Menu {{{
/Elements/Menu }}} 0.0014
/Elements/Menu }}} 0.0031
/Elements/WidgetBar {{{
/Elements/WidgetBar }}} 0.0009
/Elements/PageLayout }}} 0.0396
/Elements/Tabs }}} 2.5050
/Elements/ListActions {{{
/Elements/ListActions }}} 0.0009
/Ticket/Elements/ShowUpdateStatus {{{
/Ticket/Elements/ShowUpdateStatus }}} 0.2275
/Widgets/TitleBox {{{
/Ticket/Elements/ShowSummary {{{
/Widgets/TitleBox {{{
/Ticket/Elements/ShowBasics {{{
/Ticket/Elements/ShowPriority {{{
/Ticket/Elements/ShowPriority }}} 0.0012
/Ticket/Elements/ShowQueue {{{
/Ticket/Elements/ShowQueue }}} 0.0013
/Ticket/Elements/ShowBasics }}} 0.0097
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0013
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0010
/Widgets/TitleBox }}} 0.0160
/Widgets/TitleBox {{{
/Ticket/Elements/ShowCustomFields {{{
/Elements/ShowCustomFields {{{
/Elements/ShowCustomFields }}} 429.2604
/Ticket/Elements/ShowCustomFields }}} 429.2626
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0012
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0010
/Widgets/TitleBox }}} 429.2664
/Widgets/TitleBox {{{
/Ticket/Elements/ShowPeople {{{
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0014
/Elements/ShowUser }}} 0.4606
/Elements/ShowUserEmailFrequency {{{
/Elements/ShowUserEmailFrequency }}} 0.0012
/Ticket/Elements/ShowGroupMembers {{{
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0014
/Elements/ShowUser }}} 0.2313
/Elements/ShowUserEmailFrequency {{{
/Elements/ShowUserEmailFrequency }}} 0.4572
/Ticket/Elements/ShowGroupMembers }}} 1.1614
/Ticket/Elements/ShowGroupMembers {{{
/Ticket/Elements/ShowGroupMembers }}} 0.4707
/Ticket/Elements/ShowGroupMembers {{{
/Ticket/Elements/ShowGroupMembers }}} 0.4712
/Ticket/Elements/ShowPeople }}} 3.7236
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0012
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0010
/Widgets/TitleBox }}} 3.7280
/Ticket/Elements/ShowAttachments {{{
/Ticket/Elements/ShowAttachments }}} 0.9425
/Ticket/Elements/ShowRequestor {{{
/Widgets/TitleBox {{{
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0012
/Elements/ShowUser }}} 0.2306
/Ticket/Elements/ShowRequestorExtraInfo {{{
/Elements/CollectionAsTable/ParseFormat {{{
/Elements/CollectionAsTable/ParseFormat }}} 0.0015
/Ticket/Elements/ShowRequestorExtraInfo }}} 0.0039
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0011
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0011
/Ticket/Elements/ShowRequestorTicketsActive {{{
/Ticket/Elements/ShowRequestorTickets {{{
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0014
/Elements/ShowLink {{{
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2319
/Elements/ShowLink }}} 0.4746
/Ticket/Elements/ShowRequestorTickets }}} 2.1989
/Ticket/Elements/ShowRequestorTicketsActive }}} 2.2011
/Elements/ShowMemberships {{{
/Elements/ShowMemberships }}} 0.9270
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0012
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0010
/Widgets/TitleBox }}} 3.7327
/Ticket/Elements/ShowRequestor }}} 5.0157
/Widgets/TitleBox {{{
/Ticket/Elements/Reminders {{{
/Ticket/Elements/Reminders:NewReminder {{{
/Elements/SelectOwner {{{
/Elements/SelectOwnerDropdown {{{
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0016
/Elements/ShowUser }}} 0.2318
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0016
/Elements/ShowUser }}} 0.2314
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2314
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2318
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0015
/Elements/ShowUser }}} 0.2313
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0016
/Elements/ShowUser }}} 0.2320
/Elements/SelectOwnerDropdown }}} 5.9094
/Elements/SelectOwner }}} 6.1403
/Elements/SelectDate {{{
/Elements/SelectDate }}} 0.0014
/Ticket/Elements/Reminders:NewReminder }}} 6.1484
/Ticket/Elements/Reminders }}} 6.7735
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0012
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0010
/Widgets/TitleBox }}} 6.7810
/Widgets/TitleBox {{{
/Ticket/Elements/ShowDates {{{
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0013
/Elements/ShowUser }}} 0.2305
/Ticket/Elements/ShowDates }}} 0.9264
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0012
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0010
/Widgets/TitleBox }}} 0.9312
/Widgets/TitleBox {{{
/Elements/ShowLinks {{{
/Elements/ShowRelationLabel {{{
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0012
/Elements/ShowRelationLabel }}} 0.0024
/Elements/ShowRelationLabel {{{
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0011
/Elements/ShowRelationLabel }}} 0.0023
/Elements/ShowRelationLabel {{{
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0011
/Elements/ShowRelationLabel }}} 0.0023
/Ticket/Elements/ShowParents {{{
/Ticket/Elements/ShowParents }}} 0.2287
/Elements/ShowRelationLabel {{{
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0011
/Elements/ShowRelationLabel }}} 0.0023
/Ticket/Elements/ShowMembers {{{
/Ticket/Elements/ShowMembers }}} 0.2311
/Elements/ShowRelationLabel {{{
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0011
/Elements/ShowRelationLabel }}} 0.0023
/Elements/ShowRelationLabel {{{
/Elements/QueryString {{{
/Elements/QueryString }}} 0.0011
/Elements/ShowRelationLabel }}} 0.0023
/Elements/ShowLinks }}} 1.6303
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0013
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0010
/Widgets/TitleBox }}} 1.6356
/Ticket/Elements/ShowSummary }}} 448.3220
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0010
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0009
/Widgets/TitleBox }}} 448.3261
/Ticket/Elements/ShowHistory {{{
/Widgets/TitleBoxStart {{{
/Widgets/TitleBoxStart }}} 0.0011
/Ticket/Elements/ShowTransaction {{{
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0014
/Elements/ShowUser }}} 0.2302
/Elements/ShowCustomFields {{{
/Elements/ShowCustomFields }}} 0.2306
/Ticket/Elements/ShowTransactionAttachments {{{
/Ticket/Elements/ShowMessageHeaders {{{
/Elements/MakeClicky {{{
/Elements/MakeClicky }}} 0.0017
/Elements/MakeClicky {{{
/Elements/MakeClicky }}} 0.0016
/Elements/MakeClicky {{{
/Elements/MakeClicky }}} 0.0016
/Elements/MakeClicky {{{
/Elements/MakeClicky }}} 0.0016
/Ticket/Elements/ShowMessageHeaders }}} 0.0109
/Ticket/Elements/ShowTransactionAttachments {{{
/Ticket/Elements/ShowMessageHeaders {{{
/Ticket/Elements/ShowMessageHeaders }}} 0.0021
/Ticket/Elements/ShowMessageStanza {{{
/Elements/MakeClicky {{{
/Elements/MakeClicky }}} 0.0020
/Ticket/Elements/ShowMessageStanza }}} 0.4604
/Ticket/Elements/ShowTransactionAttachments {{{
/Ticket/Elements/ShowTransactionAttachments }}} 0.0023
/Ticket/Elements/ShowMessageHeaders {{{
/Ticket/Elements/ShowMessageHeaders }}} 0.0022
/Ticket/Elements/ShowTransactionAttachments {{{
/Ticket/Elements/ShowTransactionAttachments }}} 0.0022
/Ticket/Elements/ShowTransactionAttachments }}} 0.9543
/Ticket/Elements/ShowTransactionAttachments }}} 0.9704
/Ticket/Elements/ShowTransaction }}} 2.1441
/Ticket/Elements/ShowTransaction {{{
/Elements/ShowUser {{{
/Elements/ShowUserConcise {{{
/Elements/ShowUserConcise }}} 0.0013
/Elements/ShowUser }}} 0.2297
/Ticket/Elements/ShowTransaction }}} 0.6901
/Widgets/TitleBoxEnd {{{
/Widgets/TitleBoxEnd }}} 0.0009
/Ticket/Elements/ShowHistory }}} 3.5403
/Ticket/Display.html }}} 458.0237
/Ticket/autohandler }}} 458.0245
/Elements/Footer {{{
/l_unsafe {{{
/l_unsafe }}} 0.0009
/Elements/Footer }}} 0.0026
/autohandler }}} 459.6573
/Ticket/Display.html }}} ENDS

Michelle Sullivan
http://www.mhix.org/

After upgrade to v4.0.17 from 3.x loading tickets is extremely slow…
can anyone give me any clues please…?

What version were you upgrading from? What database backend are you
using?

Profiler shows this for a load of a single ticket:

/Ticket/Display.html BEGINS {{{
/autohandler {{{
/Elements/SetupSessionCookie {{{
/Elements/SetupSessionCookie }}} 0.2310

The fact that even cookie setup takes a quarter of a second is
problematic, and suggests that your database badly needs tuning.

                             /Elements/ShowCustomFields {{{
                             /Elements/ShowCustomFields }}} 429.2604

Your database’s slow query log will hopefully shed some light on the
query/queries that are being problematic.

                             /Elements/ShowUser {{{
                                 /Elements/ShowUserConcise {{{
                                 /Elements/ShowUserConcise }}} 0.0014
                             /Elements/ShowUser }}} 0.4606

The fact that displaying a user takes half a second is another sign that
your database accesses, in general, are being problematic.

  • Alex

Alex Vandiver wrote:

After upgrade to v4.0.17 from 3.x loading tickets is extremely slow…
can anyone give me any clues please…?

What version were you upgrading from? What database backend are you
using?

3.6.3
Postgres 8.4.x

Profiler shows this for a load of a single ticket:

/Ticket/Display.html BEGINS {{{
/autohandler {{{
/Elements/SetupSessionCookie {{{
/Elements/SetupSessionCookie }}} 0.2310

The fact that even cookie setup takes a quarter of a second is
problematic, and suggests that your database badly needs tuning.

I doubt it, it’s probably because it’s busy.

3.6.3 was running against Pg 8.3.x on 2-proc 64-bit Intels (early ones)
with 3G RAM on a 6-disk SCSI hardware RAID 5 running FreeBSD, with Bucardo
4.5 replicating to a backup database. The front end is running on an
800MHz PIII BL10e blade with Apache 2.2 on FreeBSD 6.x and mod_perl2.

The 4.0.17 setup I'm trying to migrate to has a clean DB dump, running
against 4 off Pg 8.4.x servers, each with 16 cores, 128G RAM, and 12x15k
SAS drives, using Bucardo 5 in multi-master mode on CentOS. Front ends are
2 off: Intel, 2 procs x 2 cores x 8G RAM, FreeBSD 9.2, Apache 2.2 and mod_perl2.

The 3.6.3 setup is a dedicated DB for RT. The 4.0.17 DB hardware has 2
databases, RT4 and SORBS - needless to say SORBS is busy on reads and
writes, however it's 4 servers with 2G bonded interfaces in 2 DCs: one DC
is East Coast, the other West Coast. The primary writer for SORBS is East
Coast, the primary writer for RT is West Coast, so contention is only
against Bucardo. Bucardo replication writes take 7 seconds to reach all
four DB hosts at 10,000 qps.

                             /Elements/ShowCustomFields {{{
                             /Elements/ShowCustomFields }}} 429.2604

Your database’s slow query log will hopefully shed some light on the
query/queries that are being problematic.

The slow query log is set to 100ms on each of the servers; when displaying
a ticket, only one query shows in the log (usually), and that's nearly
always 1800-1900ms.

                             /Elements/ShowUser {{{
                                 /Elements/ShowUserConcise {{{
                                 /Elements/ShowUserConcise }}} 0.0014
                             /Elements/ShowUser }}} 0.4606

The fact that displaying a user takes half a second is another sign that
your database accesses, in general, are being problematic.

The SQL log only shows 20s (and most of them are the same query over and
over again - sometimes as many as 100 times per ticket… this query:
SELECT * FROM Attributes WHERE LOWER(ObjectType) = LOWER(?) AND
LOWER(Name) = LOWER(?) AND ObjectId = ?;). 67 executions take 8 seconds
according to the SQL log… running it by hand with the same params shows it
takes 0.062ms, so that's possibly network latency, as the frontends are
not in the same location as the DBs.
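
(To put numbers on that, here's roughly how I'd time it from the frontend
box itself - just a minimal DBI sketch; the connection details and bind
values below are made up, only the query is the one from the log:)

use strict;
use warnings;
use DBI;
use Time::HiRes qw(gettimeofday tv_interval);

# hypothetical connection details - substitute the real frontend->DB path
my $dbh = DBI->connect('dbi:Pg:dbname=rt4;host=db.example.com',
                       'rt_user', 'secret',
                       { RaiseError => 1, AutoCommit => 1 });

my $sth = $dbh->prepare(q{
    SELECT * FROM Attributes
     WHERE LOWER(ObjectType) = LOWER(?)
       AND LOWER(Name) = LOWER(?)
       AND ObjectId = ?
});

my $t0 = [gettimeofday];
for (1 .. 67) {                  # 67 executions, as seen per ticket load
    # example bind values - the real ones vary per request
    $sth->execute('RT::User', 'Bookmarks', 1);
    $sth->fetchall_arrayref;
}
printf "67 executions: %.3f s wall-clock\n", tv_interval($t0);
# if Postgres reports ~0.06ms per execution but this prints whole seconds,
# the difference is round trips (~20ms RTT x 67+ trips), not query time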

Regards,

Michelle Sullivan
http://www.mhix.org/

Michelle Sullivan wrote:

Alex Vandiver wrote:

The fact that even cookie setup takes a quarter of a second is
problematic, and suggests that your database badly needs tuning.

I doubt it, it’s probably because it’s busy.

Oh, I should also point out that in lulls between incoming spam runs, when
the DBs drop to just 1-2 qps (which happens for 15-minute periods every
couple of hours), the query speed for the RT4 DB doesn't change
noticeably - the trace was taken during one of those quiet periods.

Michelle Sullivan
http://www.mhix.org/

Michelle Sullivan wrote:

Michelle Sullivan wrote:

Alex Vandiver wrote:

The fact that even cookie setup takes a quarter of a second is
problematic, and suggests that your database badly needs tuning.

I doubt it, it’s probably because it’s busy.

Oh, I should also point out that in lulls between incoming spam runs, when
the DBs drop to just 1-2 qps (which happens for 15-minute periods every
couple of hours), the query speed for the RT4 DB doesn't change
noticeably - the trace was taken during one of those quiet periods.

Also, monitoring the process list whilst it's sitting there in
ShowCustomFields, I see no queries… On the occasions I do pick up a
query (e.g. after a restart of the DB, or after a vacuum) it's very
obviously being slowed by index loading (can be 10 seconds or so), then
the query time drops to a few ms… however this is not the cause (that
would be fairly easy to identify).

I suspected it might be a lock in the DB(s) but checking the locks I see
nothing waiting.

I suspected it might be a run-away perl process… but the apache server
is idle…

I suspected (in desperation) maybe I had configured the resolver badly…
not that either…

The only thing I can think of is something to do with the custom fields
and it returning 1000s of rows - very quick for the query, but slow to
transfer due to network latency (~20ms round trip): the primary DB
(hardcoded on the test servers) is in Santa Clara, the webserver is in
Seattle…

rt4=# select count(1) from objectcustomfieldvalues;
 count
--------
 100689
(1 row)

rt4=# select count(1) from objectcustomfields;
 count
-------
    36
(1 row)

Every time, the problem is ShowCustomFields taking >60s (and averaging
400s).

Regards,

Michelle Sullivan
http://www.mhix.org/

Ok so temporarily I set min_duration to 0 and logged everything, the log
is here:

http://www.mhix.org/ticketload.log

That's a single ticket load, and it is a little difficult to read due to
the replication… but with filtering (e.g. grep -Evi
'(DETAIL:|DEALLOCATE|bind|seq$|conkey|bucardo|ping|public.objectcustomfieldvalues_id_s|public.objectcustomfields_id_s|_seq)'
) you can see the times involved.

Any clues would be very much appreciated.

Michelle

Michelle Sullivan
http://www.mhix.org/

Ok so temporarily I set min_duration to 0 and logged everything, the log
is here:

The situation can be made much clearer by limiting to the postgres
process IDs which are responding to queries from RT, namely 15537 and
15481. The longest statement duration is 1559.273ms, which is a parse
of a statement query, not even an execution. The next-longest query is
DBD::Pg’s query (which is only run once per process, FTR) at 02:11:09 to
determine table information – which takes nearly a full second to
return. On the largest Postgres installation which I have access to,
this takes less than 100ms.

I am not familiar with Bucardo as a replication strategy, so I can’t
comment on that. However, I strongly suspect that the answer lies
amongst your replication tool and network topology. The symptoms do not
match any failure modes that I’m familiar with in RT. You may wish to
try simplifying your database configuration (temporarily install Pg on
one of the front-ends) to see if that resolves any of the problems.

  • Alex

Alex Vandiver wrote:

Ok so temporarily I set min_duration to 0 and logged everything, the log
is here:

The situation can be made much clearer by limiting to the postgres
process IDs which are responding to queries from RT, namely 15537 and
15481.

(I parsed this and put it in http://www.mhix.org/ticketload-rt.log)

The longest statement duration is 1559.273ms, which is a parse
of a statement query, not even an execution. The next-longest query is
DBD::Pg’s query (which is only run once per process, FTR) at 02:11:09 to
determine table information – which takes nearly a full second to
return. On the largest Postgres installation which I have access to,
this takes less than 100ms.

Well here’s the thing…

From Jan 28 02:11:09 to Jan 28 02:18:31… a full 7 minutes of being in
ShowCustomFields, there are hundreds (actually more than 1900) of
statements like this:

Jan 28 02:18:30 m0040933 postgres[15537]: [7893-1] LOG: duration: 0.042 ms bind dbdpg_p88026_1968: SELECT consrc FROM pg_catalog.pg_constraint WHERE contype = 'c' AND conrelid = 134792121 AND conkey = '{33}'
Jan 28 02:18:30 m0040933 postgres[15537]: [7894-1] LOG: duration: 0.021 ms execute dbdpg_p88026_1968: SELECT consrc FROM pg_catalog.pg_constraint WHERE contype = 'c' AND conrelid = 134792121 AND conkey = '{33}'
Jan 28 02:18:31 m0040933 postgres[15537]: [7895-1] LOG: duration: 0.363 ms statement: DEALLOCATE dbdpg_p88026_1968

… and no other queries… The start being immediately after:

Jan 28 02:11:09 m0040933 postgres[15537]: [218-2] SELECT
Jan 28 02:11:09 m0040933 postgres[15537]: [218-3]
NULL::text AS “TABLE_CAT”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-4] ,
quote_ident(n.nspname) AS “TABLE_SCHEM”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-5] ,
quote_ident(c.relname) AS “TABLE_NAME”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-6] ,
quote_ident(a.attname) AS “COLUMN_NAME”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-7] ,
a.atttypid AS “DATA_TYPE”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-8] ,
pg_catalog.format_type(a.atttypid, NULL) AS “TYPE_NAME”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-9] ,
a.attlen AS “COLUMN_SIZE”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-10] ,
NULL::text AS “BUFFER_LENGTH”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-11] ,
NULL::text AS “DECIMAL_DIGITS”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-12] ,
NULL::text AS “NUM_PREC_RADIX”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-13] ,
CASE a.attnotnull WHEN ‘t’ THEN 0 ELSE 1 END AS “NULLABLE”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-14] ,
pg_catalog.col_description(a.attrelid, a.attnum) AS “REMARKS”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-15] ,
pg_catalog.pg_get_expr(af.adbin, af.adrelid) AS “COLUMN_DEF”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-16] ,
NULL::text AS “SQL_DATA_TYPE”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-17] ,
NULL::text AS “SQL_DATETIME_SUB”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-18] ,
NULL::text AS “CHAR_OCTET_LENGTH”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-19] ,
a.attnum AS “ORDINAL_POSITION”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-20] ,
CASE a.attnotnull WHEN ‘t’ THEN ‘NO’ ELSE ‘YES’ END AS “IS_NULLABLE”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-21] ,
pg_catalog.format_type(a.atttypid, a.atttypmod) AS “pg_type”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-22] ,
‘?’ AS “pg_constraint”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-23] ,
n.nspname AS “pg_schema”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-24] ,
c.relname AS “pg_table”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-25] ,
a.attname AS “pg_column”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-26] ,
a.attrelid AS “pg_attrelid”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-27] ,
a.attnum AS “pg_attnum”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-28] ,
a.atttypmod AS “pg_atttypmod”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-29] ,
t.typtype AS “_pg_type_typtype”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-30] ,
t.oid AS “_pg_type_oid”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-31] FROM
Jan 28 02:11:09 m0040933 postgres[15537]: [218-32]
pg_catalog.pg_type t
Jan 28 02:11:09 m0040933 postgres[15537]: [218-33]
JOIN pg_catalog.pg_attribute a ON (t.oid = a.atttypid)
Jan 28 02:11:09 m0040933 postgres[15537]: [218-34]
JOIN pg_catalog.pg_class c ON (a.attrelid = c.oid)
Jan 28 02:11:09 m0040933 postgres[15537]: [218-35]
LEFT JOIN pg_catalog.pg_attrdef af ON (a.attnum = af.adnum AND
a.attrelid = af.adrelid)
Jan 28 02:11:09 m0040933 postgres[15537]: [218-36]
JOIN pg_catalog.pg_namespace n ON (n.oid = c.relnamespace)
Jan 28 02:11:09 m0040933 postgres[15537]: [218-37] WHERE
Jan 28 02:11:09 m0040933 postgres[15537]: [218-38]
a.attnum >= 0
Jan 28 02:11:09 m0040933 postgres[15537]: [218-39]
AND c.relkind IN (‘r’,‘v’)
Jan 28 02:11:09 m0040933 postgres[15537]: [218-40]
Jan 28 02:11:09 m0040933 postgres[15537]: [218-41]
AND c.relname LIKE ‘%’
Jan 28 02:11:09 m0040933 postgres[15537]: [218-42]
AND a.attname LIKE ‘%’
Jan 28 02:11:09 m0040933 postgres[15537]: [218-43] ORDER
BY “TABLE_SCHEM”, “TABLE_NAME”, “ORDINAL_POSITION”
Jan 28 02:11:09 m0040933 postgres[15537]: [218-44]
Jan 28 02:11:09 m0040933 postgres[15537]: [219-1] LOG: duration: 0.665
ms statement: DEALLOCATE dbdpg_p88026_49

Which is a column_info() call in DBD::Pg

Which is called from Fields() in DBIx::SearchBuilder…

Which I assume is being called in ShowCustomFields somewhere…

The problem is network latency between the front ends and the backend
DB… Running this on the same host returns in under a second.

My frontends are on completely different networks (and indeed different
data centers) from the database servers for security (mainly DDoS type)
reasons

Now I'd suspect something very broken, possibly in column_info(), as
pg_catalog.pg_constraint only has 36 rows for the rt4 database, and all of
those are the PKs…

The TABLE_CAT query returns 1921 rows, which corresponds exactly to the
number of pg_catalog.pg_constraint queries…
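
(For anyone who wants to reproduce the multiplication, a rough sketch of
what I believe is going on - DBD::Pg's column_info() walks every column it
finds and, per column, fires one of those pg_catalog.pg_constraint
lookups, so the cost scales with rows returned times network round trips.
The connection details here are illustrative only:)

use strict;
use warnings;
use DBI;
use Time::HiRes qw(gettimeofday tv_interval);

my $dbh = DBI->connect('dbi:Pg:dbname=rt4;host=db.example.com',
                       'rt_user', 'secret', { RaiseError => 1 });

# column_info(catalog, schema, table, column): the '%' wildcards match the
# big TABLE_CAT query in the log above, which returned 1921 rows here
my $t0   = [gettimeofday];
my $sth  = $dbh->column_info(undef, '%', '%', '%');
my $cols = $sth->fetchall_arrayref({});
printf "column_info: %d columns in %.1f s\n",
       scalar @$cols, tv_interval($t0);

# back-of-the-envelope: each of those columns gets its own
#   SELECT consrc FROM pg_catalog.pg_constraint
#    WHERE contype = 'c' AND conrelid = ? AND conkey = ?
# so at ~20ms per round trip, 1921 columns is already ~38s for a single
# trip each - and with parse/bind/execute/deallocate each crossing the
# network it lands in the minutes I'm seeing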

I am not familiar with Bucardo as a replication strategy, so I can’t
comment on that.

I’ve CC’d Gregg as he’s the current author for DBD::Pg, and he’s also
the person behind Bucardo replication.

However, I strongly suspect that the answer lies
amongst your replication tool and network topology.

Replication tool - no. Network topology, maybe.

The symptoms do not
match any failure modes that I’m familiar with in RT. You may wish to
try simplifying your database configuration (temporarily install Pg on
one of the front-ends) to see if that resolves any of the problems.

Not really possible or practical - as it would appear the problem is
related to network latency (around 20ms round-trip; the query itself
completes in 0.021ms, or ~0.400ms if you include the bind and dealloc) and
the use of column_info() calls… Is there a reason why ShowCustomFields
needs to call column_info()?

Regards,

Michelle Sullivan
http://www.mhix.org/

From Jan 28 02:11:09 to Jan 28 02:18:31… a full 7 minutes of being in
ShowCustomFields, there are hundreds (actually more than 1900) of
statements like this: [snip]

… and no other queries… The start being immediately after: [snip]

Which is a column_info() call in DBD::Pg

Which is called from Fields() in DBIx::SearchBuilder…

Which I assume is being called in ShowCustomFields somewhere…

Correct. It should also only run once per process, and be cached
thereafter.
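
(To illustrate what "cached thereafter" means in practice - this is only a
sketch of the idea, not RT's or DBIx::SearchBuilder's actual code - the
expensive column_info() walk is memoised per process, so each mod_perl
child pays the cost once:)

use strict;
use warnings;

# sketch only: per-process memoisation of column metadata; the first
# caller in each child pays for the column_info() walk, every later
# call in that process gets the cached copy
{
    my %COLUMNS;    # table name => arrayref of column names

    sub cached_columns {
        my ($dbh, $table) = @_;
        return $COLUMNS{$table} ||= do {
            my $sth = $dbh->column_info(undef, undef, $table, undef);
            [ map { $_->{COLUMN_NAME} } @{ $sth->fetchall_arrayref({}) } ];
        };
    }
}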

The problem is network latency between the front ends and the backend
DB… Running this on the same host returns in under a second.

Yes. These queries aren’t a notable performance hit on most servers.
At worst case, they may take a second or two – they certainly don’t
take upwards of five minutes. The latency is, as you’ve diagnosed,
almost certainly the root cause of your problems.

My frontends are on completely different networks (and indeed different
data centers) from the database servers for security (mainly DDoS type)
reasons

[snip discussion of optimization of column_info()]

RT simply isn’t designed to operate in an environment where every query
has tens to hundreds of ms of latency. Attempts to optimize this
particular set of queries will merely cause the limiting factor to
become some other set. While we’re absolutely interested in patches
that help decrease the number of queries that RT runs (as you’ve noted,
there are several queries run multiple times), I expect you will find
this to be a frustrating game of whack-a-mole. If at all possible,
altering your network topology to remove the latency is most probably a
more straightforward way to have a performant RT instance.

  • Alex

Alex Vandiver wrote:

On Thu, 2014-01-30 at 10:57 +0100, Michelle Sullivan wrote:

My frontends are on completely different networks (and indeed different
data centers) from the database servers for security (mainly DDoS type)
reasons

[snip discussion of optimization of column_info()]

RT simply isn’t designed to operate in an environment where every query
has tens to hundreds of ms of latency. Attempts to optimize this
particular set of queries will merely cause the limiting factor to
become some other set. While we’re absolutely interested in patches
that help decrease the number of queries that RT runs (as you’ve noted,
there are several queries run multiple times), I expect you will find
this to be a frustrating game of whack-a-mole. If at all possible,
altering your network topology to remove the latency is most probably a
more straightforward way to have a performant RT instance.

A thought (and not too simple a one, I'll admit)… how about developing
a 'light-weight' RT front end talking to a heavy backend… (might be a
good reason to develop out the REST interface ;) )

In the meantime I'm going to have to re-think the network design and
hope that my employer doesn't just go 'we'll use a different interface'
and force me onto CTS, as I'm trying to get it deployed company-wide and
uproot the Salesforce and CTS ticketing systems that are currently being
used (which, needless to say, would result in a sizeable support/devel
contract coming BP's way… If you want to know more about that, message me
privately/directly.)

Michelle

Michelle Sullivan
http://www.mhix.org/

Alex Vandiver wrote:

On Thu, 2014-01-30 at 10:57 +0100, Michelle Sullivan wrote:

From Jan 28 02:11:09 to Jan 28 02:18:31… a full 7 minutes of being in
ShowCustomFields, there are hundreds (actually more than 1900) of
statements like this: [snip]

… and no other queries… The start being immediately after: [snip]

Which is a column_info() call in DBD::Pg

Which is called from Fields() in DBIx::SearchBuilder…

Which I assume is being called in ShowCustomFields somewhere…

Correct. It should also only run once per process, and be cached
thereafter.

The problem is network latency between the front ends and the backend
DB… Running this on the same host returns in under a second.

Yes. These queries aren’t a notable performance hit on most servers.
At worst case, they may take a second or two – they certainly don’t
take upwards of five minutes. The latency is, as you’ve diagnosed,
almost certainly the root cause of your problems.

My frontends are on completely different networks (and indeed different
data centers) from the database servers for security (mainly DDoS type)
reasons

[snip discussion of optimization of column_info()]

RT simply isn’t designed to operate in an environment where every query
has tens to hundreds of ms of latency. Attempts to optimize this
particular set of queries will merely cause the limiting factor to
become some other set. While we’re absolutely interested in patches
that help decrease the number of queries that RT runs (as you’ve noted,
there are several queries run multiple times), I expect you will find
this to be a frustrating game of whack-a-mole. If at all possible,
altering your network topology to remove the latency is most probably a
more straightforward way to have a performant RT instance.

In a follow-up to this and my other post about slow ticket loading…
the 20ms round-trip time is not the issue here - it's just exacerbating
the issue.

I've built a custom port on FreeBSD to install pgpool-II-3.3.3 and am
using memcacheq, and many of the RT queries (such as ACL checks) are
being cached… however, the bulk of the queries are not being cached at
all - they seem to be constantly changing - and even reloading the same
ticket takes minutes.

The problem is something in ShowCustomFields… hundreds (sometimes
thousands) of queries like this:

May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: Execute: SELECT consrc FROM pg_catalog.pg_constraint WHERE
contype = ‘c’ AND conrelid = 11404 AND conkey = ‘{8}’
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: Parse: SELECT consrc FROM pg_catalog.pg_constraint WHERE
contype = ‘c’ AND conrelid = 11404 AND conkey = ‘{9}’
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: B message
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: D message
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: Execute: SELECT consrc FROM pg_catalog.pg_constraint WHERE
contype = ‘c’ AND conrelid = 11404 AND conkey = ‘{9}’
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: Parse: SELECT consrc FROM pg_catalog.pg_constraint WHERE
contype = ‘c’ AND conrelid = 11408 AND conkey = ‘{1}’
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: B message
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: D message
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: Execute: SELECT consrc FROM pg_catalog.pg_constraint WHERE
contype = ‘c’ AND conrelid = 11408 AND conkey = ‘{1}’
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: Parse: SELECT consrc FROM pg_catalog.pg_constraint WHERE
contype = ‘c’ AND conrelid = 11408 AND conkey = ‘{2}’
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: B message
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: D message
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: Execute: SELECT consrc FROM pg_catalog.pg_constraint WHERE
contype = ‘c’ AND conrelid = 11408 AND conkey = ‘{2}’
May 15 18:59:59 vortex pgpool[31750]: DB node id: 0 backend pid: 15008
statement: Parse: SELECT consrc FROM pg_catalog.pg_constraint WHERE
contype = ‘c’ AND conrelid = 11408 AND conkey = ‘{3}’

Every time, 'conrelid' and/or 'conkey' changes - so far, not even
reloading the same ticket results in a single cache hit… out of hundreds
of thousands of queries…

However I know caching is working because I see things like:

May 15 18:57:29 vortex pgpool[31752]: query result fetched from cache.
statement: SELECT * FROM Tickets WHERE id = $1
000000010000000636363130343100010000
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: Parse: SELECT * FROM Queues WHERE id = $1
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: B message
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: D message
May 15 18:57:29 vortex pgpool[31752]: query result fetched from cache.
statement: SELECT * FROM Queues WHERE id = $1 0000000100000002313900010000
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: Parse: SELECT ACL.id, ACL.ObjectType, ACL.ObjectId FROM ACL,
Principals, CachedGroupMembers WHERE Principals.id = ACL.PrincipalId AND
Principals.PrincipalType = ‘Group’ AND Principals.Disabled = 0 AND
CachedGroupMembers.GroupId = ACL.PrincipalId AND
CachedGroupMembers.GroupId = Principals.id AND
CachedGroupMembers.MemberId = 1079074 AND CachedGroupMembers.Disabled =
0 AND ((ACL.ObjectType = ‘RT::System’ AND ACL.ObjectId = 1) OR
(ACL.ObjectType = ‘RT::Queue’ AND ACL.ObjectId = 19) OR (ACL.ObjectType
= ‘RT::Ticket’ AND ACL.ObjectId = 661041)) AND (ACL.RightName =
‘SuperUser’ OR ACL.RightName = ‘ShowOutgoingEmail’) LIMIT 1
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: B message
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: D message
May 15 18:57:29 vortex pgpool[31752]: query result fetched from cache.
statement: SELECT ACL.id, ACL.ObjectType, ACL.ObjectId FROM ACL,
Principals, CachedGroupMembers WHERE Principals.id = ACL.PrincipalId AND
Principals.PrincipalType = ‘Group’ AND Principals.Disabled = 0 AND
CachedGroupMembers.GroupId = ACL.PrincipalId AND
CachedGroupMembers.GroupId = Principals.id AND
CachedGroupMembers.MemberId = 1079074 AND CachedGroupMembers.Disabled =
0 AND ((ACL.ObjectType = ‘RT::System’ AND ACL.ObjectId = 1) OR
(ACL.ObjectType = ‘RT::Queue’ AND ACL.ObjectId = 19) OR (ACL.ObjectType
= ‘RT::Ticket’ AND ACL.ObjectId = 661041)) AND (ACL.RightName =
‘SuperUser’ OR ACL.RightName = ‘ShowOutgoingEmail’) LIMIT 1
0000000000010000
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: Parse: SELECT * FROM Groups WHERE LOWER(Domain) = LOWER($1)
AND LOWER(Type) = LOWER($2)
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: B message
May 15 18:57:29 vortex pgpool[31752]: DB node id: 0 backend pid: 14906
statement: D message
May 15 18:57:29 vortex pgpool[31752]: query result fetched from cache.
statement: SELECT * FROM Groups WHERE LOWER(Domain) = LOWER($1) AND
LOWER(Type) = LOWER($2)
000000020000000E53797374656D496E7465726E616C0000000C556E70726976696C6567656400010000

This could well be something underneath (DBD::Pg or DBIx::SearchBuilder
or even the pgsql.so shared library) - however it's only RT that's doing
it. I use DBD::Pg and the same version of the libraries, against the same
servers, for my main website, where every page is served by the same
version of perl under the same version of mod_perl2 (not using Mason or
DBIx::* on my site though)… I don't use pgpool caching on my site either
and see no such performance problems… so either something is being called
that shouldn't be, or it's very, very broken.

Pointers would be good - even on the same network (if that were possible
for me) it would be a performance hit: the queries take 2ms on average,
but doing 1000 of them is going to give a 2s delay even if the server
were on the same host, already connected, with no (as in zero) lag to the
DB… and that's without the processing overhead of hundreds of useless
queries per second, returning no results, for every page load…
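
(One obvious direction - and this is only a sketch of the idea, not a
claim about what DBD::Pg does or should do - would be to collapse the
per-column lookups into one constraint query per table and match conkey
client-side:)

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=rt4;host=db.example.com',
                       'rt_user', 'secret', { RaiseError => 1 });

# one round trip per table instead of one per column: fetch every check
# constraint for the relation at once and bucket them by attribute number
sub check_constraints_for {
    my ($dbh, $conrelid) = @_;
    my $rows = $dbh->selectall_arrayref(q{
        SELECT conkey, consrc
          FROM pg_catalog.pg_constraint
         WHERE contype = 'c' AND conrelid = ?
    }, { Slice => {} }, $conrelid);

    my %by_attnum;
    for my $row (@$rows) {
        # conkey is a smallint[]; DBD::Pg expands it to a Perl arrayref
        push @{ $by_attnum{$_} }, $row->{consrc} for @{ $row->{conkey} };
    }
    return \%by_attnum;    # attnum => [ check constraint expressions ]
}

my $checks = check_constraints_for($dbh, 134792121);   # conrelid from the log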

Regards,

Michelle Sullivan
http://www.mhix.org/