Pause on Error: The Solution to a Really Big Problem

I could have sworn I scheduled several blogs about what I learned at Pause on Error.  Yet here they aren’t.


Well, on further review, I see that I managed to post one, but I did learn more than one thing over the course of those amazing 2 1/2 days.  The most compelling ones were, of course, ideas that affect me directly.  Let me tell you about my current challenge and then I can share how Pause was immediately helpful.


One of my favorite clients has a large database with lots of records and lots of problems.  One of those problems is that the original developer built an accounting system without using transactions, so the numbers can be a bit wobbly.  This is very bad for an accounting system.


Another problem is that over the 3 1/2 years of this database’s existence, it has become slower, and . . . slower, and . . . sloooooooowwwwwwwerrrrrrrrrr.  Their key month-end report takes upwards of two hours to run.


Nearly every field in that report is a calc, based on fields that are calcs, many of which are based on fields that are also calcs.  Even the opening balance for every month is calculated on EVERY TRANSACTION THAT HAS OCCURRED SINCE THE DAY THE DATABASE WENT LIVE.


I put that in all caps because I’m still not over it.
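
To make the damage concrete: the old opening balance is an unstored calc that re-sums the entire transaction history every time it is evaluated, while the fix is a plain stored number that a script writes once at month-end close.  Here's a rough sketch, with table and field names I invented for illustration:

    # Old way: OpeningBalance is an unstored calc, re-evaluated on every
    # screen draw and every report, summing all history since go-live:
    #   OpeningBalance = Sum ( PriorTransactions::Amount )
    # New way: a plain number field, written once when the month is closed.
    Set Field [ Months::OpeningBalance ; PriorMonth::ClosingBalance ]
    Commit Records/Requests [ With dialog: Off ]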


By the time Pause rolled around, I already knew what I needed to do about this problem.  The numbers need to be captured as static values, obviously.  This database is vast.  When I got it, the relationship chart looked like it had been attacked by a nest of psychotic spiders.  The four biggest tables have upwards of 400 fields each, at least a quarter of them calcs (based on calcs based on calcs).


I copied each of those tables and changed all the calculation fields to numbers or text (a hellish, tedious process).  Then the script: Set Variable.  Set Field.  Set Variable.  Set Field.  Set Variable . . .  As I suffered through the keystrokes of this latest fresh hell, I pondered the problems.  This is a very dynamic solution with several very skilled users who have full access privileges.  Long after I’ve finished, they may add new fields to capture new data, and they won’t know to add them to this process.  I see disaster on the distant horizon!  I was also worried about the delay: how long would this script take to run each time the schedule layout (which uses basically all the fields in all the tables) was committed?
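
For the curious, a fragment of that capture script might look something like this.  The table and field names here are my own stand-ins, and the real tables need a hundred-odd of these pairs apiece:

    Go to Layout [ "Invoices_Static" ]
    # One Set Variable / Set Field pair per calc being frozen:
    Set Variable [ $subtotal ; Value: Invoices::cSubtotal ]
    Set Field [ Invoices_Static::Subtotal ; $subtotal ]
    Set Variable [ $tax ; Value: Invoices::cTax ]
    Set Field [ Invoices_Static::Tax ; $tax ]
    Set Variable [ $total ; Value: Invoices::cTotal ]
    Set Field [ Invoices_Static::Total ; $total ]
    # . . . and on, and on, for every calc in every table . . .
    Commit Records/Requests [ With dialog: Off ]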


So . . . back to Google.  If anybody ever accesses my browser history, they will be so bored.


 




New plan:  Move those new static tables to an archive file.  Exports are much faster.  They can be done in bulk each week or on commit.
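
Sketched as a weekly script, the bulk version might look something like this (again, the layout, field, and file names are mine, not the client’s):

    Go to Layout [ "Invoices_Static" ]
    # Find everything touched since the last archive run:
    Enter Find Mode [ Pause: Off ]
    Set Field [ Invoices_Static::ModifiedDate ; ">= " & Get ( CurrentDate ) - 7 ]
    Perform Find []
    # One export step replaces thousands of Set Field steps:
    Export Records [ With dialog: Off ; "invoices_archive.mer" ]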


So, so many details go into either plan that I won’t bore you (more) with them, but the brilliant part was the timing.  Just as I was putting this plan into action, along came Pause on Error.


I’m positive that the Metropolitan Nine in downtown Cleveland has never been so densely filled with such massive brain power.  I sat at lunch with some of those giant brains, explaining my problem and approach, and they helped me refine the process as we ate tiny, delicious pies.


The FileMaker Pro community is characterized by generosity, almost to a person.  I’ve now been to one DevCon and one POE, and the greatest benefit by far is the opportunity to hash through your current challenges with elite giant brains who are so willing to listen and look at your screen.


In my next blog I’ll tell you about my next step with this database.  It came from a tip that Vince Menanno of Beezwax tossed out practically as an aside, but it solved a big problem for me.

