Wednesday, July 3, 2013

How Should I Do My UAT? (1): Program / Spreadsheet / Model

If you work in the actuarial department of an insurance company, it is probably not uncommon for you to be involved in various testing exercises, generally known as User Acceptance Testing ("UAT"). Some of us may think that UAT only applies to policy administration system implementation projects (Oops... Did I just remind you of the non-stop follow-ups from your Project Manager?), but in fact UAT is also needed for the programs, spreadsheet templates or actuarial models we use to perform various actuarial studies - whether they are developed internally or by external consultants. It is particularly important for those tools we are going to use for regular studies.

In my view, there is no single right approach to performing UAT on a program / spreadsheet / model - what matters most is that the approach you adopt allows the required testing to be carried out in a structured manner, without creating too many "processes" that do not help improve the quality of testing (e.g. filling in too many forms or having too many sign-offs). If you are struggling to find a suitable approach for your UAT exercises, perhaps the following proposed method can give you some ideas.


STEP 1: Prepare Test Script
To ensure your UAT is carried out in a systematic manner, the first thing you need to do is prepare a test script. A test script consists of (1) test scenarios; (2) expected results; and (3) actual results (which you will fill in after carrying out the required testing). Below is a sample format of a test script for a FoxPro program:


A test script serves as a structured guide for the testers, so that they have a clear idea of which areas to test and how to carry out the required testing, in the proper order. Without a test script, the testers will only test whatever they happen to think of while doing the testing (worse still if they do the testing after office hours with tired minds), like a kid running helter-skelter in the street. Apart from overlooking important areas to be tested, the errors / required modifications reported to the developer will not be in a proper order - which increases the time and effort needed to modify the tested tool as well as to redo the required testing.
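To make the idea concrete, here is a minimal sketch of a test script kept as structured data. This is only an illustration - the field names, scenarios and the `status` helper are my own assumptions, not the original sample format:

```python
# A minimal, hypothetical test script: each row holds a test scenario,
# the expected result, and the actual result filled in during testing.
test_script = [
    {"id": "TS01",
     "scenario": "Import a model point file with 100 records",
     "expected": "100 records loaded, no error message",
     "actual": ""},
    {"id": "TS02",
     "scenario": "Run the program for a single plan",
     "expected": "Results produced for every record",
     "actual": ""},
]

def status(row):
    """A test passes only when the actual result matches the expected one."""
    if not row["actual"]:
        return "NOT TESTED"
    return "PASSED" if row["actual"] == row["expected"] else "FAILED"

# After carrying out the first test, record what actually happened:
test_script[0]["actual"] = "100 records loaded, no error message"

for row in test_script:
    print(row["id"], status(row))   # TS01 PASSED / TS02 NOT TESTED
```

Keeping the scenarios in a fixed order like this is exactly what stops testers from "running helter-skelter" through the tool.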

STEP 2: Prepare Test Data
Although it is good to use actual policy data to perform UAT, sometimes the available data are unsuitable for testing, especially at the initial stage where you would like to detect errors in formulas. Alternatively, you can use a simulated set of data with a specific pattern, as illustrated in the following diagram (a spreadsheet template for a loss ratio study):


As shown in the above diagram, you can easily detect the formula error for Plan D, for which, by right, the incurred claims should be 1,300 and the calculated loss ratio should be 10.00%. If you use the actual data directly, you may not be able to detect such a formula error, which may lead to an incorrect conclusion for Plan D.

Of course, before you sign off the program / spreadsheet / model, it is recommended to run at least one test using actual data as a reasonableness check.
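The patterned-data idea above can be sketched in a few lines. The premium figures below are my own assumed pattern (claims fixed at exactly 10% of premiums, consistent with the Plan D figures of 1,300 and 10.00% quoted above); the deliberate "wrong cell link" simulates the kind of formula error the pattern is designed to expose:

```python
# Simulated data with a deliberate pattern: incurred claims are always
# 10% of earned premiums, so every loss ratio should come out to 10.00%.
premiums = {"Plan A": 10_000, "Plan B": 11_000, "Plan C": 12_000, "Plan D": 13_000}
claims = {plan: premium * 0.10 for plan, premium in premiums.items()}

# Suppose the template under test mistakenly links Plan D's claims
# to Plan C's cell - a typical copy-down formula error.
claims_from_template = dict(claims)
claims_from_template["Plan D"] = claims["Plan C"]

for plan, premium in premiums.items():
    loss_ratio = claims_from_template[plan] / premium
    flag = "" if abs(loss_ratio - 0.10) < 1e-9 else "  <-- check formula!"
    print(f"{plan}: {loss_ratio:.2%}{flag}")
```

Because every correct loss ratio must equal 10.00%, Plan D's deviating figure stands out immediately - something real policy data, with its naturally varying loss ratios, would hide.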

STEP 3: Perform Testing
Before you start testing, you may want to sort the test scenarios into an order that allows the testing to be carried out more efficiently. If you discover that some test scenarios were left out when preparing the test script, you may add them to your test script. The same goes for any additional testing you carry out - especially when the findings are meaningful.

Similarly, if you find that some test scenarios are no longer applicable, you may want to cancel them. We should be flexible enough to handle the various conditions that arise during testing.

STEP 4: Observe Outcomes
After completing each test, you need to update the "Actual Results" column based on the outcomes you observe and decide whether the test has passed or failed. If the UAT exercise is complex, you may consider using an issue log to manage the errors you report to the developer (IMPORTANT: the developer and the tester should NOT be the same person) or the modifications you request the developer to make.
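An issue log does not need to be elaborate. As an illustration only (the field names and status values are my own, not from any particular tool), something as simple as this is enough to track what has been reported and what is still outstanding:

```python
import datetime

# A hypothetical, minimal issue log for errors reported to the developer.
issue_log = []

def report_issue(test_id, description, severity="medium"):
    """Append one issue; status moves open -> fixed -> retested -> closed."""
    issue_log.append({
        "issue_no": len(issue_log) + 1,
        "test_id": test_id,
        "description": description,
        "severity": severity,
        "reported_on": datetime.date.today().isoformat(),
        "status": "open",
    })

report_issue("TS05", "Loss ratio formula for Plan D links to the wrong cell", "high")

open_issues = [i for i in issue_log if i["status"] == "open"]
print(f"{len(open_issues)} open issue(s)")   # prints "1 open issue(s)"
```

Linking each issue back to a test ID also gives the developer the "proper order" of fixes mentioned in Step 1.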

Some Remarks...
The above proposed approach is not new to actuarial people, since it uses the concept of "actual vs. expected" - which we are always applying in the "monitoring the results" stage of the actuarial control cycle. However, in terms of how comprehensive a test script should be, it is important to avoid million-dollar solutions to ten-dollar problems - we cannot test or check everything. If it is a simple program, some simple testing will do (which justifies the time and effort spent).

Also, we need to note that a program / spreadsheet / model will not be perfect after ONE session of UAT. Allow the tools to improve over time, but enhancements should not be made too frequently (unless there are errors to be fixed) - otherwise, you will end up busy with testing all the time.

Sunday, June 23, 2013

Multiple Prophet Workspaces? Combine into ONE!

SCENARIO:
Mr. A, the head of the valuation team of Company X, was struggling with how to manage his Prophet workspaces. Due to various constraints (time, resources, ...) in the past, his team had ended up using multiple Prophet workspaces to perform valuation exercises - which incurred extra time and effort in completing the required studies. To make matters worse, some workspaces actually contained the same products (well, the oldest workspace was used to calculate statutory reserves and the latest workspace was used to calculate IFRS reserves...) - his team needed to prepare different sets of model point files for these workspaces because the variables used to read the model point files had different names!

The Chief Actuary was unhappy with this inefficiency in managing workspaces. He requested Mr. A to work out a way to consolidate the workspaces into one, or at least into fewer workspaces. The problem was, the Prophet Manager couldn't allocate much time to this unplanned exercise, and there wasn't much time for performing UAT...

(Note: Each workspace used only 1 library)

MY PROPOSED SOLUTIONS
If we don't have the time and resources to carry out a proper consolidation exercise on the multiple existing workspaces, perhaps the fastest way to consolidate them is to import their libraries into a selected workspace:

  • Select a workspace as the base workspace. There is no fixed rule for selecting a base workspace - basically, we can select the workspace with the most products or the most complicated one. For example, if we have a workspace for individual products ("Individual Workspace") and another for group products ("Group Workspace"), we may want to make the Individual Workspace the base workspace, as it is more complicated than the Group Workspace.
     
  • Import libraries into the base workspace. A workspace can have more than one library, as long as the first character of each library name is different. To do so, select Tools > Import > Libraries. If an existing library name in the source workspace is the same as one in the base workspace (e.g. "Conventional" or "Unit Linked"), we need to duplicate the library in the source workspace and rename it to a different name before we import it into the base workspace.

    For the options to be selected in the "Import of ???? Library" dialog, I would suggest retaining all existing workspace level properties of the base workspace (hopefully we are using the same definitions for these properties in both workspaces...). It is OK to replace all existing definitions because the library you would like to import has a different name.


  • Re-create products in the base workspace. If the library name of the source workspace is already different from that of the base workspace, we can import the products (Tools > Import > Products) from the source workspace. Otherwise, we need to re-create the products available in the source workspace - in that case, we should select the workspace with the least input variables as the base workspace! This is because we cannot duplicate a product for a different library (even when both libraries are in the same workspace).
     
  • Add fields to the model point files. If the variables from different libraries that are used to read the same values have different names, I would suggest including additional fields in the source workspace's model point files - i.e. having two or more fields with the same values in the model point files. When Prophet performs calculations for a particular product, it ignores fields whose names are not available in the library that the product belongs to. By including these additional fields, we no longer need to prepare multiple sets of model point files.
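The field-duplication step can be automated rather than done by hand. The sketch below assumes a CSV-like model point layout and hypothetical field names (`SUM_ASSURED` in one library, `SA` in the other) purely for illustration - it is not Prophet's actual model point file format:

```python
import csv
import io

# Hypothetical example: copy the values of an existing model point field
# into an additional field carrying the name expected by the other library.
source = io.StringIO("POL_NO,SUM_ASSURED\n1001,50000\n1002,75000\n")
rows = list(csv.DictReader(source))

for row in rows:
    # Duplicate the value under the second library's field name.
    row["SA"] = row["SUM_ASSURED"]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["POL_NO", "SUM_ASSURED", "SA"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

Scripting the duplication means the extra fields stay in sync every time a new model point file is extracted, instead of relying on someone remembering to copy the column.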

Of course, we still need to perform the necessary checks on the run results before we finalize the workspace for production - additional modifications may be needed, especially if the source workspace uses different definitions for the workspace level properties.

However, please note that the above proposal is a short-term solution only - we still need to do a proper workspace consolidation in the long run; otherwise, more effort will be required whenever we want to add a new product to the workspace.

Monday, January 21, 2013

How Can We Manage Valuation Prophet Workspaces?

The time has finally come to prepare for the actuarial valuation work of the new financial year 2013. I think this is the right time for me to share a proposal I made to one of my clients, the Actuarial Department of Company A, on how to improve the management of their valuation Prophet workspaces.

Existing Approach

Apart from monthly valuation (i.e. computing statutory reserves), Company A performs various valuation exercises, such as market-consistent embedded value ("MCEV"), on a regular basis. Currently, they use a centralized Prophet valuation workspace for all sorts of valuation exercises, with designated run numbers assigned to the different types of exercises.

To segregate runs for different valuation months, the Prophet workspace is duplicated every month (together with the relevant tables) into a new folder. Of course, all workspaces are saved on a designated drive on the server.


Although there are some advantages to this approach, I did share its shortcomings with Company A:

  • Under-utilization of run numbers - Each Prophet workspace allows for up to 99 runs. Using this approach, many run numbers in a particular workspace may not be utilized - in simple words, many run numbers are "wasted". On the other hand, the December workspace (i.e. the financial year end) may not have enough run numbers to cater for all sorts of analyses - especially those done only annually.
     
  • Housekeeping difficulties - No one likes to do housekeeping, but it is a task we need to do regularly ("Yes we hate it but we have to do it..."). Apart from increasing the housekeeping workload (due to too many workspaces), this approach also causes disputes over who should "zip" up the result files and back them up - it is not efficient to have many teams doing housekeeping on a single workspace, and it is definitely not fair to appoint an UNFORTUNATE staff member (normally junior staff are the prospective "candidates") to "zip" files and do backups.
     
  • Different needs in different valuation exercises - Different actuarial exercises have different requirements. For example, for an annual budget workspace, we may need to create hypothetical products to project the new business for planned new products (which may require coding modifications in the library); for monthly valuation workspaces, however, it may be inappropriate to create such hypothetical products - especially if the required coding modifications impact reserves. Furthermore, the monthly valuation team may prefer a different timing for coding updates - for instance, batching several coding changes together, say quarterly, especially those with minor impacts on reserves (your appointed actuary may question you if you need to update your coding every month for only minor impacts... and you, as well as your boss, would need to do the testing-checking-review-documentation exercise every month until you have no time to date your BF / GF...).

Proposed Approach

Hence, in order to overcome the above-mentioned shortcomings, I made the following proposal to my client. In my view, a Prophet workspace, which consists of the actuarial models used to produce various calculations, should be properly controlled. Apart from differentiating developer / user access rights, by right the Prophet Manager should be the ONLY ONE to create a new PRODUCTION workspace, to whom the respective users should submit a request (keep the process simple - no multiple forms and sign-offs, please) when a new workspace is needed.


My proposal is:
  • Designated workspaces for each valuation team - Set up separate workspaces for the different valuation teams, or even by actuarial exercise if needed. For example, a valuation team responsible for both the embedded value and budget exercises may want separate workspaces for each exercise (as the requirements are different).
     
  • Continuous use of workspaces - Utilize as many run numbers as possible in a workspace, until a new workspace is created to replace it.

    For example, we create a monthly valuation workspace and name it "p_mv13a". If no revision is needed for Jan '13-Jun '13, we can continue to use p_mv13a to perform the monthly valuation runs for that period. If a coding revision is needed during Jul '13, we create the revised workspace and name it "p_mv13b". Apart from having fewer workspaces to manage, I think you can easily see that your housekeeping work is reduced.
     
  • When to create a new workspace? - In order to control the number of production workspaces, I would suggest the following approach when we want to introduce a new product to a workspace:

    - If the new product doesn't involve any coding modifications in the library (i.e. it only modifies the definitions of input variables) and doesn't require a change of structure in any table, I think it is OK to create the new product in the existing workspace - instead of creating a new workspace for this purpose - as the addition doesn't affect the existing products / run structures / run settings. Of course, we need to properly update the workspace version (such as from 1.0 to 1.1).

    - If the new product is created to replace an existing product (e.g. splitting an existing product into two), I think it is necessary to create a new workspace, since the existing run structures / run settings containing the replaced product are no longer workable without modification.
     
  • Use a run control form - Use a run control form to document the run activities, so that we have a proper reference in the future in case we need to use or check a specific run. Of course, such documentation is not a pleasant thing to do - since we already have a run log for each run, the run control form should be simple and kept in SOFTCOPY (so that we can duplicate an existing run control form and update it easily).

    The above-mentioned "run activities" include the tables updated, the products selected and the handling of errors/warnings (especially those we have ignored).
     
  • Try to make Prophet runs error free - I would recommend that we try to make every Prophet run error free - if there are no longer any in-force policies for an old product, please remove it from the run structure. If we keep ignoring errors arising from Prophet runs (most of which are missing model point files), we may overlook a REAL run error - which we may only discover when we analyze the results, or not discover at all! Furthermore, having many errors in a Prophet run requires additional effort to check the run log.
     
  • Housekeeping & backup regularly, please - Although disk space is quite cheap nowadays, it is still good practice to do housekeeping and backups regularly. If we "zip" up previous result files regularly, it not only helps to reduce the time required for backup - it also helps to ensure enough disk space for future Prophet runs (of course, you don't want to find out that you have to "zip" files when you are running out of time...).

    If you store your workspaces and run results on a server, please ensure that your IT colleagues do the necessary backups regularly.
