Thursday, May 31, 2018

Focus on Skills, and NOT Knowledge Itself

Among the materials & books related to learning effectiveness that I have studied recently, “PEAK: How all of us can achieve extraordinary things” by Anders Ericsson & Robert Pool (I read the Chinese translation, in e-book format) is the one that has given me the most insights.

What does an EXPERT do?

Instead of going into deliberate practice straightaway, I have decided to take another approach to this discussion, in the same manner as business planning - let's start with the targets / goals we would like to achieve, work backward and find out the best way to follow.

Say I would like to excel in a particular subject, such as mastering a programming language, so that I will be more competitive in the job market. Apart from attending programming classes without a clear target of how far I should go (although certification may be a good reference), perhaps it would be good for me to take a look at what an EXPERT should be like. Ideally, an expert should be:

1. Familiar with the programming language (definitely!)
2. Able to analyze a problem and find out the root causes quickly
3. Able to design an effective solution to resolve the problem

For (1), an expert programmer and an average programmer may know the same number of commands & functions - what makes the former an EXPERT is how he can apply the programming language in problem solving - i.e. how he can analyze & understand the problem and design & implement an effective solution by connecting it to the programming language he learned before.

What makes an expert? It's MENTAL REPRESENTATION!

The ability to connect skills to problems, as defined by the authors, depends on how extensive the relevant "mental representations" built up in one's mind are (technically, in long-term memory). Well, this "mental representations" term may look too abstract to many of us. From what I understand from the book, let me explain this concept in another context, which may be easier for you to relate to. Try to imagine a "mental representation" as a model / program (e.g. a Prophet model) that you use to perform calculations (e.g. reserve calculations):
  1. By using the model / program, you can easily feed in the input data and get the results you want after the calculations are complete. Without the model / program, you would need to carry out the calculations manually on spreadsheets, which takes a lot of time (and is prone to human errors!).
  2. Obviously, using the model / program is the more effective solution. Do you realize that when you run the model / program, you don't really need to figure out how the calculations are done?
  3. For manual calculations, you need to be conscious of how to construct the correct formulas in order to calculate the required results correctly. Such manual work definitely consumes more effort and time (please also spare some time for detecting and correcting errors…).
Another example is simple arithmetic, such as "5 + 4", for which we can instantly get "9" as the correct answer without counting manually. This is because we have already built a model / program in our mind, so we can perform such simple calculations instantly without needing to count one by one. Similarly, a chess master (the authors' favorite example in the book) has built up many thousands of chess models in his mind, so he can easily retrieve the most relevant chess model in order to apply the correct strategy. The more mental representations built up in our mind, the more efficiently we can understand a particular problem and come up with the correct solutions - our brain will search for the correct model / program to use, feed in the necessary inputs for the problem and return the results we would like to have.

Apart from enabling us to solve problems effectively, having a large number of mental representations equips us with the ability to self-learn in more in-depth areas, i.e. to be less dependent on guidance from a teacher / coach.

How to build mental representations EFFECTIVELY? It's the DELIBERATE PRACTICE!

Here comes the most important question of this article: "How do we build up mental representations EFFECTIVELY?" When we learn a new skill, we are actually establishing mental representations in our mind. Does repeating a learning process help to build up more mental representations? The answer is YES and NO, depending on the learning process we carry out - the authors have pointed out that the "10,000 hours of practice" rule suggested by Malcolm Gladwell is actually incorrect, i.e. repeating the same practice for 10,000 hours will not make us an expert. More appropriately, the authors suggest adopting a more effective way of learning (which may not necessarily take 10,000 hours), called DELIBERATE PRACTICE.

Highlights of deliberate practice are:
  1. Practice repeatedly, which is the key way to build up & strengthen mental representations. Repetition is a MUST - a higher IQ may speed up learning at the beginning stage, but in order to become an expert, repetition in practice is unavoidable.
  2. Practice with purpose, i.e. have clear objectives for the practice, such as aiming to tackle a particular weakness.
  3. Incorporate some challenges in the practice, which should push us out of our comfort zone. However, the practice should not be so far from the comfort zone that it becomes too difficult to achieve. Learn to walk first before attempting to run.
  4. Obtain guidance from a teacher / coach, especially one who can provide feedback promptly so that we know our weaknesses and how to overcome them. Please note that a person with strong skills may not necessarily be a good teacher / coach.


About the Book

Title: PEAK: How all of us can achieve extraordinary things
Author: Anders Ericsson & Robert Pool
ISBN: (English) 9781473513143 (Chinese) 9787111551287
Publisher: (English) Vintage Publishing (Chinese) China Machine Press

Cover Photo Credit: Igor Ovsyannykov on Unsplash

Saturday, January 9, 2016

Waiver of Premium: How Do You Manage It?

When involved in system implementation work (policy administration systems), I always encounter queries about administering claim processes related to the waiver of premium ("WOP") benefit. When the WOP feature is triggered by a covered event (death, TPD or critical illness), the benefit will be paid over a long period (which may stretch over many years). This makes a systematic WOP claim process all the more important, so that the inflows and outgoes related to the WOP benefit can be managed properly.

What is WOP benefit?
Briefly, when the WOP feature is triggered by a covered event (on either the payor or the insured), all future premium payments after the event date are "waived" - this doesn't mean the affected policy no longer receives premiums. More accurately, although no payment is required from the customer, the policy still receives premiums - they simply come from a different source. The policy is expected to work as normal without any hiccup; for example, an investment-linked policy will continue to have allocated premiums invested in the unit funds as usual.

Yup, you have raised a very good question: "What does a 'different source' refer to?" Basically, when the WOP feature is triggered, the risk fund shall "pay out" the WOP claim. Different from other types of claims, this claim is not paid directly to the beneficiary - instead, the claim amount shall be used to (depending on the company's preferred practice):
  • Method 1: Set aside as provision (no change in reserves calculation)
  • Method 2: Support a huge increase in reserves (i.e. during reserve calculations, no future premium is assumed for policies with waiver status. When a policy changes from "normal paying" to "waiver", there is a huge increase in reserves due to the zeroization of future premiums)

Method 1(a): Set aside as provision (no discounting)
There are two variations of Method 1, i.e. "no discounting" & "with discounting" - the selection depends on the product design. Let's take a closer look at how the first one works using the following illustration:

Say a WOP rider (covering the payor's death, TPD & critical illness) is attached to Policy A. It covers only the basic plan's premium, i.e. a quarterly modal premium of 500. On a particular event date, let's say the policy still requires 22 quarterly premium payments after the event date.

  • When the WOP claim is approved, the risk fund provides a claim provision amounting to 11,000 (= 22 x 500) (e.g. a provision for outstanding claims).
  • When a quarterly premium is due, 500 is taken from the provision (as a settled claim) and serves as the premium payment.
  • If Policy A is terminated before its maturity date (e.g. surrendered), the unutilized provision is released back to the risk fund.
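The provision mechanics above can be sketched as follows - a minimal illustration using the figures from the example (the function names are mine, not from any actual administration system):

```python
# Hypothetical sketch of Method 1(a): WOP claim provision, no discounting.
# Figures follow the illustration above: quarterly modal premium of 500,
# 22 remaining quarterly payments at the event date.

def initial_provision(modal_premium, remaining_payments):
    """Provision set aside by the risk fund when the WOP claim is approved."""
    return modal_premium * remaining_payments

def pay_premium(provision, modal_premium):
    """Take one modal premium from the provision when it falls due."""
    return provision - modal_premium

provision = initial_provision(500, 22)   # 11,000 set aside at approval
provision = pay_premium(provision, 500)  # 10,500 left after the first due date

# If the policy terminates early (e.g. surrender), the unutilized balance
# is released back to the risk fund.
released = provision
```

Note that no interest accrues on the provision here - that is exactly what distinguishes Method 1(a) from Method 1(b) below.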
It is possible that there are outstanding premium payments between the last premium payment and the event date; however, there is no single way to handle this situation. The company may require customers to settle the outstanding premiums before approving the WOP claim (although this option is sometimes not feasible, especially if the outstanding amount is large), or utilize the accumulated cash values or a portion of the provision to settle the outstanding premiums (in which case the customers need to resume paying premiums once the provision is exhausted).

In case some premium payments after the event date have already been paid by the customer, an amount equal to those payments should be treated as an "excess" that needs to be refunded to the customer.

Method 1(b): Set aside as provision (with discounting)
Basically, both variations of Method 1 work in the same way, with a few exceptions. Under this variation, instead of providing the full future premium amount, a discounted amount (i.e. a present value, calculated using a specific discount rate) is set up as the claim provision.

The tricky part of Method 1(b) is that the claim provision needs to be adjusted from time to time to reflect the investment element - using the same discount rate. This investment element may introduce additional risk to the company, i.e. the actual investment yield may be lower than the discount rate - as the provision needs to be rolled up at the fixed rate (otherwise it will be insufficient), any shortfall needs to be supported by other sources. Additional capital may be required for the uncertainty in the investment element.
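A sketch of the discounted variation, under an assumed 1% quarterly discount rate (purely illustrative): the provision starts at the present value of the remaining premiums and is rolled up at the same rate before each premium is deducted. If the earned rate matches the discount rate, the provision is exhausted exactly at the last premium - any shortfall in actual yield is the investment risk described above.

```python
# Hypothetical sketch of Method 1(b): provision = present value of the
# remaining premiums, rolled forward at the same (assumed) discount rate.

def pv_of_premiums(modal_premium, n_payments, i_quarterly):
    """Present value of n future modal premiums, first due one quarter away."""
    v = 1 / (1 + i_quarterly)
    return modal_premium * sum(v ** t for t in range(1, n_payments + 1))

def roll_forward(provision, modal_premium, i_quarterly):
    """Accrue one quarter of interest, then pay the premium that falls due."""
    return provision * (1 + i_quarterly) - modal_premium

i = 0.01  # assumed quarterly discount rate (illustration only)
provision = pv_of_premiums(500, 22, i)  # less than the undiscounted 11,000

# Rolling forward through all 22 quarters exhausts the provision exactly,
# provided the actual earned rate equals the discount rate.
for _ in range(22):
    provision = roll_forward(provision, 500, i)
```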

Method 2: Support huge increase in reserves
Method 2 is similar to Method 1(b) - instead of setting up a separate claim provision for waiver policies, all future premiums are zeroized when calculating actuarial reserves, which yields a huge increase in reserves. The company may want to quantify the portion of the increase in reserves attributable to the zeroization of future premiums as the WOP claim.

When a premium is due, a portion of reserves will be released for premium payment:
  • For a large portfolio, the release in reserves should be more than the amount utilized as premium payments, due to the prudent assumptions in reserve calculations.
  • However, a mismatch may occur for a small portfolio, i.e. the release in reserves is insufficient to support the premium payments (as the variance is larger for a small portfolio).
In practice, this method is more complex to implement, due to various "imperfect" conditions. For example, the claim approval date is always later than the event date, so premiums due between the event date and the approval date should be paid to the policy immediately, rather than taken from the release in reserves. To get a correct picture of the claim, the company would need to include these immediate payments as part of the WOP claim.

Furthermore, the company may require additional capital due to the larger reserves - Method 2 involves more uncertainty compared with Method 1(b) because the reserve calculation uses decrement assumptions (e.g. death, TPD, critical illness & surrender).

Last but Not Least: COI / tabarru' for WOP Rider
I always get a headache when working on WOP riders, as they involve very large tables - so large that it is impossible for me to examine the rates carefully. Large tables are acceptable for premium rates (as level premiums are charged for a WOP rider), but they are unacceptable for the COI (Cost of Insurance) of conventional investment-linked products or the tabarru' of family takaful products (both ordinary family with participant account & investment-linked). Why?

As discussed earlier, when a WOP claim is approved (under Method 1), a specific claim amount is paid out from the risk fund. Consistently, the COI / tabarru' should be calculated on the benefit amount expected to be paid out from the risk fund upon a claim.

By using the example discussed earlier, the COI / tabarru' should be calculated as:

  • Benefit amount: 500 x 22 = 11,000.00
  • COI / tabarru' rate (qx): 0.5 per 1,000 (varying by gender & attained age)
  • Monthly COI / tabarru' = 11,000 / 1,000 x 0.5 / 12 = 0.46
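The calculation above as a one-line sketch (the 0.5-per-1,000 annual rate is illustrative; actual rates vary by gender & attained age):

```python
# Monthly COI / tabarru' on the WOP benefit at risk: apply the annual rate
# per 1,000 of benefit, then divide by 12 to get the monthly charge.

def monthly_coi(benefit_amount, annual_rate_per_1000):
    return benefit_amount / 1000 * annual_rate_per_1000 / 12

benefit = 500 * 22                  # 11,000: remaining premiums covered
charge = monthly_coi(benefit, 0.5)  # about 0.46 per month
```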
The COI / tabarru' table should have the same structure as the other tables used to calculate COI / tabarru' for death benefits - which normally vary by gender & attained age only. You can even use the same COI / tabarru' tables as the basic plans.

Sunday, August 24, 2014

Replacement for VFP… Is Access a Good Choice for Actuarial?

I started looking for a replacement for Visual FoxPro ("VFP") in 2013, especially after I noticed that Microsoft is going to cease its support for VFP 9.0 (released in 2004 and updated in 2007) after 13 January 2015. Although such technical support is not a concern to me (I never requested any support from Microsoft during my more than 10 years of experience using VFP for my actuarial work), I realized that it would become more and more difficult for me to convince my clients (those who are not existing VFP users) to consider VFP when designing solutions for them.

When I did my research on the web, I came across many names that I was totally unfamiliar with… Is "LightSwitch", mentioned on the official VFP website, suitable? After studying its features, I concluded that it is perhaps a good application for an IT programmer, but it is too much for me in handling actuarial work. Basically, what I need in my work is to manipulate and analyze data, ideally using common programming languages like VBA and SQL. How about other applications like Lianja or Xbase++? They are "alien" to many clients (and to myself as well!) and I am totally unsure whether there are vendors in Malaysia supplying these applications - furthermore, most clients will not consider applications that are difficult for their own users to maintain (as the solutions I design mainly use End User Computing applications) or to get support for.

Looks like I don't have any other choice except MS Access. Cheap & easily available.

Frankly speaking, I initially did not have a good perception of MS Access - I used to be a "debugger" for my team member, Mr. S, when I was the Valuation Manager in Company I. When Mr. S encountered errors in using his Access programs (developed by my genius ex-colleague Mr. F), he sometimes had no idea how to resolve them, as Access did not give him enough "clues" on what was going on and what caused those errors. In contrast, when we encountered an error when running VFP programs, VFP would indicate which line of SQL code was causing the error, and the error message provided a good hint on how to resolve it. Recently I found that another actuary has the same perception of Access.

In my recent project with one of my clients, I ended up using Access to design my solutions. Well, I had no other choice because their IT Security officer rejected VFP due to their IT policy - they were not allowed to acquire VFP as Microsoft is going to cease its support. I had to crack my head over how to make my Access programs more user-friendly, as well as give them a design structure as efficient as that of my VFP programs (which took me many years to establish). After doing some research and referring to the two thick Access reference books I got from Kinokuniya, I decided to use only the following to develop my Access programs:
  1. Visual Basic for Applications ("VBA")
  2. Forms
  3. Tables
It is a bit challenging to write VBA code in Access, as it doesn't have a function to record "macros" as available in Excel - that means NO SHORTCUT, i.e. I have to enter ALL VBA code on my own. Furthermore, the VBA language used in Access is not exactly the same as in Excel. Luckily, Access still allows me to write & execute SQL code to carry out processes like creating tables, selecting records, updating values, adding / removing columns, etc. After several rounds of "enhancements", I have established an initial design structure for Access, as well as leveraging the visual features (i.e. forms) available in Access that I didn't use in VFP - as shown in the screen captures below (modified from one of my actual solutions for a non-actuarial user, to remove company-specific info).
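The kinds of SQL statements mentioned above (creating tables, selecting records, updating values, adding / removing columns) can be sketched as follows. I use Python's sqlite3 here purely for illustration - in Access VBA you would run similar statements via CurrentDb.Execute, and the SQL dialect differs slightly; the table and column names are made up:

```python
# Illustrative SQL operations of the kind described above, run against an
# in-memory SQLite database (table / column names are hypothetical).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE policy (pol_no TEXT, premium REAL)")          # create table
con.execute("INSERT INTO policy VALUES ('P001', 500), ('P002', 750)")   # load records
con.execute("UPDATE policy SET premium = premium + 50 "                 # update values
            "WHERE pol_no = 'P001'")
con.execute("ALTER TABLE policy ADD COLUMN status TEXT")                # add a column
rows = con.execute("SELECT pol_no, premium FROM policy "
                   "ORDER BY pol_no").fetchall()                        # select records
```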

Users are required to key in a username & password in order to use the functions set up in the Access program.

Users key in the run parameters (Start Date, End Date & Master Folder) and select the required program procedures.
Now I'm getting more and more comfortable with Access, and I think it is a good choice to replace VFP for actuarial work - as well as for non-actuarial users. I shall share more details in my subsequent articles.

Monday, February 10, 2014

Other Ways to Prepare Prophet Model Point Files? Try FoxPro!

If you are a frequent Prophet user, you are definitely familiar with "Model Point Files" ("MPF"), i.e. the policy data that you compile in a specific format that Prophet can recognize as inputs for Prophet runs. If my guess is right, you most probably use the Data Conversion System ("DCS") to convert the source data you download from your policy administration system (which may be in various formats, e.g. fixed ASCII / fixed width, comma delimited, tab delimited, etc.) into the required MPF, based on the definitions you specify in your DCS programs. In case your source data are divided into various files, you may need to carry out more steps to prepare the MPF, for example:

  • Open different source files in Excel
  • Combine required fields from different source files by using functions such as VLOOKUP, SUMIF & MATCH. You may have automated this process using VBA.
  • Convert the required files into text files
  • Use DCS to convert those text files into MPF 

However, as Excel is not an ideal data manipulation tool (unfortunately MS Access doesn't seem to be a good choice either!), you may find the run time becoming longer & longer as the policy data get bigger - furthermore, you would need to pray hard that Excel won't hang!

Understand MPF Structure
If you open an MPF using Notepad, you can easily find that an MPF consists of 3 parts, namely header, contents & footer - as shown in the following sample MPF:

  • Consists of field names (variable names) and policy data.
  • Delimited with commas (","), similar to the CSV format you are familiar with.
  • The 1st field should always be the "MPF identifier" field, whose field name is defined as "!" (or "!1" as shown in the above figure) and whose values are defined as "*".

The "contents" is the most important part of your MPF, consisting of the policy data that you would like to input into the Prophet run (the "header" is the MPF description and the "footer" consists of some codes that only Prophet understands). In case you would like to use other tools to create an MPF (such as Excel or FoxPro), basically you only need to create the "contents" part.
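Putting the points above together, the "contents" part can be sketched as plain comma-delimited text: a row of field names led by the "!" identifier, then one row per model point led by "*". This is a hypothetical sketch (the field names POL_NO and ANN_PREM are made up for illustration):

```python
# Hypothetical sketch of an MPF "contents" block: the "!" identifier field
# comes first, fields are comma-delimited, and each data row starts with "*".

def mpf_contents(field_names, records):
    """Build the contents block: a header row of field names, then data rows."""
    lines = ["!," + ",".join(field_names)]
    for rec in records:
        lines.append("*," + ",".join(str(rec[f]) for f in field_names))
    return "\n".join(lines)

contents = mpf_contents(
    ["POL_NO", "ANN_PREM"],
    [{"POL_NO": 1001, "ANN_PREM": 1200.5}],
)
```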

How to Create MPF Using FoxPro?
Before you start creating MPF using FoxPro, I would suggest you perform all necessary matching and consolidate all / most required policy data fields into ONE master table (file format: DBF) - so that FoxPro only needs to refer to 1 table when generating the MPF, and your checking process is easier when you need to find out how a particular value was produced.

To better manage your FoxPro program, you may set up a separate PROCEDURE in the FoxPro program for generating the MPF.

STEP 1: Generate MPF Using FoxPro
In the MPF generation procedure of your FoxPro program, I would suggest you split the procedure into 2 parts: (1) define values for all necessary variables first, followed by (2) writing the policy data.

  • Define values for the Prophet variables that you need to include in the MPF, especially those fields containing values derived from multiple fields.

  • Using the TEXTMERGE method, write the required field names to a text file. Please remember to include a comma (",") between every 2 fields.

  • Similarly, write the values into the target text file using the TEXTMERGE method. In case you would like to convert a numerical value to a text value, please remember to set the length & number of decimal places for the value and remove redundant spaces, e.g. ALLTRIM(STR(ANN_PREM,15,2)). Ensure the string length caters for all numerical fields that require conversion - using ONE maximum length will do, otherwise you will be confused by having many different maximum lengths.
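The ALLTRIM(STR(ANN_PREM,15,2)) step can be sketched as follows in Python for readers unfamiliar with FoxPro: format the numeric value with a fixed width and 2 decimal places, then strip the padding spaces before writing it between commas (the function name is mine):

```python
# FoxPro-style STR(value, width, decimals) followed by ALLTRIM(): format the
# number at a fixed width & precision, then strip the padding spaces.

def num_to_field(value, width=15, decimals=2):
    return f"{value:{width}.{decimals}f}".strip()

field = num_to_field(1234.5)   # "1234.50"
```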

If you open the file generated by FoxPro, it will look like the following screen capture - without header & footer:

STEP 2: Open & Update MPF in Prophet
Open every MPF in Prophet - click the "Import" button in the "Open" dialog. If your MPF consists of any text fields (i.e. non-numeric fields other than the "!" field), you need to select all records in that field and add "Quotes" to the values (right-click, then select "Quote" in the menu). Otherwise, no modification is necessary unless you would like to include a description in the MPF (correct - basically you only need to open & save the MPF if you do not have any text fields).

STEP 3: Save MPF
Save the MPF after you have made the necessary modifications in STEP 2. By saving the MPF, Prophet will automatically add the missing "header" & "footer", and the MPF is now ready to be used for Prophet runs.

A Reminder...
Apart from FoxPro, you may use Excel to prepare MPF for Prophet runs as well. However, please always note that the policy data should be sorted by SPCODE (Sub Product Code) in ascending order - otherwise, your Prophet run will fail. I would suggest you sort the policy data: (1) first by SPCODE in ascending order; (2) then by policy number in ascending order.
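The suggested sort order is straightforward whichever tool you use; sketched here in Python with a tuple sort key (the record layout is made up for illustration):

```python
# Sort model points first by SPCODE ascending, then by policy number
# ascending - the order Prophet expects.

model_points = [
    {"SPCODE": 2, "POL_NO": "P003"},
    {"SPCODE": 1, "POL_NO": "P002"},
    {"SPCODE": 1, "POL_NO": "P001"},
]
model_points.sort(key=lambda mp: (mp["SPCODE"], mp["POL_NO"]))
```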

I would also recommend that you always include the policy number in your MPF and in the individual model point result files (i.e. RPT files) - in case you require Prophet results at policy level in your analysis.

Wednesday, July 3, 2013

How Should I Do My UAT? (1): Program / Spreadsheet / Model

If you are working in the actuarial department of an insurance company, you are probably no stranger to various testing exercises, generally known as User Acceptance Testing ("UAT"). Some of us may think that UAT is only applicable to policy administration system implementation projects (Oops... Did I just remind you of the non-stop follow-ups from your Project Manager?), but in fact UAT is also needed for the programs, spreadsheet templates or actuarial models we use to perform various actuarial studies - whether they are developed internally or by external consultants. It is particularly important for those tools that we are going to use for regular studies.

In my view, there is no one right answer for the approach to be used in performing UAT on a program / spreadsheet / model - most importantly, the approach you adopt should allow the required testing to be carried out in a structured manner, and it should not create too many "processes" that don't help to improve the quality of testing (e.g. filling in too many forms or having too many sign-offs). In case you are struggling to find a suitable approach for your UAT exercises, perhaps the following proposed method can provide some ideas.

STEP 1: Prepare Test Script
To ensure your UAT is carried out in a systematic manner, the first thing you need to do is prepare a test script. A test script consists of (1) test scenarios; (2) expected results; (3) actual results (which you will fill in after carrying out the required testing). Below is a sample format of a test script for a FoxPro program:

A test script serves as a structured guide for the testers, so that they have a clear idea of which areas are to be tested and how they should carry out the required testing, in the proper order. Without a test script, the testers will only test whatever they manage to think of while doing the testing (it will be worse if they do the testing after office hours with tired minds), like a kid running helter-skelter in the street. Apart from overlooking important areas to be tested, the errors / required modifications reported to the developer will not be in a proper order - which increases the time & effort needed to modify the tested tool and redo the required testing.

STEP 2: Prepare Test Data
Although it is good to use actual policy data to perform UAT, sometimes the available data are unsuitable for testing, especially at the initial stage when you would like to detect any errors in formulas. Alternatively, you can use a simulated set of data with a specific pattern to do your testing, as illustrated in the following diagram (a spreadsheet template for a loss ratio study):

As shown in the above diagram, you can easily detect the formula error for Plan D, for which the claims incurred should by right be 1,300 and the calculated loss ratio should be 10.00%. If you use the actual data directly, you may not be able to detect such a formula error, which may lead to an incorrect conclusion for Plan D.
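The patterned-test-data idea can be sketched as follows: feed every plan a claim amount that should produce the same known loss ratio, so any plan whose computed ratio deviates from the pattern flags a formula error. The figures here are illustrative, not taken from the diagram:

```python
# Simulated test data with a known pattern: every plan is set up so that
# its loss ratio should be exactly 10.00%. A plan whose formula picks up
# the wrong cell (here, Plan D) stands out immediately.

def loss_ratio(claims_incurred, earned_premium):
    return claims_incurred / earned_premium

earned_premium = 13000
expected_ratio = 0.10   # the pattern every plan should reproduce
plans = {"Plan A": 1300, "Plan B": 1300, "Plan C": 1300, "Plan D": 300}

suspect = [
    plan for plan, claims in plans.items()
    if abs(loss_ratio(claims, earned_premium) - expected_ratio) > 1e-9
]
# suspect flags Plan D: its claims figure should have been 1,300
```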

Of course, before you sign off the program / spreadsheet / model, it is recommended to do at least one test using actual data as a reasonableness check.

STEP 3: Perform Testing
Before you start testing, you may want to sort the test scenarios in an order that allows you to carry out the testing more efficiently. In case you find that some test scenarios were left out when preparing the test script, you may add them to your test script. The same goes for any additional testing you carry out - especially when the findings are meaningful.

Similarly, in case you find that some test scenarios are no longer applicable, you may want to cancel them. We should be flexible enough in handling the various conditions arising during testing.

STEP 4: Observe Outcomes
After completing each test, you need to update the "Actual Results" column based on the outcomes you observe and decide whether the test is "passed" or "failed". If the UAT exercise is complex, you may consider using an issue log to manage the errors you report to the developer (IMPORTANT: the Developer and the Tester should NOT be the same person) or the modifications you request the developer to make.

Some Remarks...
The above proposed approach is not new to actuarial people, since it uses the concept of "actual vs. expected" - which we are always applying in the "monitoring the results" stage of the actuarial control cycle. However, in terms of how comprehensive a test script should be, it is important to note that we should avoid having million dollar solutions to ten dollar problems - we cannot test or check everything; if it is a simple program, some simple testing will do (which justifies the time & effort spent).

Also, we need to take note that a program / spreadsheet / model will not be perfect after ONE session of UAT. Please allow the tools to improve over time, but enhancements should not be done too frequently (unless there are errors to be fixed) - otherwise, you will end up being busy doing testing all the time.

Sunday, June 23, 2013

Multiple Prophet Workspaces? Combine into ONE!

Mr. A, the head of the valuation team of Company X, was struggling with how to manage his Prophet workspaces. Due to various constraints (time, resources, ...) in the past, his team ended up using multiple Prophet workspaces for valuation exercises - which incurred extra time & effort in completing the required studies. To make matters worse, some workspaces actually contained the same products (well, the oldest workspace was used to calculate statutory reserves and the latest workspace was used to calculate IFRS reserves...) - his team needed to prepare different sets of model point files for these workspaces because the variables used to read the model point files had different names!

The Chief Actuary was unhappy with this inefficiency in managing workspaces. He requested Mr. A to work out a way to consolidate the workspaces into one workspace, or at least fewer workspaces. The problem was, the Prophet Manager couldn't allocate much time for this unplanned exercise and didn't have much time for performing UAT...

(Note: Each workspace used only 1 library)

If we don't have the time & resources for a proper consolidation exercise on the multiple existing workspaces, perhaps the fastest way of consolidating them is to import libraries into a selected workspace:

  • Select a workspace as the base workspace. There is no fixed rule for selecting a base workspace - basically we can select the workspace having the most products or the most complicated one. For example, if we have a workspace for individual products ("Individual Workspace") and another for group products ("Group Workspace"), we may want to make the Individual Workspace the base workspace as it is more complicated than the Group Workspace.
  • Import libraries into the base workspace. A workspace can have more than 1 library, as long as the first character of each library name is different. To do so, we can select Tools > Import > Libraries. In case an existing library name in the source workspace is the same as in the base workspace (e.g. "Conventional" or "Unit Linked"), we need to duplicate the library in the source workspace and rename it before we import the library into the base workspace.

    For the options to be selected in the "Import of ???? Library" dialog, I would suggest retaining all existing workspace level properties of the base workspace (hopefully we are using the same definitions for these properties in both workspaces...). It is OK to replace all existing definitions because the library you would like to import has a different name.

  • Re-create products in the base workspace. If the library name of the source workspace is originally different from the base workspace's, we can import the products (Tools > Import > Products) from the source workspace. Otherwise, we need to re-create the products available in the source workspace - in which case we should select the workspace having the fewest input variables as the base workspace! This is because we cannot duplicate a product for a different library (even though both libraries are in the same workspace).
  • Add fields to model point files. If variables from different libraries that are used to read the same values have different names, I would suggest including additional fields in the source workspace's model point files - i.e. there will be 2 or more fields having the same values in the model point files. When Prophet performs calculations for a particular product, it will ignore fields whose field names are unavailable in the library that the product belongs to. By including these additional fields, we no longer need to prepare multiple sets of model point files.

Of course, we still need to perform the necessary checking on the run results before we finalize the workspace for production - we may need to make additional modifications, especially if the source workspace uses different definitions for the workspace level properties.

However, please note that the above proposal serves as a short term solution only - we still need to do a proper workspace consolidation in the long run, otherwise more effort will be required when we want to add new products into the workspace.

Monday, January 21, 2013

How Can We Manage Valuation Prophet Workspaces?

The time has finally come to prepare for the actuarial valuation work for the new financial year 2013. I think this is the right time to share the proposal I made to one of my clients, the Actuarial Department of Company A, on how to improve the management of their valuation Prophet workspaces.

Existing Approach

Apart from monthly valuation (i.e. computing statutory reserves), Company A performs various valuation exercises, such as market-consistent embedded value ("MCEV"), on a regular basis. Currently, they use a centralized Prophet valuation workspace for all sorts of valuation exercises, in which designated run numbers are assigned to the different types of exercises.

In order to segregate runs for different valuation months, the Prophet workspace is duplicated every month (together with the relevant tables) into a new folder. Of course, all workspaces are saved on a designated drive on the server.

Although there are some advantages to this approach, I shared its shortcomings with Company A:

  • Under-utilization of run numbers - Each Prophet workspace allows up to 99 runs. Under this approach, many run numbers in a particular workspace may never be utilized - in simple words, many run numbers are "wasted". On the other hand, the December workspace (i.e. the financial year end) may not have enough run numbers to cater for all sorts of analyses - especially those done only annually.
  • Housekeeping difficulties - No one likes housekeeping, but it is a task we need to do regularly ("Yes we hate it but we have to do it..."). Apart from increasing the housekeeping workload (due to too many workspaces), this approach also causes disputes over who should "zip" up the result files and back them up - it is not efficient to have many teams doing housekeeping on a single workspace, and it is definitely not fair to appoint an UNFORTUNATE staff member (normally the junior staff are the prospective "candidates") to "zip" files and do backups.
  • Different needs in different valuation exercises - Different actuarial exercises have different requirements. For example, in an annual budget workspace we may need to create hypothetical products to project the new business for planned new products (which may require coding modifications in the library); in monthly valuation workspaces, however, it may be inappropriate to create such hypothetical products - especially if the required coding modifications impact reserves. Furthermore, the monthly valuation team may prefer a different timing for coding updates - say, batching several changes together quarterly, especially those with minor impacts on reserves (your appointed actuary may question why the coding needs to be updated every month for only minor impacts... and you, as well as your boss, would need to repeat the testing-checking-review-documentation exercise every month until you have no time to date your BF / GF...).

Proposed Approach

Hence, in order to overcome the above-mentioned shortcomings, I made the following proposal to my client. In my view, a Prophet workspace, which consists of the actuarial models used to produce various calculations, should be properly controlled. Apart from differentiating developer / user access rights, ideally the Prophet Manager should be the ONLY ONE who can create a new PRODUCTION workspace, to whom users submit a request (keep the process simple - no multiple forms & sign-offs, please) whenever a new workspace is needed.

My proposal is:
  • Designated workspaces for each valuation team - Set up separate workspaces for the different valuation teams, or even by actuarial exercise if needed. For example, a valuation team responsible for both embedded value and budget exercises may want separate workspaces for each exercise (as the requirements are different).
  • Continuous use of workspaces - Utilize as many run numbers as possible in a workspace, until a new workspace is created to replace it.

    For example, we create a monthly valuation workspace and name it "p_mv13a". If no revision is needed during Jan '13-Jun '13, we can continue to use p_mv13a to perform the monthly valuation runs for that whole period. If a coding revision is needed in Jul '13, we create the revised workspace and name it "p_mv13b". Apart from having fewer workspaces to manage, I think you can easily see that the housekeeping work is also reduced.
  • When to create a new workspace? - In order to control the number of production workspaces, I would suggest the following approach when we want to introduce a new product to a workspace:

    - If the new product doesn't involve any coding modifications in the library (i.e. only the definitions of input variables are modified) and doesn't require a change of structure in any table, I think it is OK to create the new product in the existing workspace - instead of creating a new workspace for this purpose - as the addition of the new product doesn't affect the existing products / run structure / run settings. Of course, we need to update the workspace version properly (such as from 1.0 to 1.1).

    - If the new product is created to replace an existing product (e.g. splitting an existing product into two products), I think it is necessary to create a new workspace, as the existing run structures / run settings containing the replaced product are no longer workable without modification.
  • Use a run control form - Use a run control form to document the run activities, so that we have a proper reference in the future in case we need to use / check a specific run. Of course, such documentation is not a pleasant thing to do - since we already have a run log for each run, the run control form should be simple and kept in SOFTCOPY (so that we can duplicate an existing run control form and update it easily).

    The above-mentioned "run activities" include tables updated, products selected and error / warning handling (especially those we have chosen to ignore).
  • Try to make Prophet runs error free - I would recommend that we try to keep every Prophet run error free - if there are no longer any in-force policies for an old product, please remove it from the run structure. If we keep ignoring errors arising from Prophet runs (most of which are missing model point files), it is possible that we will overlook a REAL run error - which we may only discover when we analyze the results, or NOT discover at all! Furthermore, having many errors in a Prophet run requires additional effort to check the run log.
  • Housekeeping & backup regularly, please - Although disk space is quite cheap nowadays, it is still good practice to do housekeeping & backups regularly. "Zipping" up previous result files regularly not only reduces the time required for backups - it also helps to ensure there is enough disk space for future Prophet runs (of course you don't want to find out that you have to "zip" files when you are running out of time...).

    If you store your workspaces and run results on a server, please ensure that your IT colleagues perform the necessary backups regularly.
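As an illustration of how simple a softcopy run control form can be, here is a minimal Python sketch that keeps the form as a CSV file. The column names and the sample record are my own assumptions, not any Prophet standard - adapt them to whatever your team actually records (tables updated, products selected, errors ignored).

```python
import csv
import io

# Hypothetical columns for a simple run control form; adjust as needed.
FIELDS = ["run_no", "run_date", "purpose", "tables_updated",
          "products_selected", "errors_ignored"]

def append_run_record(buffer, record):
    """Append one run's activities to the softcopy run control form."""
    writer = csv.DictWriter(buffer, fieldnames=FIELDS)
    writer.writerow(record)

# Start the form with a header row, then log one (made-up) run.
buf = io.StringIO()
csv.DictWriter(buf, fieldnames=FIELDS).writeheader()
append_run_record(buf, {
    "run_no": "12",
    "run_date": "2013-01-31",
    "purpose": "Jan '13 statutory valuation",
    "tables_updated": "MORT_TBL; LAPSE_TBL",
    "products_selected": "ALL",
    "errors_ignored": "missing MPF for discontinued product (no in-force)",
})
```

Because the form lives in a plain file, duplicating last month's record and editing it takes seconds - which is the whole point of keeping it simple.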
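The regular "zip" housekeeping can also be scripted. Below is a minimal Python sketch under an assumed folder layout where a run's result files sit under one results directory; it archives everything into a single zip file and deletes the originals to free disk space. The paths are hypothetical - and please try it on a copy before pointing it at real production results.

```python
import os
import zipfile

def zip_old_results(results_dir, archive_path):
    """Zip every file under results_dir into one archive, then remove
    the original files to free disk space for future Prophet runs."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(results_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to results_dir inside the archive.
                zf.write(full, os.path.relpath(full, results_dir))
                os.remove(full)
```

A script like this can be run month-end as part of the housekeeping routine, so zipping never becomes the task nobody wants.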