The Business Case
Our customer, a leading global investment management firm, had outsourced the pricing of derivative instruments in their funds to external service providers.
The service providers would supply the customer’s fund accountant with daily theoretical prices that would then feed into the funds’ net asset values (NAVs). Since excellent derivative-pricing know-how was also available in the customer’s internal valuation unit, the firm saw a business case in entering into additional contracts with market data vendors, setting up an automated in-house pricing system, and replacing the third-party pricing feeds with its own.
Scope of the Project
The customer’s valuation unit already had experience with a third-party pricing library, which was to be used for the actual derivative valuation. This library would be the heart of a new application that would:
- Ingest product data from the fund accounting system,
- Ingest market data from two different external providers,
- Transform both product and market data into the format required by the valuation library,
- Pass the inputs to the library in batches and store the outputs,
- Test both inputs and outputs for statistical anomalies, and finally
- Send the theoretical prices to the fund accountant.
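The steps above can be sketched as a minimal batch pipeline. All names, the data model, and the pricing formula are illustrative stand-ins (the actual valuation is done by the third-party library), and the anomaly check is a simple z-score filter assumed for the sake of the example:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Product:
    """Hypothetical minimal product record from the fund accounting system."""
    product_id: str
    notional: float

def transform(product: Product, market_data: dict) -> dict:
    """Map product and market data to the (hypothetical) library input format."""
    return {"id": product.product_id,
            "notional": product.notional,
            "discount_rate": market_data["discount_rate"]}

def price(library_input: dict) -> float:
    """Trivial stand-in for the third-party valuation library call."""
    return library_input["notional"] / (1 + library_input["discount_rate"])

def flag_anomalies(prices: list[float], z_max: float = 3.0) -> list[int]:
    """Return indices of prices more than z_max standard deviations from the batch mean."""
    if len(prices) < 2:
        return []
    mu, sigma = mean(prices), stdev(prices)
    return [i for i, p in enumerate(prices)
            if sigma and abs(p - mu) / sigma > z_max]

def run_batch(products: list[Product], market_data: dict):
    """Transform inputs, price them in one batch, and flag statistical outliers."""
    inputs = [transform(p, market_data) for p in products]
    prices = [price(i) for i in inputs]
    return prices, flag_anomalies(prices)
```

In the real application, the outputs would additionally be persisted and forwarded to the fund accountant; this sketch only shows the transform–price–check core of one batch.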
Given the existing skills in the valuation team, it was decided to write the application in Python. It would be run by the valuation unit as a business-managed application (BAM), while the internal IT department would provide the infrastructure: an SQL database service, virtual machines, and inbound and outbound file transfers (to and from the fund accountant and the market data vendors).
We consider the following to be the key requirements for the application:
Depending on the fund, derivative prices have to be delivered by three different deadlines during the day, incorporating intra-day product data updates. The first tranche of prices has to be delivered little more than one hour after the market data becomes available. A failure to deliver could lead to incorrect fund NAVs and would need to be escalated.
By regulatory and internal requirements, the input data (product data, market data, manual configurations) for every price generated by the system and the additional log output generated by the pricing library need to be kept for audit purposes.
While the original project scope of roughly 2,000 instruments did not place high demands on computing or storage resources, the system had to be able to scale up seamlessly by one or two orders of magnitude. The potential future use case of scenario calculations (Value at Risk) and the customer’s existing use of a public cloud service had to be taken into account.
In view of the tight daily deadlines, the experts in the valuation team must be informed quickly in case of anomalies and be able to access additional information in a targeted way.
The Role of UCG
The customer approached UCG after the project had already been running for several months. We were able to provide an expert with many years of experience in developing and supporting highly available, large-scale valuation solutions and a firm background in both information technology and financial engineering. He joined the team (which, apart from him, consisted of three full-time external and two full-time internal developers, plus several part-time external and internal developers and analysts) as technical project lead and found that:
- the project was behind schedule; the original go-live date was not realistic any longer.
- the project had to deal with several key resources leaving or having to reduce their engagement.
- while good progress had been made with the setup of the pricing library for the various product types (consultants from the pricing library’s vendor were involved) and with the basic technical framework (automated testing, containerization using Docker Compose, Kafka messaging, Flask UI, logging with Logstash and Elasticsearch), most of the other key features were still in early phases of development.
Our expert proposed a re-planning with a new go-live date and adjusted priorities. Each of the application’s key business features (ingestion and transformation of product and market data, calling the pricing library and storing its results, sending prices) was to be delivered in a no-frills version as quickly as possible. Some specific requirements were questioned, turned out to be optional, and were postponed; the UI, for example, was deprioritized in favour of simple SQL reports and a text-based REST interface. The different product types were to be onboarded in several tranches, whose composition and order were a compromise between cost-saving targets and complexity. Our expert strongly recommended a testing phase of several weeks under realistic conditions in order to identify errors and shortcomings in both the software and the processes.
While originally brought in as Scrum product owner, our expert found that the external Python developers on the project had no experience with derivative pricing and therefore struggled with the business requirements. So he not only prepared detailed user stories for the other team members to work on, but also took over the development of many of the key components and made important changes and extensions to the data model. In parallel, special attention was given to authentication, another must-have for the go-live, so our expert developed the application’s SAML 2.0 support. He also built a simple monitoring system that would regularly check certain assertions and inform the team by email in case of anomalies.
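Such assertion-based monitoring can be sketched as follows. The check names, addresses, and the local SMTP relay are assumptions for illustration, not the actual implementation; the `send` parameter is injectable so the alerting logic can be exercised without a mail server:

```python
import smtplib
from email.message import EmailMessage

def run_checks(checks):
    """Run named zero-argument checks; return the names of those that fail.

    A check that raises is also counted as a failure, with the error attached.
    """
    failures = []
    for name, check in checks:
        try:
            if not check():
                failures.append(name)
        except Exception as exc:
            failures.append(f"{name} ({exc})")
    return failures

def alert(failures, send=None, recipients=("valuation-team@example.com",)):
    """Email the team if any check failed; returns the message, or None if all passed."""
    if not failures:
        return None
    msg = EmailMessage()
    msg["Subject"] = f"[pricing] {len(failures)} monitoring check(s) failed"
    msg["From"] = "pricing-monitor@example.com"
    msg["To"] = ", ".join(recipients)
    msg.set_content("Failed checks:\n" + "\n".join(f"- {f}" for f in failures))
    if send is None:
        # Assumes a local mail relay; replace with the real SMTP host.
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)
    else:
        send(msg)
    return msg
```

In production such a script would be run on a schedule (e.g. via cron) with checks such as “all expected market data files have arrived” or “no batch is older than N minutes”.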
In addition to the hands-on software engineering, our expert:
- prepared and presented reports on the project status for management, auditors and other stakeholders together with the internal project manager,
- helped design the daily processes (in collaboration with the external service providers),
- ran extensive backtest time series comparing the new prices to the previous vendor prices,
- contributed to the documentation, and
- monitored the system closely during the very intense integration testing and hypercare phases.
To assist the hypercare phase and the parallel preparation of further product tranches, another UCG expert with excellent knowledge of the Bloomberg system and price verification was brought onto the team. Together with the internal experts, we:
- analysed the backtest time series and
- analysed discrepancies, e.g. by setting up the trades in Bloomberg’s pricing tool and varying the parameters.
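The core of such a backtest comparison can be sketched as follows; the tolerance, the instrument keys, and the per-day price dictionaries are illustrative assumptions:

```python
def compare_prices(new: dict, vendor: dict, rel_tol: float = 0.01) -> dict:
    """Compare in-house prices to vendor prices instrument by instrument.

    Returns a dict of discrepancies: instruments whose relative deviation
    exceeds rel_tol (mapped to that deviation), and instruments present in
    the vendor feed but missing from the new prices (mapped to "missing").
    """
    discrepancies = {}
    for instrument, vendor_price in vendor.items():
        new_price = new.get(instrument)
        if new_price is None:
            discrepancies[instrument] = "missing"
            continue
        rel_diff = abs(new_price - vendor_price) / abs(vendor_price)
        if rel_diff > rel_tol:
            discrepancies[instrument] = round(rel_diff, 4)
    return discrepancies
```

Running this over a backtest period, day by day, yields the list of instruments that warrant a closer look, e.g. by re-pricing them in Bloomberg with varied parameters.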
UCG also helped validate the setup of the entire portfolio in Bloomberg, which served as a fallback pricing source.
The new application went live only a few days after the new target date. Several production problems came up during the first few weeks; most could be traced back to delays in the delivery of input data to the system, and after analysis and targeted improvements by the operations team, the frequency of such delays decreased dramatically. The monitoring system and user interface turned out to be sufficient to ensure stable operations and the timely production of high-quality prices. With a few insignificant exceptions, all derivative products were onboarded within a few months. The planned cost savings were realized.
Lessons Learned
- The idea of building a minimum viable product (MVP) and gaining practical experience with it as early as possible should be followed strictly in order to identify problems and adjust the priorities of the remaining requirements.
- Allow enough time for thorough testing, i.e. time to write automated unit tests and a pre-production phase under realistic conditions. Communicate openly about problems.
- In the long run, a developer’s understanding of the subject matter is more important than special skills with a particular programming language or technology.