What are Development Impact Bonds?

A Development Impact Bond (DIB) is a new way of financing in the social development sector. There are primarily five parties involved: the service provider (who delivers a product or service intended to create a positive impact for the recipients), the risk investor (who puts money upfront, betting on the service provider’s credentials and ability to deliver the outcomes), the outcome funder (who repays the risk investor with interest, but only if the service provider achieves a pre-agreed quantum of impact), the outcome evaluator (who determines whether the service provider met the target), and the process manager (who helps run the show). When the outcome funder is a government, the instrument is called a social impact bond (SIB) and behaves similarly. For the purpose of this article, references to DIBs cover both DIBs and SIBs.

It’s called an impact bond because it pays out only when better social outcomes that create impact are achieved; otherwise it does not. Successful evaluation of performance in a DIB rests on two pillars: setting the right targets, and a rigorous process to evaluate the outcomes. This article gives a brief overview of the role of the outcome evaluator in a DIB – in particular, the setting of the threshold targets that trigger the payment from the outcome funder to the risk investor.

Credit: India Development Review

What is an outcome that can be evaluated?

Choosing the right metrics for evaluation is a key first step in determining targets – because evaluation on these metrics will determine whether the programme has had a sufficient positive outcome. Hence, if the theory of change involves an increase in learning outcomes in reading and mathematics, then those need to be measured (as opposed to other outcomes, such as the child’s confidence). In past and ongoing DIBs, outcome metrics have included an increase in student enrolment in schools, a reduction in the health risk of diabetes, reduced carbon emissions, an increase in crop yield, and even reduced recidivism (the tendency of a convicted criminal to reoffend) among prison inmates. Naturally, the outcome must be beneficial for the consumer and denote progress towards a better society.

For example, for a DIB focused on the quality of education, the improvement of students’ scores in math and language (i.e. student learning outcomes) can be a metric for evaluation. This can then be broken down further into several areas, such as solving a number puzzle involving multi-step number operations, or identifying the meaning of a word in a given sentence context. These granular learning objectives form the basis of the outcome evaluation, which is done by a third party.

This article cites examples and references from an ongoing DIB in Haryana, India, to better explain the process of target setting, evaluation, and pay-outs.

The Haryana Early Literacy intervention is the first Development Impact Bond (DIB) project in India to leverage CSR (Corporate Social Responsibility) funding for outcome payments with an exclusive focus on early literacy. It involves the Haryana School Shiksha Pariyojna Parishad (HSSPP) and the Language & Learning Foundation (LLF), in partnership with IndusInd Bank and SBI Capital Markets. This DIB will scale up the existing programme of the Language and Learning Foundation in the state of Haryana. Educational Initiatives is playing the role of outcome evaluator in the DIB. Henceforth, this project will be cited as an example and referred to as the “HEL-DIB”.


How do we set the targets to evaluate these said outcomes?

Setting targets is a multi-pronged approach. The insights below come from doing it for an early grade literacy programme for government school students in the state of Haryana (HEL-DIB).

  1. Talk to experts

Typically, a range of experts in the field is consulted – these include subject/technical experts, assessment experts, statisticians, and ideally beneficiaries of the programme. For the HEL-DIB, the target numbers were decided in consultation with a pedagogy expert (to know what children are expected to know at this age), an assessment expert (to know what kind of assessment will be conducted on the children), and a child psychologist (to know the major developmental outcomes at different intervals), as well as by interacting with some children.

  2. Refer to past literature

For the HEL-DIB, past literature on similar experiments was reviewed, and the gains (improvement in children’s learning, understood through various metrics) documented through pre/post-tests in published research papers were studied. Previous studies[1] done in India that assess students in grades 1–3 (identified as foundational learning linked to early literacy gains) were read, and similarities and limitations were drawn from their study designs during planning. These studies provide a current benchmark of where students are and what achievement levels we can hold them to, and they helped define the range of the targets.

  3. Set boundary conditions

A maximum limit for the target can be decided based on technical or pedagogical considerations (drawing on field trials and published research). For example, if we know that an average person speaks 125 words per minute with a standard deviation of 15, it might not be prudent to set a target of more than 200 words per minute for a typical intervention, since that is five standard deviations above the mean. This limit is also linked to the pay-out at the end of the project (see next section). In this particular case, the number of words in Oral Reading Fluency (ORF)[2] was benchmarked.

  4. Link the target to the duration of the project

Targets are also set considering the duration of the project. In this case, a two-year target was set for the evaluation period; the target could be lowered if the intervention were to run for only one year.
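The boundary-condition check from step 3 can be sketched numerically. This is a minimal illustration using the speaking-rate figures quoted above (mean 125 words per minute, standard deviation 15); it is not the actual HEL-DIB benchmarking procedure.

```python
# Sanity-check a proposed target against known population statistics.
# The mean/SD figures are the illustrative numbers from the text above.

def z_score(target: float, mean: float, sd: float) -> float:
    """Number of standard deviations a proposed target sits above the mean."""
    return (target - mean) / sd

MEAN_WPM = 125.0  # average speaking rate, words per minute (illustrative)
SD_WPM = 15.0     # standard deviation (illustrative)

z = z_score(200.0, MEAN_WPM, SD_WPM)
print(z)  # 5.0 -- five standard deviations above the mean
```

A target five standard deviations above the mean would be reachable by almost no one, which is why a ceiling is agreed before the ORF benchmark is finalised.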

How do we know if we have achieved the set targets?

Identifying the study design is a crucial step in knowing how one can evaluate whether the targets are met. A decision must be taken on whether the gains will be measured within the intervention group alone or net of the gains in a control or comparison group. There are four options, in decreasing order of accuracy: the first two use a control/comparison group, and the next two measure the difference in the performance of children at two different points in time.

  1. Randomized Controlled Trial[3]: students are randomly assigned to a treatment group or a control group. The randomness in the assignment of subjects to groups reduces selection bias and allocation bias, balancing both known and unknown prognostic factors across the treatments. This establishes an almost equal baseline between the two groups, and any gains in the intervention group are netted out against the gains of the control group students, so that the net gains can be attributed solely to the intervention.
  2. Comparative Study: selecting a group for comparison after a group has already been identified to receive the treatment. Since the assignment is not randomized, the gains cannot be attributed to the intervention as accurately as in an RCT.
  3. Comparing the 60th percentile of the group’s learning level to the 50th percentile: while the actual numbers could vary, this shows by how much the score distribution curve needs to shift to the right. This was the approach taken in the HEL-DIB study design.
  4. Independent Baseline and End-line Measurement: measuring the outcomes of students at the start and end of the programme to know the gain in learning. This may not provide accurate results, because cause-and-effect factors may vary and it is difficult to rule out externalities; there is also no way to say whether the gain is what one would expect without any intervention or whether it is because of the intervention. Hence, the gains are compared to a meta-study of “business as usual” gains from similar programmes.

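Option 3 can be illustrated with a small sketch. This is one plausible reading of the percentile comparison, using made-up scores; the actual HEL-DIB thresholds and data are not described here.

```python
# Hypothetical sketch of study-design option 3: compare the end-line
# distribution's 60th percentile against the baseline median (50th
# percentile) to see how far the score curve has shifted to the right.
# All scores below are invented for illustration.

def nearest_rank_percentile(scores, p):
    """Nearest-rank percentile (0 < p <= 100) of a list of scores."""
    ranked = sorted(scores)
    k = max(0, int(round(p / 100 * len(ranked))) - 1)
    return ranked[k]

baseline_scores = [12, 15, 18, 20, 22, 25, 27, 30, 33, 35]  # hypothetical
endline_scores  = [15, 19, 22, 25, 28, 30, 33, 36, 38, 41]  # hypothetical

median_at_baseline = nearest_rank_percentile(baseline_scores, 50)  # 22
p60_at_endline = nearest_rank_percentile(endline_scores, 60)       # 30

# The rightward shift of the distribution, in score points:
print(p60_at_endline - median_at_baseline)  # 8
```

In practice the evaluator would use a formally specified percentile definition and the agreed assessment instrument; this sketch only shows the shape of the comparison.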
Okay, but what if the outcome does not meet the targets exactly? Will the pay-out be just binary – yes or no?

The nature of a DIB is such that unless all parties agree, it will not come to fruition, and naturally everyone pushes for what is favourable to them. Lower targets tend to favour the service provider and the risk investor, while higher targets tend to favour the outcome funder, who would gain maximum value from their grant ‘investment’.

The setting of targets is a consultative process that goes through multiple iterations. In multi-year DIBs, one may also find that the targets for each year are different. This is because, given the nature of the programme and its effects on the beneficiaries, higher gains may materialise only in the third year of implementation.

Pay-outs need not be binary: the targets may instead define a range, with the pay-out being a function of the actual gain. In newer DIBs, where the ‘rate card’ is not yet established, all parties could agree for the pay-outs to be capped on both sides – e.g. 80% of the payment is made regardless of whether there is any gain, and a bonus of an extra 20% can be paid when gains exceed targets.
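A capped pay-out schedule of this kind could, for instance, be written as a simple function. The 80% floor and 20% bonus cap come from the example above; the linear interpolation between the floor and full payment is an assumed schedule for illustration, not a term from any actual DIB contract.

```python
def payout_fraction(gain: float, target: float,
                    floor: float = 0.80, bonus_cap: float = 1.20) -> float:
    """Pay-out as a fraction of the committed outcome payment.

    - No gain (or negative): the 80% floor is still paid.
    - Partial gain: linear interpolation from the floor up to 100%
      at the target (an assumed schedule, for illustration only).
    - Gain beyond the target: a bonus, capped at 120% overall.
    """
    if gain <= 0:
        return floor
    if gain >= target:
        return min(bonus_cap, 1.0 + (bonus_cap - 1.0) * (gain - target) / target)
    return floor + (1.0 - floor) * gain / target

print(payout_fraction(0.0, 10.0))   # floor is paid despite no gain
print(payout_fraction(10.0, 10.0))  # 1.0 -- target exactly met
print(payout_fraction(20.0, 10.0))  # bonus, capped at 120%
```

The cap on both sides limits the risk investor’s downside while bounding the outcome funder’s maximum liability.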

In the Early Literacy DIB, the students’ ability to perform 13 tasks (testing nine skills across reading, writing, speaking, and listening) was evaluated. However, targets linked to pay-outs were set for only 6 of the 13 tasks.

About the organisation’s involvement in DIBs: While EI is the outcome evaluator in the HEL-DIB, EI is the service provider in the UBS Foundation’s QEI-DIB in India. In partnership with Pratham Infotech Foundation, EI has implemented Mindspark in 55 schools serving 11,000 students in Lucknow, Uttar Pradesh, to improve learning outcomes in Language & Math.

Additional Resources to understand more about DIBs:

  1. A short video on thoughts on RCTs, Assessments, and Impact Evaluations by Dr. Lant Pritchett, Dr. Rukmini Banerji, Prachi Windlass, and Dr. Karthik Muralidharan
  2. Understanding Development Impact Bonds and the need for evidence-based decision making by Prachi Windlass (8:03 to 21:45)
  3. A short article by IDR on basics of DIBs
  4. About the current ongoing Quality Education India DIB

[1] Includes Educational Initiatives’ research work on Foundational Literacy & Numeracy, Room to Read’s Scaling Up Early Reading Intervention Project, and RTI’s reports in Mali and South Africa

[2] Oral Reading Fluency is defined as the ability to read with speed, accuracy, and proper expression. In order to understand what they read, children must be able to read fluently whether they are reading aloud or silently.

[3] From Wikipedia: A randomized controlled trial is a type of scientific experiment that aims to reduce certain sources of bias when testing the effectiveness of new treatments; this is accomplished by randomly allocating subjects to two or more groups, treating them differently, and then comparing them with respect to a measured response. One group—the experimental group—receives the intervention being assessed, while the other—usually called the control group—receives an alternative treatment, such as a placebo or no intervention. The groups are monitored under the conditions of the trial design to determine the effectiveness of the experimental intervention, and efficacy is assessed in comparison to the control.