A Review of Randomized Evaluation

Justine Burns and Simon Halliday

March 2, 2007

1 Introduction

In development policy, the main imperative is to find ways to help people, groups and societies develop so that they end up better off than they were before. The problem, though, is how to evaluate adequately whether an outcome (lifestyle, welfare, education level) has changed as a result of an intervention policy. Is anyone better off? Do they lead improved lives? Are their families healthier and better educated? Randomized Evaluation (RE) offers an opportunity to overcome many of the classic problems that occur in evaluation. These problems turn on two questions:

1. How would those people who received an intervention have fared without the presence of the intervention?

2. How would those people who did not receive an intervention have fared in the presence of the intervention?

To answer these questions, and thus to evaluate any intervention rigorously, requires a credible comparison group - a group that would have fared the same as the treatment group would have in the absence of an intervention. In current (non-RE) practice, 'control' and 'treatment' groups often differ. For example, the treatment could occur in a specific area, or there could be some criterion for entering a program that disqualifies certain participants. This creates selection bias. The groups to be compared therefore need to be as similar as possible in order to facilitate the process of evaluation, and this should occur as a result of a carefully considered approach to the intervention. Non-RE techniques often try to do one of the following:

a. Difference-in-difference Analysis: This process contrasts the growth in the variable of interest between a treatment group and a relevant control group, normally different regions within an area. However, long-standing time-series data are needed to ensure that the groups are as similar as possible and to project that they would have behaved similarly without the presence of the treatment. Nevertheless, this method can introduce further biases to the standard errors.

b. Propensity Score Matching: In this case, groups or individuals are found that are as similar as possible in terms of observable characteristics and 'matched' with those in the treatment sample (normally those who have applied for or are seeking the treatment). This allows a retrospective analysis of the data with this 'control' group, which should diminish selection bias. The problem with this method is that it assumes the researcher has identified all the salient observable factors underlying the process of treatment. This is often not the case.1

1 A recent study by Diaz and Handa [forthcoming] shows that, with the collection of a large number of observables, propensity score matching can approximate RE results.

c. Regression Discontinuity Design: In this instance, researchers take advantage of extant discontinuities that occur as the result of policy. For example, in Israel, if a class size exceeds forty students a second class is introduced to cater for the increase in student numbers. Hence there is a discontinuity between 40 and 41 students in a grade, or 80 and 81, and so forth. This allows researchers to observe differences immediately above and immediately below the threshold level [Angrist and Lavy, 1999].2

2 Similar work has been done in South Africa with respect to welfare responses resulting from access to the state Old Age Pension (OAP): households with individuals immediately above/below the OAP age threshold are contrasted.

These methods arose in response to the same problem that RE is designed to solve: selection bias. Selection bias results from treating groups with different underlying characteristics as if they were the same. For example, if an intervention targeted only poor suburbs, then comparing these (poor) suburbs with others that differ in observable characteristics could bias the estimates.3 A 'control' group must be as similar as possible to a 'treatment' group for unbiased estimates [Duflo and Kremer, 2005]. Moreover, an additional problem with the above econometric techniques is that they may introduce large biases from what is known as specification error. These biases could even be greater than the selection bias the technique was introduced to solve.

3 Controlling for these observables does mitigate some of this effect, but it does not solve the identification problem.

The question therefore is: how is randomization different? Randomization involves a lottery process. Individuals are randomly selected into either the treatment group or the control group. This randomization process removes other potential differences between the two groups that social scientists cannot control for, or find difficult to control for, such as ability, work ethic, psychological disposition and so forth. Moreover, if randomization is well planned, the collection of comprehensive pre- and post-intervention data on both control and treatment groups can provide a wealth of information to inform the creation of policy. A minimal sketch of the lottery idea follows.
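To make the lottery process concrete, the following sketch (an illustrative addition in Python, with invented data and an assumed "true" effect of 0.2; it is not drawn from any of the studies reviewed) randomly assigns individuals to treatment or control and estimates the programme effect as a simple difference in means.

```python
import random
import statistics

random.seed(42)  # reproducible assignment

# Hypothetical list of study participants (IDs only).
participants = list(range(1000))

# The lottery: shuffle the sample, then split it in half.
random.shuffle(participants)
treatment = set(participants[:500])

# Simulated post-intervention outcomes (e.g. test scores);
# the "true" effect built into this toy data is 0.2.
def outcome(person_id):
    baseline = random.gauss(0.0, 1.0)
    return baseline + (0.2 if person_id in treatment else 0.0)

scores = {p: outcome(p) for p in participants}

treated_mean = statistics.mean(scores[p] for p in treatment)
control_mean = statistics.mean(scores[p] for p in participants if p not in treatment)

# With random assignment, the difference in means is an unbiased
# estimate of the average treatment effect.
print(f"Estimated effect: {treated_mean - control_mean:.3f}")
```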

2 Why Randomized Evaluation? Some Successful Case Studies

It is important to note that successful pilot randomized evaluation projects seem to share certain characteristics:

• They are interventions at a micro (individual/community) level.

• They target specific, a priori agreed outcomes.

• These outcomes are measurable - e.g. attendance at school or body-mass index.

• Counter-intuitive or non-traditional interventions have often produced large results - e.g. a de-worming program in Kenya saw higher attendance responses than other, more conventional interventions (the use of flip-charts, textbooks, etc.).

• Incentive-compatible interventions have shown significant results - e.g. PROGRESA in Mexico provided a conditional cash grant to the treatment group.

• Who gets the intervention matters - e.g. there is strong evidence that females, rather than males, should receive cash transfer monies in order for there to be an impact on welfare.

Taking cognizance of these, it is also necessary to consider the kinds of programs for which RE has been used. In microeconomics, RE has been used predominantly in the spheres of education and health care. Moreover, it is important to note that there are spillover effects from one type of program to the other; for example, the de-worming program in Kenya (expressly a health intervention) dramatically affected school attendance, and far more so than 'normal' school inputs such as flip-charts and textbooks.

2.1 Education Programs

2.1.1 PROGRESA

PROGRESA (now Oportunidades) was a program instituted in Mexico during 1998 that provided cash grants, distributed to women, conditional on child attendance at school and the application of preventative health measures for the students. The randomized pilot was done in association with the International Food Policy Research Institute (IFPRI). There were positive and significant responses for both children (a 23% reduction in illness incidence, an 18% reduction in anaemia, and a 1-4% gain in height) and adults (a 19% reduction in days lost to illness). There were furthermore increases in school enrollment of 3.4% for grades 1-8, and a total enrollment increase specifically for girls at the end of grade 6 of 14.8% [Gertler and Boyce, 2001, Schultz, 2004].4

4 It is worthwhile to note that this project has been adopted, with randomized pilots, in several other Latin American countries: the Family Allowance Program (FAP) in Honduras [Research, 2000], a conditional cash transfer program in Nicaragua [Maluccio and Flores, 2005], and a conditional cash transfer program in Ecuador [Schady and Araujo, 2006].


2.1.2 Programs in Kenya

A school meal program in Kenya showed increases in attendance of 30% and increases in test scores of 0.4 standard deviations (where teachers had received training prior to the program) [Vermeersch and Kremer, 2004]. Another program, in conjunction with the NGO ICS5, provided school uniforms, textbooks and classroom construction to 7 out of 14 randomly selected schools. There was a significant decrease in dropout rates, and after 5 years students in the treatment schools had completed 15% more schooling. There was an increase in class size at the treatment schools, but this had no visible effect on students' test scores [Kremer et al., 2002]. A randomized program that introduced textbooks to Kenyan schools showed an increase in test scores of approximately 0.2 standard deviations, but only for students in the top one or two quintiles of pre-intervention tests; students below these levels did not benefit. Consequently, a prospective randomized evaluation was initiated to test whether flip-charts were useful; it found no significant impact from their use [Glewwe et al., 2004a,b].

5 Internationaal Christelijk Steunfonds Africa.

2.1.3 Programs in India

Duflo and Kremer [2005] comment on work with Seva Mandir, an NGO in rural India, investigating whether hiring additional, preferably female, teachers for non-formal schools teaching numeracy and literacy would be a worthwhile project. They found that attendance for girls increased by approximately 50%. Moreover, the schools that were allocated a second teacher were closed less often (39%) than their one-teacher counterparts (44%). However, there were no visible increases in test scores. Consequently, the NGO chose to end the program and allocate the resources elsewhere. Duflo and Hanna [2006] also worked with Seva Mandir to investigate another method to incentivise teacher attendance. Using both a financial incentive (wages as a function of attendance) and several methods to monitor teacher attendance, Duflo and Hanna [2006] show that their program resulted in lower teacher absenteeism (24%) relative to the control schools (43%). This had a dramatic impact on student performance, with students in the treated schools obtaining test scores approximately 0.17 of a standard deviation higher than the control group. A remedial education program that hired additional remedial teachers showed increased test scores of 0.39 standard deviations overall, with students in the lowest third of the class gaining 0.6 standard deviations after two years [Banerjee et al., 2007].

2.2 Health Care Programs

Miguel and Kremer [2004] investigated a biannual mass-treatment de-worming program in Kenyan schools. Attendance and health of children in treated schools improved, as did those of children in nearby schools because of reduced transmission. Absenteeism was 25% lower in treatment schools than in control schools. Including the spillover benefits, the treatment increased schooling by 0.15 years per person.

Government-sponsored private-public collaborations on the quality of health care were conducted in Cambodia, targeting mother and child health [Bloom et al., 2006]. They showed an increase of 0.5 of a standard deviation in targeted health outcomes for the treated groups. There were minimal changes in non-targeted outcomes. Government expenditure moreover offset the private expenditure of individuals and resulted in behaviour shifts: individuals switched from unlicensed drug sellers and traditional healers to government clinics.

3 Sampling

There are basic issues that need to be dealt with in sampling - who gets what, and why, are the most basic. It is not necessary for our purposes to go into detail on statistical power and sample sizes, for which it is almost always better to have a large sample than a small one; a rough power calculation is sketched below. However, in the context of sampling, there are several factors worthy of consideration:

1. multiple treatment groups

2. randomization at the group level

3. partial compliance

4. control variables

5. stratification
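As an illustration of why sample size matters (an added sketch, not part of the original text), the following computes the approximate sample size per arm needed to detect a given standardized effect with a two-sided 5% test and 80% power, using the standard normal-approximation formula n ≈ 2(z_{α/2} + z_β)² / δ². The effect sizes and defaults are purely illustrative.

```python
from statistics import NormalDist

def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-arm comparison of means under
    individual-level randomization (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# Detecting a 0.2 standard deviation effect needs roughly 390+ per arm;
# a 0.5 standard deviation effect needs only about 63 per arm.
for delta in (0.2, 0.5):
    print(delta, round(sample_size_per_arm(delta)))
```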

3.1 Multiple Treatment Groups

This first problem exists when a program provides multiple treatments. For example, if there are two treatments, the evaluator would need a group for the first treatment T1, a group for the second treatment T2, and a control group C. In order for the control group to be effective it would have to be double the size of either of the treatment groups, or alternatively as large as the sum of the two treatment groups. Thus if n1 is the sample size of T1 and n2 that of T2, the sample size of the control group, nC, would have to be bound by the condition nC = n1 + n2.6 The general conclusion is that if there are multiple treatments occurring concurrently then there needs to be a larger control group to cater for this; a minimal assignment sketch follows.

6 Alternatively, the condition could be conceived as nC = 2n1 if n1 > n2, or nC = 2n2 if n2 > n1.
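A minimal sketch (illustrative only; the arm names and sizes are invented) of allocating a sample across two treatment arms and a control group sized according to the rule above:

```python
import random

random.seed(0)

n1, n2 = 250, 250          # sizes of the two treatment arms
n_control = n1 + n2        # control sized to the sum of the treatment arms

units = list(range(n1 + n2 + n_control))
random.shuffle(units)

assignment = {
    "T1": units[:n1],
    "T2": units[n1:n1 + n2],
    "C": units[n1 + n2:],
}

for arm, members in assignment.items():
    print(arm, len(members))
```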


3.2 Randomization at the group level

One of the problems that may be encountered in sampling is that an intervention is randomized at the group level while individual data are recorded from this group-level intervention. The problem is that individuals within a given community could be negatively (or positively) affected by some common shock, with the consequence that their individual outcomes are correlated. The programme evaluator could therefore believe (falsely) that increasing the sample size of individuals within a group will give their evaluation more power. It is more accurate to say that, at the margin, the evaluator will gain more information from the addition of a new cluster or group than from the addition of a new individual to an already existing group. Hence, if an intervention is provided at the group level, such as to a school, crèche or community organisation, the evaluator should have a large sample of these groups, not merely of individuals within them. The addition of new groups helps to cater for the possibility of intra-group shocks that could affect a number of individuals in a significant manner, as the design-effect sketch below illustrates.
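One common way to formalise this (an addition here, not something the original text states) is the design effect for cluster randomization, DEFF = 1 + (m - 1)ρ, where m is the cluster size and ρ the intra-cluster correlation; the effective sample size is the raw sample divided by DEFF. The figures below are invented and only illustrate that adding clusters raises the effective sample size far more than enlarging existing clusters.

```python
def effective_sample_size(n_clusters, cluster_size, icc):
    """Effective sample size under the usual design-effect formula
    DEFF = 1 + (m - 1) * icc, where m is the cluster size."""
    n = n_clusters * cluster_size
    deff = 1 + (cluster_size - 1) * icc
    return n / deff

icc = 0.10  # illustrative intra-cluster correlation

base = effective_sample_size(40, 25, icc)              # 40 clusters of 25
more_individuals = effective_sample_size(40, 50, icc)  # double the cluster size
more_clusters = effective_sample_size(80, 25, icc)     # double the number of clusters

# Adding clusters roughly doubles the effective sample;
# adding individuals within clusters adds far less.
print(round(base), round(more_individuals), round(more_clusters))
```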

3.3 Partial Compliance

A significant problem that can be faced by policy-makers and evaluators is partial compliance by individuals in the sample. For example, if there were approximately an 80% level of compliance by the treated group, then the entire sample would have to be approximately 50% larger in order to obtain effects comparable to those from a group with 100% compliance. Thus it falls to the evaluator and policy-makers to make cost-benefit decisions between a more comprehensive program with a higher level of compliance and a program with a lower level of compliance that requires larger sample sizes in order to achieve the same level of power. It may happen that a more comprehensively considered program with higher (predicted) compliance levels is in fact less expensive to implement than a project with lower compliance levels. The sketch below shows where such sample-size adjustments come from.
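To see where numbers like this come from (an illustrative addition, assuming the standard intention-to-treat scaling rather than anything stated in the original), the required sample is often inflated by roughly 1/c², where c is the share of the treatment group that actually complies; for 80% compliance this gives about a 56% increase, close to the rough 50% figure mentioned above.

```python
def inflation_factor(compliance_rate):
    """Approximate factor by which the sample must grow, relative to
    full compliance, when only a fraction of the treatment group
    actually takes up the treatment (intention-to-treat scaling)."""
    return 1.0 / compliance_rate ** 2

for c in (1.0, 0.9, 0.8, 0.5):
    print(f"compliance {c:.0%}: sample must be about {inflation_factor(c):.2f}x as large")
```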

3.4 Control Variables

Controlling for specific observable outcomes, such as test scores in education, can help the evaluator by reducing the variance of the estimated coefficients.7 Moreover, controlling for pre-intervention outcomes and observables reduces the 'noise' in the outcomes measured subsequent to the intervention, i.e. it improves the precision of the estimated parameters. It is greatly advised to use pre-intervention surveys of the sample to capture as much information as possible given the degrees of freedom and the sample size. A small simulation of this variance reduction follows.

7 On the assumption that there is stratification; otherwise there may be a concomitant increase in the β̂, the parameter estimates.
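The following simulation (an added illustration with invented data; it uses only numpy and is not drawn from the studies reviewed) compares the spread of the estimated treatment effect with and without controlling for a pre-intervention outcome that is strongly correlated with the final outcome.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_trial(n=200, true_effect=0.2):
    treat = rng.integers(0, 2, size=n)             # random assignment
    baseline = rng.normal(size=n)                  # pre-intervention outcome
    outcome = true_effect * treat + 0.8 * baseline + rng.normal(scale=0.6, size=n)

    # Simple difference in means (no controls).
    naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

    # OLS with the baseline outcome as a control variable.
    X = np.column_stack([np.ones(n), treat, baseline])
    beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
    return naive, beta[1]

estimates = np.array([one_trial() for _ in range(2000)])

# Both estimators are centred on the true effect, but the one that
# controls for the baseline outcome has a noticeably smaller spread.
print("sd without control:", round(float(estimates[:, 0].std()), 3))
print("sd with baseline control:", round(float(estimates[:, 1].std()), 3))
```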


3.5 Stratification

Stratification allows for the balancing of data such that the data can be representative of specific groups. For example, stratification can occur by area, gender, or other observable characteristics. By allocating individuals to these different 'blocks' or strata, the controls for these variables become far more precise than they would have been without stratification. Stratification also ensures that there is a representative sample of the population at hand. If a truly random sample is drawn, it is entirely possible that certain observable groups could be left out of the treatment or control groups respectively; stratification ensures that this cannot occur. However, it is necessary to consider how strongly the variable by which the researcher chooses to stratify predicts the outcomes of interest, and the order in which stratification occurs. If the sample were to be stratified by income and gender, it could be more effective to stratify by income only, rather than by gender and then income, if income is the more pertinent predictor of the outcome variable. Another important factor when deciding to stratify is whether the researcher intends to investigate the effects of the treatment on specific sub-groups within the sample. This requires representation by, and a large enough sample of, the sub-group in order for the conclusions to be powerful. A stratified assignment sketch is given below.
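A minimal sketch (illustrative, not from the text; the strata, sample size and labels are invented) of stratified random assignment: within each stratum defined by gender and an income band, half the individuals are assigned to treatment, which guarantees balance on the stratifying variables.

```python
import random
from collections import defaultdict

random.seed(7)

# Hypothetical individuals: (id, gender, income_band).
people = [(i, random.choice("MF"), random.choice(["low", "high"])) for i in range(400)]

# Group individuals into strata, then randomize within each stratum.
strata = defaultdict(list)
for person in people:
    strata[(person[1], person[2])].append(person)

treatment, control = [], []
for members in strata.values():
    random.shuffle(members)
    half = len(members) // 2
    treatment.extend(members[:half])
    control.extend(members[half:])

# Each stratum contributes (almost) equally to both arms, so neither
# group can end up missing, e.g., low-income women entirely.
print(len(treatment), len(control))
```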

4 Additional Design Issues: Timing, Tracking, Targeting and Sequencing

In addition to the issues considered previously there are several other important factors:

1. the timing of programs,

2. whether the individuals in a program can be viably tracked,

3. who to target and why,

4. and whether there should be specific sequencing in programs with multiple interventions or different treatments.

Each of these factors provides further complexities to the policy-maker.

4.1 Timing

Timing has to do with two major factors: the time at which the intervention occurs, so as to maximise the effectiveness and efficiency of its operation, and the length of time for which the program runs before it is assessed and declared successful or not. Most programs operate with some partner, be it government or an NGO, in order to have as much contact with the treatment groups as possible, so as to maximise their exposure to the treatment and therefore the response by the treated individuals to the intervention. In most cases the start of an intervention coincides with the beginning of a new year, a new school year, or the launch of a new (NGO-driven) program independent of these external factors. The length of exposure is driven mainly by the data collection protocols and also by the understanding that there can be lags in responses to an intervention. With health care programs, for example, the intervention can only be judged effective once better health outcomes (BMI, height-weight ratios, incidence of absenteeism, etc.) can be definitively measured. Thus the length of the program hinges on what the outcome variable of concern is and whether there is sufficient time in the program for there to be a change in that variable. In most cases this results in programs requiring a minimum of a year's exposure and monitoring. It is preferable, as with tracking discussed below, to follow the individual observations so that long-term influences from an intervention can be ascertained. This is specifically the case with early childhood development programs, where the influences of an intervention can be seen far further down the line than with other interventions.

4.2 Tracking

Tracking individual observations generally requires a good relationship between the researchers, the research partners and the communities and individuals involved in the programs. As can be seen in almost any research paper written on the subject, the thanks extended to these partners and to the communities themselves attest to their importance in the formation of the research. This matters because the researcher needs to guard against attrition. To do so they need to ensure, to a greater or lesser extent, that their interests and the interests of the concerned parties are aligned, i.e. that there is a sufficient incentive to remain in a program, or a sufficiently large disincentive to leaving the program. A problem arises, however, when attrition is related to some unobservable characteristic for which the researchers and policy-makers cannot control. With health interventions, individuals who feel healthier as a result of an intervention may stop complying with the stipulations of the intervention and either drop out or produce effects that bias the results. In the balsakhi program, Banerjee et al. [2007] ensured that when students did not appear at school the data collectors went to their homes in order to collect the data and track the individuals. This resulted in lower attrition.8

8 Not only does a situation like this provide improved data, but it shows that the researchers are concerned with the individuals and therefore makes the researchers, whom some may regard as outsiders, more human and more accessible.

4.3 Targeting

Targeting is a consideration both for measurement, the targeting of outcomes, and for sampling, the targeting of specific groups. Bloom et al. [2006] comment that programs that target specific outcomes produced far more significant results than non-targeted interventions did. They argued that particular health care considerations, such as child immunization, should be targeted, as they result in improved outcomes for treated individuals relative to instances in which outcomes are not targeted. The intuition behind this is that non-targeting leaves too much up to the individual clinic or NGO and can result in efficiency losses in terms of the money spent per specific outcome.

In terms of targeting specific groups, this is covered in the stratification section. The basic intuition is that one needs to maintain a random sample, and therefore one method by which to capture specific groups within the sample is through stratification.

Another consideration with targeting, for a program such as PROGRESA, is who receives the cash transfer. Previously, certain policies gave money to the household head, who was often male, and it has been difficult to detect visible positive welfare outcomes from such transfers. Consequently, targeting of mothers and grandmothers in households has occurred. In South Africa, Duflo [2001] found a correlation between grandmothers receiving the old age pension (OAP) and the welfare of their granddaughters. Hence, this could be particularly salient in South African policy-making.

4.4 Sequencing

Another consideration is the sequencing of projects or interventions. Where multiple interventions are being tested to determine which is the most efficient and effective, the sequencing of interventions should be a nuanced and technical decision. Duflo et al. [2006] describe how the de-worming program in Kenya was sequenced so as to ensure both a fair distribution of the treatment and a sound research protocol: there were 75 schools; in the first phase (1998) 25 of these were treated and contrasted with the 50 untreated schools; in the second phase (1999) an additional 25 schools were brought onto the program and the remaining 25 served as a control; and in the last phase (2000) the remaining 25 were provided with the treatment. For multiple programs there could be a similar situation. With 120 groups or clinics, for example, 30 could be given the intervention in the first phase and contrasted with the remaining 90; in the second phase an additional 30 could be provided with the second intervention type and contrasted with the 30 already-treated groups and the 60 control groups; and in the last phase the remaining sixty could be provided with the most effective intervention, with the more effective intervention also provided to the group that received the less effective one in phase one or two. The ethical consideration of which group to treat first, and why, becomes moot if all groups will receive the treatment eventually and if we are trying to ensure that they receive the optimal treatment. A sketch of such a phase-in assignment follows.
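The following sketch (an illustrative addition; the school labels are invented, though the counts mirror the Kenyan example above) randomly orders clusters into phases, so that every cluster is eventually treated and the not-yet-treated clusters serve as controls for the earlier phases.

```python
import random

random.seed(3)

schools = [f"school_{i:02d}" for i in range(75)]
random.shuffle(schools)

# Randomized order of phase-in: 25 schools per phase, echoing the
# de-worming programme's 1998/1999/2000 roll-out.
phases = {
    1998: schools[:25],
    1999: schools[25:50],
    2000: schools[50:],
}

for year, group in phases.items():
    treated_so_far = sum(len(phases[y]) for y in phases if y <= year)
    print(year, "newly treated:", len(group),
          "still untreated (controls):", 75 - treated_so_far)
```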

5 Scaling Up Existing Programs

There are four possible ways either to scale up programs that already exist into randomized programs, or to create a program such that it will be able to have fair random selection processes.

1. Oversubscription Method: When the demand for a good (such as a loan) exceeds the supply of that good, the logical, and probably the fairest, method of allocating the resources is by a process of randomization. This then allows interpretation of results for those groups or individuals for whom the allocation was randomized.

2. Randomized Order of Phase-In: When there are limited implementation resources, the fairest way to allocate these resources is again to randomize who gets the resources first and who gets them later. Moreover, with additional phases of the treatment, randomized phase-in acts as an incentive for those groups that currently act as controls for the treatment groups to remain in contact with the researchers in anticipation of future benefits. This can therefore lower attrition. However, there are complexities with timing, and thus with measuring and contrasting the effects of programs.

3. Within-Group Randomization: If randomized phase-in is an insufficient incentive for data collection, then within-group randomization9 is a viable alternative. Randomization would then work through a within-group process; for example, within a school, different grades could be used. In the education program reviewed by Banerjee et al. [2007], grade 3 students were treated in one school where grade 4 students were untreated, and grade 4 students were treated in another school that had untreated grade 3 students.

4. Encouragement Designs: This allows the assessment of projects that are available to all, but for which take-up is not universal. Under this approach, individuals or groups are encouraged to use a specific good or to take part in a program. The randomization is therefore not over the treatment itself, but rather over the encouragement to use the resource. However, the encouragement only affects the probability that any individual will take advantage of the program, rather than determining whether they actually use it. This creates complex analytical issues beyond the breadth of this report, though a stylised estimate is sketched below.

9 As occurred in India with the remedial school program.
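As an illustration of the extra analytical step an encouragement design requires (this is an added sketch with invented numbers, not the report's own analysis), a standard approach scales the difference in outcomes between encouraged and non-encouraged groups by the difference in take-up rates - a Wald-type estimate.

```python
def encouragement_effect(y_encouraged, y_not_encouraged,
                         takeup_encouraged, takeup_not_encouraged):
    """Wald-type estimate: the effect of encouragement on the outcome,
    rescaled by how much the encouragement actually raised take-up."""
    itt = y_encouraged - y_not_encouraged                    # intention-to-treat
    first_stage = takeup_encouraged - takeup_not_encouraged  # change in take-up
    return itt / first_stage

# Invented numbers: encouragement raises take-up from 20% to 60% and the
# mean outcome by 0.08; the implied effect of the program itself on those
# induced to take it up is 0.08 / 0.40 = 0.20.
print(encouragement_effect(0.58, 0.50, 0.60, 0.20))
```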

6 Conclusion

There are many problems confronting any policy-maker wishing to do impact evaluation. From issues with sampling to problems with tracking individuals, the researcher is fighting an uphill battle. However, randomized evaluation offers the optimal method by which to assess whether an intervention actually does anything. At the crux of development policy is the question: have we developed? Randomized evaluation offers a method by which to answer this question convincingly, rigorously and in a manner unbiased by political rhetoric. It is the optimal method by which to assess the truth of a policy intervention's outcomes, and from which the policy-maker can make warranted and powerful claims about the success or failure of government interventions.

References

Joshua Angrist and Victor Lavy. Using Maimonides' rule to estimate the effect of class size on scholastic achievement. Quarterly Journal of Economics, 114(2):533-575, 1999.

Abhijit Banerjee, Esther Duflo, Shawn Cole, and Leigh Linden. Remedying education: Evidence from two randomized experiments in India. Quarterly Journal of Economics, forthcoming, 2007.

E. Bloom, I. Bhushan, D. Clingingsmith, R. Hung, E. King, M. Kremer, B. Loevinsohn, and B. Schwartz. Contracting for health: Evidence from Cambodia. Mimeo, 2006.

J.J. Diaz and S. Handa. An assessment of propensity score matching as a nonexperimental impact estimator. Journal of Human Resources, forthcoming.

Esther Duflo. Grandmothers and granddaughters: Old age pensions and intrahousehold allocation in South Africa. World Bank Economic Review, 2001.

Esther Duflo and Rema Hanna. Monitoring works: Getting teachers to come to school. NBER Working Paper No. 11880, 2006.

Esther Duflo and Michael Kremer. Use of randomization in the evaluation of development effectiveness. In Evaluating Development Effectiveness, volume 7, pages 205-232. Transaction Publishers, UK, 2005.

Esther Duflo, Rachel Glennerster, and Michael Kremer. Using randomization in development economics research: A toolkit. 2006.

P.J. Gertler and S. Boyce. An experiment in incentive based welfare: The impact of PROGRESA on health in Mexico. Mimeo, 2001.

Paul Glewwe, Michael Kremer, and Sylvie Moulin. Textbooks and test scores: Evidence from a prospective evaluation in Kenya. Mimeo, 2004a.

Paul Glewwe, Michael Kremer, Sylvie Moulin, and Eric Zitzewitz. Retrospective vs. prospective analyses of school inputs: The case of flip-charts in Kenya. Journal of Development Economics, 74(1):251-268, 2004b.

Michael Kremer, Sylvie Moulin, and Robert Namunyu. Unbalanced decentralization. Mimeo, 2002.

J.A. Maluccio and R. Flores. Impact evaluation of a conditional cash transfer program. Discussion Paper, 2005.

Edward Miguel and Michael Kremer. Worms: Identifying impacts on education and health in the presence of externalities. Econometrica, 72(1):159-218, 2004.

International Food Policy Research Institute. Monitoring and Evaluation System, 2000. Discussion Paper.

N.R. Schady and M.C. Araujo. Cash, conditions, school enrollment, and child work: Evidence from a randomized experiment in Ecuador. Unpublished manuscript, 2006.

T. Paul Schultz. School subsidies for the poor: Evaluating the Mexican PROGRESA poverty program. Journal of Development Economics, 74(1):199-250, 2004.

Christel Vermeersch and Michael Kremer. School meals, educational achievement and school competition: A randomized evaluation. World Bank Policy Research Paper No. 3523, 2004.

