Further Improvements in the Calculation of Censored Quantile Regressions

Mehdi Hosseinkouchack†

December 29, 2010

Abstract

Censored Quantile Regressions of Powell (1984, 1986) are very powerful inference tools in economics and engineering. As the calculation of censored quantile regressions involves minimizing a nonconvex and nondifferentiable function, global optimization techniques may be the only viable approach. The first implementation of a global optimization technique, namely the Threshold Accepting of Fitzenberger and Winker (1998, 2007), is challenged by a Genetic Algorithm (GA) in this paper. The results show that GA provides substantial improvements over Threshold Accepting for cases with randomly distributed censoring points.

Keywords: Censored Quantile Regression, Genetic Algorithms, Threshold Accepting, Simulated Annealing, Global Optimization

† The author is thankful to Bernd Fitzenberger and Uwe Hassler for helpful comments. Goethe University Frankfurt, E-mail: [email protected]


1 Introduction

Analysis of censored data first appeared in the work of Tobin (1958), where he introduced the Tobit model; the estimation technique he proposed was a Maximum Likelihood Estimator (MLE). Amemiya (1973) presented the properties of Tobin's model along with an extension. Koenker and Bassett (1978) introduced quantile regressions and presented their properties. The framework of quantile regressions is such that Tobin's (mean-regression) approach does not carry over to them, hence one could not readily obtain regression quantiles in the presence of censoring. It was Powell (1984, 1986) who introduced a model that incorporates censoring points into quantile regressions and coined the term Censored Quantile Regressions (CQR). Powell's model, for right censoring, can be written as

min_β (1/n) Σ_{i=1}^n ρ_θ( Y_i − min{ y_{ci}, X_i'β } )    (1)

where n is the number of observations, {Y_i, X_i}_{i=1}^n denote the data with X_i being the vector of explanatory variables of size k, y_{ci} is the censoring point for observation i, and ρ_θ(x) = [θ − I(x < 0)]x is the check function, with I(·) the usual indicator function. This function is non-convex and non-differentiable and may have multiple local optima. A graphical representation makes the form of Powell's score clear. Draw a sample of size 10, with independent elements, from a standard normal distribution, i.e. x_i ~ N(0, 1) for i = 1, 2, ..., 10, and let the iid error terms ε_i have a standard normal distribution as well. Define a right-censored dependent variable y_i = min{0.1x_i + ε_i, 0}. The score function of a median quantile regression can then take the form plotted in figure 1, where the x-axis represents β and the y-axis represents the value of the score function. The score function (1) has its two local minima close to each other when θ = 0.5 (figure 1), while its local minima fall far apart from each other for θ = 0.1 (figure 2). This example shows that (1) cannot be solved using classical optimization methods, such as those that require a gradient.


Figure 1. An example for Powell's score with θ = 0.5 (x-axis: β; y-axis: value of the score function).

Figure 2. An example for Powell's score with θ = 0.1 (x-axis: β; y-axis: value of the score function).
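To make the preceding example concrete, the following minimal Python sketch (the function and variable names are ours) evaluates Powell's objective (1) on a grid of candidate slopes for the right-censored design y_i = min{0.1x_i + ε_i, 0} described above; plotting scores against grid reproduces the shape of figures 1 and 2.

    import numpy as np

    def check_function(u, theta):
        # rho_theta(u) = [theta - I(u < 0)] u, the quantile-regression check function
        return (theta - (u < 0)) * u

    def powell_objective(beta, Y, X, yc, theta):
        # Objective (1): average check loss at the censored fitted values min{yc_i, X_i'beta}
        return np.mean(check_function(Y - np.minimum(yc, X @ beta), theta))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10)                                # x_i ~ N(0, 1), i = 1, ..., 10
    y = np.minimum(0.1 * x + rng.standard_normal(10), 0.0)     # right-censored at zero
    X = x.reshape(-1, 1)                                       # single regressor, no intercept
    yc = np.zeros(10)                                          # censoring points
    grid = np.linspace(-0.5, 0.5, 201)
    scores = [powell_objective(np.array([b]), y, X, yc, theta=0.5) for b in grid]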

CQRs have been appealing to econometricians mainly for two reasons. First, quantile regressions estimate an entire family of conditional quantile functions and thereby provide a richer statistical inference framework. Second, they can take into account censored data, which arise because of institutional policies or inherent features of some economic settings, e.g. labor market analysis (see for example Fitzenberger and Wilke (2006)). Exploiting the inferential power of CQRs led to many research papers aimed at facilitating the computation of CQRs. The first important and influential algorithm to solve (1) was introduced by Buchinsky (1994). The algorithm, called the Iterative Linear Programming Approach (ILPA), is based on the idea that, regardless of the censoring, one can still use standard linear programming techniques to solve (1). As documented by Chernozhukov and Hong (2002), ILPA fails to always provide a global solution to (1). Koenker and Park (1996) developed a general interior point algorithm. Fitzenberger (1997) provided an algorithm based on that of Barrodale and Roberts (1973). Fitzenberger and Winker (1998, 2007) used Threshold Accepting, a (meta)heuristic algorithm and one of the focal points of the current paper, which we consider in more detail later. Buchinsky and Hahn (1998) estimate the probability of censoring for each data point and use it to modify Powell's model into a convex distance function, which can easily be minimized using standard linear programming techniques. Honoré, Khan and Powell (2002) extend the model of Buchinsky and Hahn (1998) to incorporate random censoring points, proposing a Kaplan-Meier technique for estimating the censoring distribution. They call their estimator the Randomly Censored Least Absolute Deviations (RCLAD) estimator. Their simulation study shows that RCLAD is at least slightly better than its counterparts in terms of different statistical measures such as RMSE. Chernozhukov and Hong (2002) introduced a 3-step estimator for CQRs which resembles ILPA in some sense. CQRs with random censoring points are also treated by Chernozhukov and Hong (2003) using a Markov Chain Monte Carlo (MCMC) estimation framework in a Bayesian setup. In contrast to all the previous works, which rely mainly on the early work of Powell (1984, 1986), Portnoy (2003) provides a different modeling approach to estimate CQRs. Portnoy (2003) splits each censored data point into one lying at the observed value, Y_i, and one at +∞, represented by a large value Y_∞, and assigns weights of w_i(θ) and 1 − w_i(θ) to each, respectively. For a right-censored model, w_i(θ) is the probability that the ith observation lies below the desired conditional quantile of the dependent variable given that it is censored. This leads to the following model for obtaining CQRs.

min_β Σ_{i ∉ K(θ)} ρ_θ( Y_i − X_i'β ) + Σ_{i ∈ K(θ)} { w_i(θ) ρ_θ( Y_i − X_i'β ) + (1 − w_i(θ)) ρ_θ( Y_∞ − X_i'β ) }    (2)

where ρ_θ(x) is defined as before, K(θ) is the set of censored data points crossed up to quantile level θ, and Y_∞ is a big number. The estimation framework is recursive in the sense that it starts with the estimation of a quantile regression in a vicinity of zero and moves upwards towards the desired quantile level. This means that the weights, as defined before, can change from one quantile level to the next higher level, because some censored data points may be crossed. This reweighting makes the estimation procedure quite complicated and the asymptotics hard to establish; however, Portnoy (2003) and Neocleous et al. (2006) elaborate on providing them. It is worth noting that Portnoy's approach reduces to the Kaplan-Meier estimator of the survival function, hence it is a generalization of the Kaplan-Meier survival function estimator to quantile regressions. Peng and Huang (2008) argue that the estimation framework of Portnoy (2003) is far too complicated to be fully understood. To ease this theoretical complication they propose another estimation framework, a generalization of the Nelson-Aalen survival function estimator, and establish the uniform consistency and weak convergence of their estimator. Koenker (2008) performed a simulation study comparing Peng and Huang (2008), Portnoy (2003), and Powell's model. For the model of Powell he used BRCENS, and all his simulations were carried out in R, where all these methods are already implemented in the quantreg package. In a one-sample setting, Koenker (2008) shows that using Kaplan-Meier leads to some efficiency gain over Powell's model; however, his simulation study does not really support this idea in a more general setting. The discussion of Portnoy (2003) and Peng and Huang (2008), and their comparison to previous models, is beyond the scope of this paper. In short, Portnoy (2003) and Peng and Huang (2008) developed methods that are substantially different from the work of Powell (1984, 1986) and that, based on the discussion provided by Koenker (2008), do not seem to provide substantial improvements in the calculation of CQRs. This argument inspires further investigation of Powell's model using different global optimization techniques. In what follows we provide an implementation of Genetic Algorithms for the estimation of CQRs. Applying a global optimization algorithm to obtain CQRs was first proposed by Fitzenberger and Winker (1998), who, as mentioned before, proposed a Threshold Accepting algorithm. As documented by Fitzenberger and Winker (1998, 2007), TA outperforms many other approaches to obtaining CQRs, a finding which inspires further


investigation of global optimization techniques. Genetic Algorithm (GA), among many other global optimization algorithms, is the focus of the current paper as a means to solve (1). GA was applied to obtain CQRs by Czarnitzki and Doherr (2002) before; however, their implementation ignored a very important feature of CQRs, namely the Interpolation Property, which will be discussed in the next section. This renders their implementation of GA for CQRs deficient. The structure of the paper is as follows. The next section discusses Threshold Accepting as implemented by Fitzenberger and Winker (1998, 2007). In section 3, we describe the Genetic Algorithm and discuss its implementation for solving (1). In section 4 we provide a replication of the simulation study of Fitzenberger and Winker (1998, 2007) and explain why we chose the same simulation design. In section 5 we discuss the results, and finally we conclude in section 6.

2 Threshold Accepting

Threshold Accepting (TA) is a global optimization algorithm first introduced by Dueck and Scheuer (1990) to solve the traveling salesman problem (TSP), one of the most well-known combinatorial problems. Fitzenberger and Winker (1998, 2007) implemented TA to compute CQRs. Before giving a description of the algorithm, it is necessary to consider the Interpolation Property (IP). IP states that a solution to (1) interpolates exactly k data points, where k is the length of β. Using IP, the search space for solving (1) becomes discrete, because a solution of (1) is characterized by k data points. Let E be the search space; it consists of all k-tuples of data points with linearly independent regressor vectors.1 The description of the algorithm is simple. Let m_c ∈ E be the current solution of the algorithm, and let B(m_c) ⊂ E define a neighborhood of m_c. The algorithm moves to m ∈ B(m_c) if the score function at the new point is lower; otherwise, the decision to move to m is made if the deterioration of the score is below a threshold value. This threshold converges to zero as the algorithm iterates forward. Key elements of this algorithm are the definition of the neighborhood B(m_c) and the threshold sequence. As Winker (2001) discusses, B(m_c) is purely dependent on the specification of CQRs.

1 The maximum number of elements of E is n!/((n − k)! k!).


If B(m_c) has too small a radius, then the local movements will be constrained; on the other side, too big a radius leads to an inefficient search. Fitzenberger and Winker (1998, 2007) define B(m_c) to be those elements of E that are constructed by replacing one element of m_c with another data point which is linearly independent of the other elements of m_c and strictly reduces the absolute size of the residual y_i − X_i'β, where β is characterized by m_c.

A pseudo code of TA is sketched below.

1. Initialize TA in two steps.
   (a) Choose a threshold sequence T_i, i = 0, ..., I_max. (The procedure to construct the threshold sequence is described below.)
   (b) Set i = 0, generate an initial solution m_c, get the corresponding β, and evaluate the objective function of (1) at it, f_{m_c}.
2. Define the elements of B(m_c).
3. Choose a random solution m in B(m_c), get its corresponding β, and evaluate the objective function of (1) at it, f_m.
4. Calculate the change in the objective function of (1), Δ = f_m − f_{m_c}.
5. If Δ < T_i, then set m_c = m and update B(m_c).
6. If i < I_max, then set i = i + 1 and go to step 3; otherwise terminate the algorithm with m_c.

As mentioned before, the neighborhood structure and the threshold sequence {T_0, T_1, ..., T_{I_max}} are the key elements of the Threshold Accepting algorithm. The length of the threshold sequence, i.e. the maximum number of iterations that TA takes, is also an important factor in convergence to the global optimum (see for example Althöfer and Koschnick (1991)). To obtain the threshold sequence, Fitzenberger and Winker (2007) suggest choosing a large number of random solutions in E. For each of these solutions, construct the neighborhood set B and choose a random neighbor. Then calculate the value of the objective function at each initial point and at its corresponding substitute, and for each pair compute the absolute change in the objective function (1) divided by the smaller of the two objective values.


These ratios, after being sorted, represent the threshold sequence. For a convergence analysis of TA we refer the reader to Fitzenberger and Winker (2007). Fitzenberger and Winker (1998, 2007) carried out a very extensive simulation study comparing ILPA, BRCENS, TA, and NLRQ (Nonlinear Regression Quantile, developed in Koenker and Park (1996)). They also propose using TA's final point as a starting point for BRCENS to make further improvements, and call this procedure TB. In short, their simulations show that TB has the best performance: it almost always provides a lower value for the objective function of (1), and at the same time it exhibits lower values for the RMSE and for the 90% quantile of the absolute deviations from the true parameters. A final note is that, as discussed by Winker (2001), the definition of the neighborhood structure influences the performance of the algorithm. To see such an effect, as far as a simulation study allows, we define another structure for B, called TA-unres, for which we leave one element of the current solution m_c out and replace it with any other data point that does not decrease the rank of m_c. The implementation of Fitzenberger and Winker (1998, 2007) for B, which we call TA-res, restricts the neighborhood to a local set of quantile hyperplanes, which might not always be desirable. However, as the simulation results in section 5 show, these two implementations of TA are not substantially different, hence we only report on TA-res.
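The following Python sketch illustrates the TA loop described above for objective (1). It is a simplified illustration, not the code of Fitzenberger and Winker (1998, 2007): it uses the unrestricted neighborhood (TA-unres), a crude mapping from the iteration number to the sorted threshold sequence, and thresholds interpreted as relative changes; all function names and tuning constants are ours.

    import numpy as np

    def ta_cqr(Y, X, yc, theta, i_max=10000, n_pairs=200, rng=None):
        # Threshold Accepting sketch for (1). A candidate solution is a k-tuple of
        # data-point indices (Interpolation Property); beta solves X[idx] beta = Y[idx].
        rng = np.random.default_rng() if rng is None else rng
        n, k = X.shape

        def objective(idx):
            beta = np.linalg.solve(X[idx], Y[idx])
            u = Y - np.minimum(yc, X @ beta)
            return np.mean((theta - (u < 0)) * u)

        def random_tuple():
            while True:
                idx = rng.choice(n, size=k, replace=False)
                if np.linalg.matrix_rank(X[idx]) == k:
                    return idx

        def random_neighbor(idx):
            # TA-unres: swap one interpolated point for any other that keeps full rank
            while True:
                new = idx.copy()
                cand = rng.integers(n)
                if cand in new:
                    continue
                new[rng.integers(k)] = cand
                if np.linalg.matrix_rank(X[new]) == k:
                    return new

        # Threshold sequence: sorted relative objective changes over random pairs,
        # in decreasing order so that the threshold shrinks toward zero
        ratios = []
        for _ in range(n_pairs):
            a = random_tuple()
            b = random_neighbor(a)
            fa, fb = objective(a), objective(b)
            ratios.append(abs(fa - fb) / (min(fa, fb) + 1e-12))
        thresholds = np.sort(ratios)[::-1]

        current = random_tuple()
        f_cur = objective(current)
        for i in range(i_max):
            T = thresholds[min(i * n_pairs // i_max, n_pairs - 1)]
            cand = random_neighbor(current)
            f_cand = objective(cand)
            if f_cand - f_cur < T * f_cur:   # accept improvements and small deteriorations
                current, f_cur = cand, f_cand
        return np.linalg.solve(X[current], Y[current]), f_cur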

3 Genetic Algorithm

Genetic Algorithm is an optimization technique categorized as a global search heuristic. Genetic Algorithms are based on an abstraction of natural evolutionary behavior, originally proposed by Holland (1970). A GA comprises a set of genetically coded solutions (the population) and a set of biologically inspired operators defined over the population itself. According to evolutionary theory, only the best-suited elements in a population are likely to survive and generate offspring, thus transmitting their biological heredity to new generations. The chromosomes in a GA population typically take the form of bit strings, but they can take other forms as well, e.g. vectors, letters, digits, and integers; each element of a chromosome is considered a gene. Each chromosome can be regarded as a point in the search space. GA processes chromosome populations, successively replacing one such population with another. A GA most often requires a fitness function that assigns a score (fitness) to each chromosome in the current population. The fitness of a chromosome depends on how well that chromosome solves the problem at hand; so, when applying a GA to a minimization problem, the fitness function can be defined as the inverse of the objective function. As the GA is based on the notion of survival of the fittest, the better the fitness value, the greater is the chance of survival. The simplest form of GA involves three types of operators: selection, crossover, and mutation. To initiate a discussion of these operators for our GA implementation to solve (1), we need a bit of notation. The most basic step in implementing a GA is providing a representation of the chromosomes. We propose to choose each element of E, as defined in the previous section,2 to represent a chromosome, where each data point used to construct an element of E is considered a gene. A population G in the GA is a subset of E with a known number of elements, say p_c. Crossover is the operator of the GA responsible for generating offspring from the current population; these offspring form the biggest part of the next generation. Let M_1 = [x_{1,M_1}, ..., x_{k,M_1}] and M_2 = [x_{1,M_2}, ..., x_{k,M_2}] be two chromosomes of the current population G_c, which are k × k matrices. Generate a random number r in {1, 2, ..., k} and set v = [x_{1,M_1}, ..., x_{r,M_1}, x_{r+1,M_2}, ..., x_{k,M_2}]. If the rank of v is not k, a new random number r is drawn and v is updated; once v has full rank, the crossover finishes, yielding v as an offspring. Crossover, in fact, mixes the genes of a couple to generate an offspring, a procedure which closely resembles natural reproduction. Mutation is another operator of the GA, which randomly chooses some predefined number of chromosomes from E. To initiate a crossover one needs to choose two chromosomes from G_c. This is done through the selection operator. For each chromosome of G_c, calculate the fitness value, which is the inverse of the objective function (1) at the β vector characterized by that chromosome. Each chromosome of G_c is first assigned a weight w_j, j = 1, ..., p_c, equal to the ratio of its fitness value to the sum of all fitness values; the weights are then sorted. Now, using a random number u ~ unif(0, 1), chromosome j is selected if Σ_{i ≤ j−1} w_i < u ≤ Σ_{i ≤ j} w_i. Given that the number of elements of each generation is p_c, we have to specify the contributions of mutation and crossover in generating the next generation. At each generation we always move the best chromosome to the next generation,3 and assign a fraction 0.9 < λ < 1.0 of the remaining p_c − 1 elements to crossover and 1 − λ to mutation.

2 E contains all the k-tuples of data points with linearly independent regressor vectors, as discussed before. In this way, based on the Interpolation Property (IP), only points that are possibly optimal are searched over.
3 Mitchell (1996) notes that elitism can substantially improve the performance of GA.

Defining a stopping criterion for GA, i.e. how many generations GA should proceed, is necessary. Deciding on the size of each population is also important. To this end we consider 3 different implementations of GA: 100GA20, 100GA50, and 150GA50.4 As we will see in section 5, increasing the dimensions of the GA has a positive effect on its performance in terms of providing a lower level of the objective function of (1) in the simulation study. There are many manuscripts that discuss how the choice of these parameters affects the performance of GA, among which we refer the reader to Goldberg (1989). For convergence results, we refer the reader to Cerf (1998), who established the first well-founded convergence results for GA. A pseudo code for GA is as follows:

1. Choose the maximum number of iterations of GA, I_max, and set i = 0.
2. Choose p_c random chromosomes from E to construct the first generation G_c.
3. Calculate the inverse of the objective function of (1) (the fitness values) for all the chromosomes of G_c.
4. Find the chromosome with the highest fitness and save it as the elite.
5. Perform selection followed by crossover until n_c = [λ(p_c − 1)] chromosomes are generated.5
6. Perform mutation until p_c − n_c − 1 chromosomes are generated.
7. Save the elite and the chromosomes generated in steps 5 and 6 in G.
8. Replace G_c with G and set i = i + 1.
9. If i < I_max go to step 3; otherwise find the elite of G_c and terminate the GA.

GA was applied to solve CQRs by Czarnitzki and Doherr (2002) before; however, their implementation ignores the Interpolation Property, which leads to a deficient search.6

4 In yGAx, y is the number of generations and x is the population size.
5 [x] is the biggest integer smaller than x.
6 The search space proposed and used by Czarnitzki and Doherr (2002) is continuous, while the search space defined by the Interpolation Property and used in the current implementation of GA is discrete and has a bounded and known maximum number of members. Czarnitzki and Doherr (2002) check candidate solutions for optimality that do not satisfy the Interpolation Property; in other words, their search space spans solutions that are surely not optimal. Such surely non-optimal solutions are not few, as a continuous space contains an uncountable number of members. They thus perform a lot of unnecessary calculations, which in turn requires more iterations. The Interpolation Property simply shrinks the search space and hence scales down the optimization problem.
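A minimal Python sketch of the GA just described, using the same Interpolation Property representation. The retry cap in the crossover and the default parameter values are our practical simplifications; lam plays the role of the crossover fraction 0.9 < λ < 1.0.

    import numpy as np

    def ga_cqr(Y, X, yc, theta, pop_size=50, generations=100, lam=0.95, rng=None):
        # Chromosomes are k-tuples of data-point indices; fitness is the inverse of (1)
        rng = np.random.default_rng() if rng is None else rng
        n, k = X.shape

        def objective(idx):
            beta = np.linalg.solve(X[idx], Y[idx])
            u = Y - np.minimum(yc, X @ beta)
            return np.mean((theta - (u < 0)) * u)

        def random_chromosome():
            while True:
                idx = rng.choice(n, size=k, replace=False)
                if np.linalg.matrix_rank(X[idx]) == k:
                    return idx

        def crossover(m1, m2):
            # Splice genes 1..r of m1 with genes r+1..k of m2; redraw r until full rank
            for _ in range(50):
                r = int(rng.integers(1, k)) if k > 1 else 1
                child = np.concatenate([m1[:r], m2[r:]])
                if len(set(child.tolist())) == k and np.linalg.matrix_rank(X[child]) == k:
                    return child
            return random_chromosome()    # fallback if no valid splice is found

        pop = [random_chromosome() for _ in range(pop_size)]
        for _ in range(generations):
            scores = np.array([objective(idx) for idx in pop])
            fitness = 1.0 / (scores + 1e-12)           # inverse of the objective
            probs = fitness / fitness.sum()
            elite = pop[int(np.argmin(scores))]        # elitism: keep the best chromosome
            n_cross = int(lam * (pop_size - 1))
            children = []
            while len(children) < n_cross:
                i, j = rng.choice(pop_size, size=2, p=probs)   # roulette-wheel selection
                children.append(crossover(pop[i], pop[j]))
            mutants = [random_chromosome() for _ in range(pop_size - n_cross - 1)]
            pop = [elite] + children + mutants
        best = min(pop, key=objective)
        return np.linalg.solve(X[best], Y[best]), objective(best)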


4 Simulation Study

The simulation study used here is exactly the same as the one proposed by Fitzenberger and Winker (1998, 2007). We chose the same setting for two reasons. First, their simulation study is very comprehensive and covers a wide range of problems. Second, as we analyze the same problem, namely solving (1), with a different heuristic, using the same simulation study provides the best possible platform for comparing the relative performance of TA and GA. We have 8 different Data Generation Processes (DGPs), called A, B, C, ..., H. All DGPs are based on the following model

y_i = min( y_{ci}, β_1 + Σ_{j=2}^k β_j x_{ij} + ε_i )    (3)

where k = 2, 3, 4. We focus on a median regression, i.e. θ = 0.5. Table 1 shows all the DGPs.

Table 1. Different DGPs used in this paper

DGP  Censoring point  True coefficients     Regressor values
A    C                (0, 0, 0, 0)          x_{i,2}, x_{i,3} ~ N(0, 1)
B    C                (0, 0, 0, 0)          x_{i,2} = −9.9 + 0.2i, x_{i,3} = −x²_{i,2}
C    C + 0.5          (0.5, 0.5, 0.5, 0.5)  x_{i,2}, x_{i,3} ~ N(0, 1)
D    C + 0.5          (0.5, 0.5, 0.5, 0.5)  x_{i,2} = −9.9 + 0.2i, x_{i,3} = −x²_{i,2}
E    N(C, 1)          (0, 0, 0, 0)          x_{i,2}, x_{i,3} ~ N(0, 1)
F    N(C, 1)          (0, 0, 0, 0)          x_{i,2} = −9.9 + 0.2i, x_{i,3} = −x²_{i,2}
G    N(C + 0.5, 1)    (0.5, 0.5, 0.5, 0.5)  x_{i,2}, x_{i,3} ~ N(0, 1)
H    N(C + 0.5, 1)    (0.5, 0.5, 0.5, 0.5)  x_{i,2} = −9.9 + 0.2i, x_{i,3} = −x²_{i,2}

For all the DGPs, C takes the three values 0, 0.5, and 1, and x_{i,4} ~ N(0, 1). The first four DGPs, i.e. A-D, have constant censoring points, while for the rest, E-H, they are normally distributed. DGPs B, D, F, and H have deterministic regressors, and the true parameters are (0, 0, 0, 0) for A, B, E, and F, while they are (0.5, 0.5, 0.5, 0.5) for the rest. Fitzenberger and Winker (1998, 2007) discuss this simulation study in detail.7 For DGPs with 2 or 3 regressors, 1000 samples of


size 100 were generated, while for k = 4 only 100 samples of size 100 were used, in order to cut the computation time. As we see in table 2, there are simulations for which the average censoring frequencies are less than 10 percent.

7 The simulation study of Fitzenberger and Winker (1998, 2007) has an inconsistency in DGPs (D) and (H) for the 3- and 4-regressor cases, for which they report a high level of censoring. In the case of DGP (D) for the 3-regressor model, they report a censoring value of 50.0% in their table 6. Note that when C = 0 this DGP is y_i = min{0.5, −53.455 + 2.08i − 0.02i² + ε_i}; if this led to 50.0 percent censoring, it would mean that Pr(−53.455 + 2.08i − 0.02i² + ε_i ≥ 0.5) = 0.5, which is impossible for standard normal ε_i in this simulation (i = 1, 2, ..., 100). As we see in table 2 of the current paper, the censoring share for this setting is much lower than what is reported by Fitzenberger and Winker (1998, 2007). This mistake repeats itself in setting (D) when the number of regressors is 4, and for setting (H).

Table 2. Average share of censored observations in random samples for various DGPs, as a fraction of one

             k = 2                       k = 3                       k = 4
DGP   C=1.0   C=0.5   C=0.0      C=1.0   C=0.5   C=0.0      C=1.0   C=0.5   C=0.0
A     0.1588  0.3093  0.4996     0.1588  0.3093  0.4996     0.1631  0.3131  0.5043
B     0.1591  0.3067  0.4989     0.1591  0.3067  0.4989     0.1617  0.3071  0.4987
C     0.1856  0.3268  0.4992     0.2076  0.3411  0.5009     0.2311  0.3569  0.5054
D     0.4094  0.4594  0.5092     0.0177  0.0370  0.0651     0.0214  0.0438  0.0718
E     0.1443  0.2413  0.3616     0.1443  0.2413  0.3616     0.1426  0.2432  0.3703
F     0.1447  0.2389  0.3617     0.1447  0.2389  0.3617     0.1441  0.2347  0.3664
G     0.2530  0.3704  0.5002     0.2636  0.3758  0.4992     0.2761  0.3814  0.5009
H     0.4100  0.4587  0.5105     0.0320  0.0503  0.0754     0.0353  0.0526  0.0760
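For concreteness, the following Python sketch draws one sample from the designs of table 1 and estimates the censoring shares of table 2 by Monte Carlo. The deterministic regressors x_{i,2} = −9.9 + 0.2i and x_{i,3} = −x²_{i,2} are our reconstruction of the design from the formula in footnote 7; the function name and interface are ours.

    import numpy as np

    def draw_dgp(dgp, C, k, n=100, rng=None):
        # Returns (Y, X, yc) for one sample of size n from DGP in {"A", ..., "H"}
        rng = np.random.default_rng() if rng is None else rng
        i = np.arange(1, n + 1)
        X = np.ones((n, k))
        if dgp in "ACEG":                              # random regressors
            X[:, 1:] = rng.standard_normal((n, k - 1))
        else:                                          # deterministic regressors (B, D, F, H)
            X[:, 1] = -9.9 + 0.2 * i
            if k >= 3:
                X[:, 2] = -X[:, 1] ** 2
            if k >= 4:
                X[:, 3] = rng.standard_normal(n)       # x_{i,4} ~ N(0, 1) in all DGPs
        beta = np.full(k, 0.5) if dgp in "CDGH" else np.zeros(k)
        loc = C + (0.5 if dgp in "CDGH" else 0.0)      # location of the censoring point
        yc = np.full(n, loc) if dgp in "ABCD" else rng.normal(loc, 1.0, n)
        Y = np.minimum(yc, X @ beta + rng.standard_normal(n))
        return Y, X, yc

    # Average censoring share over 1000 samples; for DGP D with k = 3 and C = 0.0
    # this should come out close to the 0.0651 reported in table 2
    rng = np.random.default_rng(0)
    draws = [draw_dgp("D", 0.0, 3, rng=rng) for _ in range(1000)]
    share = np.mean([np.mean(Y == yc) for Y, _, yc in draws])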

5 Results

Throughout this section, we do not compare the two variations of TA proposed before, because they report the same values for the objective function of (1); we only provide comparison results for TA-res, which is the original implementation of Fitzenberger and Winker (1998, 2007). A crucial feature of TA is the maximum number of its iterations, which we chose to be equal to the maximum number of iterations chosen by Fitzenberger and Winker (1998, 2007), i.e. I_max = 100,000. Table 3 shows the relative performance of the algorithms in terms of obtaining a lower value of the objective function of (1). A value of 1.000 means that TA reached a lower value of the objective function of (1) than GA in all of the simulations, and vice versa for a value of 0.000. This table is very important and deserves an extensive analysis. Table 3 divides the results for the relative performance into 3 sections according to the number of regressors for the different DGPs. The columns represent different censoring values corresponding to different values of


C as defined in the previous section. As an example, consider DGP (A) in the 2-regressor case when the censoring parameter C is 0.0. For this simulation setup we observe a value of 0.138 in table 3, which means that in 13.8% of the simulations TA reached a lower value for the objective function of (1), while in the rest of the simulations GA performed better and provided a lower value. With this description, a value lower than 0.500 means a worse performance for TA. As one can observe in table 3, there are only a few cases in which TA provides a lower value for the objective function of (1). To see how TA and GA compare to each other, consider figure 3, which represents a density fit to all the entries of table 3.8 There are two peaks in figure 3: one close to zero and one close to 0.9. The peak close to zero is significantly higher, i.e. GA obtains a lower value for the objective function of (1) more often. Matching the lower peak with the data, one can see from table 3 that it corresponds to DGPs A and B at low levels of censoring (C = 0.5 and 1.0) and also to DGPs D and H (only in the 3- and 4-regressor cases). Note that DGPs D and H in the 3- and 4-regressor cases have deterministic regressors for which the true value of β is different from zero, and that the average share of censored observations for them is always less than 0.1 (see table 2). The rest of table 3 is in favor of using GA. Looking into table 3 one would, however, see that the performance of TA gets better as the number of regressors increases. This is also the case when the censoring parameter C increases. To further analyze the distinction between TA and GA, consider the DGPs with random censoring points (E, F, G, and H). When C = 0.0 and k = 2, TA gets to a lower value for the objective function of (1) in between 0.9 percent and 1.1 percent of the cases when compared to 150GA50, a result which does not turn in favor of TA for more regressors except for DGP H with 4 regressors. This pattern repeats itself for C = 0.5; for C = 1.0, however, we see that besides DGP H in the 3- and 4-regressor cases, DGPs E and F are also in favor of TA. To sum up, 150GA50 is better off, except in those few cases mentioned above, for a CQR with random censoring points.

8 This is a simple density fit obtained using kernel density estimation, with a Gaussian kernel and Silverman's rule-of-thumb bandwidth, just for illustration purposes.


Figure 3. A density fit to the entries of table 3.

Tables 4 to 9 show the Root Mean Square Deviation (RMSD) of the estimates from the values used in the simulation for the different DGPs, for β_1 and β_2.9 The RMSD of β_1 for C = 0.0 is reported in table 4, while tables 5 and 6 report this measure for C = 0.5 and C = 1.0, respectively. The next three tables report the RMSD of β_2 in the same order of censoring levels. At first glance, TA and GA generally exhibit a lower RMSD for β_2 than for β_1. In table 4, where the censoring indicator C is equal to zero, one can observe that GA does not provide a comparable level of RMSD for β_1, especially for DGPs A and B, where we previously observed that TA obtains a lower level of the objective function of (1). However, GA seems to provide better estimates in terms of RMSD when the censoring indicator is C = 0.5 or C = 1.0 (see tables 5 and 6), which could be expected. Apart from these cases, GA almost always provides a lower level of RMSD for DGPs E, F, G, and H, where the censoring points are random. The pattern described for tables 4, 5, and 6 repeats itself in the next 3 tables (7, 8, 9), where the RMSD of β_2 is reported; however, TA gets better when C = 1.0, while GA in its best setting remains at best comparable. The estimation bias of β_1 and β_2 is reported in tables 10 to 15. One can observe in these tables that GA tends to report a negative bias, while for TA the bias is almost always positive. The size of the bias is substantially lower for GA for the DGPs with randomly distributed censoring points in almost all cases, while, as was the case with the RMSD, TA always provides a lower bias for DGPs A and B. The Mean Absolute Deviation (MAD) of the estimates from the true parameters used in the simulation study is reported for β_1 and β_2 in tables 16 to 21. These tables do not provide a substantial change in the relative standing of TA and GA, i.e. the two algorithms report almost the same values of MAD across the different DGPs.

9 Reporting on β_3 and β_4 does not change the overall conclusion.
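For reference, the three reporting measures used in tables 4 to 21 can be computed per coefficient as follows (the function name is ours):

    import numpy as np

    def summary_measures(estimates, true_value):
        # RMSD, bias, and MAD of replication estimates of a single coefficient
        e = np.asarray(estimates, dtype=float) - true_value
        return {"RMSD": float(np.sqrt(np.mean(e ** 2))),
                "Bias": float(np.mean(e)),
                "MAD":  float(np.mean(np.abs(e)))}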

6 Conclusion

A short overview of the approaches to obtaining CQRs was presented in this paper. Following the hints given in Koenker (2008), one can argue for further investigation of Powell-type models as compared to the approaches of Portnoy (2003) and Peng and Huang (2008). The best performing approach to obtaining CQRs in Powell's model is the implementation of Threshold Accepting by Fitzenberger and Winker (1998, 2007). This suggests that global optimization techniques are to be preferred to other approaches in the context of CQRs. In this paper we implement a Genetic Algorithm to obtain CQRs. Based on the results presented in the previous section, we suggest choosing GA to obtain CQRs, especially when the censoring points are random: GA has a substantially smaller bias and in most cases obtains a lower value for the objective function of a CQR. However, GA only attains smaller values for the RMSD of its estimates when a lot of censoring is present, while in the other cases its RMSD remains at most comparable.

7 References

1. Althöfer, I. and K. U. Koschnick (1991). On the convergence of Threshold Accepting. Applied Mathematics and Optimization 24, 183-195.
2. Amemiya, T. (1973). Regression analysis when the dependent variable is truncated normal. Econometrica 41, 997-1016.
3. Barrodale, I. and F. D. K. Roberts (1973). An improved algorithm for discrete l1 linear approximation. SIAM Journal of Numerical Analysis 10, 839-848.
4. Buchinsky, M. (1994). Changes in U.S. wage structure 1963-1987: applications of quantile regressions. Econometrica 62, 405-458.
5. Buchinsky, M. and J. Hahn (1998). An alternative estimator for the censored quantile regression model. Econometrica 66, 653-671.
6. Cerf, R. (1998). Asymptotic convergence of genetic algorithms. Advances in Applied Probability 30, 521-550.
7. Chernozhukov, V. and H. Hong (2002). Three-step censored quantile regressions and extramarital affairs. Journal of the American Statistical Association 97, 872-882.
8. Chernozhukov, V. and H. Hong (2003). An MCMC approach to classical estimation. Journal of Econometrics 115, 293-346.
9. Czarnitzki, D. and T. Doherr (2002). Genetic algorithms: a tool for optimization in econometrics - basic concept and an example for empirical applications. ZEW Discussion Paper No. 02-41.
10. Dueck, G. and T. Scheuer (1990). Threshold accepting: a general purpose algorithm appearing superior to simulated annealing. Journal of Computational Physics 90, 161-175.
11. Fitzenberger, B. (1997). A note on estimating censored quantile regressions. Center for International Labor Economics, Discussion Paper No. 14, University of Konstanz.
12. Fitzenberger, B. and R. Wilke (2006). Using quantile regression for duration analysis. Allgemeines Statistisches Archiv 90, 103-118.
13. Fitzenberger, B. and P. Winker (1998). Using threshold accepting to improve the computation of censored quantile regressions. In: COMPSTAT, Proceedings in Computational Statistics - 13th Symposium held in Bristol, Great Britain, 1998 (eds. Payne, R. and P. Green), 311-316. Physica-Verlag, Heidelberg, New York.
14. Fitzenberger, B. and P. Winker (2007). Improving the computation of censored quantile regressions. Computational Statistics and Data Analysis 52, 88-108.
15. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley.
16. Holland, J. H. (1970). Robust algorithms for adaptation set in a general formal framework. Proceedings of the IEEE Symposium on Adaptive Processes - Decision and Control 17, 5.1-5.5.
17. Honoré, B., S. Khan, and J. L. Powell (2002). Quantile regression under random censoring. Journal of Econometrics 109, 67-107.
18. Koenker, R. and G. Bassett (1978). Regression quantiles. Econometrica 46, 33-50.
19. Koenker, R. and B. J. Park (1996). An interior point algorithm for nonlinear quantile regression. Journal of Econometrics 71, 265-283.
20. Koenker, R. (2008). Censored quantile regression redux. Journal of Statistical Software 27, 1-25.
21. Mitchell, M. (1996). An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA.
22. Neocleous, T., K. Vanden Branden, and S. Portnoy (2006). Correction to "Censored regression quantiles" by S. Portnoy, 98 (2003), 1001-1012. Journal of the American Statistical Association 101, 860-861.
23. Peng, L. and Y. Huang (2008). Survival analysis with quantile regression models. Journal of the American Statistical Association 103, 637-649.
24. Portnoy, S. (2003). Censored regression quantiles. Journal of the American Statistical Association 98, 1001-1012.
25. Powell, J. L. (1984). Least absolute deviations estimation for the censored regression model. Journal of Econometrics 25, 303-325.
26. Powell, J. L. (1986). Censored regression quantiles. Journal of Econometrics 32, 143-155.
27. Tobin, J. (1958). Estimation of relationships for limited dependent variables. Econometrica 26, 24-36.
28. Winker, P. (2001). Optimization Heuristics in Econometrics: Applications of Threshold Accepting. Wiley, Chichester.


8 Tables

Table 3. Average relative performance of TA-res vs 100GA20, 100GA50, and 150GA50 in terms of reaching a lower value for Powell's distance function

        TA-res vs 100GA20        TA-res vs 100GA50        TA-res vs 150GA50
DGP   C=0.0  C=0.5  C=1.0      C=0.0  C=0.5  C=1.0      C=0.0  C=0.5  C=1.0

k = 2
A     0.138  0.892  0.967      0.105  0.772  0.873      0.090  0.713  0.837
B     0.155  0.890  0.957      0.095  0.806  0.863      0.087  0.783  0.832
C     0.018  0.155  0.612      0.008  0.061  0.420      0.005  0.057  0.388
D     0.021  0.015  0.023      0.005  0.005  0.007      0.006  0.004  0.004
E     0.066  0.142  0.457      0.013  0.024  0.160      0.009  0.014  0.129
F     0.054  0.135  0.390      0.008  0.026  0.154      0.011  0.011  0.097
G     0.038  0.061  0.125      0.013  0.011  0.016      0.006  0.014  0.017
H     0.017  0.022  0.041      0.005  0.006  0.009      0.004  0.004  0.008

k = 3
A     0.099  0.881  0.995      0.053  0.812  0.981      0.051  0.801  0.984
B     0.109  0.834  0.992      0.031  0.790  0.982      0.026  0.779  0.976
C     0.043  0.222  0.596      0.011  0.066  0.315      0.009  0.046  0.289
D     0.893  0.994  0.999      0.720  0.988  0.996      0.688  0.983  0.993
E     0.183  0.474  0.807      0.049  0.170  0.524      0.038  0.129  0.489
F     0.182  0.476  0.803      0.038  0.159  0.535      0.034  0.127  0.492
G     0.085  0.154  0.301      0.017  0.051  0.081      0.016  0.035  0.067
H     0.777  0.918  0.974      0.494  0.732  0.912      0.447  0.707  0.889

k = 4
A     0.090  0.850  0.990      0.020  0.750  0.970      0.020  0.750  0.980
B     0.120  0.810  1.000      0.010  0.800  0.990      0.020  0.740  1.000
C     0.090  0.340  0.690      0.020  0.080  0.330      0.030  0.070  0.270
D     0.930  0.980  1.000      0.780  0.950  0.990      0.800  0.890  0.960
E     0.320  0.740  0.900      0.160  0.370  0.750      0.110  0.350  0.770
F     0.300  0.810  0.880      0.060  0.410  0.790      0.040  0.350  0.750
G     0.170  0.230  0.460      0.040  0.100  0.220      0.050  0.070  0.160
H     0.960  0.960  1.000      0.780  0.820  0.940      0.720  0.860  0.900

Table 4. Root Mean Square Deviation from the true parameters for β_1: C = 0.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1722    1.8845    4.3113    4.8357
B     0.1615    7.1742    11.9325   11.2023
C     0.1946    3.1968    2.6473    1.8360
D     1.0143    0.4054    0.3728    0.3669
E     0.2495    0.1620    0.1551    0.1538
F     0.2535    0.1631    0.1603    0.1607
G     0.4170    0.2253    0.2242    0.2292
H     1.0172    0.3853    0.3456    0.3355
k = 3
A     0.1538    2.3886    2.1323    3.5457
B     1.2759    15.3422   19.0049   18.7713
C     0.2861    0.5024    0.6094    0.6592
D     0.1511    0.2442    0.2122    0.2117
E     0.2612    0.1927    0.1645    0.1666
F     0.5319    0.2742    0.2538    0.2476
G     0.4542    0.2749    0.2388    0.2395
H     0.2431    0.2743    0.2294    0.2185
k = 4
A     0.1578    0.3229    1.1966    1.2603
B     0.2239    4.4665    7.6257    12.7376
C     0.3232    0.5407    0.3740    0.3862
D     0.1991    0.2640    0.2382    0.2191
E     0.2673    0.2245    0.1869    0.1947
F     0.3438    0.3172    0.2894    0.2909
G     0.4738    0.3030    0.2512    0.2774
H     0.2694    0.3737    0.2525    0.2519

Table 5. Root Mean Square Deviation from the true parameters for β_1: C = 0.5

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1383    0.2185    0.1944    0.3228
B     0.1610    0.5315    0.6132    0.7376
C     0.1627    0.2586    0.2350    0.2409
D     0.8083    0.3403    0.3066    0.3099
E     0.1818    0.1521    0.1426    0.1431
F     0.1805    0.1531    0.1456    0.1455
G     0.2754    0.1925    0.1787    0.1856
H     0.8254    0.3106    0.2712    0.2710
k = 3
A     0.1395    0.2863    0.2019    0.4311
B     1.1673    2.7753    2.7601    4.4652
C     0.2132    0.2465    0.2432    0.2365
D     0.1811    0.2649    0.2171    0.2188
E     0.1842    0.1753    0.1526    0.1498
F     1.2291    0.2603    0.2265    0.2299
G     0.3099    0.2090    0.1872    0.1850
H     0.1961    0.2682    0.2235    0.2165
k = 4
A     0.1390    0.2348    0.2054    0.2260
B     0.1885    0.3139    0.3180    9.6206
C     0.2273    0.2412    0.2134    0.2358
D     0.1721    0.2815    0.2302    0.2044
E     0.1873    0.2273    0.1791    0.1828
F     0.2735    0.3022    0.2620    0.2622
G     0.3365    0.2336    0.1919    0.2028
H     0.2217    0.2971    0.2252    0.2472

Table 6. Root Mean Square Deviation from the true parameters for β_1: C = 1.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1263    0.1504    0.1320    0.1330
B     0.1253    0.1427    0.1298    0.1275
C     0.1464    0.1525    0.1449    0.1444
D     0.6376    0.2837    0.2512    0.2480
E     0.1357    0.1442    0.1327    0.1311
F     0.1339    0.1470    0.1361    0.1361
G     0.1993    0.1585    0.1497    0.1496
H     0.6433    0.2621    0.2313    0.2314
k = 3
A     0.1317    0.1698    0.1494    0.1451
B     0.8841    0.2594    0.2255    0.2185
C     0.1449    0.1738    0.1635    0.1574
D     0.1847    0.2612    0.2194    0.2128
E     0.1376    0.1695    0.1473    0.1463
F     0.1933    0.2518    0.2295    0.2315
G     0.2109    0.1837    0.1623    0.1611
H     0.1821    0.2550    0.2211    0.2119
k = 4
A     0.1213    0.1827    0.1582    0.1545
B     0.1977    0.2815    0.2423    0.2579
C     0.1542    0.2137    0.1721    0.1807
D     0.1697    0.3085    0.2312    0.2279
E     0.1297    0.1727    0.1527    0.1484
F     0.2113    0.3150    0.2917    0.2350
G     0.2239    0.2128    0.1721    0.1789
H     0.2049    0.2948    0.2411    0.2256

Table 7. Root Mean Square Deviation from the true parameters for β_2: C = 0.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1229    0.9570    2.1431    2.2458
B     0.0220    0.7656    1.2551    1.1836
C     0.3576    1.4417    1.2852    1.0618
D     0.2311    0.0698    0.0647    0.0646
E     0.1246    0.1591    0.1541    0.1515
F     0.0240    0.0299    0.0292    0.0290
G     0.2419    0.2194    0.2195    0.2200
H     0.1977    0.0666    0.0605    0.0589
k = 3
A     0.0725    0.9119    0.8722    1.2631
B     0.6476    7.8172    8.3793    9.1683
C     0.3440    0.3214    0.3499    0.3957
D     0.0219    0.0292    0.0249    0.0239
E     0.1193    0.1766    0.1599    0.1617
F     0.1562    0.0345    0.0307    0.0314
G     0.2523    0.2554    0.2217    0.2201
H     0.0215    0.0287    0.0244    0.0243
k = 4
A     0.0751    0.2382    0.4204    0.4852
B     0.0428    2.0096    4.6021    8.2830
C     0.3335    0.3707    0.3372    0.2845
D     0.0187    0.0308    0.0243    0.0259
E     0.1753    0.2058    0.1795    0.1734
F     0.0399    0.0376    0.0295    0.0324
G     0.2562    0.2478    0.2400    0.2616
H     0.0214    0.0354    0.0274    0.0258

Table 8. Root Mean Square Deviation from the true parameters for β_2: C = 0.5

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1207    0.1969    0.1767    0.2307
B     0.0243    0.0701    0.0771    0.0878
C     0.1817    0.2446    0.2287    0.2309
D     0.1915    0.0614    0.0562    0.0564
E     0.1171    0.1509    0.1410    0.1416
F     0.0217    0.0265    0.0256    0.0257
G     0.1942    0.1971    0.1826    0.1860
H     0.1690    0.0584    0.0512    0.0506
k = 3
A     0.1228    0.2203    0.1896    0.2578
B     0.3051    2.1910    1.1799    2.0080
C     0.2093    0.2273    0.2150    0.2085
D     0.0218    0.0296    0.0256    0.0249
E     0.1158    0.1715    0.1517    0.1525
F     0.3817    0.0316    0.0265    0.0272
G     0.1994    0.2098    0.1868    0.1890
H     0.0215    0.0294    0.0242    0.0244
k = 4
A     0.1129    0.2152    0.1879    0.2243
B     0.0383    0.1088    0.2462    3.6032
C     0.2164    0.2148    0.2082    0.2112
D     0.0193    0.0279    0.0229    0.0241
E     0.1181    0.1688    0.1543    0.1641
F     0.0848    0.0335    0.0270    0.0248
G     0.2176    0.2464    0.2191    0.2138
H     0.0217    0.0318    0.0243    0.0263

Table 9. Root Mean Square Deviation from the true parameters for β_2: C = 1.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1275    0.1472    0.1309    0.1334
B     0.0217    0.0245    0.0224    0.0224
C     0.1218    0.1643    0.1545    0.1559
D     0.1528    0.0522    0.0487    0.0482
E     0.1125    0.1389    0.1307    0.1305
F     0.0209    0.0248    0.0240    0.0239
G     0.1587    0.1679    0.1589    0.1601
H     0.1368    0.0493    0.0445    0.0443
k = 3
A     0.1312    0.1767    0.1528    0.1513
B     0.2458    0.0317    0.0256    0.0253
C     0.1367    0.1857    0.1680    0.1661
D     0.0218    0.0305    0.0246    0.0244
E     0.1172    0.1638    0.1436    0.1412
F     0.0231    0.0305    0.0268    0.0260
G     0.1648    0.1941    0.1705    0.1690
H     0.0211    0.0293    0.0245    0.0247
k = 4
A     0.1408    0.1987    0.1749    0.1524
B     0.0219    0.0415    0.0264    0.0265
C     0.1629    0.2204    0.1900    0.1981
D     0.0195    0.0358    0.0229    0.0241
E     0.1116    0.1769    0.1607    0.1551
F     0.0191    0.0334    0.0278    0.0255
G     0.1719    0.2031    0.1710    0.2006
H     0.0216    0.0354    0.0279    0.0264

Table 10. Bias of the estimates from the true parameters for β_1: C = 0.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1098    0.4602    1.2243    1.3522
B     0.1043    2.0184    4.0546    4.2635
C     0.1471    0.3033    0.4906    0.4456
D     0.9300    0.0330    0.0406    0.0325
E     0.1963    0.0083    0.0102    0.0077
F     0.1947    0.0156    0.0167    0.0171
G     0.3561    0.0290    0.0360    0.0372
H     0.9058    0.0312    0.0326    0.0351
k = 3
A     0.1286    0.3385    0.5891    0.8116
B     0.0010    5.1813    8.1831    8.5025
C     0.2196    0.1065    0.1815    0.1940
D     0.0819    0.0110    0.0028    0.0090
E     0.2181    0.0188    0.0172    0.0164
F     0.2149    0.0080    0.0061    0.0092
G     0.4010    0.0257    0.0321    0.0323
H     0.1694    0.0180    0.0055    0.0100
k = 4
A     0.1286    0.1455    0.4306    0.5328
B     0.1511    0.8937    2.0627    4.4682
C     0.2499    0.0384    0.0768    0.0886
D     0.1315    0.0156    0.0106    0.0194
E     0.2177    0.0378    0.0242    0.0485
F     0.2408    0.0507    0.0147    0.0200
G     0.4277    0.0263    0.0224    0.0598
H     0.1916    0.0888    0.0292    0.0101

Table 11. Bias of the estimates from the true parameters for β_1: C = 0.5

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.0036    0.0137    0.0096    0.0228
B     0.0047    0.0510    0.0511    0.0536
C     0.0430    0.0469    0.0405    0.0422
D     0.7290    0.0165    0.0219    0.0312
E     0.1295    0.0033    0.0027    0.0051
F     0.1211    0.0112    0.0096    0.0116
G     0.2155    0.0247    0.0220    0.0244
H     0.7339    0.0185    0.0208    0.0214
k = 3
A     0.0011    0.0640    0.0288    0.0463
B     0.0596    0.3299    0.3284    0.4772
C     0.0946    0.0503    0.0632    0.0568
D     0.0035    0.0072    0.0006    0.0035
E     0.1330    0.0095    0.0076    0.0037
F     0.0869    0.0118    0.0175    0.0086
G     0.2502    0.0180    0.0208    0.0259
H     0.0978    0.0116    0.0006    0.0084
k = 4
A     0.0119    0.1006    0.0749    0.0753
B     0.0126    0.0254    0.0398    1.0090
C     0.1323    0.0153    0.0277    0.0524
D     0.0495    0.0185    0.0379    0.0105
E     0.1295    0.0102    0.0204    0.0154
F     0.1082    0.0449    0.0156    0.0243
G     0.2938    0.0090    0.0197    0.0137
H     0.1287    0.0015    0.0530    0.0353

Table 12. Bias of the estimates from the true parameters for β_1: C = 1.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.0016    0.0096    0.0010    0.0033
B     0.0029    0.0015    0.0057    0.0019
C     0.0147    0.0029    0.0062    0.0064
D     0.5479    0.0148    0.0172    0.0143
E     0.0719    0.0006    0.0005    0.0033
F     0.0637    0.0079    0.0089    0.0086
G     0.1323    0.0132    0.0153    0.0143
H     0.5446    0.0089    0.0045    0.0083
k = 3
A     0.0029    0.0072    0.0017    0.0010
B     0.0240    0.0193    0.0171    0.0069
C     0.0473    0.0199    0.0178    0.0158
D     0.0021    0.0061    0.0072    0.0000
E     0.0723    0.0061    0.0002    0.0033
F     0.0640    0.0080    0.0145    0.0110
G     0.1564    0.0257    0.0132    0.0166
H     0.0554    0.0051    0.0056    0.0095
k = 4
A     0.0069    0.0210    0.0055    0.0151
B     0.0142    0.0084    0.0096    0.0060
C     0.0651    0.0378    0.0185    0.0394
D     0.0207    0.0168    0.0126    0.0058
E     0.0713    0.0214    0.0109    0.0033
F     0.0818    0.0326    0.0020    0.0250
G     0.1641    0.0026    0.0057    0.0061
H     0.0775    0.0105    0.0259    0.0334

Table 13. Bias of the estimates from the true parameters for β_2: C = 0.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.0028    0.0169    0.1270    0.0275
B     0.0010    0.0169    0.0198    0.0164
C     0.3246    0.1805    0.2860    0.2812
D     0.2155    0.0041    0.0064    0.0056
E     0.0029    0.0034    0.0030    0.0047
F     0.0011    0.0012    0.0013    0.0014
G     0.1660    0.0213    0.0287    0.0260
H     0.1816    0.0051    0.0061    0.0061
k = 3
A     0.0009    0.0231    0.0247    0.0520
B     0.0376    0.4729    0.6066    0.4265
C     0.3050    0.0364    0.0817    0.0893
D     0.0015    0.0010    0.0008    0.0007
E     0.0003    0.0052    0.0016    0.0036
F     0.0086    0.0002    0.0014    0.0017
G     0.1697    0.0154    0.0200    0.0146
H     0.0013    0.0009    0.0008    0.0004
k = 4
A     0.0072    0.0061    0.0017    0.0600
B     0.0084    0.0530    0.0835    0.9376
C     0.3025    0.0123    0.0095    0.0001
D     0.0019    0.0030    0.0031    0.0007
E     0.0169    0.0169    0.0329    0.0231
F     0.0051    0.0037    0.0007    0.0036
G     0.1831    0.0181    0.0181    0.0224
H     0.0047    0.0025    0.0012    0.0031

Table 14. Bias of the estimates from the true parameters for β_2: C = 0.5

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.0065    0.0094    0.0087    0.0147
B     0.0009    0.0024    0.0011    0.0027
C     0.1306    0.0499    0.0481    0.0498
D     0.1783    0.0041    0.0052    0.0057
E     0.0028    0.0089    0.0077    0.0083
F     0.0009    0.0012    0.0003    0.0006
G     0.1169    0.0151    0.0136    0.0172
H     0.1546    0.0044    0.0040    0.0041
k = 3
A     0.0058    0.0124    0.0074    0.0006
B     0.0145    0.0473    0.0462    0.0610
C     0.1463    0.0352    0.0427    0.0426
D     0.0008    0.0006    0.0002    0.0001
E     0.0020    0.0082    0.0011    0.0036
F     0.0002    0.0006    0.0002    0.0001
G     0.1291    0.0101    0.0159    0.0181
H     0.0007    0.0011    0.0001    0.0000
k = 4
A     0.0032    0.0006    0.0083    0.0017
B     0.0027    0.0073    0.0058    0.3997
C     0.1594    0.0125    0.0087    0.0125
D     0.0007    0.0008    0.0013    0.0029
E     0.0145    0.0058    0.0177    0.0096
F     0.0111    0.0027    0.0020    0.0009
G     0.1371    0.0178    0.0075    0.0081
H     0.0043    0.0024    0.0049    0.0033

Table 15. Bias of the estimates from the true parameters for β_2: C = 1.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.0064    0.0057    0.0045    0.0070
B     0.0013    0.0015    0.0012    0.0013
C     0.0405    0.0173    0.0152    0.0154
D     0.1407    0.0038    0.0039    0.0032
E     0.0016    0.0046    0.0038    0.0050
F     0.0002    0.0008    0.0004    0.0005
G     0.0820    0.0137    0.0129    0.0119
H     0.1219    0.0026    0.0021    0.0024
k = 3
A     0.0059    0.0152    0.0083    0.0071
B     0.0082    0.0028    0.0014    0.0019
C     0.0712    0.0181    0.0204    0.0182
D     0.0008    0.0015    0.0008    0.0010
E     0.0011    0.0013    0.0042    0.0076
F     0.0003    0.0008    0.0004    0.0002
G     0.0929    0.0163    0.0083    0.0113
H     0.0000    0.0005    0.0002    0.0007
k = 4
A     0.0022    0.0182    0.0087    0.0010
B     0.0002    0.0020    0.0002    0.0023
C     0.0885    0.0080    0.0065    0.0235
D     0.0004    0.0053    0.0009    0.0010
E     0.0078    0.0282    0.0210    0.0135
F     0.0014    0.0024    0.0024    0.0029
G     0.0928    0.0141    0.0216    0.0159
H     0.0031    0.0001    0.0055    0.0040

Table 16. MAD of the estimates from the true parameters for β_1: C = 0.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1315    0.5552    1.3095    1.4349
B     0.1195    2.1055    4.1299    4.3370
C     0.1552    0.4458    0.6195    0.5727
D     0.9544    0.3130    0.2900    0.2881
E     0.2152    0.1285    0.1235    0.1227
F     0.2179    0.1280    0.1265    0.1253
G     0.3733    0.1682    0.1631    0.1634
H     0.9400    0.2955    0.2654    0.2606
k = 3
A     0.1328    0.4129    0.6484    0.8702
B     0.2056    5.3932    8.3884    8.7119
C     0.2380    0.2894    0.3299    0.3366
D     0.1184    0.1893    0.1642    0.1653
E     0.2302    0.1521    0.1300    0.1324
F     0.2821    0.2198    0.1982    0.1934
G     0.4123    0.2027    0.1779    0.1764
H     0.2002    0.2209    0.1838    0.1777
k = 4
A     0.1353    0.2018    0.4898    0.5644
B     0.1513    1.1027    2.2401    4.5848
C     0.2828    0.3011    0.2580    0.2581
D     0.1616    0.2119    0.1909    0.1734
E     0.2370    0.1714    0.1517    0.1539
F     0.2866    0.2452    0.2242    0.2281
G     0.4406    0.2284    0.2057    0.2183
H     0.2262    0.3029    0.2051    0.1977

Table 17. MAD of the estimates from the true parameters for β_1: C = 0.5

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1048    0.1272    0.1157    0.1272
B     0.1068    0.1647    0.1564    0.1558
C     0.1234    0.1504    0.1436    0.1447
D     0.7486    0.2689    0.2388    0.2386
E     0.1521    0.1211    0.1136    0.1142
F     0.1526    0.1221    0.1155    0.1150
G     0.2402    0.1425    0.1351    0.1375
H     0.7549    0.2384    0.2080    0.2110
k = 3
A     0.1064    0.1645    0.1359    0.1474
B     0.2303    0.5538    0.5402    0.6740
C     0.1582    0.1708    0.1690    0.1660
D     0.1438    0.2135    0.1697    0.1715
E     0.1546    0.1382    0.1227    0.1197
F     0.2448    0.2076    0.1794    0.1818
G     0.2744    0.1649    0.1477    0.1453
H     0.1578    0.2143    0.1773    0.1748
k = 4
A     0.1042    0.1810    0.1519    0.1560
B     0.1491    0.2249    0.2195    1.1848
C     0.1851    0.1776    0.1529    0.1638
D     0.1361    0.2299    0.1818    0.1593
E     0.1504    0.1813    0.1439    0.1447
F     0.2068    0.2428    0.2110    0.2150
G     0.3074    0.1728    0.1522    0.1556
H     0.1773    0.2433    0.1850    0.1921

Table 18. MAD of the estimates from the true parameters for β_1: C = 1.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.0999    0.1195    0.1044    0.1057
B     0.0995    0.1148    0.1038    0.1011
C     0.1053    0.1195    0.1108    0.1100
D     0.5823    0.2202    0.1949    0.1940
E     0.1101    0.1152    0.1052    0.1044
F     0.1083    0.1177    0.1080    0.1087
G     0.1664    0.1264    0.1188    0.1181
H     0.5785    0.2048    0.1800    0.1813
k = 3
A     0.1031    0.1370    0.1189    0.1172
B     0.1897    0.2090    0.1829    0.1745
C     0.1076    0.1367    0.1290    0.1214
D     0.1452    0.2064    0.1739    0.1695
E     0.1118    0.1362    0.1170    0.1169
F     0.1538    0.1994    0.1826    0.1844
G     0.1790    0.1440    0.1295    0.1270
H     0.1458    0.2017    0.1731    0.1696
k = 4
A     0.0971    0.1467    0.1274    0.1226
B     0.1550    0.2278    0.1945    0.2009
C     0.1282    0.1604    0.1312    0.1383
D     0.1357    0.2500    0.1864    0.1743
E     0.1056    0.1387    0.1248    0.1194
F     0.1737    0.2583    0.2364    0.1905
G     0.1960    0.1679    0.1326    0.1391
H     0.1608    0.2441    0.1873    0.1819

Table 19. MAD of the estimates from the true parameters for β_2: C = 0.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.0700    0.4157    0.7919    0.8430
B     0.0121    0.2521    0.4658    0.4884
C     0.3351    0.3330    0.4157    0.4059
D     0.2188    0.0545    0.0513    0.0510
E     0.0988    0.1262    0.1213    0.1203
F     0.0188    0.0237    0.0230    0.0228
G     0.2016    0.1711    0.1706    0.1687
H     0.1847    0.0516    0.0475    0.0466
k = 3
A     0.0530    0.2701    0.3697    0.4504
B     0.0575    2.5118    3.4666    4.0657
C     0.3192    0.2223    0.2305    0.2353
D     0.0174    0.0232    0.0196    0.0188
E     0.0949    0.1391    0.1264    0.1276
F     0.0265    0.0263    0.0239    0.0240
G     0.2078    0.2010    0.1738    0.1704
H     0.0173    0.0229    0.0196    0.0195
k = 4
A     0.0507    0.1780    0.2783    0.2687
B     0.0180    0.4564    1.2239    2.1632
C     0.3119    0.2577    0.2400    0.2142
D     0.0152    0.0257    0.0193    0.0204
E     0.1055    0.1576    0.1429    0.1390
F     0.0227    0.0287    0.0240    0.0268
G     0.2212    0.1891    0.1908    0.1965
H     0.0176    0.0264    0.0226    0.0214

Table 20. MAD of the estimates from the true parameters for β_2: C = 0.5

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.0963    0.1340    0.1208    0.1257
B     0.0172    0.0276    0.0253    0.0255
C     0.1537    0.1598    0.1551    0.1566
D     0.1799    0.0490    0.0443    0.0447
E     0.0915    0.1205    0.1116    0.1126
F     0.0171    0.0212    0.0202    0.0204
G     0.1615    0.1513    0.1390    0.1405
H     0.1564    0.0460    0.0403    0.0399
k = 3
A     0.0984    0.1663    0.1439    0.1510
B     0.0394    0.2333    0.1905    0.2568
C     0.1760    0.1687    0.1617    0.1572
D     0.0174    0.0237    0.0203    0.0198
E     0.0906    0.1354    0.1210    0.1216
F     0.0335    0.0248    0.0211    0.0215
G     0.1634    0.1651    0.1452    0.1474
H     0.0173    0.0232    0.0193    0.0195
k = 4
A     0.0922    0.1723    0.1460    0.1689
B     0.0200    0.0554    0.0730    0.4319
C     0.1889    0.1784    0.1634    0.1660
D     0.0161    0.0224    0.0186    0.0186
E     0.0944    0.1348    0.1296    0.1340
F     0.0234    0.0266    0.0219    0.0194
G     0.1772    0.1938    0.1653    0.1660
H     0.0177    0.0264    0.0197    0.0220

Table 21. MAD of the estimates from the true parameters for β_2: C = 1.0

DGP   TA-res    100GA20   100GA50   150GA50
k = 2
A     0.1012    0.1168    0.1042    0.1055
B     0.0171    0.0195    0.0176    0.0178
C     0.0943    0.1269    0.1168    0.1187
D     0.1429    0.0414    0.0385    0.0382
E     0.0909    0.1111    0.1049    0.1046
F     0.0167    0.0198    0.0191    0.0188
G     0.1297    0.1350    0.1275    0.1289
H     0.1242    0.0392    0.0351    0.0350
k = 3
A     0.1042    0.1396    0.1229    0.1193
B     0.0286    0.0254    0.0202    0.0200
C     0.1101    0.1460    0.1311    0.1294
D     0.0174    0.0244    0.0197    0.0195
E     0.0943    0.1308    0.1147    0.1125
F     0.0169    0.0240    0.0212    0.0206
G     0.1351    0.1520    0.1364    0.1333
H     0.0171    0.0235    0.0197    0.0197
k = 4
A     0.1081    0.1511    0.1340    0.1203
B     0.0171    0.0293    0.0209    0.0215
C     0.1364    0.1663    0.1538    0.1553
D     0.0158    0.0290    0.0184    0.0194
E     0.0901    0.1466    0.1309    0.1265
F     0.0157    0.0275    0.0210    0.0207
G     0.1285    0.1589    0.1397    0.1513
H     0.0174    0.0271    0.0227    0.0211
