Comparison between Biexponential and Robust Biexponential Nonlinear Models, Using Simulation

A Thesis Submitted to the Council of the College of Administration and Economics, University of Sulaimani, in Partial Fulfillment of the Requirements for the Degree of Master of Science in Statistics

By Hozan Taha Abdalla

Supervised By Assistant Professor Dr. Samira Muhammad Salih

2016 A.D.

2716 K    1437 H

In the name of Allah, the Most Gracious, the Most Merciful

"He gives wisdom to whom He wills, and whoever has been given wisdom has certainly been given much good. And none will remember except those of understanding." (Surat Al-Baqarah, Verse 269)

"Allah does not charge a soul except with that within its capacity. It will have what it has gained, and it will bear what it has earned. Our Lord, do not impose blame upon us if we have forgotten or erred. Our Lord, and lay not upon us a burden like that which You laid upon those before us. Our Lord, and burden us not with that which we have no ability to bear. And pardon us, and forgive us, and have mercy upon us. You are our protector, so give us victory over the disbelieving people." (Surat Al-Baqarah, Verse 286)

Allah the Almighty has spoken the truth.

Dedication

This thesis is dedicated to:
My dear father
My dear mother
My dear brothers
My dear fiancé
With love and respect

Hozan T. Abdalla

Linguistic Evaluation Certification

This is to certify that I, Sarah K. Othman, have proofread this thesis, entitled "Comparison between Biexponential and Robust Biexponential Nonlinear Models, Using Simulation", by Hozan Taha Abdalla. After the mistakes were marked and corrected, the thesis was returned to the researcher to apply the corrections in this final copy.

Signature:
Proofreader: Dr. Sarah K. Othman
Date: / / 2016
Department of English, School of Language, Faculty of Humanities, University of Sulaimani.

Supervisor's Certification

I certify that the preparation of the thesis titled "Comparison between Biexponential and Robust Biexponential Nonlinear Models, Using Simulation", accomplished by (Hozan Taha Abdalla), was carried out under my supervision at the University of Sulaimani, College of Administration and Economics, Department of Statistics, in partial fulfillment of the requirements for the degree of Master of Science in Statistics.

Signature:
Supervisor: Dr. Samira Muhammad Salih
Title: Assistant Professor
Date: / / 2016

Chairman's Certification

In view of the available recommendations, I forward this thesis for debate by the examining committee.

Signature:

Name: Dr. Samira Muhammad Salih
Title: Assistant Professor
Higher Education Committee
Date: / / 2016

Examination Committee Certification

We, the members of the examination committee, certify that we have read this thesis, entitled "Comparison between Biexponential and Robust Biexponential Nonlinear Models, Using Simulation", and examined the student (Hozan Taha Abdalla) in its contents, and that in our opinion it is adequate as a thesis for the degree of Master of Science in Statistics.

Signature:
Name: Dr. Wasfi Tahir Saalih
Title: Assistant Professor
Committee Head
Date: / / 2016

Signature:
Name: Dr. Nawzad Muhammad Ahmad
Title: Assistant Professor
Member
Date: / / 2016

Signature:
Name: Dr. Nzar Abdulqader Ali
Title: Assistant Professor
Member
Date: / / 2016

Signature:
Name: Dr. Samira Muhammad Salih
Title: Assistant Professor
Supervisor-Member
Date: / / 2016

Signature:
Name: Dr. Kawa M. Jamal Rashid
Title: Assistant Professor
Dean of the College of Administration and Economics
Date: / / 2016

Acknowledgements

First of all, I would like to thank Allah, the most merciful, the most forgiving, for enabling me to carry out my study without any major interruptions. I would like to thank several people who provided support and guidance during the research and the writing of this thesis. In particular, I would like to thank my supervisor, Assistant Professor Dr. Samira Muhammad Salih, for her support and patience; her encouragement and critique of my work have been greatly appreciated. I would like to thank Assistant Professor Dr. Nzar Abdulqader Ali for his support and his great help with this thesis, because without him I would not have been able to complete my work. I would also like to thank the Dean of the College of Administration and Economics, Assistant Professor Dr. Kawa Mohammad Jamal Rashid, the head of the Statistics Department, Assistant Professor Dr. Mohammad Mahmood Faqe Hussein, and the employees of the library and the Higher Education Unit for their help and support in all the stages of the study. I would like to thank all my teachers in the Statistics Department and all my friends who helped me in one way or another. My family and my fiancé have been a source of continual encouragement; I thank them for providing the atmosphere and instilling the spirit which allowed me to pursue my study.

Hozan Taha Abdalla


Abstract

The science of statistics has become of paramount importance in this age as a means and a tool of the scientific method in research across all fields of science. Statistics has grown and evolved into the current century and has become an established science with its own structure and rules.

Linear regression is a powerful method for analyzing data described by models which are linear in the parameters. Often, however, a researcher has a mathematical expression which relates the response to the predictor variables, and these models are usually nonlinear in the parameters. In such cases, linear regression techniques must be extended, which introduces considerable complexity. In this thesis, we compare the nonlinear regression method with the robust method. Nonlinear models tend to be used either when they are suggested by theoretical considerations or to build known nonlinear behavior into a model. Even when a linear approximation works well, a nonlinear model may still be used to retain a clear interpretation of the parameters.

Using the R language, we generated the data used in this thesis for three sample sizes (25, 50, 100), with (200) replications for each sample size. In the practical part of this thesis, for nonlinear regression we used the Biexponential Nonlinear Regression Model and estimated four parameters. We found the mean of the estimated parameters for our model, and we also identified the best-performing parameters based on the Akaike information criterion and the Bayesian information criterion. We tested the parameters and found them significant, and the F-test in the ANOVA table is also significant.


For the Biexponential Robust Nonlinear Regression Model, we estimated four parameters using the parameters mean from the Biexponential Nonlinear Regression Model as an initial value. We then found the mean of the estimated parameters and the best-performing parameters based on the Akaike information criterion and the Bayesian information criterion; we tested the parameters and found them significant, and the F-test in the ANOVA table is also significant.

We also fitted the Biexponential Robust Nonlinear Regression Model using the best parameters from the Biexponential Nonlinear Regression Model as an initial value. We estimated four parameters, then found the mean of the estimated parameters and the best-performing parameters based on the AIC and BIC; the parameter tests are significant, and the F-test from the ANOVA table is also significant.

The last part of the practical side is the robust M-estimate, using two weight functions (Huber and biweight). We estimated two parameters, identified the best-performing parameters based on the Akaike information criterion and the Bayesian information criterion, found the mean of the estimated parameters, and tested the parameters; they are significant, and the ANOVA table confirms the significance of the model.

After comparing all the practical results with each other, we conclude that the Biexponential Robust Nonlinear Regression Model that uses the parameters mean from the Biexponential Nonlinear Regression Model as an initial value is the best model, because it has the minimum AIC and BIC for sample size (100).


List of Contents

Acknowledgements
Abstract
List of Contents
List of Figures
List of Tables
List of Abbreviations

Chapter One: Introduction and Literature Review
1.1: Introduction
1.2: Literature Review
1.3: The Aim of Work
1.4: The Thesis Layout

Chapter Two: Theoretical Part
2.1: Introduction
2.2: Regression Analysis
2.3: The Linear Regression Model
2.3.1: The Least Squares Estimates
2.4: The Nonlinear Regression Model
2.5: Estimating the Parameters of a Nonlinear System
2.6: General Regression Model Assumptions
2.7: Assumptions for Nonlinear Regression
2.8: Some Commonly Used Families of Nonlinear Regression Functions
2.8.1: Exponential Family of Distributions
2.8.1.1: SSbiexp—Biexponential Model
2.8.1.2: Starting Estimates for SSbiexp
2.9: Robust Statistics
2.9.1: Robust Method
2.9.2: Why Robust Statistics are Needed
2.9.3: Main Contributions of Robust Statistics to Modern Statistics
2.10: Classical and Robust Approaches to Statistics
2.11: Some General Definitions
2.11.1: The Breakdown Value
2.11.2: Outliers
2.11.3: Robust Mahalanobis Distance
2.12: Estimation Methods
2.12.1: The Types of Estimations in Robust Method
2.12.1.1: The General Class of M-Estimators
2.13: Weight Functions
2.14: Influence Function of M-Estimates
2.15: Asymptotic Properties of M-Estimates
2.16: Model Selection Criteria

Chapter Three: Practical Part
3.1: Introduction
3.2: Generate Random Variables
3.2.1: Explanatory Variables
3.2.2: Dependent Variable
3.2.3: Continuous Variables
3.2.4: Discrete Variables
3.3: Description and Generation of Data
3.4: Explanation

Chapter Four: Conclusions and Recommendations
4.1: Conclusions
4.2: Recommendations

References
Appendixes
Appendix A
Appendix B
Appendix C
Appendix D
Appendix E
Appendix A on the CD
Appendix B on the CD
Appendix C on the CD
Appendix D on the CD
Abstract in Arabic
Abstract in Kurdish

List of Figures

Figure (1): A biexponential model showing the linear combination of the exponentials (bold line) and its constituent exponential curves
Figure (2): A sample response curve from a first-order open-compartment model
Figure (3): The ρ- and ψ-functions for the MLE, Huber, and biweight estimators
Figure (4): Biexponential Nonlinear Regression Model for sample sizes (25, 50, 100)
Figure (5): Biexponential Robust Nonlinear Regression Model using the parameters mean, for sample sizes (25, 50, 100)
Figure (6): Biexponential Robust Nonlinear Regression Model using the best parameters, for sample sizes (25, 50, 100)

List of Tables

Table (1): The weight functions
Table (2): The mean parameter estimates for the Biexponential Nonlinear Regression Model for each sample size (25, 50, 100)
Table (3): The best parameters for the Biexponential Nonlinear Regression Model for each sample size (25, 50, 100)
Table (4): The Akaike information criterion and Bayesian information criterion values for the Biexponential Nonlinear Regression Model for each sample size (25, 50, 100)
Table (5): The parameter tests for the Biexponential Nonlinear Regression Model for sample sizes (25, 50, 100)
Table (6): The ANOVA table for the Biexponential Nonlinear Regression Model for sample sizes (25, 50, 100)
Table (7): The mean parameter estimates for the Biexponential Robust Nonlinear Regression Model using the parameters mean from the Biexponential Nonlinear Regression Model as an initial value, for each sample size (25, 50, 100)
Table (8): The best parameters for the Biexponential Robust Nonlinear Regression Model using the parameters mean from the Biexponential Nonlinear Regression Model as an initial value, for each sample size (25, 50, 100)
Table (9): The AIC and BIC values for the Biexponential Robust Nonlinear Regression Model using the parameters mean from the Biexponential Nonlinear Regression Model as an initial value, for each sample size (25, 50, 100)
Table (10): The parameter tests for the Biexponential Robust Nonlinear Regression Model using the parameters mean from the Biexponential Nonlinear Regression Model as an initial value, for sample sizes (25, 50, 100)
Table (11): The ANOVA table for the Biexponential Robust Nonlinear Regression Model using the parameters mean from the Biexponential Nonlinear Regression Model as an initial value, for sample sizes (25, 50, 100)
Table (12): The mean parameter estimates for the Biexponential Robust Nonlinear Regression Model using the best parameters from the Biexponential Nonlinear Regression Model as an initial value, for each sample size (25, 50, 100)
Table (13): The best parameters for the Biexponential Robust Nonlinear Regression Model using the best parameters from the Biexponential Nonlinear Regression Model as an initial value, for each sample size (25, 50, 100)
Table (14): The AIC and BIC values for the Biexponential Robust Nonlinear Regression Model using the best parameters from the Biexponential Nonlinear Regression Model as an initial value, for each sample size (25, 50, 100)
Table (15): The parameter tests for the Biexponential Robust Nonlinear Regression Model using the best parameters from the Biexponential Nonlinear Regression Model as an initial value, for sample sizes (25, 50, 100)
Table (16): The ANOVA table for the Biexponential Robust Nonlinear Regression Model using the best parameters from the Biexponential Nonlinear Regression Model as an initial value, for sample sizes (25, 50, 100)
Table (17): The best parameters in robust estimation using the M-estimate (Huber and biweight) functions for each sample size (25, 50, 100)
Table (18): The mean parameter estimates in robust estimation using the M-estimate (Huber and biweight) functions for each sample size (25, 50, 100)
Table (19): The AIC and BIC values in robust estimation using the M-estimate (Huber and biweight) functions for each sample size (25, 50, 100)
Table (20): The parameter tests for robust estimation using the M-estimate (Huber and biweight) functions for sample sizes (25, 50, 100)
Table (21): The ANOVA table for robust estimation using the M-estimate (Huber and biweight) functions for sample sizes (25, 50, 100)
Table (22): Comparison table
Table (23): The symbols in Appendix (A) on the CD
Table (24): The symbols in Appendix (B) on the CD
Table (25): The symbols in Appendix (C) on the CD
Table (26): The symbols in Appendix (D) on the CD

List of Abbreviations

AIC: Akaike Information Criterion
AM: Adaptive Maximum Likelihood Type Estimates
BIC: Bayesian Information Criterion
FSA: Feasible Solution Algorithm
GLM: Generalized Linear Model
IRWLS: Iteratively Re-Weighted Least Squares
L: Laplace Statistics Estimates
L0: Smaller Log Likelihood
LA: Larger Log Likelihood
LMS: Least Median of Squares Estimates
LRTs: Likelihood Ratio Tests
LTS: Least Trimmed Squares Estimates
LWS: Least Winsorized Squares Estimates
M: Maximum Likelihood Type Estimates
MLE: Maximum Likelihood Estimates
NLME: Nonlinear Model Estimates
OLS: Ordinary Least Squares
R: Rank Statistics Estimates
S: Scale Statistics Estimates
WMLE: Weighted Maximum Likelihood Estimates

Chapter One: Introduction, Literature Review and The Aim of Work

1.1 Introduction

The first, and possibly the most important, question about Nonlinear Model Estimates (NLME) is why one would want to use them. This question, of course, also applies to nonlinear regression models in general, as does the answer: interpretability, parsimony, and validity beyond the observed range of the data [11].

An important task in statistics is to find the relationships, if any, that exist in a set of variables when at least one is random, being subject to random fluctuations and possibly measurement error. In regression problems, typically one of the variables, often called the response or dependent variable, is of particular interest and is denoted by $y$. The other variables $x_1, x_2, \ldots, x_k$, usually called the explanatory, regressor, or independent variables, are primarily used to predict or explain the behavior of $y$ [14].

Classical statistics and econometrics are based on parametric models. Typically, assumptions are made on the structural and the stochastic parts of the model, and optimal procedures are derived under these assumptions. Standard examples are least squares estimators in linear models and their extensions, maximum likelihood estimators, and the corresponding likelihood-based tests. Many classical statistical and econometric procedures are well known for not being robust, because their results may depend crucially on the exact stochastic assumptions and on the properties of a few observations in the sample. These procedures are optimal when the assumed model is exactly satisfied, but they are biased and/or inefficient when small deviations from the model are present. The results obtained by classical procedures can therefore be misleading in real data applications [40].

1.2 Literature Review

Neugebauer, Shawn Patrick [26], (1996), in his M.Sc. thesis, focused on analyzing the robustness properties of M-estimators of nonlinear models by studying the effects of deviations from assumptions and normality on these estimators. The results show that M-estimators must depend not only on the influence of the residual but also on the influence of the model.

Ali, Majed Hebatullah [18], (2005), in a Ph.D. thesis, worked on reaching robust estimators of the survival function through studying some robust, classical, and Bayes methods in the contaminated Weibull distribution, assuming three levels of contamination.

Hassan, Farah Essam [22], (2006), in her thesis, worked to find efficient techniques that extend the application of response surface methodology to multiple robust parameter design by using five different functions.

Sadik, Nazik Ja'afar [29], (2006), in her M.Sc. thesis, worked on choosing the estimator which is most immune to asymmetric distributions and to any rate of contamination. This estimator, the Robust Parameter Estimator (RPM), can be considered much better than many available robust estimators.

Mohammed, Mohammed Jassim [24], (2007), did research on robust estimation in fuzzy regression, in which fuzzy robust methods are used to estimate the fuzzy regression model parameters through iteratively weighted fuzzy least squares. The results show that, for all types of fuzzy linear regression, the median sum of squared errors for estimating the model parameters decreases as the sample size increases.

Nayef, Qutaiba Nabeel [25], (2007), in his Ph.D. thesis, worked on estimating the parameters of the multiple linear regression model with missing data in the observations of the X's, or on estimating the missing values first and afterwards estimating the parameters of the model.

Dette, H. et al. [34], (2009), consider the problem of estimating the slope of the expected response in nonlinear regression models; the results are illustrated in exponential regression and rational regression models.

Riazoshams, Hossein [28], (2010), proposed a Robust Multistage Estimator (RME). The heterogeneity of error variances is considered when the variances of the residuals follow a parametric functional form of the predictors. Both the nonlinear model function parameters and the variance model parameters must be robustified. The RME incorporates the MM estimator, the generalized MM estimator, and the robustified chi-squared pseudo-likelihood function in its formulation. The results of the study reveal that the RME is more efficient than the existing methods, and results based on real data show that the robust estimator is more efficient, as indicated by lower standard errors compared with the classical estimator.

Karami, Md. Jamil Hasan [23], (2011), did research dealing with finding design points for nonlinear regression models allowing for the possibility that the fitted model is incorrect, and obtained the minimizing design.

In a recent paper, Yang and Stufken (2012a) gave sufficient conditions for complete classes of designs for nonlinear regression models, while Dette, H. and Schorning, K. [33], (2012), demonstrate that the Yang and Stufken result is a simple consequence of the fact that boundary points of moment spaces generated by Chebyshev systems possess unique representations.

Müller, Ursula U. [38], (2012), in her article, considers linear and nonlinear regression with a response variable that is allowed to be "missing at random"; the only structural assumptions on the distribution of the variables are that the errors have mean zero and are independent of the covariates. The independence assumption is important: the idea is to write the response density as a convolution integral which can be estimated by an empirical version, with a weighted residual-based kernel estimator plugged in for the error density. It is proved that the estimator is efficient.

Vilhjálmsdóttir, Elín Ösp [31], (2013), in her thesis, used measurements from the Intravenous Glucose Tolerance Test (IVGTT) and Bergman's deterministic minimal model (ODE) to estimate insulin sensitivity based on a nonlinear mixed-effects model.

Wei, J. et al. [41], (2013), in their paper, note that the primary analysis of case–control studies focuses on the relationship between the disease D and a set of covariates of interest (Y, X). A secondary application of the case–control study, often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated owing to the case–control sampling, where the regression of Y on X is different from what it is in the population. They show how to estimate the regression parameters consistently even if the assumed model for Y given X is incorrect, and thus the estimates are model robust.

Bandyopadhyay, Parag [20], (2014), notes that ordinary least squares (OLS) is the most commonly used regression method, but that OLS assumes that (a) errors are present only in the dependent variable (pressure) and (b) these errors follow a Gaussian distribution. In this research report, the development of methods that address the possibility of errors in both the pressure and time variables is discussed first. These methods were tested and compared with OLS and found to provide more accurate estimates in cases where random time errors are present in the data. They were then modified to consider errors in breakpoint flow-rate measurement. OLS parameter estimates for datasets with non-Gaussian error distributions are shown to be biased. A general method was developed, based on maximum likelihood estimation theory, that estimates the error distribution iteratively and uses this information to estimate the parameters; it was compared with OLS and found to be more accurate for cases with non-Gaussian error distributions.

Chapman, Cole Garrett [21], (2014), in his thesis, claimed that simulation results show that while linear two-stage least squares (2SLS) generates unbiased and consistent estimates of local average treatment effects (LATE) across all scenarios, nonlinear two-stage residual inclusion (2SRI) generates unbiased estimates of average treatment effects (ATE) only under very restrictive settings. If marginal patients are unique in terms of treatment effectiveness, then nonlinear 2SRI cannot be expected to generate unbiased or consistent estimates of ATE unless all factors related to treatment-effect heterogeneity are fully measured.

Kadhum, Mohammed Mansour [35], (2014), in his paper, presents the predictive functional control of the autoclave expansion of Portland cement using a nonlinear regression equation; it was found that multiple linear regressions are very suitable for predicting the autoclave expansion of Portland cement.

Karami, J. H. and Wiens, D. P. [36], (2014), in their paper, outline the construction of robust static designs for nonlinear regression models. The designs are robust in that they afford protection from increases in the mean squared error resulting from misspecifications of the model fitted by the experimenter. This robustness is obtained through a combination of minimax and Bayesian procedures.

Ma, T. et al. [37], (2014), provide a robust alternative to the ordinary least squares nonlinear regression method. This new robust nonlinear regression method can provide accurate parameter estimates when outliers and/or influential observations are present. Two algorithms were developed to perform the robust nonlinear estimation of the model parameters. The method has influence functions bounded in both the response and the explanatory-variable directions, a high asymptotic breakdown point, and high efficiency, and it provides a powerful alternative to the least squares method.

Yu, Chun [32], (2014), in this thesis, proposes a robust mixture model via mean-shift penalization and a robust mixture regression via mean-shift penalization, to achieve simultaneous outlier detection and parameter estimation. The proposed methods show outstanding performance in both identifying outliers and estimating the parameters.

"After all the research that I have done, I couldn't find a Thesis about Robust Biexponential Nonlinear Model in Iraq and outside Iraq. So I can confess that this is the only thesis about Robust Biexponential Nonlinear Model"

1.3 The Aim of Work

The aim of this thesis is to compare the biexponential model with the robust biexponential model, using simulation, for three different sample sizes (25, 50, 100).

1.4 The Thesis Layout

The rest of the thesis is organized as follows. Chapter Two discusses nonlinear regression models, especially the biexponential model, and the robust method. Chapter Three, the practical part, presents the experimental results, makes comparisons between the algorithms, and discusses the results. Chapter Four contains the main conclusions and recommendations.


Chapter Two: Theoretical Part

2.1 Introduction

Linear regression is a powerful method for analyzing data described by models which are linear in the parameters. Often, however, a researcher has a mathematical expression which relates the response to the predictor variables, and these models are usually nonlinear in the parameters. In such cases, linear regression techniques must be extended, which introduces considerable complexity. In this chapter we explain the theory of nonlinear regression and the robust method [2] [27].

2.2 Regression Analysis [2]

Regression analysis is a commonly used method for obtaining a prediction function for predicting the values of a response variable $y$ using predictor variables $x_1, \ldots, x_k$. We begin by discussing the concept of subpopulations, which plays a very important role in defining the regression function of $y$ on $x_1, \ldots, x_k$. We first consider a two-variable population $\{(x, y)\}$ and suppose that we want to predict the value of $y$ based on the value of $x$ for any population item.

We begin with a brief review of linear regression, because a thorough grounding in linear regression is fundamental to understanding nonlinear regression [2].

2.3 The Linear Regression Model [2] [27]

Linear regression provides estimates and other inferential results for the parameters $\beta = (\beta_1, \ldots, \beta_P)^T$ in the model

$$Y_n = f(x_n, \beta) + Z_n \tag{2.1}$$

In this model, the random variable $Y_n$, which represents the response for case $n$, has a deterministic part and a stochastic part. The deterministic part, $f(x_n, \beta)$, depends upon the parameters $\beta$ and upon the predictor or regressor variables $x_n$. The stochastic part, represented by the random variable $Z_n$, is a disturbance which perturbs the response for that case. The model for $N$ cases can be written

$$Y = X\beta + Z \tag{2.2}$$

where:
$Y$ is the $N$-vector of random variables representing the responses we may get;
$X$ is the $N \times P$ matrix of regressor variables,

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1P} \\ x_{21} & x_{22} & \cdots & x_{2P} \\ \vdots & \vdots & & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{NP} \end{bmatrix}$$

$Z$ is the $N$-vector of random variables representing the disturbances.

The deterministic part, $X\beta$, a function of the parameters and the regressor variables, gives the mathematical model or the model function for the responses. Since a nonzero mean for $Z$ can be incorporated into the model function, we assume that

$$E[Z] = 0 \tag{2.3}$$

or, equivalently, $E[Y] = X\beta$. We therefore call $X\beta$ the expectation function for the regression model. The matrix $X$ is called the derivative matrix, since the $(n, p)$ term is the derivative of the $n$th row of the expectation function with respect to the $p$th parameter. Note that for linear models, derivatives with respect to any of the parameters are independent of all the parameters.

If we further assume that $Z$ is normally distributed with

$$E[Z] = 0, \qquad \operatorname{Var}(Z) = E[ZZ^T] = \sigma^2 I \tag{2.4}$$

where $I$ is an $N \times N$ identity matrix, then the joint probability density function for $Y$, given $\beta$ and the variance $\sigma^2$, is

$$p(y \mid \beta, \sigma^2) = \left(2\pi\sigma^2\right)^{-N/2} \exp\!\left(\frac{-\|y - X\beta\|^2}{2\sigma^2}\right) \tag{2.5}$$

where the double vertical bars denote the length of a vector. When provided with a derivative matrix $X$ and a vector of observed data $y$, we wish to make inferences about $\sigma^2$ and the parameters $\beta$.

2.3.1 The Least Squares Estimates [2] [27]

The likelihood function, or more simply the likelihood, $L(\beta, \sigma \mid y)$, for $\beta$ and $\sigma$ is identical in form to the joint probability density (2.5), except that it is regarded as a function of the parameters conditional on the observed data, rather than as a function of the responses conditional on the values of the parameters. Suppressing the constant $(2\pi)^{-N/2}$, we write:

$$L(\beta, \sigma \mid y) \propto \sigma^{-N} \exp\!\left(\frac{-\|y - X\beta\|^2}{2\sigma^2}\right) \tag{2.6}$$

The likelihood is maximized with respect to $\beta$ when the residual sum of squares, given by

$$S(\beta) = \|y - X\beta\|^2 = \sum_{n=1}^{N}\left(y_n - \sum_{p=1}^{P} x_{np}\beta_p\right)^2 \tag{2.7}$$

is a minimum. Thus the maximum likelihood estimate $\hat{\beta}$ is the value of $\beta$ which minimizes $S(\beta)$. This $\hat{\beta}$ is called the ordinary least squares estimate and can be written as:

$$\hat{\beta} = (X^T X)^{-1} X^T y \tag{2.8}$$

Least squares estimates can also be derived by using sampling theory, since the least squares estimator is the minimum variance unbiased estimator for $\beta$, or by using a Bayesian approach with a noninformative prior density on $\beta$ and $\sigma$. In the Bayesian approach, $\hat{\beta}$ is the mode of the marginal posterior density function for $\beta$. All three of these methods of inference, the likelihood approach, the sampling theory approach, and the Bayesian approach, produce the same point estimates for $\beta$. As we will see shortly, they also produce similar regions of "reasonable" parameter values. First, however, it is important to realize that the least squares estimates are only appropriate when the model (2.2) and the assumptions on the disturbance term, (2.3) and (2.4), are valid.
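As a small numerical illustration of the closed-form estimate (2.8) in R (the software used in this thesis), here is a minimal sketch; the data are simulated purely for the example:

```r
set.seed(1)
N <- 25
X <- cbind(1, runif(N, 0, 10))   # N x P derivative matrix: intercept plus one regressor
beta <- c(2, 0.5)                # "true" parameters for the simulation
y <- as.vector(X %*% beta + rnorm(N, sd = 1))   # responses with normal disturbances

beta_hat <- solve(t(X) %*% X, t(X) %*% y)       # equation (2.8)
beta_hat
coef(lm(y ~ X[, 2]))             # the same estimates from R's built-in lm()
```

The direct matrix formula and lm() agree, since lm() solves the same least squares problem (via a numerically more stable decomposition).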

2.4 The Nonlinear Regression Model [2] [12] [27]

Nonlinear regression may be a confined and narrow topic within statistics. However, the use of nonlinear regression is seen in many applied sciences, ranging from biology to engineering to medicine and pharmacology. A nonlinear regression model can be written:

$$Y_n = f(x_n, \theta) + Z_n \tag{2.9}$$

where $f$ is the expectation function and $x_n$ is a vector of associated regressors for the $n$th case. This model is of exactly the same form as (2.1) except that the expected responses are nonlinear functions of the parameters. That is, for nonlinear models, at least one of the derivatives of the expectation function with respect to the parameters depends on at least one of the parameters. To emphasize the distinction between linear and nonlinear models, we use $\theta$ for the parameters in a nonlinear model. As before, we use $P$ for the number of parameters. When analyzing a particular set of data we consider the vectors $x_n$, $n = 1, \ldots, N$, as fixed and concentrate on the dependence of the expected responses on $\theta$. We create the $N$-vector $\eta(\theta)$ with $n$th element $\eta_n(\theta) = f(x_n, \theta)$ and write the nonlinear regression model as

$$Y = \eta(\theta) + Z \tag{2.10}$$

with $Z$ assumed to have a spherical normal distribution; that is,

$$E[Z] = 0, \qquad \operatorname{Var}(Z) = E[ZZ^T] = \sigma^2 I$$

as in the linear model.

2.5 Estimating the Parameters of a Nonlinear System [4]

In some nonlinear problems, it is most convenient to write down the normal equations

$$\sum_{i=1}^{n}\left\{y_i - f(x_i, \hat{\theta})\right\}\left[\frac{\partial f(x_i, \theta)}{\partial \theta_p}\right]_{\theta=\hat{\theta}} = 0, \qquad p = 1, 2, \ldots, P \tag{2.11}$$

and develop an iterative technique for solving them. Whether this works satisfactorily or not depends on the form of the equations and the iterative method used. We shall mention three of these: (a) linearization, (b) steepest descent, and (c) Marquardt's compromise.

(a) Linearization

The linearization (Taylor series) method uses the results of linear least squares in a succession of stages. Suppose the postulated model is of the form

$$y_i = f(x_i, \theta) + \epsilon_i \tag{2.12}$$

Let $\theta_{10}, \theta_{20}, \ldots, \theta_{P0}$ be initial values for the parameters $\theta_1, \theta_2, \ldots, \theta_P$. These initial values may be intelligent guesses or preliminary estimates based on whatever information is available. If we carry out a Taylor series expansion of $f(x_i, \theta)$ about the point $\theta_0 = (\theta_{10}, \theta_{20}, \ldots, \theta_{P0})^T$ and curtail the expansion at the first derivatives, we can say that, approximately, when $\theta$ is close to $\theta_0$,

$$f(x_i, \theta) \approx f(x_i, \theta_0) + \sum_{p=1}^{P}\left[\frac{\partial f(x_i, \theta)}{\partial \theta_p}\right]_{\theta=\theta_0}\!\left(\theta_p - \theta_{p0}\right) \tag{2.13}$$

If we set

$$\beta_p^0 = \theta_p - \theta_{p0}, \qquad Z_{ip}^0 = \left[\frac{\partial f(x_i, \theta)}{\partial \theta_p}\right]_{\theta=\theta_0} \quad \text{[for the nonlinear case]} \tag{2.14}$$

we can see that Equation (2.12) is of the form, approximately,

$$y_i - f(x_i, \theta_0) = \sum_{p=1}^{P} \beta_p^0 Z_{ip}^0 + \epsilon_i \tag{2.15}$$

We can now estimate the parameters $\beta_p^0$, $p = 1, 2, \ldots, P$, by applying linear least squares theory. If we write

$$Z_0 = \left\{Z_{ip}^0\right\}, \qquad i = 1, 2, \ldots, n; \; p = 1, 2, \ldots, P \tag{2.16}$$

and

$$y_0 = y - f_0, \qquad f_0 = \bigl(f(x_1, \theta_0), f(x_2, \theta_0), \ldots, f(x_n, \theta_0)\bigr)^T \tag{2.17}$$

say, with an obvious notation, then the estimate of $\beta^0 = (\beta_1^0, \beta_2^0, \ldots, \beta_P^0)^T$ is given by

$$b_0 = (Z_0^T Z_0)^{-1} Z_0^T (y - f_0) \tag{2.18}$$

The vector $b_0$ will minimize the sum of squares

$$SS(\theta) = \sum_{i=1}^{n}\left\{y_i - f(x_i, \theta_0) - \sum_{p=1}^{P} \beta_p^0 Z_{ip}^0\right\}^2 \tag{2.19}$$

with respect to the $\beta_p^0$, $p = 1, 2, \ldots, P$. Let us write $\theta_{p1} = \theta_{p0} + b_{p0}$, where $b_{p0}$ is the $p$th element of $b_0$; then the estimates $\theta_{p1}$ can be thought of as the revised best estimates of $\theta_p$.

We can now place the revised estimates $\theta_{p1}$ in the same roles as were played above by the values $\theta_{p0}$ and go through exactly the same procedure described above by Equations (2.13) through (2.19), but replacing all the zero subscripts by ones. This will lead to another set of revised estimates $\theta_{p2}$, and so on. In vector form, extending the previous notation in an obvious way, we can write

$$\theta_{j+1} = \theta_j + b_j \tag{2.20}$$

where

$$b_j = (Z_j^T Z_j)^{-1} Z_j^T (y - f_j) \tag{2.21}$$

This iterative process is continued until the solution converges, that is, until in successive iterations $j$ and $j+1$,

$$\left|\frac{\theta_{p(j+1)} - \theta_{pj}}{\theta_{pj}}\right| < \delta, \qquad p = 1, 2, \ldots, P$$

where $\delta$ is some prespecified small amount. At each stage of the iterative procedure, $SS(\theta_j)$ can be evaluated to see if a reduction in its value has actually been achieved.

The linearization procedure has possible drawbacks for some problems in that:
1. It may converge very slowly; that is, a very large number of iterations may be required before the solution stabilizes, even though the sum of squares $SS(\theta_j)$ may decrease consistently as $j$ increases. This sort of behavior is not common but can occur.
2. It may oscillate widely, continually reversing direction and often increasing, as well as decreasing, the sum of squares. Nevertheless, the solution may stabilize eventually.
3. It may not converge at all, and may even diverge, so that the sum of squares increases iteration after iteration without bound.

(b) Steepest descent

The steepest descent method involves concentration on the sum-of-squares function $S(\theta)$ and the use of an iterative process to find the minimum of this function. The basic idea is to move, from an initial point $\theta_0$, along the vector with components

$$-\frac{\partial S(\theta)}{\partial \theta_1}, \; -\frac{\partial S(\theta)}{\partial \theta_2}, \; \ldots, \; -\frac{\partial S(\theta)}{\partial \theta_P}$$

whose values change continuously as the path is followed. One way of achieving this in practice, without evaluating functional derivatives, is to estimate the vector slope components at various places on the surface $S(\theta)$ by fitting planar approximating functions. This is a technique of great value in experimental work for finding stationary values of response surfaces. The procedure is as follows. Starting in one particular region of the $\theta$ space (or the parameter space, as it is called), several runs are made by selecting $k$ (say) combinations of levels of $\theta_1, \theta_2, \ldots, \theta_P$ and evaluating $S(\theta)$ at those combinations of levels. The runs are usually chosen in the pattern of a two-level factorial design. Using the evaluated $S(\theta)$ values as observations of a dependent variable and the combinations of levels of $\theta_1, \theta_2, \ldots, \theta_P$ as the observations of corresponding predictor variables, we fit the model

$$\text{Observed } S(\theta) = \beta_0 + \sum_{p=1}^{P} \beta_p\,\frac{\theta_p - \bar{\theta}_p}{s_p}$$

by standard least squares. Here $\bar{\theta}_p$ denotes the mean of the levels of $\theta_p$ used in the runs, and $s_p$ is a scaling factor chosen so that the scaled levels have a constant sum of squares. This implies that we believe the true surface defined by $S(\theta)$ can be approximately represented by a plane in the region of parameter space in which we made our runs. The estimated coefficients $b_1, b_2, \ldots, b_P$ indicate the direction of steepest ascent, so the negatives of these, namely $-b_1, -b_2, \ldots, -b_P$, indicate the direction of steepest descent. This means that, as long as the linear approximation is realistic, the maximum decrease in $S(\theta)$ will be obtained by moving along the line which contains points $(\theta_1, \theta_2, \ldots, \theta_P)$ such that

$$\frac{\theta_p - \bar{\theta}_p}{s_p} \propto -b_p$$

or, denoting the proportionality factor by $\lambda$, points such that

$$\theta_p = \bar{\theta}_p - \lambda\, s_p\, b_p, \qquad p = 1, 2, \ldots, P$$

By giving $\lambda$ selected values, the path of steepest descent can be followed. A number of values of $\lambda$ are selected, and the path of steepest descent is followed as long as $S(\theta)$ decreases. When it does not, another experimental design is set down and the process is continued until it converges to the value $\hat{\theta}$ that minimizes $S(\theta)$.

While theoretically the steepest descent method will converge, it may do so in practice with agonizing slowness after some rapid initial progress. Slow convergence is particularly likely when the $S(\theta)$ contours are attenuated and banana-shaped (as they often are in practice), and it happens when the path of steepest descent zigzags slowly up a narrow ridge, each iteration bringing only a slight reduction in $S(\theta)$. This is less of a problem in laboratory-type investigations, where human intervention can be permitted at each stage of the calculation, since then the experimental design can be revised, the scales of the independent variables can be changed, and so on. This difficulty has led to modifications of the basic steepest descent procedure when used for nonlinear fitting. One possible modification is to use a second-order approximation rather than a first-order or planar approximation. While this provides better graduation of the true surface, it also requires additional computation in the iterative procedures. A further disadvantage of the steepest descent method is that it is not scale invariant: the indicated direction of movement changes if the scales of the variables are changed, unless all are changed by the same factor. The steepest descent method is, on the whole, slightly less favored than the linearization method but will work satisfactorily for many nonlinear problems, especially if modifications are made to the basic technique. On the whole, steepest descent works well when the current position in the $\theta$-space is far from the desired $\hat{\theta}$, which is usually the case in the early iterations. As $\hat{\theta}$ is approached, the "zigzagging" behavior of steepest descent previously described is likely, and linearization tends to work better.

(c) Marquardt's compromise

A method developed by Marquardt (1963) enlarged considerably the number of practical problems that can be tackled by nonlinear estimation. Marquardt's method represents a compromise between the linearization (or Taylor series) method and the steepest descent method and appears to combine the best features of both while avoiding their most serious limitations. It is good in that it almost always converges and does not "slow down" as the steepest descent method often does. However, as we again emphasize, the other methods will work perfectly well on many practical problems that do not violate the limitations of the methods. In general, we must keep in mind that, given a particular method, a problem can usually be constructed to defeat it. Alternatively, given a particular problem and a suggested method, ad hoc modifications can often provide quicker convergence than an alternative method. The Marquardt method is the one that appears to work well in many circumstances and thus is a sensible practical choice. For the reasons stated above, no method can be called "best" for all nonlinear problems.

The idea of Marquardt's method can be explained briefly as follows. Suppose that we start from a certain point in the parameter space, $\theta_0$. If the method of steepest descent is applied, a certain vector direction, $\delta_g$, where $g$ stands for gradient, is obtained for movement away from the initial point. Because of attenuation in the $S(\theta)$ contours, this may be the best local direction in which to move to attain smaller values of $S(\theta)$, but it may not be the best overall direction. However, the best direction must be within 90º of $\delta_g$, or else $S(\theta)$ will get larger locally. The linearization (or Taylor series) method leads to another correction vector, $\delta_T$, given by a formula like Equation (2.18). Marquardt found that for a number of practical problems he studied, the angle, $\gamma$ say, between $\delta_g$ and $\delta_T$ fell in the range 80º to 90º. In other words, the two directions were almost at right angles! The Marquardt algorithm provides a method for interpolating between the vectors $\delta_g$ and $\delta_T$ and for obtaining a suitable step size as well.
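To make the linearization iteration (2.20)-(2.21) concrete, here is a minimal Gauss-Newton sketch in R. The single-exponential model, the simulated data, and the starting values are illustrative assumptions, not taken from the thesis itself:

```r
set.seed(2)
x <- seq(0, 5, length.out = 30)
y <- 10 * exp(-0.8 * x) + rnorm(30, sd = 0.3)   # simulated responses

f <- function(theta, x) theta[1] * exp(-theta[2] * x)    # model function
Z <- function(theta, x) cbind(exp(-theta[2] * x),        # derivative matrix Z_j:
                              -theta[1] * x * exp(-theta[2] * x))  # one column per parameter

theta <- c(5, 0.3)   # theta_0: rough initial values
for (j in 1:50) {
  r  <- y - f(theta, x)                    # residuals y - f_j
  Zj <- Z(theta, x)
  b  <- solve(t(Zj) %*% Zj, t(Zj) %*% r)   # equation (2.21)
  theta_new <- theta + as.vector(b)        # equation (2.20)
  conv <- max(abs((theta_new - theta) / theta)) < 1e-6   # convergence test
  theta <- theta_new
  if (conv) break
}
theta
```

In practice one would rarely hand-code this loop: R's nls() uses a refined Gauss-Newton algorithm by default, and nlsLM() in the minpack.lm package implements the Levenberg-Marquardt compromise described above.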

2.6 General Regression Model Assumptions [42] [43] [45]

1. The random variable $u_i$ for any observed value is normally distributed, its expected value (average) equals zero, $E(u_i) = 0$, and its variance equals a fixed amount, $\operatorname{Var}(u_i) = \sigma^2$, that does not depend on the values of the explanatory variables.

2. The distribution of $u_i$ around its zero mean is identically normally distributed at each value of the independent variables $x_i$.

3. There is no correlation between the random variable for one observed value and the random variable for any other observed value. In other words, different values of $u$ are independent of each other, so the expectation of their product is zero, $E(u_i u_j) = 0$ for $i \neq j$. Likewise, for different periods, $u_t$ for period $t$ does not depend on the value of $u_{t-1}$ for period $t-1$.

4. There is no correlation between $u_i$ and any of the independent variables; the expected value of their product is zero, $E(u_i x_i) = 0$.

5. There is no correlation between the independent variables themselves; that is, any specific variables are independent of each other (no multicollinearity).

6. The relationship whose parameters are to be estimated has been identified: the identification is mathematically distinctive, the variables contained in one relationship differ from the variables in another relationship in the same field of research, and the researcher is then confident that the parameters obtained actually correspond to the relationship under study.

2.7 Assumptions for Nonlinear Regression [7]

The most commonly used set of assumptions for nonlinear regression is the same as the assumptions below for linear regression, the only exception being that the regression function is a nonlinear function of the unknown parameters instead of a linear function of the parameters. The $(y, x_1, \ldots, x_k)$ variable population is the study population under investigation.

(A): (Population) Assumptions
1. The mean of the subpopulation of $y$ values determined by $x_1, \ldots, x_k$ is a nonlinear function of unknown parameters. At times we find it useful to write $f(x_1, \ldots, x_k; \theta_1, \ldots, \theta_P)$ for the regression function to emphasize the fact that it depends on the parameters $\theta_1, \ldots, \theta_P$.
2. The standard deviations of the $y$ values are the same for each subpopulation determined by specified values of the predictor variables $x_1, \ldots, x_k$. This common standard deviation of all the subpopulations is denoted by $\sigma_{y \mid x_1, \ldots, x_k}$, but for simplicity of notation we simply write $\sigma$ when there is no possibility of confusion.
3. Each subpopulation of $y$ values, determined by fixed values of the predictor variables, is normally distributed.

(B): (Sample) Assumptions
1. The sample of size $n$ is selected either by simple random sampling or by sampling with preselected values of $x_1, \ldots, x_k$.
2. All sample values $(y_i, x_{1i}, \ldots, x_{ki})$, for $i = 1, \ldots, n$, are observed without error.

2.8 Some Commonly Used Families of Nonlinear Regression Functions [7]

While simple and multiple linear regression functions are adequate for modeling a wide variety of relationships between response variables and predictor variables, many situations require nonlinear functions. Certain types of nonlinear regression functions have served, and will continue to serve, as useful models for describing various physical and biological systems. We list a few of these situations for the case of a single predictor variable:

1- Exponential
2- Quadratic
3- Cubic
4- Logarithmic
5- Inverse
6- Compound
7- Logistic
8- Power
9- Weibull distribution
10- Poisson distribution

We used the Biexponential Nonlinear Regression Model (SSbiexp) in this thesis, where SSbiexp is a self-starting model that evaluates the biexponential model function and its gradient. It has an initial attribute that creates initial estimates of the parameters $A_1$, $lrc_1$, $A_2$, and $lrc_2$ [1]. We can represent the biexponential model as follows (see equation (2.24) in Section 2.8.1.1) [1] [5] [12].

2.8.1 Exponential Family of Distributions [3]

Consider a single random variable $Y$ whose probability distribution depends on a single parameter $\theta$. The distribution belongs to the exponential family if it can be written in the form

$$f(y; \theta) = s(y)\, t(\theta)\, e^{a(y)\, b(\theta)} \tag{2.22}$$

where $a$, $b$, $s$ and $t$ are known functions. Notice the symmetry between $y$ and $\theta$. This is emphasized if equation (2.22) is rewritten as

$$f(y; \theta) = \exp\bigl[\, a(y)\, b(\theta) + c(\theta) + d(y) \,\bigr] \tag{2.23}$$

where $s(y) = \exp d(y)$ and $t(\theta) = \exp c(\theta)$.

If $a(y) = y$, the distribution is said to be in canonical (that is, standard) form, and $b(\theta)$ is sometimes called the natural parameter of the distribution. If there are other parameters, in addition to the parameter of interest $\theta$, they are regarded as nuisance parameters forming parts of the functions $a$, $b$, $c$ and $d$, and they are treated as if they are known. Many well-known distributions belong to the exponential family. For example, the Poisson, normal and binomial distributions can all be written in the canonical form.
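As a short worked example of the canonical form (the example is standard and not spelled out in the original text), the Poisson distribution with mean $\lambda$ can be rearranged as

$$f(y; \lambda) = \frac{\lambda^{y} e^{-\lambda}}{y!} = \exp\bigl[\, y \log \lambda - \lambda - \log y! \,\bigr]$$

so that $a(y) = y$ (canonical form), the natural parameter is $b(\lambda) = \log \lambda$, and $c(\lambda) = -\lambda$, $d(y) = -\log y!$.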

2.8.1.1 SSbiexp—Biexponential Model [11] [12]

The biexponential model is a linear combination of two negative exponential terms:

$$y = A_1 \exp\bigl(-e^{lrc_1}\, x\bigr) + A_2 \exp\bigl(-e^{lrc_2}\, x\bigr) \tag{2.24}$$

The parameters $A_1$ and $A_2$ are the coefficients of the linear combination, and the parameters $lrc_1$ and $lrc_2$ are the logarithms of the rate constants. The two sets of parameters $(A_1, lrc_1)$ and $(A_2, lrc_2)$ are exchangeable, meaning that the values of the pairs can be exchanged without changing the value of $y$. We create an identifiable parameterization by requiring that $lrc_1 > lrc_2$.

A representative biexponential model, along with its constituent exponential curves, is shown in Figure (1).

Figure (1): A biexponential model showing the linear combination of the exponentials (bold line) and its constituent exponential curves (dashed line and dotted line) [11].
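As a minimal sketch of how this model is fitted in R, the code below simulates biexponential data (the parameter values are invented for illustration) and fits SSbiexp() from the stats package with nls():

```r
set.seed(3)
x <- seq(0, 8, length.out = 50)
# true values: A1 = 6, lrc1 = 0.9, A2 = 2.5, lrc2 = -1 (so lrc1 > lrc2)
y <- 6 * exp(-exp(0.9) * x) + 2.5 * exp(-exp(-1) * x) + rnorm(50, sd = 0.1)
dat <- data.frame(x = x, y = y)

# SSbiexp() supplies both the model function and its starting estimates
fit <- nls(y ~ SSbiexp(x, A1, lrc1, A2, lrc2), data = dat)
summary(fit)        # estimates and significance tests for A1, lrc1, A2, lrc2
AIC(fit); BIC(fit)  # the model-selection criteria used in this thesis
```

Because SSbiexp() is self-starting, no explicit starting values need to be passed to nls(); they are produced by the curve-peeling algorithm described in the next subsection.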

2.8.1.2 Starting Estimates for SSbiexp [11]

The starting estimates for the biexponential model are determined by curve peeling, which involves:

1. Choosing half the data with the largest $x$ values and fitting the simple linear regression model
$$\log y = \beta_0 + \beta_1 x$$
2. Setting $lrc_2 = \log(-\hat{\beta}_1)$ and $A_2 = \exp(\hat{\beta}_0)$ and calculating the residuals
$$r_i = y_i - A_2 \exp\bigl(-e^{lrc_2}\, x_i\bigr)$$
for the half of the data with the smallest $x$ values, then fitting the simple linear regression model
$$\log r = \gamma_0 + \gamma_1 x$$
3. Setting $lrc_1 = \log(-\hat{\gamma}_1)$ and using an algorithm for partially linear models to refine the estimates of $lrc_1$ and $lrc_2$. Because the model is linear in $A_1$ and $A_2$, the only starting estimates used in this step are those for $lrc_1$ and $lrc_2$, and the iterations are with respect to these two parameters.

The estimates obtained in this way are the final nonlinear regression estimates.

Figure (2): A sample response curve from a first-order open-compartment model [11].
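The curve-peeling starting estimates themselves can be inspected in R with getInitial(); a one-line sketch, reusing the simulated data frame dat from the SSbiexp example above:

```r
# Starting estimates produced by the curve-peeling algorithm just described
getInitial(y ~ SSbiexp(x, A1, lrc1, A2, lrc2), data = dat)
```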

2.9 Robust Statistics

2.9.1 Robust Method [9]

Statistical inferences are based only in part upon the observations. An equally important base is formed by prior assumptions about the underlying situation. Even in the simplest cases, there are explicit or implicit assumptions about randomness and independence, about distributional models, perhaps prior distributions for some unknown parameters, and so on. These assumptions are not supposed to be exactly true; they are mathematically convenient rationalizations of often fuzzy knowledge or belief. As in every other branch of applied mathematics, such rationalizations or simplifications are vital, and one justifies their use by appealing to a vague continuity or stability principle: a minor error in a mathematical model should cause only a small error in the final conclusions. Unfortunately, this does not always hold. During the past decades, one has become increasingly aware that some of the most common statistical procedures (in particular, those optimized for an underlying normal distribution) are excessively sensitive to seemingly minor deviations from the assumptions, and a plethora of alternative "robust" procedures have been proposed.

The word "robust" is loaded with many, sometimes inconsistent, connotations. We use it in a relatively narrow sense: for our purposes, robustness signifies insensitivity to small deviations from the assumptions. Robust regression is an alternative to least squares regression when the data are contaminated with outliers or influential observations, and it can also be used for the purpose of detecting influential observations. Primarily, we are concerned with distributional robustness: the shape of the true underlying distribution deviates slightly from the assumed model, usually the Gaussian law. This is both the most important case and the best understood one. Much less is known about what happens when the other standard assumptions of statistics are not quite satisfied, and about the appropriate safeguards in these other cases.
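As a bridge to the robust biexponential fits in the practical part, one concrete option for robust nonlinear regression in R is M-estimation via nlrob() from the robustbase package. The sketch below assumes robustbase is installed and reuses the simulated data frame dat and the classical fit object fit from Section 2.8.1.1, following the thesis's strategy of starting the robust fit from the classical estimates:

```r
library(robustbase)

# Robust biexponential M-fit (iteratively reweighted least squares),
# started from the classical nls() estimates
rfit <- nlrob(y ~ A1 * exp(-exp(lrc1) * x) + A2 * exp(-exp(lrc2) * x),
              data  = dat,
              start = as.list(coef(fit)))   # initial values from the nls() fit
summary(rfit)
```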

2.9.2 Why Robust Statistics are Needed [10]

All statistical methods rely explicitly or implicitly on a number of assumptions. These assumptions generally aim at formalizing what the statistician knows or conjectures about the data analysis or statistical modeling problem he or she is faced with, and at the same time aim at making the resulting model manageable from the theoretical and computational points of view. However, it is generally understood that the resulting formal models are simplifications of reality and that their validity is at best approximate. The most widely used model formalization is the assumption that the observed data have a normal (Gaussian) distribution. This assumption has been present in statistics for two centuries, and has been the framework for all the classical methods in regression, analysis of variance and multivariate analysis. There have been attempts to justify the assumption of normality with theoretical arguments, such as the central limit theorem. These attempts, however, are easily proven wrong. The main justification for assuming a normal distribution is that it gives an approximate representation to many real data sets, and at the same time is theoretically quite convenient because it allows one to derive explicit formulas for optimal statistical methods such as maximum likelihood and likelihood ratio tests, as well as the sampling distribution of inference quantities such as t-statistics. We refer to such methods as classical statistical methods, and note that they rely on the assumption that normality holds exactly. The classical statistics are, by modern computing standards, quite easy to compute. Unfortunately, theoretical and computational convenience do not always deliver an adequate tool for the practice of statistics and data analysis.

Some observations follow a different pattern, or no pattern at all. In the case when the randomness in the model is assigned to observational errors, as in astronomy, which was the first instance of the use of the least-squares method, the reality is that while the behavior of many sets of data appeared rather normal, this held only approximately, with the main discrepancy being that a small proportion of observations were quite atypical by virtue of being far from the bulk of the data. Behavior of this type is common across the entire spectrum of data analysis and statistical modeling applications. Such atypical data are called outliers, and even a single outlier can have a large distorting influence on a classical statistical method that is optimal under the assumption of normality or linearity. The kind of "approximately" normal distribution that gives rise to outliers is one that has a normal shape in the central region, but has tails that are heavier or "fatter" than those of a normal distribution. One might naively expect that if such approximate normality holds, then the results of using normal distribution theory would also hold approximately.

This is unfortunately not the case. If the data are assumed to be normally distributed but their actual distribution has heavy tails, then estimates based on the maximum likelihood principle not only cease to be "best" but may have unacceptably low statistical efficiency (unnecessarily large variance) if the tails are symmetric, and may have very large bias if the tails are asymmetric. Furthermore, for the classical tests the level may be quite unreliable and the power quite low, and for the classical confidence intervals the confidence level may be quite unreliable and the expected confidence interval length may be quite large.

The robust approach to statistical modeling and data analysis aims at deriving methods that produce reliable parameter estimates and associated tests and confidence intervals, not only when the data follow a given distribution exactly. A more informal, data-oriented characterization of robust methods is that they fit the bulk of the data well: if the data contain no outliers, the robust method gives approximately the same results as the classical method, while if a small proportion of outliers are present, the robust method gives approximately the same results as the classical method applied to the "typical" data. As a consequence of fitting the bulk of the data well, robust methods provide a very reliable means of detecting outliers, even in high-dimensional multivariate situations.

We note that one approach to dealing with outliers is the diagnostic approach. Diagnostics are statistics, generally based on classical estimates, that aim at giving numerical or graphical clues for the detection of data departures from the assumed model. There is a considerable literature on outlier diagnostics, and a good outlier diagnostic is clearly better than doing nothing. However, these methods present two drawbacks. One is that they are in general not as reliable for detecting outliers as examining departures from a robust fit to the data. The other is that, once suspicious observations have been flagged, the actions to be taken with them remain the analyst's personal decision, and thus there is no objective way to establish the properties of the result of the overall procedure.

2.9.3 Main Contributions of Robust Statistics to Modern Statistics [40]

Here is a list of main ideas, concepts, and tools which robust statistics contributed to modern statistics. We focus only on those basic ideas which were developed in robust statistics but which are nowadays general tools in modern statistics. a) Models are Only Approximations to Reality Of course, this is a standard statement in science, but robust statistics helped to stress and quantify this point. Starting with Tukey (1960), it demonstrated the dramatic loss of efficiency of optimal procedures in the presence of tiny deviations from the assumed stochastic model. This opened up the door to search for better alternatives and for multiple tools for data analysis. b) Multiple Analyses and Solutions of a Data-Analysis Problem This point, among many others, was put forward by Tukey (1962) in his path-breaking paper on the future of data analysis. Robust statistics contributed to develop the idea that multiple tools are necessary to analyze real data and that real problems can have multiple solutions. c) The Minimax Approach This approach is borrowed from the game theory of Huber’s (1964) elegant solution of the robustness problem, viewed as a game between Nature which chooses a distribution of the data in a neighbourhood of the model and the statistician who chooses an estimator in a given class. The payoff is the asymptotic variance of the estimator at a given distribution. Sometimes minimax solutions can be pessimistic, but it turned out that this wasn’t the case here. The resulting estimator, Huber’s estimator, became the basic building block of any robust procedure and is a basic tool beyond robust statistics.


d) Statistical functionals (and expansions); Gateaux and Fréchet differentiability. Statistical functionals had already been considered by von Mises (1947), but Hampel's (1968, 1974) approach recast the robustness problem in the language of functional analysis: continuity, differentiability, and so on. In particular, the influence function (the Gateaux derivative of a functional) became the single most important heuristic tool for analyzing the stability of statistical procedures and for developing new robust procedures. Its many links with classical statistical theory (the linear term in the asymptotic expansion of an estimator, the basic tool for computing the asymptotic variance of an estimator, etc.) and with other important ideas such as the sensitivity curve and the jackknife make the influence function an important concept in modern statistics. Moreover, statistical functionals later played an important role in the development of the bootstrap and of nonparametric techniques.

e) M-estimators (and estimating equations). Huber's (1964) M-estimators represent a very flexible and general class of estimators which played an important role in the development of robust statistics and in the construction of robust procedures. The idea is much more general, however, and is an important building block in many different fields including, for instance, longitudinal data, econometrics, and biostatistics.

2.10 Classical and Robust Approaches to Statistics [10]

This introduction is an informal overview of the main issues to be treated in detail. Its main aim is to illustrate the following facts:
1. Data collected in a broad range of applications frequently contain one or more atypical observations called outliers; that is, observations that are well separated from the majority or "bulk" of the data, or that in some way deviate from the general pattern of the data.


2. Classical estimates such as the sample mean, the sample variance, sample covariances and correlations, or the least-squares fit of a regression model can be very adversely influenced by outliers, even by a single one, and often fail to provide good fits to the bulk of the data.
3. There exist robust parameter estimates that provide a good fit to the bulk of the data both when the data contain outliers and when they are free of them. A direct benefit of a good fit to the bulk of the data is the reliable detection of outliers, particularly in the case of multivariate data. Meanwhile, it is important to be aware of these performance distinctions between classical and robust statistics at the outset.

2.11 Some General Definitions

2.11.1 The Breakdown Value [13] [40] [46] [47]

The breakdown point, introduced by Hampel (1968, 1971), is a measure of the global stability of a statistical functional and as such is a typical robustness measure. The quest for high-breakdown-point estimators in robust statistics has also pushed the development of general computational techniques and resampling algorithms which can be used in more general settings. It is one of the criteria for measuring robustness. Its oldest definition (Hodges 1967) was restricted to one-dimensional estimation of location, whereas Hampel (1971) gave a much more general formulation. Unfortunately, the latter definition was asymptotic and rather mathematical in nature, which may have restricted its dissemination. We prefer to work with the simple "finite-sample" version of the breakdown point introduced by Donoho and Huber (1983). Take any sample of $n$ data points,

$Z = \{(x_1, y_1), \ldots, (x_n, y_n)\},$   (2.25)

and let $T$ be a regression estimator. This means that applying $T$ to such a sample yields a vector of regression coefficients

$T(Z) = \hat{\beta}.$   (2.26)

Now consider all possible corrupted samples $Z'$ that are obtained by replacing any $m$ of the original data points by arbitrary values; this allows for very bad outliers. Let us denote by bias$(m; T, Z)$ the maximum bias that can be caused by such a contamination:

$\mathrm{bias}(m; T, Z) = \sup_{Z'} \| T(Z') - T(Z) \|,$   (2.27)

where the supremum is over all possible corrupted samples $Z'$. If bias$(m; T, Z)$ is infinite, then $m$ outliers can have an arbitrarily large effect on $T$, which may be expressed by saying that the estimator "breaks down." Therefore, the finite-sample breakdown point of the estimator $T$ at the sample $Z$ is defined as

$\varepsilon_n^*(T, Z) = \min\{\, m/n \ :\ \mathrm{bias}(m; T, Z) = \infty \,\}.$   (2.28)

In other words, it is the smallest fraction of contamination that can cause the estimator $T$ to take on values arbitrarily far from $T(Z)$. Note that this definition contains no probability distributions! For the Ordinary Least Squares (OLS) method, we have seen that one outlier is enough to carry $T$ over all bounds. Therefore, its breakdown point equals

$\varepsilon_n^*(T, Z) = 1/n,$   (2.29)

which tends to zero for increasing sample size $n$, so it can be said that OLS has a breakdown point of 0%,

$\varepsilon^*(T) = 0\%.$   (2.30)

This again reflects the extreme sensitivity of the OLS method to outliers. The best breakdown point that can be expected is 50%; if the percentage of contamination is higher than 50%, it becomes impossible to distinguish the good part of the data from the part corrupted by the outliers. The highest possible breakdown point of an estimator is therefore

$\varepsilon^*(T) = 50\%.$   (2.31)

So when the Least Median of Squares (LMS) method or the Least Trimmed Squares (LTS) method is used, a breakdown point of 50% is attained even with contamination up to 50%. This is a very reasonable rate, and estimators attaining it are the so-called high-breakdown estimates; a small numerical illustration of this contrast follows.
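As an illustration of the 0% versus 50% breakdown contrast, the following R sketch (the data, seed, and contamination are illustrative assumptions, not taken from the thesis) corrupts a single response value and compares the OLS fit with an LMS fit computed by lqs() from the MASS package:

library(MASS)                             # provides lqs() for LMS/LTS fitting

set.seed(1)
x <- 1:20
y <- 2 + 0.5 * x + rnorm(20, sd = 0.2)    # clean linear data
y[20] <- 100                              # corrupt a single observation

coef(lm(y ~ x))                           # OLS: badly distorted by one outlier
coef(lqs(y ~ x, method = "lms"))          # LMS: still close to (2, 0.5)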

2.11.2 Outliers [46] [47]

For most data sets involving a large number of variables in a single sample, one of the first steps toward a coherent analysis is to detect abnormal observations. Outliers are often confused with errors, yet they may carry important information. Outlier detection is therefore an important step; otherwise, the outliers will lead to a distorted model, biased parameter estimates, and incorrect results. It is thus important to identify the outliers before the model is constructed and analyzed. The precise definition of an outlier often depends on hidden assumptions about the structure of the data and on the detection method applied. Hawkins (1980) defined outliers as observations that deviate so much from the other observations as to arouse suspicion that they were generated by a different mechanism. Barnett and Lewis (1994) indicate that an abnormal, or outlying, observation is one that appears to deviate markedly from the rest of the sample elements, and similarly Johnson (1992) regarded an outlier as an observation in a data set that appears to be inconsistent with the rest of that data set.


Methods for detecting outliers have been suggested for numerous applications, including:
1- Clinical trials
2- Analysis of suspected irregularities (fraud detection)
3- Geographic information systems (GIS) analysis
4- Activity monitoring
5- Computer networks
6- Theoretical data analysis

2.11.3 Robust Mahalanobis Distance [13] [17] [19]

The traditional method used to detect outliers in multivariate observations is the squared Mahalanobis distance,

$MD_i^2 = (x_i - \bar{x})^T S^{-1} (x_i - \bar{x}),$

where:
$x_i$: the $(p \times 1)$ vector of the $i$-th observation;
$\bar{x}$: the $(p \times 1)$ vector of sample means;
$S^{-1}$: the inverse of the $(p \times p)$ sample covariance matrix.

The calculated statistic $MD_i^2$ for each observation is then plotted against the quantiles of the chi-square distribution with $p$ degrees of freedom. This method combines the assumption of a normal distribution with the detection of outlying observations, as well as providing a graphical description of the sample. It suffers from the fact that it is based on statistics that are themselves highly sensitive to outliers: especially when there are many outliers in the sample, they shrink the correlations and inflate the variances, and they generally reduce the Mahalanobis distances of the outlying observations, which gives a distorted impression of the diagnostic plot. To overcome this weakness, the Mahalanobis distance should be calculated from robust estimates of location and dispersion instead of the sample means and sample covariances; the robust distance is the analogue of the traditional Mahalanobis distance obtained by using robust estimation of location and dispersion. A brief sketch of this comparison follows.
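The following R sketch (a hypothetical example; cov.rob() from the MASS package is used as one possible robust estimator of location and scatter) computes classical and robust squared Mahalanobis distances and compares them with the usual chi-square cutoff:

library(MASS)                               # provides cov.rob()

set.seed(2)
X <- matrix(rnorm(100 * 3), ncol = 3)       # p = 3 clean observations
X[1:5, ] <- X[1:5, ] + 6                    # plant five multivariate outliers

d2.classical <- mahalanobis(X, colMeans(X), cov(X))   # classical distances

rob <- cov.rob(X)                           # robust location and scatter
d2.robust <- mahalanobis(X, rob$center, rob$cov)

cutoff <- qchisq(0.975, df = 3)             # usual detection threshold
which(d2.classical > cutoff)                # masking may hide some outliers
which(d2.robust > cutoff)                   # robust distances flag all five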

2.12 Estimation Methods [39] [46]

Among the most important concerns of statistical science are the analysis of the data in a study or scientific research and the interpretation of the results obtained from them. Data interpretation is built on special rules and methods, and in some cases the researcher may face problems in the data, such as departures of the sample data from the assumed distribution. These arise from the presence of outlying (anomalous) values, or because the distribution of the population under study differs from the assumed one; both amount to deviations from the assumptions made by traditional methods such as OLS for estimating the parameters of the linear model. Many researchers have reached the conclusion that these methods are not efficient when their underlying assumptions and conditions are not met, and have therefore worked to find more efficient methods that are less affected by such deviations. These methods, called robust methods, are less affected by violations of the conditions required by the analysis. Robust estimators are close in efficiency to ordinary least squares when the assumptions hold, are better when the assumptions are violated, and are suitable for a wide class of distributions in the estimation of the linear model parameters. Although there are different robust methods, most share two key points. First: give less weight to outliers, if they exist, so as to minimize their effect. Second: use an iterative method to reduce problems such as autocorrelation and multicollinearity.

Several robust regression estimates with high breakdown point have been proposed. These include the least median of squares estimate (LMSE) and the least trimmed squares estimate (LTSE) proposed by Rousseeuw (1984), the scale (S) estimates proposed by Rousseeuw and Yohai (1984), the MM estimates proposed by Yohai (1987), and the tau estimates proposed by Yohai and Zamar (1988). These estimates have a very high computational complexity, and thus the usual algorithms compute only approximate solutions. Rousseeuw (1984) proposed an approximate algorithm based on drawing random subsamples of the same size as the number of independent variables. Ruppert (1991) proposed a refinement of this algorithm for S estimates that seems to be more efficient than Rousseeuw's. Stromberg (1991) gave an exact algorithm for computing the LMSE, but it requires generating all possible subsamples of a given size. A more efficient algorithm that eventually computes the exact LMSE or LTSE, the feasible solution algorithm (FSA), was proposed by Hawkins (1993, 1994). However, all of these algorithms require computation time that increases exponentially with the number of independent variables, and thus can be applied only when this number is not too large.

2.12.1 The Types of Estimation in Robust Methods [13] [46]
1- Maximum likelihood type estimates (M)
2- Least Median of Squares estimates (LMS)
3- Least Trimmed Squares estimates (LTS)
4- Least Winsorized Squares estimates (LWS)
5- Rank statistics estimates (R)
6- Laplace statistics estimates (L)
7- Adaptive maximum likelihood type estimates (AM)
8- Scale statistics estimates (S)


2.12.1.1 The General Class of M-Estimators [30]

An estimator is often chosen as a member of a general class of estimators that is optimal in some sense or fulfills a set of good properties. Huber (1964, 1967) proposed a class of M-estimators that naturally generalizes the MLE. An M-estimator of $\theta$ is given by the solution $\hat{\theta}$ of the minimization problem

$\min_\theta \sum_{i=1}^{n} \rho(x_i; \theta),$   (2.32)

or, alternatively, by the solution for $\theta$ of

$\sum_{i=1}^{n} \psi(x_i; \theta) = 0,$   (2.33)

for suitable $\rho$ and $\psi$ functions, where

$\psi(x; \theta) = \partial \rho(x; \theta) / \partial \theta.$   (2.34)

Setting $\rho(x; \theta) = -\log f(x; \theta)$ in (2.32), or putting the score function $s(x; \theta)$ for $\psi(x; \theta)$ in (2.33), recovers the MLE. In general, $\psi$ need not be the derivative of some $\rho$-function with respect to the parameter of interest; therefore (2.33) is more general and is often referred to as the proper definition of an M-estimator. A condition usually imposed is Fisher consistency,

$\int \psi(x; \theta)\, dF_\theta(x) = 0.$   (2.35)

M-estimators include the so-called weighted MLE (WMLE), defined as the solution for $\theta$ of

$\sum_{i=1}^{n} w(x_i; \theta)\, s(x_i; \theta) - a(\theta) = 0,$   (2.36)

where $a(\theta)$ is a consistency correction factor. The MLE corresponds to the choice

$w(x; \theta) = 1$   (2.37)

for all $x$ and, consequently, $a(\theta) = 0$.

To construct a robust WMLE, one simply chooses weights that make $w(x; \theta)\, s(x; \theta)$ bounded. The weights can depend on the observations only, on a quantity that itself depends on the observations and the parameters, or, more generally, directly on the score function. In the linear regression model

$y_i = x_i^T \beta + \varepsilon_i$ (with unit scale, for simplicity),

the score function has a similar expression but is proportional to the residual $r_i = y_i - x_i^T \beta$ and to the covariate $x_i$. In the univariate case, popular choices are Huber's weights

$w(r) = \min(1,\ k/|r|),$   (2.38)

i.e., the weight is equal to one for all small values of $r$ satisfying $|r| \le k$, and is $k/|r|$ otherwise. The Huber $\rho$-function (Huber, 1964) is simply

$\rho_k(r) = r^2/2$ if $|r| \le k$, and $k|r| - k^2/2$ if $|r| > k$,   (2.39)

with the corresponding $\psi$-function

$\psi_k(r) = r$ if $|r| \le k$, and $k\,\mathrm{sign}(r)$ if $|r| > k$.   (2.40)

Note that in a regression model with known scale, the Huber estimator is an M-estimator associated with

$\psi(x_i, y_i; \beta) = \psi_k(r_i)\, x_i,$   (2.41)

a bounded function of $r_i$ (or of the response $y_i$). The $\rho$- and $\psi$-functions of the MLE and of the Huber proposal are depicted in Figure (3), left and middle panels.

Figure (3) [30]: the $\psi$- and weight functions for the MLE, Huber, and biweight estimators. For the MLE, the $\psi$-function (score) and the corresponding $\rho$-function are unbounded. In contrast, the Huber estimator has a score function that is bounded by $k$; its $\rho$-function is quadratic in the middle and linear in the tails. Finally, the biweight $\psi$-function (score) is redescending, with a weight function that is symmetric and vanishes for large residuals.

Huber's weights can be used with many different models, where the corresponding M-estimator is defined through

$\sum_{i=1}^{n} w(r_i; k)\, s(x_i, y_i; \beta) - a(\beta) = 0,$   (2.42)

where $a(\beta)$ is a consistency correction factor, $r_i$ is the residual, and $k$ is the tuning constant.
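As a quick check of (2.38) to (2.40), the following R sketch (an illustrative implementation written for this section, not code from the thesis) defines the Huber functions and verifies the identity psi_k(r) = w(r) * r:

huber.rho <- function(r, k = 1.345)
  ifelse(abs(r) <= k, r^2 / 2, k * abs(r) - k^2 / 2)    # rho-function (2.39)

huber.psi <- function(r, k = 1.345)
  ifelse(abs(r) <= k, r, k * sign(r))                   # psi-function (2.40)

huber.w <- function(r, k = 1.345)
  pmin(1, k / abs(r))                                   # weights (2.38)

r <- seq(-4, 4, by = 0.5)
all.equal(huber.psi(r), huber.w(r) * r)                 # TRUE: psi(r) = w(r) * r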

The tuning constant $k$ trades robustness against efficiency: the lower the weights, the more robust the estimator, but also the lower $k$ is, the more observations are downweighted and the less efficient the estimator. The role of the $\psi$-function is to control the effect of large residuals; therefore it has to be bounded. Common choices for $\psi$ are functions that level off, such as the Huber function, or functions that are redescending. The $\psi$-function is usually tuned with a constant $k$, which is typically chosen to guarantee a given level of asymptotic efficiency. A value of $k$ between 1.2 and 1.8 is often adequate. The default value is set to 1.345, the value that guarantees 95% efficiency for the normal-identity-link GLM model. This value is also often a reasonable choice for other models, such as the binomial and Poisson models. Note that as $k \to \infty$ the classical GLM estimators are reproduced; in practice, very large values of $k$ (e.g., $k \ge 10$) have the same effect.

M-estimators for the GLM model are built on the Pearson residuals

$r_i = (y_i - \mu_i) / \sqrt{V(\mu_i)}.$   (2.43)

The M-estimating equations for $\beta$ of the model

$g(\mu_i) = x_i^T \beta$   (2.44)

are given by the solution of

$\sum_{i=1}^{n} \left[ \psi_k(r_i)\, w(x_i)\, \frac{1}{\sqrt{V(\mu_i)}}\, \frac{\partial \mu_i}{\partial \beta} - a(\beta) \right] = 0,$   (2.45)

where

$a(\beta) = \frac{1}{n} \sum_{i=1}^{n} E[\psi_k(r_i)]\, w(x_i)\, \frac{1}{\sqrt{V(\mu_i)}}\, \frac{\partial \mu_i}{\partial \beta},$   (2.46)

with the expectation taken over the distribution of $y_i \mid x_i$; $a(\beta)$ is a consistency correction term. Comparing with the classical estimating equations

$\sum_{i=1}^{n} \left[ \frac{y_i - \mu_i}{V(\mu_i)}\, \frac{\partial \mu_i}{\partial \beta} \right] = 0,$   (2.47)

we call the estimator issued from (2.45) a Mallows-type estimator. It simplifies to a Huber-type estimator when $w(x_i) = 1$ for all $i$. It is worth noting that the estimating equations (2.45) can be conveniently rewritten as

$\sum_{i=1}^{n} \left[ w(r_i)\, w(x_i)\, \frac{y_i - \mu_i}{\sqrt{V(\mu_i)}}\, \frac{\partial \mu_i}{\partial \beta} - a(\beta) \right] = 0,$   (2.48)

where

$w(r_i) = \psi_k(r_i) / r_i.$   (2.49)

In this form, the estimating equations (2.45) can be interpreted as the classical estimating equations weighted both with respect to the residuals, via $w(r_i)$, and with respect to the design, via $w(x_i)$, and re-centered through $a(\beta)$ to ensure consistency. The estimation procedure issued from (2.45) can be written as an IRWLS, in the same manner as it is usually presented for the classical GLM estimating equations. The choice of $w(x_i)$ is also suggested by robust estimators in linear models; the simplest approach is to use

$w(x_i) = \sqrt{1 - h_{ii}},$   (2.50)

where $h_{ii}$ is the leverage, or to choose weights based on the Mahalanobis distances, of the form

$w(x_i) = W(d(x_i)),$ with $d^2(x_i) = (x_i - \hat{\mu}_x)^T \hat{\Sigma}_x^{-1} (x_i - \hat{\mu}_x),$   (2.51)

for suitable estimates $\hat{\mu}_x$ and $\hat{\Sigma}_x$ of the location and scatter of the covariates.

Finally, we can set out the procedure for this method as follows. The least squares method requires minimizing the sum of squared random errors,

$\min_\beta \sum_{i=1}^{n} e_i^2,$   (2.52)

whereas the M method of estimation minimizes

$\min_\beta \sum_{i=1}^{n} \rho(e_i),$   (2.53)

where $\rho$ is a convex, symmetric function. To achieve scale invariance, the minimized quantity is written as

$\min_\beta \sum_{i=1}^{n} \rho\!\left( \frac{y_i - x_i^T \hat{\beta}}{\hat{\sigma}} \right),$   (2.54)

where $\hat{\sigma}$ is a scale estimate; with $\rho(e) = e^2$ the M method becomes equivalent to OLS. Taking the derivative of (2.54) with respect to the vector of parameters $\beta$ and setting it equal to zero gives

$\sum_{i=1}^{n} \psi\!\left( \frac{y_i - x_i^T \hat{\beta}}{\hat{\sigma}} \right) x_i = 0,$   (2.55)

where $\psi = \rho'$. The equation can be rewritten as

$\sum_{i=1}^{n} w_i \left( \frac{y_i - x_i^T \hat{\beta}}{\hat{\sigma}} \right) x_i = 0,$   (2.56)

where the weight function $w_i$ is calculated as

$w_i = \psi\!\left( \frac{y_i - x_i^T \hat{\beta}}{\hat{\sigma}} \right) \Big/ \left( \frac{y_i - x_i^T \hat{\beta}}{\hat{\sigma}} \right).$   (2.57)

The vector of parameters is first estimated by the OLS method. The scale parameter $\hat{\sigma}$ can be estimated once, by the formula

$\hat{\sigma} = \mathrm{median}\,| e_i - \mathrm{median}(e_i) | \,/\, 0.6745.$   (2.58)

The equation is then solved by the weighted least squares method according to the formula

$\hat{\beta} = (X^T W X)^{-1} X^T W y,$   (2.59)

where $W$ is the diagonal matrix whose elements are the weights $w_i$. The weights are computed from the initial fit, the estimates obtained in each iteration are used to recompute the weights, and the iterations continue until the difference between successive results becomes very small; the final estimates are the regression model parameters. The robustness of the estimates produced by this method depends on the $\psi$-function used, and there are many forms of this function; a minimal sketch of the resulting algorithm is given below, and the next section lists the weight functions.
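A minimal R sketch of this IRWLS scheme, implementing (2.56) to (2.59) with the Huber weights under illustrative assumptions (hypothetical vectors x and y; the helper name m.estimate is ours), could look as follows; in practice the same fit is obtained with rlm() from the MASS package:

m.estimate <- function(x, y, k = 1.345, tol = 1e-8, maxit = 50) {
  X <- cbind(1, x)                               # design matrix with intercept
  b <- solve(t(X) %*% X, t(X) %*% y)             # start from the OLS fit
  for (it in 1:maxit) {
    e <- drop(y - X %*% b)                       # current residuals
    s <- median(abs(e - median(e))) / 0.6745     # scale estimate (2.58)
    w <- pmin(1, k / abs(e / s))                 # Huber weights (2.57)
    b.new <- solve(t(X) %*% (w * X), t(X) %*% (w * y))   # WLS step (2.59)
    if (max(abs(b.new - b)) < tol) break
    b <- b.new
  }
  drop(b.new)
}
# Equivalent built-in fit: MASS::rlm(y ~ x, psi = psi.huber, k = 1.345)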

2.13 Weight Functions [15]

You can specify the weight function for M estimation with the WEIGHTFUNCTION= option. The ROBUSTREG procedure provides ten weight functions. By default, the procedure uses the bisquare weight function. In most cases, M estimates are more sensitive to the parameters of these weight functions than to the type of weight function. The median weight function is not stable and is seldom recommended in data analysis; it is included in the procedure for completeness. You can specify the parameters for these weight functions. Except for the Hampel and median weight functions, default values for these parameters are defined such that the corresponding M estimates have 95% asymptotic efficiency in the location model with the Gaussian distribution (Holland and Welsch 1977). The table below shows the weight functions.


Table (1): The Weight Functions [15] (the standard forms, with tuning constants $a$, $b$, $c$)

Function name   Function formula $w(x)$
Andrews         $\sin(x/c)/(x/c)$ if $|x| \le \pi c$; $0$ otherwise
Bisquare        $(1 - (x/c)^2)^2$ if $|x| \le c$; $0$ otherwise
Cauchy          $1 / (1 + (x/c)^2)$
Fair            $1 / (1 + |x|/c)$
Hampel          $1$ if $|x| < a$; $a/|x|$ if $a \le |x| < b$; $(a/|x|)\,(c - |x|)/(c - b)$ if $b \le |x| < c$; $0$ otherwise
Huber           $1$ if $|x| < c$; $c/|x|$ otherwise
Logistic        $\tanh(x)/x$
Median          $1/|x|$ (for $x \ne 0$)
Talworth        $1$ if $|x| < c$; $0$ otherwise
Welsch          $\exp(-(x/c)^2/2)$
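To make the contrast between a monotone and a redescending weight from Table (1) concrete, this small R sketch (an illustration; the tuning constants are the usual 95%-efficiency defaults, not values taken from the thesis) evaluates the Huber and bisquare weights on a grid of scaled residuals:

w.huber <- function(x, c = 1.345) pmin(1, c / abs(x))
w.bisquare <- function(x, c = 4.685) ifelse(abs(x) <= c, (1 - (x / c)^2)^2, 0)

x <- c(0.5, 1, 2, 5, 10)
round(w.huber(x), 3)      # decreases like c/|x| but never reaches 0
round(w.bisquare(x), 3)   # redescending: exactly 0 once |x| exceeds 4.685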

2.14 Influence Function of M-Estimates [9]

To calculate the influence function of an M-estimate, we insert $F_t = (1 - t)F + tG$ for $F$ into

$\int \psi(x; T(F_t))\, dF_t(x) = 0$   (2.60)

and take the derivative with respect to $t$ at $t = 0$. In detail, if we put for short

$\dot{T} = \frac{\partial}{\partial t}\big[ T(F_t) \big]_{t=0},$

then we obtain, by differentiation of the defining equation (2.60),

$\dot{T} \int \frac{\partial}{\partial \theta}\psi(x; T(F))\, dF(x) + \int \psi(x; T(F))\, d(G - F)(x) = 0.$   (2.61)

For the moment we do not worry about regularity conditions. For $G = \delta_x$, the point mass at $x$, $\dot{T}$ gives the value of the influence function at $x$, so, by solving (2.61) for $\dot{T}$, we obtain

$IC(x; F, T) = \dfrac{\psi(x; T(F))}{-\int \frac{\partial}{\partial \theta}\psi(y; T(F))\, dF(y)}.$   (2.62)

In other words, the influence function of an M-estimate is proportional to $\psi$. In the special case of a location problem, $\psi(x; \theta) = \psi(x - \theta)$, we obtain

$IC(x; F, T) = \dfrac{\psi(x - T(F))}{\int \psi'(y - T(F))\, dF(y)}.$   (2.63)

We conclude from this in a heuristic way that $\sqrt{n}\,(T_n - T(F))$ is asymptotically normal with mean 0 and variance

$A(F, T) = \int IC(x; F, T)^2\, dF(x).$   (2.64)

However, this must be checked by a rigorous proof.
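One empirical counterpart of the influence function is the sensitivity curve, obtained by adding one observation at a point x to a fixed sample and rescaling the change in the estimate. The following R sketch (an illustrative experiment with simulated data) contrasts the unbounded sensitivity of the mean with the bounded sensitivity of a Huber M-estimate of location computed by huber() from the MASS package:

library(MASS)                                # provides huber()

set.seed(3)
z <- rnorm(50)                               # fixed reference sample
n <- length(z)
grid <- seq(-10, 10, by = 1)

sc <- function(est) sapply(grid, function(x)
  (n + 1) * (est(c(z, x)) - est(z)))         # sensitivity curve at each x

sc(mean)                                     # grows linearly in x: unbounded
sc(function(v) huber(v)$mu)                  # levels off: bounded influence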

2.15 Asymptotic Properties of M-Estimates [9]

A fairly simple and straightforward theory is possible if $\psi(x; t)$ is monotone in $t$. Assume that $\psi(x; t)$ is measurable in $x$ and decreasing (i.e., nonincreasing) in $t$, from strictly positive to strictly negative values. Put

$T_n^* = \sup\{\, t \mid \sum_i \psi(x_i; t) > 0 \,\}, \qquad T_n^{**} = \inf\{\, t \mid \sum_i \psi(x_i; t) < 0 \,\}.$   (2.65)

Clearly $-\infty \le T_n^* \le T_n^{**} \le \infty$, and any value $T_n$ satisfying $T_n^* \le T_n \le T_n^{**}$ can serve as our estimate. A plot of $\sum_i \psi(x_i; t)$ against $t$ may help with the interpretation of $T_n^*$ and $T_n^{**}$. Note that

$\{T_n^* < t\} \subset \{\, \sum_i \psi(x_i; t) \le 0 \,\} \subset \{T_n^* \le t\}, \qquad \{T_n^{**} < t\} \subset \{\, \sum_i \psi(x_i; t) < 0 \,\} \subset \{T_n^{**} \le t\}.$   (2.66)

Hence

$P\{T_n^* < t\} = P\{\, \sum_i \psi(x_i; t) \le 0 \,\}, \qquad P\{T_n^{**} \le t\} = P\{\, \sum_i \psi(x_i; t) < 0 \,\}$   (2.67)

at the continuity points $t$ of the left-hand side. The distribution of the customary midpoint estimate $\frac{1}{2}(T_n^* + T_n^{**})$ is somewhat difficult to work out, but the randomized estimate $T_n$, which selects one of $T_n^*$, $T_n^{**}$ at random with equal probability, has an explicitly expressible distribution function,

$P\{T_n < t\} = \tfrac{1}{2} P\{T_n^* < t\} + \tfrac{1}{2} P\{T_n^{**} < t\}.$   (2.68)

It follows that the exact distributions of $T_n^*$ and $T_n^{**}$ can be calculated from the convolution powers of the distribution of $\psi(x; t)$. Asymptotic approximations can be found by expanding $P\{\, \sum_i \psi(x_i; t) < 0 \,\}$ into asymptotic series. We may take the traditional Edgeworth expansion (2.69); however, this gives a somewhat poor approximation in the tails, that is, precisely in the region in which we are most interested. Therefore, it is preferable to use so-called saddlepoint techniques and to recenter the distributions at the point of interest. Thus, if we have independent random variables $Y_1, \ldots, Y_n$ with density $f$ and would like to determine the distribution of $Y_1 + \cdots + Y_n$ at the point $t$, we replace the original density $f$ by a conjugate density

$\tilde{f}(y) = c\, e^{\alpha y} f(y),$   (2.70)

where $c$ and $\alpha$ are chosen such that this is a probability density with expectation 0 after recentering at the point of interest; see Daniels (1954). Later, Hampel (1973b) noticed that the principal error term of the saddlepoint method seems to reside in the normalizing constant (standardizing the total mass of the density to 1), so it would be advantageous not to expand the density itself, but rather its logarithmic derivative $\tilde{f}'/\tilde{f}$, and then to determine the normalizing constant by numerical integration. This method appears to give fantastically accurate approximations down to very small sample sizes ($n = 3$ or 4). We now turn to the limiting distribution of $T_n$. Put

$\lambda(t) = E_F\, \psi(x; t).$   (2.71)

If $\lambda(t)$ exists and is finite for at least one value of $t$, then it exists and is monotone, although not necessarily finite, for all $t$. This follows at once from the remark that $\psi(x; s) - \psi(x; t)$ is positive for $s < t$ and hence has a well-defined expectation (possibly $+\infty$).

2.16 Model Selection Criteria [16] [30]

Model selection involves the choice of an appropriate model among a set of candidate models. When specific scientific issues are involved, the issues often dictate the choice of model or portions of the model. Model selection tools are a useful set of techniques for screening through many different covariance models. Likelihood ratio tests (LRTs) can be used to compare two models when one model is a special case of the other. The alternative model allows additional parameters to vary, whereas the null model fixes those additional parameters at known values. The test statistic is two times the difference of the log maximized likelihoods of the two models. When fitting a model, the larger alternative model will always have the larger log maximized likelihood $L_A$, where $A$ indicates the alternative, whereas the null model has log maximized likelihood $L_0 < L_A$. The important question is how much larger. To consider this, we use the test statistic

$LRT = 2\,(L_A - L_0).$   (2.72)

The two most popular model selection criteria are AIC and BIC, short for "an information criterion" (or Akaike information criterion) and Bayes information criterion, respectively. AIC for a given model $m$ is defined as

$AIC(m) = -2 L_m + 2m,$   (2.73)

where $m$ is the number of parameters in the model. BIC is nearly identical, except that instead of the 2 multiplying the number of parameters, the penalty is $\log(n)$:

$BIC(m) = -2 L_m + \log(n)\, m,$   (2.74)

where $m$ is the number of parameters in the model and $n$ is the sample size. Model selection proceeds similarly for both criteria. Each covariance model under consideration is fit to the data, and the models are ranked according to either AIC or BIC. The model with the smallest value of AIC or BIC is selected as best. As can be inferred from the differing penalty functions, AIC tends to select models with more covariance parameters and BIC selects models with somewhat fewer covariance parameters. There are four equivalent variants of AIC and BIC. Our versions of AIC and BIC are in "smaller is better" form. Some programs multiply our definitions by minus one and have a "larger is better" version, and some programs may multiply AIC and BIC by plus or minus one half in the definition. It is important to confirm which version is being used; there is no particular advantage to any of the versions. Our version usually keeps most of the numbers positive, and both AIC and BIC are on the $-2\log$-likelihood scale, so that any intuition about differences on that scale can be used to measure differences in AIC or BIC.
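As a sanity check of (2.73) and (2.74), the following R sketch (with hypothetical simulated data) computes AIC and BIC by hand from the maximized log-likelihood of a fitted linear model and compares them with the built-in AIC() and BIC() functions:

set.seed(4)
x <- runif(40)
y <- 1 + 2 * x + rnorm(40, sd = 0.3)
fit <- lm(y ~ x)

L <- as.numeric(logLik(fit))       # maximized log-likelihood
m <- attr(logLik(fit), "df")       # number of parameters (incl. sigma)
n <- length(y)

c(-2 * L + 2 * m, AIC(fit))        # identical values, (2.73)
c(-2 * L + log(n) * m, BIC(fit))   # identical values, (2.74)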


Chapter Three: Practical Part

3.1 Introduction

Simulation, in general terms, is the imitation of a practical reality: models are employed, or a large number of virtual instances are generated, so that the results support a more comprehensive analysis and generalization. Simulation has emerged largely as a by-product of progress in electronic computing, both in hardware and in software, as well as of progress in the use of numerical analysis in many mathematical and statistical analyses. The rationale for simulation work is often to verify an existing application; to cope with the difficulty of obtaining accurate data on a particular phenomenon; or to handle cases where it is difficult to show, by a theoretical mathematical proof, that one estimation method has an advantage over another. Simulation also offers great flexibility and freedom in selecting random samples that are supposed to represent the study population well, and a high capacity for producing diverse random errors; in particular, it can represent a state of contamination in the data. Moreover, an entire experiment can be repeated any number of times, for all the studied models, in order to judge the results of the analysis, not to mention the other benefits expected from simulation, namely savings in effort, time, and cost. When it is impossible to obtain data suitable for applied research, as in advanced theoretical work dealing with phenomena of contemporary interest, such as medical experiments (for example, recovery after organ-transplant surgery) or huge environmental surveys of the planet (such as surveys of the temperature of the Earth's surface to study global warming) that cannot be handled in a large field study with the relatively large sample sizes familiar in parametric analysis, simulation is the ideal tool for generating the data and conducting the required tests. Notice that the modern trend of estimating nonparametric regression in general, and smoothing methods in particular, can also rely on simulation [48].

3.2 Generating the Random Variables

The simulation experiments were implemented using three sample sizes, n = 25, n = 50, and n = 100, with 200 replicates for each simulation experiment, as follows.

3.2.1 Explanatory Variables
The explanatory variables are independently distributed according to the uniform distribution.

3.2.2 Dependent Variable
The dependent variable is generated directly through the biexponential model used in the simulation experiments, using nonlinear regression functions of the explanatory variables.

3.2.3 Continuous Variables [44]
A continuous variable is a variable whose value, for an observation or an individual, is a numeric value in a certain range. Examples of continuous variables are weight, crop quantity, temperature, and time, because they can be measured in arbitrarily small portions and can take any value within certain limits.


3.2.4 Discrete Variables [44]
A discrete variable is a variable whose value, for an observation or an individual, is one of a set of spaced, discrete (intermittent), non-continuous values. Examples of discrete variables are the number of fruits on a plant, the number of production units in a factory, or the number of students in the first class of some university; they are usually whole numbers. In general, all data that we obtain by counting are considered discrete variables.

3.3 Description and Generation of the Data

The data were generated using the R language (R version 3.2.4). They are patterned on a laboratory study of the pharmacokinetics of the drug indomethacin, and two observations (time of administration and plasma concentration) are used as initial values to generate the data; since the variables are the time of indomethacin administration and the corresponding plasma concentration, we can see that the plasma concentration of indomethacin decreases as the time since administration increases. We used runif() to generate random numbers from the uniform distribution, where runif(n) generates n random deviates, and we used the three sample sizes n = 25, n = 50, and n = 100; for each sample size we took 200 replications. The resulting data were then used in nonlinear regression and fitted with the biexponential nonlinear regression model, the biexponential robust nonlinear regression model, and robust M-estimates with the Huber and bisquare (biweight) weight functions.

This is the code used to generate the data, based on the plasma concentration of indomethacin after administration:

rm(list = ls())
set.seed(1200)
m <- 200                                  # number of iterations (replications)
n <- 25                                   # number of (x, y) blocks; also run with 50 and 100
for (j in 1:m) {
  x <- c()
  y <- c()
  for (i in 1:n) {
    x1 <- c(seq(0.25, 1.25, by = 0.25))   # early time points
    x2 <- c(2, 3, 4, 5, 6, 8)             # later time points
    x <- append(x, x1)
    x <- append(x, x2)
    y1 <- exp(-x1)                        # decaying concentration component
    y2 <- runif(5, 0, 1)
    y3 <- sort(y2, decreasing = TRUE)     # decreasing uniform noise
    y4 <- y1 + y3
    yy1 <- exp(-x2)
    yy2 <- runif(6, 0, 1)
    yy3 <- sort(yy2, decreasing = TRUE)
    yy4 <- yy1 + yy3
    y <- append(y, y4)
    y <- append(y, yy4)
  }
  dat <- data.frame(x, y)
}

* For more detail about all the practical part see Appendix (A,B,C,D,E) and Appendix (A,B,C,D) on the CD.


3.4 Explanation

1- Biexponential Nonlinear Regression Model
We estimated the four parameters of the biexponential nonlinear regression model over 200 replications and then found the mean parameter estimates, shown in Table (2); the best parameters, shown in Table (3), were selected using the Akaike and Bayesian information criteria given in Table (4) for each sample size (25, 50, 100), where the sample size 100 has the best Akaike and Bayesian information criteria because it has the minimum values. We tested the parameters of the biexponential nonlinear regression model, as shown in Table (5); the estimated parameters were significant for every sample size. We then constructed the ANOVA table, and the calculated F-test was significant for the biexponential nonlinear regression model, as shown in Table (6). A sketch of how such a fit can be obtained follows.
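The following R sketch shows one plausible way such a biexponential fit and its AIC and BIC could be produced, using the self-starting SSbiexp() model from the stats package on the data frame dat generated above; the thesis's own fitting code is in the appendices, so this is an illustrative reconstruction, not the appendix code itself:

# Biexponential model: y = A1*exp(-exp(lrc1)*x) + A2*exp(-exp(lrc2)*x)
fit.nls <- nls(y ~ SSbiexp(x, A1, lrc1, A2, lrc2), data = dat)
summary(fit.nls)     # parameter estimates, standard errors, t tests
AIC(fit.nls)         # Akaike information criterion (smaller is better)
BIC(fit.nls)         # Bayesian information criterion (smaller is better)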

2- Biexponential Robust Nonlinear Regression Model
A- Using the parameter means as initial values: in the biexponential robust nonlinear regression model we estimated four parameters, taking the parameter means from the biexponential nonlinear regression model as initial values, over 200 replications. We then found the mean parameter estimates, shown in Table (7), and the best parameters, shown in Table (8), selected using the Akaike and Bayesian information criteria given in Table (9) for each sample size (25, 50, 100), where the sample size 100 has the best Akaike and Bayesian information criteria because it has the minimum values. In Table (10) we tested the parameters and they were significant; in Table (11) we give the ANOVA table, where the calculated F-test was significant for this model.
B- Using the best parameters as initial values: in the biexponential robust nonlinear regression model we estimated four parameters, taking the best parameters from the biexponential nonlinear regression model as initial values, over 200 replications. We then found the mean parameter estimates, shown in Table (12), and the best parameters, shown in Table (13), selected using the Akaike and Bayesian information criteria given in Table (14) for each sample size (25, 50, 100), where the sample size 100 has the minimum Akaike and Bayesian information criteria. In Table (15) we tested the parameters and they were significant; in Table (16) we give the ANOVA table, where the calculated F-test was significant for this model. A sketch of such a robust fit is given below.
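One plausible way to carry out the robust step is the nlrob() M-estimator from the robustbase package, started from the coefficients of the classical fit (for example, the fit.nls object from the sketch above); this is an illustrative sketch rather than the thesis's own appendix code:

library(robustbase)
start <- as.list(coef(fit.nls))    # initial values from the classical nls fit
fit.rob <- nlrob(y ~ A1 * exp(-exp(lrc1) * x) + A2 * exp(-exp(lrc2) * x),
                 data = dat, start = start)
summary(fit.rob)                   # robust parameter estimates and tests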

3- Robust M-Estimates (Huber and Bisquare (Biweight))
For the robust M-estimates we estimated two parameters over 200 replications. We then found the best parameters, shown in Table (17), selected using the Akaike and Bayesian information criteria, and the mean parameter estimates, shown in Table (18). Table (19) gives the Akaike and Bayesian information criteria for the robust maximum likelihood estimation using the Huber and biweight weight functions, for each sample size (25, 50, 100), where the sample size 25 has the best Akaike and Bayesian information criteria because it has the minimum values for both the Huber and biweight functions. In Table (20) we tested the parameters and they were significant; in Table (21) we give the ANOVA table, where the calculated F-test was significant for the robust M-estimates with the Huber and biweight weight functions. An illustrative fitting sketch is given after Table (19) below.

In Tables (4), (9), (14), and (19), respectively for the biexponential nonlinear regression model, the biexponential robust nonlinear regression model using the parameter means as initial values, the biexponential robust nonlinear regression model using the best parameters as initial values, and the robust maximum likelihood estimation with the Huber and biweight weight functions, across the sample sizes (25, 50, 100), the best Akaike and Bayesian information criteria (that is, the minimum AIC and BIC) come from Table (9), for the biexponential robust nonlinear regression model using the parameter means as initial values from the biexponential nonlinear regression model, at sample size 100.

Table (2): Mean parameter estimates for the Biexponential Nonlinear Regression Model.

Sample size   θ̂1          θ̂2          θ̂3          θ̂4
25            1.9218028    1.2938830    0.9358788    -1.8676772
50            1.9261400    1.3248406    0.9282851    -1.8947399
100           1.9240320    1.3241795    0.9287286    -1.8960803

Result: writing the fitted biexponential model as ŷ = θ̂1 exp(-e^(θ̂2) x) + θ̂3 exp(-e^(θ̂4) x), the models for n = 25, 50, and 100 are obtained by substituting the corresponding row of Table (2).

Table (3): Best parameters for the Biexponential Nonlinear Regression Model.

Sample size   θ̂1          θ̂2         θ̂3          θ̂4
25            1.7868610    1.273806    0.9499278    -1.786601
50            1.864754     1.290247    0.9468054    -1.852526
100           1.771918     1.279541    0.9661654    -1.851838

Result: the fitted models for n = 25, 50, and 100 are obtained by substituting the corresponding row of Table (3) into the biexponential form above.

Table (4): Akaike and Bayesian information criteria for the Biexponential Nonlinear Regression Model.

Sample size   AIC             BIC
25            -75.7905398     -57.7066843
50            -128.5648466    -107.0152552
100           -227.396284     -202.380957

The sample size 100 has the best AIC (-227.396284) and the best BIC (-202.380957) because they are the minimum values.

Table (5): Parameter tests for the Biexponential Nonlinear Regression Model.

Sample size 25
Parameter   Estimate     Std. Error   t value    Pr(>|t|)
θ̂1          1.7868610    0.28051      6.370      8.07e-10 **
θ̂2          1.273806     0.19866      6.412      6.35e-10 **
θ̂3          0.9499278    0.06049      15.703     < 2e-16 **
θ̂4          -1.786601    0.10923      -16.356    < 2e-16 **

Sample size 50
θ̂1          1.864754     0.21034      8.865      < 2e-16 **
θ̂2          1.290247     0.13872      9.301      < 2e-16 **
θ̂3          0.9468054    0.04200      22.543     < 2e-16 **
θ̂4          -1.852526    0.08008      -23.134    < 2e-16 **

Sample size 100
θ̂1          1.771918     0.14787      11.98      < 2e-16 **
θ̂2          1.279541     0.10393      12.31      < 2e-16 **
θ̂3          0.9661654    0.03044      31.75      < 2e-16 **
θ̂4          -1.851838    0.05665      -32.69     < 2e-16 **

From the table above we can see that the estimated parameters are significant.
Significance codes: '**' 0.01, '*' 0.05, '.' 0.1, ' ' 1


Table (6): ANOVA table for the Biexponential Nonlinear Regression Model.

Sample size 25
S.O.V               D.f    S.S         M.S            F-test
Regression          4      41.59898    10.399745      239.1214
Residual error      271    11.78619    0.043491476
Uncorrected total   275    53.38517

Sample size 50
Regression          4      81.97559    20.4938975     447.0368
Residual error      546    25.03075    0.045843864
Uncorrected total   550    107.00634

Sample size 100
Regression          4      160.9054    40.22635       849.4348
Residual error      1096   51.90284    0.047356605
Uncorrected total   1100   212.80824

Since all the F-tests are greater than the tabulated F, where F(4,∞,0.01) = 3.3192 and F(4,∞,0.05) = 2.3719, the biexponential nonlinear regression model is significant.

[Figure: fitted values (Yhad) plotted against x from 0 to 8 for the nonlinear regression fit.]
Figure (4): Biexponential Nonlinear Regression Model.
Figure (4) shows the fit of the biexponential nonlinear regression model to the generated data, using 200 replications for each sample size (25, 50, 100).


Table (7): Mean parameter estimates for the Biexponential Robust Nonlinear Regression Model, using the parameter means as initial values from the Biexponential Nonlinear Regression Model.

Sample size   θ̂1          θ̂2          θ̂3          θ̂4
25            1.9173893    1.3376692    0.9413825    -1.8725682
50            1.9129706    1.3380924    0.9446653    -1.8669093
100           1.9120252    1.3379816    0.9451059    -1.8687686

Result: the fitted models for n = 25, 50, and 100 are obtained by substituting the corresponding row of Table (7) into the biexponential form above.

Table (8): Best parameters for the Biexponential Robust Nonlinear Regression Model, using the parameter means as initial values from the Biexponential Nonlinear Regression Model.

Sample size   θ̂1         θ̂2         θ̂3          θ̂4
25            1.999734    1.337423    0.9071101    -1.770363
50            1.900491    1.350350    0.9789866    -1.788555
100           1.907797    1.350823    0.9683418    -1.825621

Result: the fitted models for n = 25, 50, and 100 are obtained by substituting the corresponding row of Table (8) into the biexponential form above.


Table (9): Akaike and Bayesian information criteria for the Biexponential Robust Nonlinear Regression Model, using the parameter means as initial values from the Biexponential Nonlinear Regression Model.

Sample size   AIC             BIC
25            -130.9635985    -112.879743
50            -227.280365     -205.730774
100           -356.559176     -331.543849

The sample size 100 has the best AIC (-356.559176) and the best BIC (-331.543849) because they are the minimum values.

Table (10): Parameter tests for the Biexponential Robust Nonlinear Regression Model, using the parameter means as initial values from the Biexponential Nonlinear Regression Model.

Sample size 25
Parameter   Estimate     Std. Error   t value    Pr(>|t|)
θ̂1          1.999734     0.38241      5.229      3.41e-07 **
θ̂2          1.337423     0.22847      5.854      1.38e-08 **
θ̂3          0.9071101    0.07135      12.714     < 2e-16 **
θ̂4          -1.770363    0.13236      -13.376    < 2e-16 **

Sample size 50
θ̂1          1.900491     0.29051      6.542      1.40e-10 **
θ̂2          1.350350     0.17948      7.524      2.21e-13 **
θ̂3          0.9789866    0.05240      18.683     < 2e-16 **
θ̂4          -1.788555    0.09125      -19.600    < 2e-16 **

Sample size 100
θ̂1          1.907797     0.21854      8.73       < 2e-16 **
θ̂2          1.350823     0.13300      10.16      < 2e-16 **
θ̂3          0.9683418    0.03823      25.33      < 2e-16 **
θ̂4          -1.825621    0.06969      -26.20     < 2e-16 **

From the table above we can see that the estimated parameters are significant.
Significance codes: '**' 0.01, '*' 0.05, '.' 0.1, ' ' 1


Table (11): ANOVA table for the Biexponential Robust Nonlinear Regression Model, using the parameter means as initial values from the Biexponential Nonlinear Regression Model.

Sample size 25
S.O.V               D.f    S.S         M.S            F-test
Regression          4      41.75762    10.439405      239.4255
Residual error      271    11.81611    0.043601881
Uncorrected total   275    53.57373

Sample size 50
Regression          4      83.62147    20.9053675     436.9951
Residual error      546    26.12004    0.047838901
Uncorrected total   550    109.74151

Sample size 100
Regression          4      162.4692    40.6173        821.6509
Residual error      1096   54.17941    0.049433768
Uncorrected total   1100   216.64861

Since all the F-tests are greater than the tabulated F, where F(4,∞,0.01) = 3.3192 and F(4,∞,0.05) = 2.3719, the biexponential robust nonlinear regression model using the parameter means as initial values from the biexponential nonlinear regression model is significant.


[Figure: fitted values (Yhad) plotted against x from 0 to 8 for the robust nonlinear regression fit.]
Figure (5): Biexponential Robust Nonlinear Regression Model using the parameter means.
Figure (5) shows the fit of the biexponential robust nonlinear regression model, using the mean parameter estimates from the biexponential nonlinear regression model as initial values, with 200 replications for each sample size (25, 50, 100).

Table (12): Mean parameter estimates for the Biexponential Robust Nonlinear Regression Model, using the best parameters as initial values from the Biexponential Nonlinear Regression Model.

Sample size   θ̂1          θ̂2          θ̂3          θ̂4
25            1.9173906    1.3376698    0.9413825    -1.8725680
50            1.9129674    1.3380923    0.9446664    -1.8669085
100           1.9120141    1.3379819    0.9451097    -1.8687659

Result: the fitted models for n = 25, 50, and 100 are obtained by substituting the corresponding row of Table (12) into the biexponential form above.

Table (13): Best parameters for the Biexponential Robust Nonlinear Regression Model, using the best parameters as initial values from the Biexponential Nonlinear Regression Model.

Sample size   θ̂1         θ̂2         θ̂3          θ̂4
25            1.999734    1.337423    0.9071100    -1.770363
50            1.900475    1.350340    0.9789849    -1.788558
100           1.907790    1.350829    0.9683468    -1.825623

Result: the fitted models for n = 25, 50, and 100 are obtained by substituting the corresponding row of Table (13) into the biexponential form above.


Table (14): Akaike and Bayesian information criteria for the Biexponential Robust Nonlinear Regression Model, using the best parameters as initial values from the Biexponential Nonlinear Regression Model.

Sample size   AIC             BIC
25            -130.9635993    -112.8797438
50            -227.279446     -205.729855
100           -356.547958     -331.5326302

The sample size 100 has the best AIC (-356.547958) and the best BIC (-331.5326302) because they are the minimum values.

Table (15): Parameter tests for the Biexponential Robust Nonlinear Regression Model, using the best parameters as initial values from the Biexponential Nonlinear Regression Model (the estimates are those of Table (13)).

Sample size 25
Parameter   Estimate     Std. Error   t value    Pr(>|t|)
θ̂1          1.999734     0.38241      5.229      3.41e-07 **
θ̂2          1.337423     0.22847      5.854      1.38e-08 **
θ̂3          0.9071100    0.07135      12.714     < 2e-16 **
θ̂4          -1.770363    0.13236      -13.376    < 2e-16 **

Sample size 50
θ̂1          1.900475     0.29051      6.542      1.40e-10 **
θ̂2          1.350340     0.17948      7.524      2.21e-13 **
θ̂3          0.9789849    0.05240      18.683     < 2e-16 **
θ̂4          -1.788558    0.09126      -19.599    < 2e-16 **

Sample size 100
θ̂1          1.907790     0.21855      8.729      < 2e-16 **
θ̂2          1.350829     0.13301      10.156     < 2e-16 **
θ̂3          0.9683468    0.03823      25.332     < 2e-16 **
θ̂4          -1.825623    0.06969      -26.195    < 2e-16 **

From the table above we can see that the estimated parameters are significant.
Significance codes: '**' 0.01, '*' 0.05, '.' 0.1, ' ' 1


Table (16): ANOVA table for the Biexponential Robust Nonlinear Regression Model, using the best parameters as initial values from the Biexponential Nonlinear Regression Model.

Sample size 25
S.O.V               D.f    S.S         M.S            F-test
Regression          4      41.75762    10.439405      239.4255
Residual error      271    11.81611    0.043601881
Uncorrected total   275    53.57373

Sample size 50
Regression          4      83.62146    20.905365      436.995
Residual error      546    26.12004    0.047838901
Uncorrected total   550    109.7415

Sample size 100
Regression          4      162.4683    40.617075      821.6459
Residual error      1096   54.17944    0.049433795
Uncorrected total   1100   216.64774

Since all the F-tests are greater than the tabulated F, where F(4,∞,0.01) = 3.3192 and F(4,∞,0.05) = 2.3719, the biexponential robust nonlinear regression model using the best parameters as initial values from the biexponential nonlinear regression model is significant.


[Figure: fitted values (Yhad) plotted against x from 0 to 8 for the robust nonlinear regression fit.]
Figure (6): Biexponential Robust Nonlinear Regression Model using the best parameters.
Figure (6) shows the fit of the biexponential robust nonlinear regression model, using the best parameters from the biexponential nonlinear regression model as initial values, with 200 replications for each sample size (25, 50, 100).

Table (17): Best parameters for the robust M-estimates, using the Huber and bisquare (biweight) functions.

              M-estimate (Huber)            M-estimate (Bisquare (Biweight))
Sample size   β̂0          β̂1              β̂0          β̂1
25            1.1467938    -0.1347307       1.1466908    -0.1347983
50            1.1416988    -0.1352859       1.1421745    -0.135303
100           1.1735276    -0.140319        1.1772377    -0.140988

Result: for each sample size, the fitted M-estimate model uses the corresponding pair of estimates (β̂0, β̂1) from Table (17), for both the Huber and the bisquare (biweight) functions.

Table (18): Mean parameter estimates for the robust M-estimates, using the Huber and bisquare (biweight) functions.

              M-estimate (Huber)                M-estimate (Bisquare (Biweight))
Sample size   β̂0            β̂1                β̂0            β̂1
25            1.142367872    -0.135815329       1.143630863    -0.136041553
50            1.145244716    -0.136366809       1.146289633    -0.136560498
100           1.146013212    -0.136387891       1.147087269    -0.136592265

Result: for each sample size, the mean fitted M-estimate model uses the corresponding pair of estimates (β̂0, β̂1) from Table (18), for both the Huber and the bisquare (biweight) functions.


Table (19): Akaike and Bayesian information criteria for the robust M-estimates, using the Huber and bisquare (biweight) functions.

              M-estimate (Huber)      M-estimate (Bisquare (Biweight))
Sample size   AIC        BIC          AIC        BIC
25            62.06881   72.91912     62.05119   72.90151
50            176.8746   189.8043     176.925    189.8548
100           349.7569   364.7661     350.0742   365.0834

For the sample size 25, the Huber M-estimate has the best AIC (62.06881) and the best BIC (72.91912), and the bisquare (biweight) M-estimate has the best AIC (62.05119) and the best BIC (72.90151), because these are the minimum values.
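A sketch of how these two-parameter M-estimates might be fitted in R is given below, using rlm() from the MASS package with the Huber and bisquare psi-functions on the simulated data frame dat; information criteria would then be computed along the lines of (2.73) and (2.74). This is an illustrative reconstruction, not the appendix code itself:

library(MASS)
fit.h <- rlm(y ~ x, data = dat, psi = psi.huber)      # Huber M-estimate
fit.b <- rlm(y ~ x, data = dat, psi = psi.bisquare)   # bisquare M-estimate
coef(fit.h)      # two parameters: intercept and slope
coef(fit.b)
summary(fit.h)   # estimates, standard errors, t values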

Table (20): Parameter tests for the robust M-estimates, using the Huber and bisquare functions (cells left blank were illegible in the source).

Sample size 25
Function    Parameter   Estimate     Std. Error   t value
Huber       β̂0          1.1467938
            β̂1          -0.1347307   0.0065       -20.6775
Bisquare    β̂0          1.1466908
            β̂1          -0.1347983   0.0067       -20.1023

Sample size 50
Huber       β̂0          1.1416988
            β̂1          -0.1352859   0.0050       -26.8867
Bisquare    β̂0          1.1421745
            β̂1          -0.135303    0.0051       -26.6973

Sample size 100
Huber       β̂0          1.1735276
            β̂1          -0.140319    0.0034       -40.9196
Bisquare    β̂0          1.1772377
            β̂1          -0.140988    0.0035       -39.9281

From the table above we can see that the estimated parameters are significant, since the critical values are t(∞,0.05) = 1.645 and t(∞,0.01) = 2.326.


Table (21): ANOVA table for the robust M-estimates, using the Huber and bisquare functions.

Sample size 25, Huber
S.O.V               D.f    S.S         M.S            F-test
Regression          2      29.87993    14.939965      206.5889
Residual error      273    19.74264    0.072317362
Uncorrected total   275    49.62257

Sample size 25, Bisquare
Regression          2      29.90846    14.95423       206.7994
Residual error      273    19.74137    0.07231271
Uncorrected total   275    49.64983

Sample size 50, Huber
Regression          2      60.24854    30.12427       375.7350
Residual error      548    43.93548    0.080174233
Uncorrected total   550    104.18402

Sample size 50, Bisquare
Regression          2      60.26777    30.133885      375.8205
Residual error      548    43.93951    0.080181587
Uncorrected total   550    104.20728

Sample size 100, Huber
Regression          2      129.5914    64.7957        808.1844
Residual error      1098   88.03149    0.080174398
Uncorrected total   1100   217.62289

Sample size 100, Bisquare
Regression          2      130.8579    65.42895       815.8473
Residual error      1098   88.0569     0.08019754
Uncorrected total   1100   218.9148

Since all the F-tests are greater than the tabulated F, where F(2,∞,0.01) = 4.6052 and F(2,∞,0.05) = 2.9957, the robust M-estimates using the Huber and bisquare weight functions are significant.


Table (22): Comparison table.*

Biexponential Nonlinear Regression Model
Sample size   AIC             BIC
25            -75.7905398     -57.7066843
50            -128.5648466    -107.0152552
100           -227.396284     -202.380957

Biexponential Robust Nonlinear Regression Model (initial values: parameter means from the biexponential model)
25            -130.9635985    -112.879743
50            -227.280365     -205.730774
100           -356.559176     -331.543849

Biexponential Robust Nonlinear Regression Model (initial values: best parameters from the biexponential model)
25            -130.9635993    -112.8797438
50            -227.279446     -205.729855
100           -356.547958     -331.5326302

Robust M-estimates (Huber and bisquare (biweight) functions)
Sample size   Huber AIC   Huber BIC   Bisquare AIC   Bisquare BIC
25            62.06881    72.91912    62.05119       72.90151
50            176.8746    189.8043    176.925        189.8548
100           349.7569    364.7661    350.0742       365.0834

* The table presents the AIC and BIC for the biexponential nonlinear regression model; the biexponential robust nonlinear regression model using the parameter means as initial values from the biexponential nonlinear regression model; the biexponential robust nonlinear regression model using the best parameters as initial values from the biexponential nonlinear regression model; and the robust M-estimates with the Huber and biweight weight functions.


Chapter Four: Conclusions and Recommendations

4.1 Conclusions

Through the application of the methods for estimating the parameters and the model, we can deduce the following:
1- Generating data for the different sample sizes (25, 50, 100), with 200 replications for each sample size, led to very similar Akaike and Bayesian information criteria for the parameter estimates, except in some cases; the minimum AIC and BIC were used for the comparisons in this thesis.
2- In the biexponential nonlinear regression model we estimated four parameters and found the parameter means and the best parameters for the model. We tested the parameters and they were significant; we then constructed the ANOVA table, where the F-test was also significant, and computed the Akaike and Bayesian information criteria. To choose the best model we relied on the minimum AIC and BIC.
3- According to the biexponential robust nonlinear regression model:
A- Using the parameter means from the biexponential nonlinear regression model as initial values: we estimated four parameters, then found the mean parameter estimates and the best parameters for the model. The parameters were tested and the ANOVA table constructed; both were significant, and we computed the Akaike and Bayesian information criteria. To choose the best model we relied on the minimum AIC and BIC.
B- Using the best parameters from the biexponential nonlinear regression model as initial values:


We estimated four parameters, then found the mean parameter estimates and the best parameters for the model. The parameters were tested and the ANOVA table constructed; both were significant, and we computed the Akaike and Bayesian information criteria. To choose the best model we relied on the minimum AIC and BIC.
4- For the robust M-estimates using two weight functions (Huber and biweight), we estimated two parameters, then found the mean parameter estimates and the best parameters for the model. The parameters were tested and the ANOVA table constructed; both were significant, and we computed the Akaike and Bayesian information criteria. To choose the best model we relied on the minimum AIC and BIC.
5- After comparing all four parts of the practical work with one another for all sample sizes (25, 50, 100), we can say that the best model is the biexponential robust nonlinear regression model using the parameter means as initial values from the biexponential nonlinear regression model, at sample size 100, because it has the minimum AIC and BIC.
6- In classical nonlinear regression, and also in robust nonlinear regression, we conclude that the sample size affects the estimated values: as the sample size increases, the results of the estimation methods become more stable.

4.2 Recommendations

We recommend:
1- Using computer programs to simulate data, and also applying the methods to actual data.
2- Using all types of robust methods, because robust methods are broadly applicable; they can handle both nonlinear regression and non-normality.
3- For the M-method there are ten weight functions, and researchers in statistics can use these functions and compare them with one another.


References

Books
[1] Adler, Joseph, (2009), "R in a Nutshell", 1st Edition, O'Reilly Media, Inc., Sebastopol, CA.
[2] Bates, D. M. and Watts, D. G., (1988), "Nonlinear Regression Analysis and Its Applications", John Wiley & Sons, Inc.
[3] Dobson, Annette J., (2002), "An Introduction to Generalized Linear Models", 2nd Edition, Chapman & Hall/CRC.
[4] Draper, N. R. and Smith, H., (1998), "Applied Regression Analysis", 3rd Edition, John Wiley & Sons, Inc.
[5] Fox, J. and Weisberg, S., (2010), "Nonlinear Regression and Nonlinear Least Squares in R: An Appendix to An R Companion to Applied Regression", 2nd Edition.
[6] Gentleman, Robert, (2009), "R Programming for Bioinformatics", Taylor & Francis Group, LLC.
[7] Graybill, F. A. and Iyer, H. K., (1962), "Regression Analysis: Concepts and Applications", Duxbury Press, Wadsworth Publishing Company, Belmont, California.
[8] Hampel, F. R., Ronchetti, E. M., Rousseeuw, P. J. and Stahel, W. A., (1986), "Robust Statistics: The Approach Based on Influence Functions", John Wiley & Sons, Inc.
[9] Huber, P. J. and Ronchetti, E. M., (2009), "Robust Statistics", 2nd Edition, John Wiley & Sons, Inc., Hoboken, New Jersey.


[10] Maronna, R. A., Martin, R. D. and Yohai, V. J., (2006), "Robust Statistics: Theory and Methods", John Wiley & Sons Ltd, Chichester, West Sussex, England.
[11] Pinheiro, J. C. and Bates, D. M., (2000), "Mixed-Effects Models in S and S-PLUS", Statistics and Computing, Springer-Verlag New York, Inc.
[12] Ritz, C. and Streibig, J. C., (2008), "Nonlinear Regression with R", Springer Science+Business Media, LLC.
[13] Rousseeuw, P. J. and Leroy, A. M., (1987), "Robust Regression and Outlier Detection", John Wiley & Sons, Inc.
[14] Seber, G. A. F. and Wild, C. J., (2003), "Nonlinear Regression", John Wiley & Sons, Inc., Hoboken, New Jersey.
[15] SAS Institute Inc., (2011), "SAS/STAT 9.3 User's Guide", SAS Institute Inc., Cary, NC, USA.
[16] West, B. T., Welch, K. B. and Gałecki, A. T., (2015), "Linear Mixed Models: A Practical Guide Using Statistical Software", 2nd Edition, Taylor & Francis Group, LLC.
[17] Wilcox, Rand R., (2005), "Introduction to Robust Estimation and Hypothesis Testing", 2nd Edition, Elsevier Inc.

Theses

[18] Ali, Majed Hebatullah, (2005), "A Comparative Study of the Robust Estimation Methods of Survival Function with Practical Application on Blood Cancer Patients in Yemen", Ph.D. thesis, College of Administration & Economics, University of Baghdad.

[19] Al-Nidawi, Sura Sabah Keiteb, (2008), "A Comparison of Some Robust Estimators in Discriminant Function with Practical Application", M.Sc. thesis, College of Administration & Economics, University of Baghdad.
[20] Bandyopadhyay, Parag, (2014), "Robust Non-Linear Regression for Parameter Estimation in Pressure Transient Analysis", M.Sc. thesis, Department of Energy Resources Engineering, Stanford University.
[21] Chapman, Cole Garrett, (2014), "Identification of Population Average Treatment Effects Using Nonlinear Instrumental Variables Estimators: Another Cautionary Note", Ph.D. thesis in Pharmacy, The University of Iowa, Iowa City, Iowa.
[22] Hassan, Farah Essam, (2006), "The Application of Response Surfaces Methodology to the Multiple Robust Parameter Design", Ph.D. thesis, College of Administration & Economics, University of Baghdad.
[23] Karami, Md. Jamil Hasan, (2011), "Designs for Nonlinear Regression With a Prior on the Parameters", M.Sc. thesis, Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta.
[24] Mohammed, Mohammed Jassim, (2007), "Robust Estimations for Fuzzy Regression", Ph.D. thesis, College of Administration & Economics, University of Baghdad.

[25] Nayef, Qutaiba Nabeel, (2007), "A Comparison of Robust Bayesian Approaches with Other Methods for Estimating Parameters of Multiple Linear Regression Model with Missing Data", Ph.D. thesis, College of Administration & Economics, University of Baghdad.
[26] Neugebauer, Shawn Patrick, (1996), "Robust Analysis of M-Estimators of Nonlinear Models", M.Sc. thesis in Electrical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia.
[27] Omer, Amira Wali, (2010), "Fitting Non-Linear Regression Models to Thalassaemia Patients", M.Sc. thesis, College of Administration and Economics, University of Sulaimani.

[28] Riazoshams, Hossein, (2010), "Outlier Detections and Robust Estimation Methods for Nonlinear Regression Model Having Autocorrelated and Heteroscedastic Errors", Ph.D. thesis, Universiti Putra Malaysia.
[29] Sadik, Nazik Ja'afar, (2006), "Comparing the Robust Estimator of the Mode with Some Other Estimators for Location Parameter", M.Sc. thesis, College of Administration & Economics, University of Baghdad.
[30] Salih, Samira M., (2011), "Comparison among Some Estimation Methods in Generalized Linear Mixed Models by Simulation and Practical Data", Ph.D. thesis, College of Administration and Economics, University of Sulaimani.
[31] Vilhjálmsdóttir, Elín Ösp, (2013), "Deterministic and Stochastic Modeling of Insulin Sensitivity", M.Sc. thesis, Department of Mathematical Sciences, Chalmers University of Technology, Gothenburg, Sweden.

[32] Yu, Chun, (2014), "Robust Mixture Modeling", Ph.D. thesis, Department of Statistics, College of Arts and Sciences, Kansas State University, Manhattan, Kansas.


Papers

[33] Dette, H. and Schorning, K., (2012), "Complete Classes of Designs for Nonlinear Regression Models and Principal Representations of Moment Spaces", Ruhr-Universität Bochum, Fakultät für Mathematik, 44780 Bochum, Germany.
[34] Dette, H., Melas, V. B. and Shpilev, P., (2009), "Optimal Designs for Estimating the Slope in Nonlinear Regression", Ruhr-Universität Bochum, Fakultät für Mathematik, 44780 Bochum, Germany, and St. Petersburg State University, Department of Mathematics, St. Petersburg, Russia.
[35] Kadhum, Mohammed Mansour, (2014), "A New Statistical Model for the Estimation of Autoclave Expansion of Portland Cement", Department of Civil Engineering, College of Engineering, University of Babylon, European Scientific Journal, February 2014 edition, Vol. 10, No. 5, ISSN: 1857-7881 (Print), e-ISSN: 1857-7431.
[36] Karami, J. H. (Department of Statistics, Biostatistics & Informatics, University of Dhaka, Bangladesh) and Wiens, D. P. (Department of Mathematical and Statistical Sciences, University of Alberta, Canada), (2014), "Robust Static Designs for Approximately Specified Nonlinear Regression Models", Journal of Statistical Planning and Inference, 144, 55-62.

[37] Tabatabai, M. A. et al., (2014), "A New Robust Method for Nonlinear Regression", J Biomet Biostat, Vol. 5, Issue 5, 10002110, ISSN: 2155-6180 (open access journal).


[38] Müller, Ursula U., (2012), "Estimating the Density of a Possibly Missing Response Variable in Nonlinear Regression", Department of Statistics, Texas A&M University, College Station, TX 77843-3143, USA.
[39] Peña, D. and Yohai, V., (1999), "A Fast Procedure for Outlier Diagnostics in Large Regression Problems", J.A.S.A., Vol. 94, No. 446.

[40] Ronchetti, Elvezio M., (2006), "The Historical Development of Robust Statistics", University of Geneva, Switzerland, ICOTS-7, 2006.

[41] Wei, J., Carroll, R. J. and Müller, U. U. (Texas A&M University, College Station, USA), Van Keilegom, I. (Université catholique de Louvain, Louvain-la-Neuve, Belgium, and Tilburg University, The Netherlands) and Chatterjee, N. (National Cancer Institute, Rockville, USA), (2013), "Robust Estimation for Homoscedastic Regression in the Secondary Analysis of Case-Control Data", J. R. Statist. Soc. B, 75, Part 3.


Arabic References

Books

[42] Al-Hasnawi, Amori Hadi Kadhim, "Methods of Econometrics", Higher Education Press, University of Baghdad, (Iraq).
[43] Al-Hasnawi, Amori Hadi Kadhim and Al-Qaisi, Basim Shlaibah Muslim, (2002), "Advanced Econometrics: Theory and Application", University of Baghdad, (Iraq).
[44] Al-Rawi, Khashi' Mahmoud, "Introduction to Statistics", College of Agriculture and Forestry, University of Mosul.
[45] Al-Sayfo, Walid Ismail, "Introduction to Econometrics", Ministry of Higher Education and Scientific Research, Dar Al-Kutub for Printing and Publishing, University of Mosul, p. 98.

Theses and Dissertations

[46] Al-Darbandi, Thara Khan Abdullah Omar, (2005), "A Comparison between the Ordinary Estimators and Some Robust Estimators of the Parameters of the Linear Regression Model, with an Application in the Field of Meteorology", M.Sc. thesis, Salahaddin University, (Erbil).
[47] Al-Qalawati, Wasfi Tahir Salih, (2000), "Some Robust Estimation Methods in Enhancing Digital Images", Ph.D. dissertation, University of Baghdad, (Iraq).
[48] Ali, Omar Abdul-Muhsin, (2002), "Comparing Estimators of Generalized Additive Models Using Splines in Nonparametric and Semiparametric Regression Analysis", Ph.D. dissertation, University of Baghdad, (Iraq).

Appendixes

Appendix A


rm(list = ls())
set.seed(1200)
m <- 200   # number of simulation iterations
n <- 25    # repetitions of the x pattern; set to 25, 50 or 100
for (j in 1:m) {
  x <- c()
  y <- c()
  for (i in 1:n) {
    x1 <- seq(0.25, 1.25, by = 0.25)        # five small x values
    x2 <- c(2, 3, 4, 5, 6, 8)               # six larger x values
    x <- append(x, x1)
    x <- append(x, x2)
    y1 <- exp(-x1)                          # exponential decay at the small x values
    y2 <- runif(5, 0, 1)
    y3 <- sort(y2, decreasing = TRUE)       # decreasing uniform disturbances
    y4 <- y1 + y3
    yy1 <- exp(-x2)                         # exponential decay at the larger x values
    yy2 <- runif(6, 0, 1)
    yy3 <- sort(yy2, decreasing = TRUE)
    yy4 <- yy1 + yy3
    y <- append(y, y4)
    y <- append(y, yy4)
  }
  dat <- data.frame(x, y)
}
From this code we generate the data for the three different sample sizes (25, 50, 100), one sample size per run.
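In formula form (a restatement of the code above, not an additional assumption), every block of eleven observations is an exponential decay contaminated by a decreasing uniform disturbance:

    y = exp(-x) + u(i),   with u(1) >= u(2) >= ... and each u ~ Uniform(0, 1),

where the uniform draws are sorted in decreasing order separately over the five small x values (0.25 to 1.25) and the six larger ones (2 to 8), so the disturbance tends to shrink as x grows.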

This is the data that we generated for sample size (25): 275 observations in total, of which the first eleven (one full cycle of the x pattern) are listed below; the data continue in the same layout up to observation 275.

Number   X      Y
1        0.25   1.618980379
2        0.5    1.430010446
3        0.75   1.01894126
4        1      0.766446524
5        1.25   0.50624743
6        2      0.901026219
7        3      0.754849091
8        4      0.467256539
9        5      0.318219662
10       6      0.257808891
11       8      0.054719994

This is the data that we generated for sample size (50): 550 observations in total, of which the first eleven (one full cycle of the x pattern) are listed below; the data continue in the same layout up to observation 550.

Number   X      Y
1        0.25   1.557625481
2        0.5    1.233241185
3        0.75   0.706066893
4        1      0.554342421
5        1.25   0.353189399
6        2      0.959168428
7        3      0.687919793
8        4      0.327282799
9        5      0.227244204
10       6      0.173984403
11       8      0.113470948

This is the data that we generated for sample size (100): 1100 observations in total, of which the first eleven (one full cycle of the x pattern) are listed below; the data continue in the same layout up to observation 1100.

Number   X      Y
1        0.25   1.388294862
2        0.5    1.026798092
3        0.75   0.640978736
4        1      0.485814521
5        1.25   0.379920218
6        2      0.783693441
7        3      0.683351145
8        4      0.642044812
9        5      0.559992527
10       6      0.518826577
11       8      0.117482568

Appendix B


B1: Biexponential Nonlinear Regression Model

rm(list = ls())
set.seed(1200)
m <- 200                      # number of simulation iterations
cof  <- matrix(0, m, 4)       # one row of parameter estimates per iteration
AICV <- matrix(0, m, 2)       # iteration index and AIC
BICV <- matrix(0, m, 2)       # iteration index and BIC
n <- 25                       # repetitions of the x pattern; set to 25, 50 or 100
for (j in 1:m) {
  x <- c()
  y <- c()
  for (i in 1:n) {
    x1 <- seq(0.25, 1.25, by = 0.25)
    x2 <- c(2, 3, 4, 5, 6, 8)
    x <- append(x, x1)
    x <- append(x, x2)
    y <- append(y, exp(-x1) + sort(runif(5, 0, 1), decreasing = TRUE))
    y <- append(y, exp(-x2) + sort(runif(6, 0, 1), decreasing = TRUE))
  }
  dat <- data.frame(x, y)
  fm1 <- nls(y ~ SSbiexp(x, A1, lrc1, A2, lrc2), data = dat)   # self-starting biexponential fit
  cof[j, 1] <- coef(summary(fm1))["A1", "Estimate"]
  cof[j, 2] <- coef(summary(fm1))["lrc1", "Estimate"]
  cof[j, 3] <- coef(summary(fm1))["A2", "Estimate"]
  cof[j, 4] <- coef(summary(fm1))["lrc2", "Estimate"]
  cofmean <- colMeans(cof)
  AICV[j, 1] <- j
  AICV[j, 2] <- AIC(fm1)
  BICV[j, 1] <- j
  BICV[j, 2] <- BIC(fm1)
  print(j)
  print(summary(fm1))
  sse <- sum(residuals(fm1)^2)              # residual sum of squares
  print(sse)
  ssr <- sum((predict(fm1) - mean(y))^2)    # regression sum of squares
  print(ssr)
}
coefficients <- setNames(data.frame(cof), c("A1", "lrc1", "A2", "lrc2"))


AICV <- data.frame(AICV)
BICV <- data.frame(BICV)
AICMIN <- min(AICV[, 2])               # smallest AIC over the iterations
AICLOC <- which(AICV[, 2] == AICMIN)   # iteration at which it occurs
BICMIN <- min(BICV[, 2])
BICLOC <- which(BICV[, 2] == BICMIN)
coefficients
AICV
BICV
AICMIN
AICLOC
BICMIN
BICLOC
# fitted curve at the mean parameter estimates, drawn over one cycle of the x pattern
yhat <- SSbiexp(x, cofmean[1], cofmean[2], cofmean[3], cofmean[4])
dathat <- data.frame(x, yhat)
ldat <- head(dathat, n = 11)
plot(ldat$x, ldat$yhat, main = "Nonlinear Regression", xlab = "x", ylab = "Yhat")
lines(ldat$x, ldat$yhat, col = "red")

* fm1: the Biexponential nonlinear regression model fit. ** Through this code we obtain the four parameters of the Biexponential nonlinear regression model together with the mean parameter estimates over the (200) repetitions; we identify the best parameters by the AIC and BIC, test the parameters, and compute the ANOVA table for the three different sample sizes (25, 50, 100).
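For reference, R's self-starting model SSbiexp parameterizes the biexponential curve as

    y = A1 * exp(-exp(lrc1) * x) + A2 * exp(-exp(lrc2) * x),

so lrc1 and lrc2 are the natural logarithms of the two rate constants, which keeps both rates positive throughout the estimation.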


Appendix C


C1: Biexponential Robust Nonlinear Regression Model (parameter means as initial values)

library(robustbase)
rm(list = ls())
set.seed(1200)
m <- 200                      # number of simulation iterations
cof  <- matrix(0, m, 4)       # one row of parameter estimates per iteration
AICV <- matrix(0, m, 2)       # iteration index and AIC
BICV <- matrix(0, m, 2)       # iteration index and BIC
n <- 25                       # repetitions of the x pattern; set to 25, 50 or 100
for (j in 1:m) {
  x <- c()
  y <- c()
  for (i in 1:n) {
    x1 <- seq(0.25, 1.25, by = 0.25)
    x2 <- c(2, 3, 4, 5, 6, 8)
    x <- append(x, x1)
    x <- append(x, x2)
    y <- append(y, exp(-x1) + sort(runif(5, 0, 1), decreasing = TRUE))
    y <- append(y, exp(-x2) + sort(runif(6, 0, 1), decreasing = TRUE))
  }
  dat <- data.frame(x, y)
  # nlrob() requires explicit starting values; these are the parameter means from
  # the classical fit, listed after this code (here: the n = 25 set)
  fm1 <- nlrob(y ~ SSbiexp(x, A1, lrc1, A2, lrc2), data = dat,
               start = list(A1 = 1.9218028, lrc1 = 1.2938830,
                            A2 = 0.9358788, lrc2 = -1.8676772))
  cof[j, 1] <- coef(summary(fm1))["A1", "Estimate"]
  cof[j, 2] <- coef(summary(fm1))["lrc1", "Estimate"]
  cof[j, 3] <- coef(summary(fm1))["A2", "Estimate"]
  cof[j, 4] <- coef(summary(fm1))["lrc2", "Estimate"]
  cofmean <- colMeans(cof)
  AICV[j, 1] <- j
  AICV[j, 2] <- AIC(fm1)
  BICV[j, 1] <- j
  BICV[j, 2] <- BIC(fm1)
  print(j)
  print(summary(fm1))
  sse <- sum(residuals(fm1)^2)              # residual sum of squares
  print(sse)
  ssr <- sum((predict(fm1) - mean(y))^2)    # regression sum of squares
  print(ssr)
}
coefficients <- setNames(data.frame(cof), c("A1", "lrc1", "A2", "lrc2"))


AICV <- data.frame(AICV)
BICV <- data.frame(BICV)
AICMIN <- min(AICV[, 2])               # smallest AIC over the iterations
AICLOC <- which(AICV[, 2] == AICMIN)   # iteration at which it occurs
BICMIN <- min(BICV[, 2])
BICLOC <- which(BICV[, 2] == BICMIN)
coefficients
AICV
BICV
AICMIN
AICLOC
BICMIN
BICLOC
# fitted curve at the mean parameter estimates, drawn over one cycle of the x pattern
yhat <- SSbiexp(x, cofmean[1], cofmean[2], cofmean[3], cofmean[4])
dathat <- data.frame(x, yhat)
ldat <- head(dathat, n = 11)
plot(ldat$x, ldat$yhat, main = "Nonlinear Regression", xlab = "x", ylab = "Yhat")
lines(ldat$x, ldat$yhat, col = "blue")

* fm1: the Biexponential robust nonlinear regression model fit, using the parameter means from the Biexponential nonlinear regression model as initial values. ** This code is used for the three sample sizes (25, 50, 100). For each sample size we obtain four parameters, test them, and compute the ANOVA table and the AIC and BIC, where the initial values (A1, lrc1, A2, lrc2) are the parameter means from the Biexponential nonlinear regression model:

for n = 25:  A1 = 1.9218028, lrc1 = 1.2938830, A2 = 0.9358788, lrc2 = -1.8676772
for n = 50:  A1 = 1.9261400, lrc1 = 1.3248406, A2 = 0.9282851, lrc2 = -1.8947399
for n = 100: A1 = 1.9240320, lrc1 = 1.3241795, A2 = 0.9287286, lrc2 = -1.8960803
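To avoid editing the start list by hand for every run, a small helper can select the appropriate set. This is a sketch: start_values is a hypothetical helper introduced here, and the mapping of the three sets to n = 25, 50 and 100 follows the order in which they are listed above.

# hypothetical helper: pick the nlrob() start list that matches the sample size
start_values <- function(n) {
  switch(as.character(n),
         "25"  = list(A1 = 1.9218028, lrc1 = 1.2938830, A2 = 0.9358788, lrc2 = -1.8676772),
         "50"  = list(A1 = 1.9261400, lrc1 = 1.3248406, A2 = 0.9282851, lrc2 = -1.8947399),
         "100" = list(A1 = 1.9240320, lrc1 = 1.3241795, A2 = 0.9287286, lrc2 = -1.8960803))
}
fm1 <- nlrob(y ~ SSbiexp(x, A1, lrc1, A2, lrc2), data = dat, start = start_values(25))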


Appendix D


D1: Biexponential Robust Nonlinear Regression Model (best parameters as initial values)

library(robustbase)
rm(list = ls())
set.seed(1200)
m <- 200                      # number of simulation iterations
cof  <- matrix(0, m, 4)       # one row of parameter estimates per iteration
AICV <- matrix(0, m, 2)       # iteration index and AIC
BICV <- matrix(0, m, 2)       # iteration index and BIC
n <- 25                       # repetitions of the x pattern; set to 25, 50 or 100
for (j in 1:m) {
  x <- c()
  y <- c()
  for (i in 1:n) {
    x1 <- seq(0.25, 1.25, by = 0.25)
    x2 <- c(2, 3, 4, 5, 6, 8)
    x <- append(x, x1)
    x <- append(x, x2)
    y <- append(y, exp(-x1) + sort(runif(5, 0, 1), decreasing = TRUE))
    y <- append(y, exp(-x2) + sort(runif(6, 0, 1), decreasing = TRUE))
  }
  dat <- data.frame(x, y)
  # nlrob() requires explicit starting values; these are the best (minimum AIC and BIC)
  # parameters from the classical fit, listed after this code (here: the n = 25 set)
  fm1 <- nlrob(y ~ SSbiexp(x, A1, lrc1, A2, lrc2), data = dat,
               start = list(A1 = 1.7868610, lrc1 = 1.273806,
                            A2 = 0.9499278, lrc2 = -1.786601))
  cof[j, 1] <- coef(summary(fm1))["A1", "Estimate"]
  cof[j, 2] <- coef(summary(fm1))["lrc1", "Estimate"]
  cof[j, 3] <- coef(summary(fm1))["A2", "Estimate"]
  cof[j, 4] <- coef(summary(fm1))["lrc2", "Estimate"]
  cofmean <- colMeans(cof)
  AICV[j, 1] <- j
  AICV[j, 2] <- AIC(fm1)
  BICV[j, 1] <- j
  BICV[j, 2] <- BIC(fm1)
  print(j)
  print(summary(fm1))
  sse <- sum(residuals(fm1)^2)              # residual sum of squares
  print(sse)
  ssr <- sum((predict(fm1) - mean(y))^2)    # regression sum of squares
  print(ssr)
}
coefficients <- setNames(data.frame(cof), c("A1", "lrc1", "A2", "lrc2"))


AICV <- data.frame(AICV)
BICV <- data.frame(BICV)
AICMIN <- min(AICV[, 2])               # smallest AIC over the iterations
AICLOC <- which(AICV[, 2] == AICMIN)   # iteration at which it occurs
BICMIN <- min(BICV[, 2])
BICLOC <- which(BICV[, 2] == BICMIN)
coefficients
AICV
BICV
AICMIN
AICLOC
BICMIN
BICLOC
# fitted curve at the mean parameter estimates, drawn over one cycle of the x pattern
yhat <- SSbiexp(x, cofmean[1], cofmean[2], cofmean[3], cofmean[4])
dathat <- data.frame(x, yhat)
ldat <- head(dathat, n = 11)
plot(ldat$x, ldat$yhat, main = "Nonlinear Regression", xlab = "x", ylab = "Yhat")
lines(ldat$x, ldat$yhat, col = "red")

* fm1: the Biexponential robust nonlinear regression model fit, using the best parameters from the Biexponential nonlinear regression model as initial values. ** This code is used for the three sample sizes (25, 50, 100). For each sample size we obtain four parameters, test them, and compute the ANOVA table and the AIC and BIC, where the initial values (A1, lrc1, A2, lrc2) are the best parameters from the Biexponential nonlinear regression model:

for n = 25:  A1 = 1.7868610, lrc1 = 1.273806, A2 = 0.9499278, lrc2 = -1.786601
for n = 50:  A1 = 1.864754, lrc1 = 1.290247, A2 = 0.9468054, lrc2 = -1.852526
for n = 100: A1 = 1.771918, lrc1 = 1.279541, A2 = 0.9661654, lrc2 = -1.851838


Appendix E


E1: Robust M-Estimate (Huber and Biweight (Bisquare))

library(MASS)
rm(list = ls())
set.seed(1200)
m <- 200   # number of simulation iterations
n <- 25    # repetitions of the x pattern; set to 25, 50 or 100
for (j in 1:m) {
  x <- c()
  y <- c()
  for (i in 1:n) {
    x1 <- seq(0.25, 1.25, by = 0.25)
    x2 <- c(2, 3, 4, 5, 6, 8)
    x <- append(x, x1)
    x <- append(x, x2)
    y <- append(y, exp(-x1) + sort(runif(5, 0, 1), decreasing = TRUE))
    y <- append(y, exp(-x2) + sort(runif(6, 0, 1), decreasing = TRUE))
  }
  dat <- data.frame(x, y)
  # rlm() takes one psi function at a time, so the two M-estimates are fitted separately
  fit.huber    <- rlm(y ~ x, dat, psi = psi.huber)
  fit.bisquare <- rlm(y ~ x, dat, psi = psi.bisquare)
  for (Function in list(fit.huber, fit.bisquare)) {
    print(j)
    print(summary(Function))
    sse <- sum(residuals(Function)^2)             # residual sum of squares
    print(sse)
    ssr <- sum((predict(Function) - mean(y))^2)   # regression sum of squares
    print(ssr)
    print(AIC(Function))
    print(BIC(Function))
  }
}

* Function: the robust M-estimate fit, computed once with the Huber weight function (fit.huber) and once with the Biweight (bisquare) weight function (fit.bisquare). ** From the above code we obtain the parameters of the robust M-estimate for the two weight functions, test the parameters, and compute the ANOVA table and the AIC and BIC information criteria.
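For reference, the two weight functions applied by rlm() have their standard forms, with MASS's default tuning constants k = 1.345 (Huber) and c = 4.685 (bisquare): the Huber weight is

    w(u) = 1 for |u| <= k,   and   w(u) = k/|u| for |u| > k,

and the Tukey Biweight (bisquare) weight is

    w(u) = (1 - (u/c)^2)^2 for |u| <= c,   and   w(u) = 0 for |u| > c.

The Huber function downweights large residuals gradually and never to zero, whereas the bisquare rejects observations whose scaled residual exceeds c entirely.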


E2: Robust M-Estimate (Huber and Biweight (Bisquare))

library(MASS)
rm(list = ls())
set.seed(1200)
m <- 200   # number of simulation iterations
AICV <- matrix(0, m, 2)   # iteration index and AIC
BICV <- matrix(0, m, 2)   # iteration index and BIC
n <- 25    # repetitions of the x pattern; set to 25, 50 or 100
for (j in 1:m) {
  x <- c()
  y <- c()
  for (i in 1:n) {
    x1 <- seq(0.25, 1.25, by = 0.25)
    x2 <- c(2, 3, 4, 5, 6, 8)
    x <- append(x, x1)
    x <- append(x, x2)
    y <- append(y, exp(-x1) + sort(runif(5, 0, 1), decreasing = TRUE))
    y <- append(y, exp(-x2) + sort(runif(6, 0, 1), decreasing = TRUE))
  }
  dat <- data.frame(x, y)
  # run once with psi = psi.huber and once with psi = psi.bisquare
  Function <- rlm(y ~ x, dat, psi = psi.huber)
  AICV[j, 1] <- j
  AICV[j, 2] <- AIC(Function)
  BICV[j, 1] <- j
  BICV[j, 2] <- BIC(Function)
}
AICV <- data.frame(AICV)
BICV <- data.frame(BICV)
AICMIN <- min(AICV[, 2])               # smallest AIC over the iterations
AICLOC <- which(AICV[, 2] == AICMIN)   # iteration at which it occurs
BICMIN <- min(BICV[, 2])
BICLOC <- which(BICV[, 2] == BICMIN)
AICV
BICV

* Function: the robust M-estimate fit for one weight function at a time (Huber or Biweight). ** From the above code we obtain the AIC and BIC information criteria of the robust M-estimate for the two weight functions (Huber and Biweight (Bisquare)), running the code once per weight function.


Abstract (in Arabic)

Statistics has become of the utmost importance in this era, as a means and a tool not only for the scientific method in research but for all the different fields of science. Statistics is the science that has grown and developed in the current century and has become a well-established science with its own structure and system.

Linear regression is a powerful tool for analysing and describing data by models that are linear in the parameters. Often, however, the researcher has a mathematical expression that relates the response to the predictor variables, and these models are nonlinear in the parameters. In such cases the linear regression techniques must be extended, which presents itself as complicated.

In this thesis we compared the nonlinear regression method with the robust method in order to find a powerful tool that we tend to use either when it is suggested by theoretical considerations or to build known nonlinear behaviour into a model. Even when a linear approximation works well, a nonlinear model may still be preferred because it gives us a clear interpretation of the parameters.

Using the R language software we generated the data used in this thesis for three sample sizes (25, 50, 100), and we took (200) repetitions for each sample size.

In the practical part of this thesis, for nonlinear regression, we interpret the Biexponential nonlinear regression model and estimate the four parameters. We found the parameter estimates and their means for our model, and with this we also found the best performance of the parameters depending on the Akaike information criterion and the Bayesian information criterion. We tested the parameters, which were significant, and we found the analysis of variance (ANOVA) table, where the F-test was also significant.

To interpret the Biexponential robust nonlinear regression model, we estimate four parameters depending on the parameter means as initial values from the Biexponential nonlinear regression model. We then found the parameter estimates and the best performance of the parameters depending on the Akaike information criterion and the Bayesian information criterion; we tested the parameters, which were significant, and we found the ANOVA table, where the F-test was also significant.

On the other hand, for the Biexponential robust nonlinear regression model we estimate the four parameters depending on the best parameters as initial values from the Biexponential nonlinear regression model. We then found the parameter estimates and the best performance of the parameters depending on the Akaike and Bayesian information criteria; after that we tested the parameters, which were significant, and the F-test was also significant in the ANOVA table.

The last part of the practical side is the robust M-estimate, using the two weight functions (Huber and Biweight). We estimate two parameters; then we found the best performance of the parameters depending on the Akaike and Bayesian information criteria, we found the parameter estimates and tested them, where the parameters were significant, and the ANOVA table likewise showed the significance of the model.

After comparing each practical part with the others, we conclude that the Biexponential robust nonlinear regression model depending on the parameter means as initial values from the Biexponential nonlinear regression model is the best model, because it has the minimum AIC and BIC for sample size (100).

Abstract (in Kurdish)

The science of statistics has expanded greatly in this era and has acquired great importance, having become a path and an instrument for the scientific method in research across all the diverse branches of science. It is a science that has grown and developed into the present century and has become an established science with its own structure and laws.

Regression is a powerful method for analyzing the data described by models that are linear in the factors and parameters. Often, however, a researcher has a mathematical expression for the relation between the predictor and the response variables, and these models are very often nonlinear in the factors and parameters. In such cases the techniques of regression are extended and the complications are taken into account.

In this thesis we compared the nonlinear regression method and the robust method. Nonlinear models are used when they are suggested by theoretical considerations or when a known nonlinear behavior is being built into a model; even when a linear approximation works well, a nonlinear model may still be used so as to keep a clear interpretation of the factors and parameters.

Using the R software language we generated the data used in this thesis for three different sample sizes, applying the replication process a fixed number of times for each size.

The four practical parts are those described in the Arabic abstract: fitting the biexponential nonlinear model and estimating its four parameters; fitting the robust biexponential model first with the mean of the parameters and then with the best parameters as initial values; and the robust M-estimate with the two weight functions (Huber and biweight), for which two parameters were estimated. In every part the parameter estimates were obtained, the best performance was judged by the Akaike and Bayesian information criteria, and the parameters and the F-test in the ANOVA table were significant.

After comparing each of the four practical parts, we reached the conclusion that the robust biexponential nonlinear model that takes the mean of the parameters as initial values from the biexponential nonlinear model is the best model, because it has the minimum AIC and BIC for the largest sample size.
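The simulation-and-estimation workflow summarized in the two abstracts above can be sketched compactly in R. Everything below is a placeholder illustration rather than the thesis's actual settings: the sample size, replication count, true parameter values, and noise level are invented for the example; SSbiexp() is R's built-in self-starting biexponential model, and nlrob() from the robustbase package stands in for the robust fit, with the mean of the classical estimates across replications used as initial values, in the spirit of the scheme the abstracts describe.

# Illustrative sketch (placeholder settings): simulate biexponential data,
# fit the classical nonlinear model by nls(), then fit a robust M-estimate
# whose starting values are the mean of the classical estimates.
library(robustbase)

set.seed(1)
n    <- 50    # placeholder sample size
reps <- 100   # placeholder replication count
true <- c(A1 = 10, lrc1 = 1, A2 = 5, lrc2 = -1)  # invented true values

sim_fit <- function(n) {
  t  <- seq(0.1, 5, length.out = n)
  mu <- true["A1"] * exp(-exp(true["lrc1"]) * t) +
        true["A2"] * exp(-exp(true["lrc2"]) * t)
  y  <- mu + rnorm(n, sd = 0.3)   # Gaussian noise; contamination omitted
  fit <- nls(y ~ SSbiexp(t, A1, lrc1, A2, lrc2), data = data.frame(t, y))
  coef(fit)                       # may occasionally fail to converge
}

# Classical estimates over all replications; their mean supplies the
# initial values for the robust biexponential fit.
est   <- t(replicate(reps, sim_fit(n)))
start <- as.list(colMeans(est))

newdat <- data.frame(t = seq(0.1, 5, length.out = n))
newdat$y <- true["A1"] * exp(-exp(true["lrc1"]) * newdat$t) +
            true["A2"] * exp(-exp(true["lrc2"]) * newdat$t) +
            rnorm(n, sd = 0.3)

rob <- nlrob(y ~ A1 * exp(-exp(lrc1) * t) + A2 * exp(-exp(lrc2) * t),
             data = newdat, start = start)
summary(rob)  # parameter estimates and their significance tests

The "best parameters as initial values" variant described in the abstracts would substitute, for colMeans(est), the estimates from the replication judged best by AIC and BIC.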

(Arabic title page)

Comparison between Biexponential and Robust Biexponential Nonlinear Models, Using Simulation

A thesis submitted
to the Council of the College of Administration and Economics at the University of Sulaimani
as part of the requirements for the degree of Master of Science in Statistics

By
Hozan Taha Abdalla

Supervised by
Assistant Professor
Dr. Samira Muhammad Salih

2016 A.D.

2716 K        1437 H

(Kurdish title page)

Comparison between Biexponential and Robust (Nonlinear) Biexponential Models, Using Simulation

This thesis is submitted
to the Council of the College of Administration and Economics at the University of Sulaimani
as part of the requirements for the degree of Master of Science in Statistics

By
Hozan Taha Abdalla

Supervised by
Assistant Professor
Dr. Samira Muhammad Salih

2016 A.D.

2716 K        1437 H
