Table of Contents


Journal of the Iranian Statistical Society
Volume: 20, Issue: 1, Spring 2021

  • Publication date: 1400/06/27
  • Number of titles: 16
  • Syed Ejaz Ahmed*, Dursun Aydın, Ersin Yılmaz Pages 1-26
    Objective

    This paper aims to introduce a modified kernel-type ridge estimator for partially linear models under randomly right-censored data. Such models involve two main issues that need to be solved: multicollinearity and censoring. To address these issues, we improve the kernel estimator using synthetic data transformation and kNN imputation techniques. The key idea of this paper is to obtain a satisfactory estimate of the partially linear model with multicollinear and right-censored data using a modified ridge estimator.

    Results

    To assess the performance of the method, a detailed simulation study is carried out and the kernel-type ridge estimator for the partially linear model is investigated under two techniques for handling censoring. The results are compared and presented in tables and figures. The necessary derivations for the modified semiparametric estimator are given in the appendices.

    Keywords: Kernel Smoothing, KNN Imputation, Multi-Collinear Data, Partially Linear Model, Ridge Type Estimator, Right-Censored Data
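
    A minimal sketch of the synthetic-data idea behind handling right censoring in the entry above, under several assumptions: the Koul-Susarla-Van Ryzin-type synthetic response is one common transformation for right-censored regression, the ridge fit below acts only on the linear covariates, and the kernel smoothing of the nonparametric component and the kNN imputation variant studied by the authors are omitted. All data and parameter values are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy right-censored data: latent response t, censoring time c, observed z = min(t, c)
n, p = 200, 5
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)      # induce multicollinearity
beta_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
t = X @ beta_true + rng.normal(scale=0.5, size=n) + 5.0
c = rng.exponential(scale=8.0, size=n) + 2.0
z = np.minimum(t, c)
delta = (t <= c).astype(float)                     # 1 = observed failure, 0 = censored

def km_censoring_survival(z, delta):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
    evaluated at each z_i (censoring treated as the 'event'; assumes no ties)."""
    order = np.argsort(z)
    cens = (1.0 - delta)[order]
    at_risk = len(z) - np.arange(len(z))
    G_sorted = np.cumprod(1.0 - cens / at_risk)
    G = np.empty_like(G_sorted)
    G[order] = G_sorted
    return np.clip(G, 0.05, None)                  # keep away from zero for stability

# Synthetic responses: delta * z / G(z), mean-unbiased for E[T | x] under conditions
y_syn = delta * z / km_censoring_survival(z, delta)

# Ridge estimate on the (centered) linear part: (X'X + kI)^{-1} X'y with fixed k
k = 1.0
Xc, yc = X - X.mean(axis=0), y_syn - y_syn.mean()
beta_ridge = np.linalg.solve(Xc.T @ Xc + k * np.eye(p), Xc.T @ yc)
print(np.round(beta_ridge, 2))
```
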
  • Omid Ardakani, Majid Asadi, Nader Ebrahimi, Ehsan Soofi* Pages 27-59

    In recent years, we have studied information properties of various types of mixtures of probability distributions and introduced a new type, which includes previously known mixtures as special cases. These studies are disseminated across different fields: reliability engineering, econometrics, operations research, probability, information theory and data mining. This paper presents a holistic view of these studies and provides further insights and examples. We note that the insightful probabilistic formulation of the mixing parameters stipulated by Behboodian (1972) is required for a representation of the well-known information measure of the arithmetic mixture. Applications of this information measure presented in this paper include lifetime modeling, system reliability, measuring uncertainty and disagreement of forecasters, probability modeling with partial information, and information loss of kernel estimation. Probabilistic formulations of the mixing weights for various types of mixtures provide the Bayes-Fisher information and the Bayes risk of the mean residual function.

    Keywords: Arithmetic Mixture, Geometric Mixture, Jensen-Shannon, Kullback-Leibler
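
    The representation of the Jensen-Shannon divergence of an arithmetic mixture as the entropy of the mixture minus the mixture of the entropies can be checked directly for discrete distributions. A minimal sketch follows; the two distributions and the mixing weights are arbitrary illustration values, not taken from the paper.

```python
import numpy as np
from scipy.stats import entropy

# Two hypothetical discrete distributions on the same support, with mixing weights
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.4, 0.3, 0.2, 0.1])
w = np.array([0.6, 0.4])

# Arithmetic mixture
m = w[0] * p + w[1] * q

# Jensen-Shannon divergence (in nats): H(mixture) - sum_i w_i H(P_i)
js = entropy(m) - (w[0] * entropy(p) + w[1] * entropy(q))
print(round(js, 4))
```
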
  • Barry Arnold*, Matthew Arvanitis Pages 61-81

    Alternative specifications of univariate asymmetric Laplace models are described and investigated. A more general mixture model is then introduced. Bivariate extensions of these models are discussed in some detail, with particular emphasis on associated parameter estimation strategies. Multivariate versions of the models are briefly introduced.

    Keywords: Asymmetric Laplace, Bivariate Laplace, Exponential Minima, Gamma Components, Generalized Asymmetric Laplace, Laplace
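
    For background on the models in the entry above: one standard construction of a centred asymmetric Laplace variable is the difference of two independent exponentials with different rates, whose density is αβ/(α+β)·exp(−αx) for x ≥ 0 and αβ/(α+β)·exp(βx) for x < 0. The sketch below compares simulation with this closed form; the rates are arbitrary and the parameterization need not match the one used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 0.7                        # illustrative rates (not from the paper)

# Difference of two independent exponentials -> asymmetric Laplace
x = rng.exponential(1 / alpha, 100_000) - rng.exponential(1 / beta, 100_000)

def al_pdf(t, a, b):
    """Closed-form density of Exp(a) - Exp(b): two-sided exponential, different rates."""
    c = a * b / (a + b)
    return np.where(t >= 0, c * np.exp(-a * t), c * np.exp(b * t))

# Compare a crude empirical density estimate with the closed form at a few points
grid = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
emp = [np.mean(np.abs(x - g) < 0.05) / 0.1 for g in grid]   # proportion in a width-0.1 bin
print(np.round(al_pdf(grid, alpha, beta), 3))
print(np.round(emp, 3))
```
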
  • Narayanaswamy Balakrishnan*, Ghobad Barmalzan, Ali Akbar Hosseinzadeh Pages 83-100

    In this paper, we consider series-parallel and parallel-series systems with independent subsystems consisting of dependent homogeneous components whose joint lifetimes are modeled by an Archimedean copula. Then, by considering two such systems with different numbers of components within each subsystem, we establish hazard rate and reversed hazard rate orderings between the two system lifetimes, and also discuss how these systems age relative to each other in terms of hazard rate and reversed hazard rate functions.

    Keywords: Relative Ageing Orders, Hazard Rate Order, Reversed Hazard Rate Order, Series-Parallel Systems, Parallel-Series Systems, Archimedean Copulas
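
    A small simulation sketch of the setting in the entry above: dependent component lifetimes within each subsystem are drawn from a Clayton (Archimedean) copula via the Marshall-Olkin frailty algorithm, and the parallel-series lifetime (maximum over subsystems of the within-subsystem minimum) is formed. Exponential margins, the copula parameter and the system layout are illustrative assumptions; no ordering result from the paper is reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def clayton_uniforms(n_rows, dim, theta):
    """Sample (n_rows, dim) uniforms from a Clayton copula via the Marshall-Olkin
    frailty algorithm: U = (1 + E/V)^(-1/theta), with V ~ Gamma(1/theta), E ~ Exp(1)."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n_rows, 1))
    e = rng.exponential(size=(n_rows, dim))
    return (1.0 + e / v) ** (-1.0 / theta)

# Two independent subsystems of 3 dependent components each, exponential(rate 1) margins
n_sim, theta = 50_000, 2.0
T1 = -np.log(1.0 - clayton_uniforms(n_sim, 3, theta))   # inverse-CDF transform
T2 = -np.log(1.0 - clayton_uniforms(n_sim, 3, theta))

# Parallel-series system: parallel arrangement of series subsystems (max of mins)
system_life = np.maximum(T1.min(axis=1), T2.min(axis=1))

# Empirical survival function at a few time points
for t in (0.5, 1.0, 2.0):
    print(t, round(np.mean(system_life > t), 3))
```
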
  • Fiaz Ahmad Bhatti*, Sedigheh Mirzaei Salehabadi, Gholamhossein G Hamedani Pages 101-121

    We introduce a flexible lifetime distribution called Burr III-Inverse Weibull (BIII-IW). The new proposed distribution has well-known sub-models. The BIII-IW density function includes exponential, left-skewed, right-skewed and symmetrical shapes. The BIII-IW model’s failure rate can be monotone and non-monotone depending on the parameter values. To show the importance of the BIII-IW distribution, we establish various mathematical properties such as random number generator, ordinary moments, conditional moments, residual life functions, reliability measures and characterizations. We address the maximum likelihood estimates (MLE) for the BIII-IW parameters and estimate the precision of the maximum likelihood estimators via a simulation study. We consider applications to two COVID-19 data sets to illustrate the potential of the BIII-IW model.

    Keywords: Moment, Reliability, Characterizations, Maximum Likelihood Estimation
  • Erhard Cramer*, Benjamin Laumen Pages 123-152

    We consider a stage life testing model and assume that the information at which levels the failures occurred is not available. In order to find estimates for the lifetime distribution parameters, we propose an EM-algorithm approach which interprets the lack of knowledge about the stages as missing information. Furthermore, we illustrate the implementation difficulties caused by an increasing number of stages. The study is supplemented by a data example as well as simulations.

    Keywords: EM-Algorithm, Exponential Distribution, Missing Information, Progressive Censoring, Stage Life Testing, Weibull Distribution
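
    A toy illustration of the missing-information idea in the entry above: when a discrete label is unobserved, EM alternates between computing posterior label probabilities and updating parameters. The sketch fits a two-component exponential mixture with latent labels; it is not the stage life testing model of the paper, which additionally involves progressive censoring and the stage-change structure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: lifetimes from two exponential rates (0.5 and 3.0), labels unobserved
x = np.concatenate([rng.exponential(1 / 0.5, 300), rng.exponential(1 / 3.0, 300)])

# EM for a two-component exponential mixture with latent labels
w, lam = np.array([0.5, 0.5]), np.array([1.0, 2.0])      # initial guesses
for _ in range(200):
    # E-step: posterior probability of each component given current parameters
    dens = w * lam * np.exp(-np.outer(x, lam))            # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weights and rates maximizing the expected complete-data log-likelihood
    w = resp.mean(axis=0)
    lam = resp.sum(axis=0) / (resp * x[:, None]).sum(axis=0)

print(np.round(w, 2), np.round(lam, 2))
```
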
  • Adeleh Fallah, Akbar Asgharzadeh*, Hon Keung Tony Ng Pages 153-181

    In this paper, we discuss the prediction problem based on censored coherent system lifetime data when the system structure is known and the component lifetimes follow the proportional reversed hazard model. Different point and interval predictors based on classical and Bayesian approaches are derived. A numerical example is presented to illustrate the prediction methods used in this paper. A Monte Carlo simulation study is performed to evaluate and compare the performances of the different prediction methods.

    Keywords: Bayesian Predictor, Best Unbiased Predictor, Coherent System, Conditional Median Predictor, Maximum Likelihood Predictor, Prediction Intervals
  • Sirous Fathi Manesh*, Muhyiddin Izadi, Baha-Eldin Khaledi Pages 183-196

    Suppose that a policyholder faces $n$ risks $X_1, \ldots, X_n$ which are insured under a policy limit with total limit $l$. Usually, the policyholder is asked to protect each $X_i$ with an arbitrary limit $l_i$ such that $\sum_{i=1}^{n} l_i = l$. If the risks are independent and identically distributed with a log-concave cumulative distribution function, using the notions of majorization and stochastic orderings, we prove that equal limits maximize the expected utility of the policyholder's wealth. If the risks with log-concave distribution functions are independent and ordered in the sense of the reversed hazard rate order, we show that equal limits are the most favourable allocation among the worst allocations. We also prove that if the joint probability density function is arrangement increasing, then the best arranged allocation maximizes the expected utility of the policyholder's wealth. We apply the main results to the case when the risks are distributed according to a log-normal distribution.

    Keywords: Arrangement Increasing Function, Log-Normal Distribution, Majorization, Schur-Convex Function, Stochastic Orders, Utility Function
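
    A Monte Carlo illustration of the equal-limits result in the entry above, for i.i.d. log-normal risks and an exponential (concave) utility. The initial wealth, utility coefficient, limits and risk parameters are hypothetical illustration values.

```python
import numpy as np

rng = np.random.default_rng(4)

n_risks, total_limit, wealth = 3, 6.0, 20.0
X = rng.lognormal(mean=0.5, sigma=0.75, size=(200_000, n_risks))   # iid log-normal risks

def expected_utility(limits, a=0.3):
    """Expected exponential utility of final wealth; the insurer pays min(X_i, l_i),
    so the policyholder retains the excess (X_i - l_i)_+ of each risk."""
    retained = np.clip(X - np.asarray(limits), 0.0, None).sum(axis=1)
    return np.mean(-np.exp(-a * (wealth - retained)))

equal = [total_limit / n_risks] * n_risks
unequal = [4.0, 1.5, 0.5]                     # another allocation with the same total
# Per the result above, the equal allocation should give the larger expected utility
print(expected_utility(equal), expected_utility(unequal))
```
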
  • Reza Modarres* Pages 197-218

    We study the blocks of interpoint distances, their distributions, correlations, independence and the homogeneity of their total variances. We discuss the exact and asymptotic distribution of the interpoint distances and their average under three models and provide connections between the correlation of interpoint distances with their vector correlation and test of sphericity. We discuss testing independence of the blocks based on the correlation of block interpoint distances. A homogeneity test of the total variances in each block and a simultaneous plot to visualize their relative ordering are presented.

    Keywords: Elliptical Model, Sphericity, Homogeneity, Total Variance
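
    A minimal sketch of the basic objects in the entry above: interpoint (pairwise Euclidean) distances computed from two blocks of variables and the correlation between the two blocks' distances, a Mantel-type quantity. The tests and asymptotics developed in the paper are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(5)

# n points, variables split into two blocks (illustrative sizes)
n, p1, p2 = 60, 4, 3
Z = rng.multivariate_normal(np.zeros(p1 + p2), np.eye(p1 + p2), size=n)
block1, block2 = Z[:, :p1], Z[:, p1:]

# Interpoint distances computed within each block of variables
d1 = pdist(block1)          # n*(n-1)/2 pairwise distances from block 1
d2 = pdist(block2)

# Correlation between the two blocks' interpoint distances
print(round(np.corrcoef(d1, d2)[0, 1], 3))
```
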
  • Hossein-Ali Mohtashami Borzadaran, Hadi Jabbari, Mohammad Amini*, Ali Dolati Pages 219-246

    In demography and in modelling mortality (or failure) data, the univariate Gompertz-Makeham distribution is well known as an extension of the exponential distribution. Here, a bivariate class of Gompertz-Makeham distributions is constructed based on a random number of extremal events. Some reliability properties, such as ageing intensity and stress-strength under competing risks, are given. Dependence properties, including the dependence structure, association measures and tail dependence measures, are also obtained. A simulation study and a performance analysis are presented based on estimators such as the MLE, tau-inversion and rho-inversion.

    Keywords: Bivariate Exponential Distribution, Demography, Survival Function, Hazard Function
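
    For background on the entry above, the univariate Gompertz-Makeham hazard is h(t) = λ + α·exp(βt), giving survival S(t) = exp(−λt − (α/β)(exp(βt) − 1)). The sketch evaluates both and simulates a lifetime by numerical inversion; the parameter values are illustrative and the bivariate construction of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import brentq

alpha, beta, lam = 0.01, 0.1, 0.02            # illustrative Gompertz-Makeham parameters

def hazard(t):
    return lam + alpha * np.exp(beta * t)

def survival(t):
    # S(t) = exp(-H(t)) with cumulative hazard H(t) = lam*t + (alpha/beta)*(exp(beta*t) - 1)
    return np.exp(-(lam * t + (alpha / beta) * (np.exp(beta * t) - 1.0)))

# Simulate one lifetime by solving S(T) = U numerically (no closed-form inverse)
rng = np.random.default_rng(6)
u = rng.uniform()
t_sim = brentq(lambda t: survival(t) - u, 0.0, 500.0)
print(round(hazard(10.0), 4), round(survival(10.0), 4), round(t_sim, 2))
```
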
  • Rahim Moineddin*, Christopher Meaney, Sumeet Kalia Pages 247-267

    Interrupted Time Series (ITS) analysis represents a powerful quasi-experimental design in which a discontinuity is enforced at a specific intervention point in a time series, and separate regression functions are fitted before and after the intervention point. Segmented linear/quantile regression can be used in ITS designs to isolate intervention effects by estimating the sudden/level change (change in intercept) and/or the gradual change (change in slope). To our knowledge, the finite-sample properties of segmented quantile regression for detecting level and gradual change remain unaddressed. In this study, we compared the performance of segmented quantile regression and segmented linear regression using a Monte Carlo simulation study where the error distributions were: IID Gaussian, heteroscedastic IID Gaussian, correlated AR(1), and t (with 1, 2 and 3 degrees of freedom). We also compared segmented quantile regression and segmented linear regression when applied to a real dataset, employing an ITS design to estimate intervention effects on daily-mean patient prescription volumes. Both the simulation study and the applied example illustrate the usefulness of segmented quantile regression as a complementary statistical methodology for assessing the impacts of interventions in ITS designs.

    Keywords: Interrupted Time-Series, Segmented Linear Regression, Segmented Quantile Regression, Monte Carlo Simulation Study
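
    A minimal sketch of the segmented (interrupted time series) design described above: the design matrix contains an intercept, time, a level-change indicator and a slope-change term, and the same matrix is fitted by ordinary least squares and by median (quantile) regression with statsmodels. The data are simulated; the intervention point, effect sizes and error scale are illustration values, not the paper's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulated daily series with an intervention at time 50: level drop and slope change
n, t0 = 100, 50
t = np.arange(n)
post = (t >= t0).astype(float)
y = 10 + 0.05 * t - 2.0 * post + 0.10 * post * (t - t0) + rng.normal(scale=1.0, size=n)

# Segmented design: intercept, time, level change (post), slope change (post * time since t0)
X = sm.add_constant(np.column_stack([t, post, post * (t - t0)]))

ols_fit = sm.OLS(y, X).fit()                  # segmented linear regression
q50_fit = sm.QuantReg(y, X).fit(q=0.5)        # segmented median regression
print(np.round(ols_fit.params, 2))
print(np.round(q50_fit.params, 2))
```
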
  • Sameen Naqvi, Neeraj Misra*, Ping Shing Chan Pages 269-287

    In the literature on Statistical Reliability Theory and Stochastic Orders, several results based on the theory of TP2/RR2 functions have been used extensively in establishing various properties. In this paper, we provide a review of some useful results in this direction and highlight the connections between them.

    Keywords: Hazard Rate Order, Likelihood Ratio Order, Reversed Hazard Rate Order, RR2, TP2
  • Janet Van Niekerk, Andriette Bekker*, Mohammad Arashi Pages 289-306

    Matrix-variate beta distributions are applied in different fields such as hypothesis testing, multivariate correlation analysis, zero regression and canonical correlation analysis. A methodology is proposed to generate matrix-variate beta generator distributions by combining the matrix-variate beta kernel with an unknown function of the trace operator. Several statistical characteristics, extensions and developments are presented. Special members are then used in a univariate and multivariate Bayesian analysis setting. These models are fitted to simulated and real datasets, and their fit and performance are compared to well-established competitors.

    Keywords: Bayesian Analysis, Binomial, Eigenvalues, Gaussian Sample, Gibbs Sampling, Matrix-Variate Beta
  • Bardia Panahbehagh, Rainer Bruggemann, Mohammad Salehi* Pages 307-331

    We introduce a new method for ranked set sampling with multiple criteria. The method relaxes the restriction of selecting just one individual variable from each ranked set. Under the new ranking method, units in each set are ranked based on linear extensions from partially ordered set theory, considering all variables simultaneously. The method is evaluated through a relatively extensive simulation study on the bivariate normal distribution and two real case studies: one on the commercial and medicinal use of flowers, and one on herb-layer pollution by lead, cadmium, zinc and sulfur in some regions of southwest Germany.

    Keywords: Environmental Pollution, Linear Extension, Medicinal Use of Flowers, Multiple Variables Ranked Set Sampling, Partially Ordered Set Theory
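
    For contrast with the multi-criteria method above, a sketch of classical single-criterion ranked set sampling: in each cycle, k sets of k units are drawn, each set is ranked on one variable, and the i-th ranked unit of the i-th set is measured. The linear-extension ranking over multiple variables proposed in the paper is not reproduced; the study variable and sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

def ranked_set_sample(draw, k, cycles):
    """Classical single-criterion RSS: per cycle, draw k sets of k units,
    rank each set, and keep the i-th smallest unit from the i-th set."""
    sample = []
    for _ in range(cycles):
        for i in range(k):
            s = np.sort(draw(k))
            sample.append(s[i])
    return np.array(sample)

draw = lambda m: rng.normal(loc=5.0, scale=2.0, size=m)    # hypothetical study variable
rss = ranked_set_sample(draw, k=4, cycles=25)              # 100 measured units
srs = draw(100)                                            # simple random sample, same size
# The RSS mean is typically less variable than the SRS mean at equal measurement cost
print(round(rss.mean(), 3), round(srs.mean(), 3))
```
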
  • Mehdi Razzaghi* Pages 333-345

    The beta-binomial distribution results when the probability of success per trial in the binomial distribution varies across successive trials and the mixing distribution is from the beta family. In experiments with binary outcomes, observations often exhibit some extra-binomial variation and occur in clusters. In such experiments the beta-binomial distribution can generally provide an adequate fit to the data. Here, we introduce an alternative in which the mixing distribution is assumed to be from the log-Lindley family. The properties of this new model are explored, and it is shown that, similar to the beta-binomial distribution, the log-Lindley binomial distribution can also be applied in modeling clustered binary outcomes. An example with real experimental data from a developmental toxicity experiment is utilized to provide further illustration.

    Keywords: Beta-Binomial, Clustered Binary Outcomes, Distribution Mixtures, Extra-Binomial Variation, Log-Lindley
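
    A small simulation of the extra-binomial variation discussed above, using the beta-binomial baseline: mixing the per-cluster success probability over a beta distribution inflates the variance beyond the plain binomial value n·p·(1−p). The log-Lindley mixing distribution proposed in the paper is not reproduced; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

n_trials, a, b, n_clusters = 10, 2.0, 6.0, 200_000
p_mean = a / (a + b)

# Beta-binomial: per-cluster success probability drawn from Beta(a, b)
p = rng.beta(a, b, size=n_clusters)
y_bb = rng.binomial(n_trials, p)

# Plain binomial with the same mean success probability
y_bin = rng.binomial(n_trials, p_mean, size=n_clusters)

# The mixture inflates the variance beyond n*p*(1-p): extra-binomial variation
print(round(y_bb.var(), 3), round(y_bin.var(), 3), round(n_trials * p_mean * (1 - p_mean), 3))
```
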
  • Mahmoud Torabi*, Alexander R. De Leon Pages 347-370

    In their conventional formulation as linear mixed models (LMMs) and generalized LMMs (GLMMs), a commonly indispensable assumption in settings involving longitudinal non-Gaussian data is that the longitudinal observations from subjects are conditionally independent, given subject-specific random effects. Although conventional Gaussian LMMs are able to incorporate conditional dependence of longitudinal observations, they require that the data are, or some transformation of them is, Gaussian, a serious limitation in a wide variety of practical applications. Here, we introduce the class of Gaussian copula conditional regression models (GCCRMs) as flexible alternatives to conventional LMMs and GLMMs. One advantage of GCCRMs is that they extend conventional LMMs and GLMMs in a way that reduces to conventional LMMs, when the data are Gaussian, and to conventional GLMMs, when conditional independence is assumed. We implement likelihood analysis of GCCRMs using existing software and statistical packages and evaluate the finite-sample performance of maximum likelihood estimates for GCCRMs empirically via simulations, vis-a-vis the 'naive' likelihood analysis that incorrectly assumes conditionally independent longitudinal data. Our results show that the 'naive' analysis yields estimates with possibly severe bias and incorrect standard errors, leading to misleading inferences. We use bolus count data on patients' controlled analgesia comparing dosing regimes and data on serum creatinine from a renal graft study to illustrate the applications of GCCRMs.

    Keywords: Exponential Family, Gaussian Copula, Marginal Distribution, Maximum Likelihood Estimation, Random Effects
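
    A data-generating sketch of the Gaussian-copula idea behind the GCCRMs described above: latent normals with an AR(1) correlation are pushed through the normal CDF and a Poisson quantile function, producing serially dependent counts with the desired margins. This illustrates only the copula construction, not the authors' likelihood analysis; the correlation, mean count and dimensions are illustrative.

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(10)

n_subjects, n_times, rho, mean_count = 500, 6, 0.7, 4.0

# AR(1) correlation matrix for the latent Gaussian copula
lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
R = rho ** lags

# Latent correlated normals -> uniforms -> Poisson counts with serial dependence
z = rng.multivariate_normal(np.zeros(n_times), R, size=n_subjects)
u = norm.cdf(z)
counts = poisson.ppf(u, mu=mean_count).astype(int)

# Serial dependence survives the marginal transformation (lag-1 correlation of counts)
print(round(np.corrcoef(counts[:, 0], counts[:, 1])[0, 1], 3))
print(counts[:3])
```
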