Received: 9 January 2018    Revised: 12 August 2018    Accepted: 21 August 2018    DOI: 10.1002/sim.7963
RESEARCH ARTICLE
Modeling semicontinuous longitudinal data with order constraints
Guohai Zhou and Lang Wu
Department of Statistics, University of British Columbia, Vancouver, BC, Canada
Correspondence
Guohai Zhou, Department of Statistics, University of British Columbia, Vancouver, BC V6T 1Z4, Canada. Email: guohai.zhou@stat.ubc.ca
Funding information
Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery, Grant/Award Number: 22R80742
Semicontinuous longitudinal data are characterized by within-subject repeated measurements that either indicate absence of abnormality or reflect different amounts of abnormality. Joint models for semicontinuous longitudinal data have been receiving increasing attention in the literature. Such models permit flexible characterization of covariate-outcome associations. Order-restricted statistical inference is well established in the literature but has not yet been applied to joint models for semicontinuous longitudinal data. We incorporate general order-restricted inference into the general joint models for semicontinuous longitudinal data previously proposed. We develop computational methods to address general order restrictions. Through simulations and a real-data example, we demonstrate the advantages of order-restricted inference in terms of increased power in hypothesis testing and increased precision in parameter estimation.
KEYWORDS
longitudinal data, order-restricted inference, semicontinuous data
1 INTRODUCTION
Semicontinuous data frequently arise in practice, especially in longitudinal studies. A well-known example is zero-inflated data, where a large proportion of the data values are zeros, making distributional assumptions such as normality unreasonable. Here the zeros may be other excessive values that form a distinct cluster from the rest of the data. Neelon et al1 presented a recent overview of methods to analyze semicontinuous data. Other practical examples include consumption of fruits and vegetables,2 biomarker data,3 and monthly medical cost.4 Olsen and Schafer5 proposed to jointly model semicontinuous longitudinal data with a two-component model, ie, one submodel for the nonzero data and another submodel for the zero values, with covariates in the models used to partially explain the variation in the response data or the excessive zeros. A common method for statistical inference is based on the joint likelihood for the two submodels and an expectation-maximization algorithm for parameter estimation. Smith et al6 recently concluded that ignoring semicontinuity leads to unreliable inference when data contain more than 10% zeros.
In many applications, there are natural constraints or restrictions on some parameters in the models for semicontinuous data, such as certain parameters being positive. For example, Albert and Shen7 described a study in which three intervention groups are compared, ie, a control group receiving medication, a treatment group receiving medication plus sham acupuncture (minimal needling that is believed not to be biologically provocative), and another treatment group receiving medication plus active acupuncture. Since acupuncture is a nonchemical supplement to medication, it is naturally expected that there are nondecreasingly stronger treatment effects across the three groups. Such constraints may be formulated as order restrictions on some parameters in the models. Statistical inference incorporating these restrictions is known to be more efficient than inference ignoring the restrictions, making it more likely to detect treatment effects if present.8 In the study described in the work of Albert and Shen,7 they considered a semicontinuous longitudinal response, ie, the volume of vomited substance, with lower values indicating improved health conditions and zero values indicating full recovery (ie, a qualitatively distinct state). Let $\beta_1$ and $\beta_2$, respectively, denote the mean difference in the log odds of vomiting between the medication plus active acupuncture group and the medication only group, and that between the medication plus sham acupuncture group and the medication only group. Then, it is natural to expect the constraint $\beta_1 \le \beta_2 \le 0$, which suggests that more active use of acupuncture will on average not increase the risk of vomiting. Similarly, let $\beta_3$ and $\beta_4$, respectively, denote the mean difference in the amount of vomiting between the medication plus active acupuncture group and the medication only group, and that between the medication plus sham acupuncture group and the medication only group. Then, it is natural to expect the constraint $\beta_3 \le \beta_4 \le 0$, which indicates that more active use of acupuncture will on average not increase the amount of vomiting conditional on the occurrence of vomiting.
It is then of interest to test the following one-sided hypotheses:

$H_0: \beta_1 = \beta_2 = \beta_3 = \beta_4 = 0, \quad \text{vs} \quad H_1: \beta_1 \le \beta_2 \le 0,\ \beta_3 \le \beta_4 \le 0,$

with at least one strict inequality in $H_1$. Figure 1 visualizes the constrained alternative hypothesis $H_1$.

FIGURE 1 The two shaded regions represent the alternative hypothesis $H_1$
There is a large literature on constrained or order-restricted statistical inference since the 1960s (eg, the work of Perlman9). More recent literature includes the work of Davidov and Rosen10 for linear mixed effects models, the work of Davis et al11 for generalized linear mixed effects models, and the work of Wang and Wu12 for nonlinear mixed effects models, among others. Silvapulle and Sen8 gave a comprehensive review of the literature. However, to our knowledge, order-restricted inference has not been considered for models of semicontinuous longitudinal data. In this article, we consider order-restricted inference for models of semicontinuous longitudinal data. Although the basic ideas of the methodology are similar to those in the literature, special considerations are needed for semicontinuous models with order restrictions, due to their unique features. In particular, computation can be a main challenge for likelihood inference. Thus, we propose a computationally efficient approximate method for likelihood inference.
The rest of this article is organized as follows. Section 2.1 formulates the model for semicontinuous longitudinal data
and proposes methods for constrained inference. Section 2.2 develops general computational methods. Section 3 applies the models and methods to a lung cancer study. Section 4 demonstrates the advantages of constrained inference in a simulation study. We conclude this article with some discussion in Section 5.
2 MODELS AND LIKELIHOOD INFERENCE FOR SEMICONTINUOUS DATA WITH ORDER RESTRICTIONS
2.1 The models
Consider a longitudinal study where $Y_{ij}$ denotes the response of interest for subject $i$ measured at time $t_{ij}$, $i = 1, 2, \ldots, n$; $j = 1, 2, \ldots, n_i$. In many cases, the values of $Y_{ij}$ may be appropriately transformed (eg, $\log(Y_{ij})$) so that the within-individual repeated measurements of the transformed response are approximately normally distributed. We consider situations where $Y_{ij}$ may take excessive zero values, which make the normality assumption unreasonable due to the large amount of zero values. We call such data semicontinuous data. In statistical modeling, we must handle the zero values separately from the nonzero values of $Y_{ij}$. Let $U_{ij} = I(Y_{ij} > 0)$ be the indicator of $Y_{ij}$ being nonzero, and let $Y_{ij}^*$ denote the transformed nonzero values of $Y_{ij}$ under a monotone increasing transformation (eg, $Y_{ij}^* = \log(Y_{ij})$).

Following the common approach in the literature (eg, the work of Olsen and Schafer5), we model the response of interest $Y_{ij}^*$ and the indicator $U_{ij}$ jointly. Specifically, we consider the following mixed effects models for the two longitudinal processes:

$\text{logit}\{\Pr(U_{ij} = 1)\} = \text{logit}(\pi_{ij}) = x_{ij1}^T \beta_1 + z_{ij1}^T b_{i1},$
$Y_{ij}^* = x_{ij2}^T \beta_2 + z_{ij2}^T b_{i2} + \epsilon_{ij},$

where $\pi_{ij} = \Pr(U_{ij} = 1)$ is the probability of a nonzero response; $x_{ij1}$, $z_{ij1}$, $x_{ij2}$, and $z_{ij2}$ are vectors of covariates; $\beta = (\beta_1^T, \beta_2^T)^T$ is a vector of fixed effects parameters; and $b_{i1}$ and $b_{i2}$ are vectors of random effects. We assume that the random effects in the aforementioned two models are correlated

$b_i = (b_{i1}^T, b_{i2}^T)^T \sim N(0, \Sigma), \qquad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix},$

where the covariance matrices $\Sigma_{11}$ and $\Sigma_{22}$ are unstructured and the covariance matrix $\Sigma_{21}$ incorporates the association between the two vectors of random effects $b_{i1}$ and $b_{i2}$. In other words, the association between the normal response $Y_{ij}^*$ and the binary indicator $U_{ij}$ is incorporated through the matrix $\Sigma_{21}$. We assume that the $\epsilon_{ij}$'s are independent and identically distributed (i.i.d.) $N(0, \sigma^2)$ and that the $\epsilon_{ij}$'s are independent of $b_i$, for $j = 1, \ldots, n_i$ and $i = 1, \ldots, n$. We consider models where the mean parameters $\beta$ have order restrictions

$A\beta \ge 0,$

where $\ge 0$ means that all components of the vector $A\beta$ are nonnegative and at least one inequality holds strictly. For example, consider the example given in Section 1, where $\beta = (\beta_1, \beta_2, \beta_3, \beta_4)^T$. The order restrictions described there, $\beta_1 \le \beta_2 \le 0$ and $\beta_3 \le \beta_4 \le 0$, can be written as $A\beta \ge 0$, with

$A = \begin{pmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{pmatrix}.$
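As a small numerical check of this sign convention, the constraint $A\beta \ge 0$ can be verified directly in R; the $\beta$ values below are purely illustrative.

# Constraint matrix encoding beta1 <= beta2 <= 0 and beta3 <= beta4 <= 0 as A %*% beta >= 0
A <- rbind(c(-1,  1,  0,  0),
           c( 0, -1,  0,  0),
           c( 0,  0, -1,  1),
           c( 0,  0,  0, -1))
beta_ok  <- c(-0.8, -0.4, -0.2, -0.1)  # satisfies the order restrictions
beta_bad <- c(-0.2, -0.4,  0.1, -0.1)  # violates them
all(A %*% beta_ok  >= 0)  # TRUE
all(A %*% beta_bad >= 0)  # FALSE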
We consider the likelihood method for inference. Since the nonzero response $Y_{ij}^*$ and the zero indicator $U_{ij}$ are associated through correlated random effects, we consider a joint likelihood for both variables. Let $\theta$ denote the collection of all unknown parameters. The log-likelihood function (up to a constant) is given by

$l(\theta) = \sum_{i=1}^{n} \log \int \exp(l_{1i}) \exp(l_{2i} \mid b_i) f(b_i)\, db_i,$   (1)

where $l_{1i}$ is the log likelihood for the zero indicator $U_{ij}$,

$l_{1i} = \log \prod_{j=1}^{n_i} \pi_{ij}^{U_{ij}} (1 - \pi_{ij})^{1 - U_{ij}} = \sum_{j=1}^{n_i} \left[ U_{ij} (x_{ij1}^T \beta_1 + z_{ij1}^T b_{i1}) - \log\{1 + \exp(x_{ij1}^T \beta_1 + z_{ij1}^T b_{i1})\} \right],$

$l_{2i}$ is the log likelihood for the nonzero response $Y_{ij}^*$,

$l_{2i} = -\frac{n_i^+}{2} \log(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{j:\, Y_{ij} > 0} \left( Y_{ij}^* - x_{ij2}^T \beta_2 - z_{ij2}^T b_{i2} \right)^2,$

with $n_i^+ = \sum_{j=1}^{n_i} U_{ij}$ being the number of nonzero responses from subject $i$, and

$f(b_i) = (2\pi)^{-d/2} |\Sigma|^{-1/2} \exp\left( -\tfrac{1}{2} b_i^T \Sigma^{-1} b_i \right), \qquad d = \dim(b_i),$

corresponds to the joint distribution of the random effects.
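To make these two contributions concrete, a minimal R sketch of $l_{1i}$ and $l_{2i}$ for one subject, evaluated at given parameter values and random effects, might look as follows; all argument names are hypothetical placeholders, and Ystar only needs valid entries where U equals 1.

# l1i: Bernoulli log likelihood for the zero/nonzero indicator U_ij
# l2i: normal log likelihood for the transformed nonzero responses Y*_ij
loglik_subject <- function(U, Ystar, X1, Z1, X2, Z2, beta1, beta2, b1, b2, sigma) {
  eta1 <- drop(X1 %*% beta1 + Z1 %*% b1)        # linear predictor of the binary part
  l1i  <- sum(U * eta1 - log1p(exp(eta1)))      # logistic log likelihood
  pos  <- U == 1                                # nonzero observations only
  mu2  <- drop(X2 %*% beta2 + Z2 %*% b2)[pos]   # mean of the transformed response
  l2i  <- sum(dnorm(Ystar[pos], mean = mu2, sd = sigma, log = TRUE))
  c(l1i = l1i, l2i = l2i)
}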

2.2 Approximate likelihood computation
When the random effects $b_i$ have high dimensions, evaluation of the integrals in the log likelihood in (1) is a major challenge, especially under order restrictions. In this section, we discuss and propose computationally efficient approximate methods to evaluate the log likelihood.
To evaluate the joint likelihood $l(\theta)$ in (1), note that

$\int \exp(l_{1i}) \exp(l_{2i} \mid b_i) f(b_i)\, db_i = \int \exp(l_{1i}) f(b_{i1}) \left\{ \int \exp(l_{2i} \mid b_{i2}) f(b_{i2} \mid b_{i1})\, db_{i2} \right\} db_{i1}.$

Following the work of Olsen and Schafer,5 we consider Laplace approximations to the integrals. First, we decompose $f(b_i) = f(b_{i1}) f(b_{i2} \mid b_{i1})$. Note that $f(b_{i2} \mid b_{i1})$ is a multivariate normal density with mean $\mu_{i2} = \Sigma_{21} \Sigma_{11}^{-1} b_{i1}$ and variance $\Sigma_{22 \cdot 1} = \Sigma_{22} - \Sigma_{21} \Sigma_{11}^{-1} \Sigma_{12}$. Thus, we have

$\int \exp(l_{2i} \mid b_{i2}) f(b_{i2} \mid b_{i1})\, db_{i2} = \int \exp\left( -b_{i2}^T A_i b_{i2} + B_i b_{i2} + C_i \right) db_{i2} = \pi^{d_2/2} |A_i|^{-1/2} \exp\left( B_i A_i^{-1} B_i^T / 4 + C_i \right),$

where $d_2 = \dim(b_{i2})$,

$A_i = \frac{1}{2\sigma^2} \sum_{j:\, Y_{ij} > 0} z_{ij2} z_{ij2}^T + \frac{1}{2} \Sigma_{22 \cdot 1}^{-1}, \qquad B_i = \frac{1}{\sigma^2} \sum_{j:\, Y_{ij} > 0} \left( Y_{ij}^* - x_{ij2}^T \beta_2 \right) z_{ij2}^T + \mu_{i2}^T \Sigma_{22 \cdot 1}^{-1},$

and

$C_i = -\frac{d_2}{2} \log(2\pi) - \frac{1}{2} \log|\Sigma_{22 \cdot 1}| - \frac{n_i^+}{2} \log(2\pi\sigma^2) - \frac{1}{2} \mu_{i2}^T \Sigma_{22 \cdot 1}^{-1} \mu_{i2} - \frac{1}{2\sigma^2} \sum_{j:\, Y_{ij} > 0} \left( Y_{ij}^* - x_{ij2}^T \beta_2 \right)^2.$

Note that the row vector $B_i$ and the scalar $C_i$ depend on $b_{i1}$ through $\mu_{i2}$. Therefore, the integral in (1) can then be computed as

$\int \exp(l_{1i}) \exp(l_{2i} \mid b_i) f(b_i)\, db_i = \pi^{d_2/2} |A_i|^{-1/2} \int \exp(l_{1i}) f(b_{i1}) \exp\left( B_i A_i^{-1} B_i^T / 4 + C_i \right) db_{i1} = \pi^{d_2/2} |A_i|^{-1/2} \int \exp\{-h(b_{i1})\}\, db_{i1} \approx \pi^{d_2/2} |A_i|^{-1/2} (2\pi)^{d_1/2} |H|^{-1/2} \exp\{-h(\hat{b}_{i1})\},$   (2)

where $d_1 = \dim(b_{i1})$, $\hat{b}_{i1}$ is the minimizer of

$h(b_{i1}) = -\sum_{j=1}^{n_i} \left[ U_{ij} (x_{ij1}^T \beta_1 + z_{ij1}^T b_{i1}) - \log\{1 + \exp(x_{ij1}^T \beta_1 + z_{ij1}^T b_{i1})\} \right] + \frac{1}{2} b_{i1}^T \Sigma_{11}^{-1} b_{i1} + \frac{1}{2} \log|\Sigma_{11}| + \frac{d_1}{2} \log(2\pi) - C_i - B_i A_i^{-1} B_i^T / 4,$

and $H$ is the Hessian of $h(b_{i1})$ evaluated at $\hat{b}_{i1}$.

Note that, when the dimension of the random effects $b_{i1}$ is not high, Gaussian quadrature may be used to better approximate the integral with respect to $b_{i1}$.13 Specifically, let $(z_k, \delta_k)$, $k = 1, \ldots, N_{GQ}$, respectively, denote the abscissas and the weights for the one-dimensional Gaussian quadrature rule based on the $N(0, 1)$ kernel; then,

$\int \exp(l_{1i}) \exp(l_{2i} \mid b_i) f(b_i)\, db_i \approx \pi^{d_2/2} |A_i|^{-1/2} \sum_{k_1=1}^{N_{GQ}} \cdots \sum_{k_{d_1}=1}^{N_{GQ}} \delta_{k_1} \cdots \delta_{k_{d_1}}\, h_1\!\left( \Sigma_{11}^{1/2} (z_{k_1}, \ldots, z_{k_{d_1}})^T \right),$   (3)

where $h_1(b_{i1}) = \exp\left\{ -h(b_{i1}) + \tfrac{1}{2} b_{i1}^T \Sigma_{11}^{-1} b_{i1} + \tfrac{1}{2} \log|\Sigma_{11}| + \tfrac{d_1}{2} \log(2\pi) \right\}$.
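As a rough illustration of the Laplace step in (2), the R sketch below approximates the integral of $\exp\{-h(b)\}$ for a generic function h using a numerical minimizer and its Hessian; h_fun is a hypothetical placeholder for the subject-specific $h(b_{i1})$ defined above.

# Laplace approximation of  integral of exp(-h(b)) db  over R^d1:
# (2*pi)^(d1/2) * |H|^(-1/2) * exp(-h(bhat)),  with bhat = argmin h and H = Hessian at bhat
laplace_integral <- function(h_fun, d1, start = rep(0, d1)) {
  opt <- optim(start, h_fun, method = "BFGS", hessian = TRUE)
  (2 * pi)^(d1 / 2) * det(opt$hessian)^(-1 / 2) * exp(-opt$value)
}
# Check against a case with a known answer: h(b) = b^2/2 gives sqrt(2*pi), about 2.5066
laplace_integral(function(b) sum(b^2) / 2, d1 = 1)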
2.3 Constrained optimization and inference
In the presence of constraints on the parameters $\beta$, computation and likelihood inference can be more challenging. In this section, we consider testing the following general multidimensional one-sided hypotheses using likelihood methods:

$H_0: R\beta = 0, \quad \text{vs} \quad H_1: R\beta \ge 0,$

where strict inequality holds for at least one component of $R\beta$ in $H_1$, and we propose methods for computation.
Before we calculate the maximum likelihood estimates of the parameters under order constraints, we first consider a reparameterization $\theta = ((A\beta)^T, (B\beta)^T)^T$ so that the constrained components of the parameter vector $\beta$ can be rearranged as the first $p$ components of the new parameter vector $\theta$, where $A$ and $B$ are appropriate matrices of 0s and $\pm$1s and $p$ is the number of rows of $A$. Such a reparameterization permits the use of box-constrained optimizers. As an example of such a reparameterization, suppose $\beta = (\beta_0, \beta_1, \beta_2, \beta_3, \beta_4)^T$, with constraints $\beta_1 \le \beta_2 \le 0$ and $\beta_3 \le \beta_4 \le 0$. Then, we may consider the reparameterization $\theta = (\beta_2 - \beta_1, -\beta_2, \beta_4 - \beta_3, -\beta_4, \beta_0)^T = ((A\beta)^T, (B\beta)^T)^T$, with

$A = \begin{pmatrix} 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & 0 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \end{pmatrix}.$

Therefore, the original constraints can be written as $\theta_k \ge 0$, $k = 1, 2, 3, 4$.
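A small R sketch of this reparameterization, using the example matrices above (the $\beta$ values are hypothetical):

# theta = (A beta, B beta): the first p = 4 components carry the order constraints
A <- rbind(c(0, -1,  1,  0,  0),
           c(0,  0, -1,  0,  0),
           c(0,  0,  0, -1,  1),
           c(0,  0,  0,  0, -1))
B <- matrix(c(1, 0, 0, 0, 0), nrow = 1)
beta  <- c(1.2, -0.8, -0.4, -0.2, -0.1)  # (beta0, beta1, beta2, beta3, beta4)
theta <- c(A %*% beta, B %*% beta)       # constraints become theta[1:4] >= 0
all(theta[1:4] >= 0)                     # TRUE when beta satisfies the original restrictions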
In order to address the constraints on the variance component parameters, eg, the standard deviation $\sigma$ being positive and the covariance matrix $\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}$ being positive definite, we can apply a log-Cholesky parameterization, following the recommendation of Pinheiro and Bates.14 Specifically, we may apply the Cholesky decomposition $\Sigma = L^T L$, where $L$ is an upper triangular matrix with positive diagonal entries. Then, we log transform all diagonal entries of $L$ and stack the columns of $L$ so that the unique elements of $L$ form an unconstrained vector $\eta$. We may also use $\sigma_1 = \log(\sigma)$ to replace $\sigma$. Let $\theta_1 = (\theta^T, \sigma_1, \eta^T)^T$ denote all unknown parameters after reparameterization. The constraint $H_1: A\beta \ge 0$ is equivalent to the first $p$ components of $\theta_1$ being nonnegative. Let $R$ be a matrix such that $R\theta_1$ is a vector containing the first $p$ components of $\theta_1$. Let $l_{ap}(\theta_1)$ denote the approximate log likelihood based on the approximation methods described in the previous section and under the reparameterization using $\theta_1$.
The approximate maximum likelihood estimates under the constraint $R\theta_1 \ge 0$ can be calculated by a box-constrained optimization method called bound optimization by quadratic approximation (BOBYQA).15 The idea behind the BOBYQA algorithm is to use a quadratic function $Q(\theta_1)$ to approximate $l_{ap}(\theta_1)$, with $Q(\theta_1)$ being equal to $l_{ap}(\theta_1)$ at $m$ interpolation points, where $m$ is a tuning parameter that is typically chosen as $m = 2\dim(\theta_1) + 1$. Section 2 in the work of Powell15 described a method to calculate the initial interpolation points $x_1, \ldots, x_m$ and the initial function $Q^{(0)}$. At the $(t+1)$th iteration, one of the interpolation points $x_1^{(t)}, \ldots, x_m^{(t)}$ is replaced by a newly constructed point to form $x_1^{(t+1)}, \ldots, x_m^{(t+1)}$, with the inequality constraints $R\theta_1 \ge 0$ being incorporated in this updating scheme (details in Section 3 of the work of Powell15). Then, an updated $Q^{(t+1)}(\theta_1)$ is constructed from $Q^{(t)}(\theta_1)$ to form a more accurate approximation of $l_{ap}(\theta_1)$ and is used to calculate the updated $\theta_1^{(t+1)}$ (details in Section 4 in the work of Powell15). The algorithm stops when the difference in each component between $\theta_1^{(t+1)}$ and $\theta_1^{(t)}$ is smaller than a prespecified tolerance such as 0.001.
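A minimal sketch of this optimization step in R, assuming an approximate log-likelihood function loglik_ap(theta1) and a starting vector theta1_start are already available (both are hypothetical placeholders) and that the first p components of theta1 are the constrained ones; the BOBYQA implementation in the minqa package is used here.

library(minqa)
# Maximize loglik_ap subject to the box constraints theta1[1:p] >= 0,
# by minimizing the negative approximate log likelihood with componentwise lower bounds.
p     <- 4                               # number of constrained components (example)
npar  <- length(theta1_start)            # total number of parameters
lower <- c(rep(0, p), rep(-Inf, npar - p))  # first p components nonnegative, rest unconstrained
fit   <- bobyqa(par = theta1_start,
                fn  = function(th) -loglik_ap(th),
                lower = lower,
                upper = rep(Inf, npar))
theta1_hat <- fit$par                    # constrained (approximate) maximum likelihood estimate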
We consider two tests commonly used in order-restricted inference, ie, the Wald-type test and the likelihood ratio test (LRT). The Wald and LRT statistics are given, respectively, by

$T_W = n\, \hat{\theta}^T R^T \left[ R\, I^{-1}(\hat{\theta})\, R^T \right]^{-1} R\, \hat{\theta}, \qquad T_{LR} = 2 \left[ l_{ap}(\hat{\theta}) - l_{ap}(\tilde{\theta}) \right],$

where $\hat{\theta}$ and $\tilde{\theta}$ denote the (approximate) maximum likelihood estimates under the order-restricted alternative $H_1$ and under the null hypothesis $H_0$, respectively.

Theorem 1. Under standard regularity conditions (the work of Wang and Wu,12 details in Appendix A1), the two test statistics $T_W$ and $T_{LR}$ have the same asymptotic null distribution, ie, the chi-bar-square distribution,8

$\Pr(T \ge t_{obs} \mid H_0) \to \sum_{i=0}^{r} w_i\!\left( r, R\, I^{-1}(\theta_0)\, R^T \right) \Pr\!\left( \chi_i^2 \ge t_{obs} \right), \quad \text{as } n \to \infty,$   (4)

where the test statistic $T$ is either $T_W$ or $T_{LR}$; $t_{obs}$ is the observed value of $T$ given a data set; $\theta_0$ is the true parameter vector; $I(\theta_0)$ is the limit of $-n^{-1} \partial^2 l / \partial\theta\, \partial\theta^T$ evaluated at $\theta_0$; the $w_i(r, R\, I^{-1}(\theta_0)\, R^T)$ are chi-bar probability weights whose calculations are described in Appendix A2 or in Section 3.5 in the work of Silvapulle and Sen8; $\chi_0^2$ denotes a distribution that places probability one at the point mass zero; and $r$ is the rank of $R$.
Since the null hypothesis $H_0$ typically only specifies values of a subset of the mean parameters in $\theta$, the null distribution in (4) involves unknown parameters in $\theta$, especially the variance-covariance parameters. For this reason, following the works of Wang and Wu12 and Zhou et al,16 we propose two methods to compute approximate p-values in practice.

The substitution method: Approximate p-values are obtained by substituting the unknown parameters in $I(\theta_0)$ in (4) by their consistent estimators, such as using $I_n(\hat{\theta})/n$, where $I_n(\theta) = -\partial^2 l_{ap}(\theta) / \partial\theta\, \partial\theta^T$.

The bound method: Approximate conservative p-values are obtained by using the upper bound given by Perlman9:

$\sum_{i=0}^{r} w_i\!\left( r, R\, I^{-1} R^T \right) \Pr\!\left( \chi_i^2 \ge t_{obs} \right) \le \frac{1}{2} \left[ \Pr\!\left( \chi_{r-1}^2 \ge t_{obs} \right) + \Pr\!\left( \chi_r^2 \ge t_{obs} \right) \right].$   (5)

The substitution method may perform well when the sample size is large, but it may perform poorly for small samples. The bound method may be too conservative, but it is computationally much simpler. We will evaluate these two methods via simulations later.
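A small R sketch of the two p-value approximations, given an observed statistic tobs, the rank r of $R$, and chi-bar weights w (computed, eg, as in Appendix A2); these inputs are assumed to be already available.

chibar_pvalues <- function(tobs, r, w) {
  # Substitution method: weighted chi-square tail probabilities (the df = 0 term is a point mass at 0)
  tails <- c(as.numeric(tobs <= 0), pchisq(tobs, df = 1:r, lower.tail = FALSE))
  p_sub <- sum(w * tails)
  # Bound method (Perlman): 0.5 * [ Pr(chisq_{r-1} >= tobs) + Pr(chisq_r >= tobs) ]
  p_bd  <- 0.5 * (pchisq(tobs, df = r - 1, lower.tail = FALSE) +
                  pchisq(tobs, df = r, lower.tail = FALSE))
  c(substitution = p_sub, bound = p_bd)
}
chibar_pvalues(tobs = 5.2, r = 2, w = c(0.25, 0.5, 0.25))  # weights for an identity V, for illustration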
3 ANALYSIS OF LUNG CANCER DATA
We consider a longitudinal retrospective case-control study on lung cancer progression described in the work of Ishizumi et al.17 In this study, 101 cancer patients were identified and were retrospectively matched with 101 control patients who exhibit comparable distributions of demographic characteristics such as age, gender, and smoking status. The response of interest is a subject's biopsy grade based on tumor size. The biopsy grade includes the following levels: normal, hyperplasia, metaplasia, mild dysplasia, moderate dysplasia, severe dysplasia, carcinoma in situ, and cancer. For each subject at each time point, the biopsy grade was measured at multiple locations within the lung. For simplicity, we consider the average numerical biopsy grade over multiple lung locations for each subject at each time point and use this lung-average biopsy grade as the response variable. The data are available through a formal application for ethical approval.
Table 1 shows a descriptive summary of the data. Figure 2 shows the trajectories of the lung-average biopsy grade of four randomly selected patients from the cancer group and four randomly selected patients from the control group. We see that the lung-average biopsy grade exhibits both inter-subject and intra-subject variability. We focus on the 65 cancer subjects and the 64 control patients who are at least 45 years old and who have a smoking duration of at least 30 years, since such a subgroup seems of special interest.
Let $t_{ij}$ denote the time (in years) at which measurement $j$ for subject $i$ was taken. We use $j = 0$ for baseline and $j \ge 1$ for follow-up times. Let $Y_{ij}$ denote the lung-average biopsy grade from subject $i$ at time $t_{ij}$. From a clinical point of view, $Y_{ij}$ values smaller than 3 may be considered as normal or very mild cells, while values over 3 indicate possible non-normal cells, which may possibly lead to cancer, so $Y_{ij}$ values over 3 are of primary interest. We thus introduce the indicator variable $U_{ij} = I(Y_{ij} > 3)$ as an indicator of abnormality of the cells. In other words, we can view $Y_{ij}$ values smaller than 3 as zeros (or a cluster of normal cases), and we view $Y_{ij}$ values larger than 3 as continuous data with larger values indicating more severe abnormality. We also consider a transformed response variable $Y_{ij}^* = \log_{10}(Y_{ij} - 2.5)$ for $Y_{ij} > 3$ to make the response data more normally distributed. The proportion of abnormal lung-average biopsy grades is 86% in the cancer group and 82% in the control group.
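For instance, with a long-format data frame dat containing the lung-average biopsy grade in a column Y (dat and its column names are hypothetical), the indicator and the transformed response could be constructed as follows.

dat$U     <- as.integer(dat$Y > 3)                       # abnormality indicator U_ij = I(Y_ij > 3)
dat$Ystar <- ifelse(dat$Y > 3, log10(dat$Y - 2.5), NA)   # transformed response, defined only when Y_ij > 3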
TABLE 1 Summary of the lung cancer data in the work of Ishizumi et al17

Variable                                         Cancer Group   Control Group
Age (years; mean±SD)                             61±7           61±8
Men:women (number of subjects)                   59:42          59:42
Current:former smoker (number of subjects)       57:44          57:44
Smoking duration (years; mean±SD)                41±9           40±8
Years quit (former smokers; mean±SD)             11±7           10±7
Follow-up duration (years; mean±SD)              9.8±3.9        9.2±3.9
Lung-average biopsy grade (mean±SD)
  baseline                                       3.94±0.87      3.87±0.85
  0- to 5-year follow-up                         4.06±1.36      3.69±1.01
  5- to 10-year follow-up                        4.21±1.32      3.32±1.28
  10- to 15-year follow-up                       3.84±0.69      3.45±0.10

Abbreviation: SD, standard deviation.

FIGURE 2 Trajectory of four randomly selected subjects from each group. The dashed line corresponds to 3, the cutoff defining abnormality in lung-average biopsy grade

Since the cancer group and the control group were matched with respect to demographic covariates, we consider the following simple and empirical model, which includes the group indicator, the baseline lung-average biopsy grade $Y_{i0}$, and time as predictors. Specifically, let $IC_i$ denote the group indicator of subject $i$, with 1 for the cancer group and 0 for the control group. We consider the following models:

$\text{logit}\{\Pr(U_{ij} = 1 \mid b_{i1})\} = \beta_0 + \beta_1 IC_i + \beta_2 Y_{i0} + \beta_3 t_{ij} + b_{i1},$   (6)
$Y_{ij}^* = \beta_4 + \beta_5 IC_i + \beta_6 Y_{i0} + \beta_7 t_{ij} + b_{i2} + \epsilon_{ij},$   (7)

where the $\epsilon_{ij}$ are i.i.d. $N(0, \sigma^2)$ and the $(b_{i1}, b_{i2})^T$ are i.i.d. bivariate Gaussian with mean 0 and covariance matrix $\Sigma$.
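As a rough starting point only (not the proposed joint method, which links the two parts through correlated random effects and supports the order constraints), the two submodels in (6) and (7) can be fitted separately with the lme4 package, ignoring the correlation between $b_{i1}$ and $b_{i2}$; the variable and data-frame names are hypothetical.

library(lme4)
# Binary part: abnormality indicator with a random intercept per subject
fit_u <- glmer(U ~ IC + Y0 + time + (1 | id), data = dat, family = binomial)
# Continuous part: transformed response among the abnormal observations only
fit_y <- lmer(Ystar ~ IC + Y0 + time + (1 | id), data = subset(dat, U == 1))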
For this study, a natural research question of interest is whether the disease progression, measured by the values of $U_{ij}$ and $Y_{ij}^*$ over time, is (i) different for the cancer and control groups and (ii) related to the baseline values $Y_{i0}$. Statistically, the questions can be answered by testing the following two null hypotheses: $H_{01}: \beta_1 = \beta_5 = 0$ and $H_{02}: \beta_2 = \beta_6 = 0$. The alternative hypotheses may be the unrestricted alternatives $H_{11}^u$, ie, not $H_{01}$ (ie, at least one of $\beta_1$ and $\beta_5$ is not zero, or $\beta_1^2 + \beta_5^2 > 0$), and $H_{12}^u$, ie, not $H_{02}$ (ie, at least one of $\beta_2$ and $\beta_6$ is not zero, or $\beta_2^2 + \beta_6^2 > 0$). However, in this application, it is natural to expect that patients in the cancer group should have worse disease progression than those in the control group. Similarly, patients with worse baseline disease status should be more likely to progress poorly. In other words, the values of the parameters $\beta_1$, $\beta_5$, $\beta_2$, and $\beta_6$ should all be nonnegative. Therefore, the order-restricted alternatives

$H_{11}^r: \beta_1 \ge 0,\ \beta_5 \ge 0, \qquad H_{12}^r: \beta_2 \ge 0,\ \beta_6 \ge 0,$

with at least one inequality holding strictly, should lead to more powerful tests since they incorporate order restrictions on the parameter values (ie, they use more information). Such order-restricted alternatives are particularly valuable if the differences to be detected are small, ie, significant results may be obtained based on order-restricted alternatives but not based on unrestricted alternatives. Therefore, they have important clinical implications.
TABLE 2 Estimates of the fixed effects parameters in the models for $U_{ij}$ and $Y_{ij}^*$

Model                 Parameter   Estimate   Standard error
Model for $U_{ij}$    β0          1.74       0.93
                      β1          0.55       0.44
                      β2          0.02       0.23
                      β3          0.01       0.07
Model for $Y_{ij}^*$  β4          0.01       0.08
                      β5          0.07       0.03
                      β6          0.02       0.02
                      β7          0.01       0.01

TABLE 3 P-values from testing $H_{01}$ or $H_{02}$ versus the unrestricted alternative $H_1^u$ and the restricted alternative $H_1^r$ (the bound method and the substitution method are both used for $H_1^r$)

Testing for cancer-control group effects (ie, β1 and β5)
             H1u     H1r (bound)   H1r (substitution)
Wald test    0.042   0.027         0.016
LRT          0.048   0.031         0.018
H11u: β1² + β5² > 0;  H11r: β1 ≥ 0, β5 ≥ 0 with at least one strict inequality.

Testing for baseline sickness effects (ie, β2 and β6)
             H1u     H1r (bound)   H1r (substitution)
Wald test    0.566   0.426         0.282
LRT          0.565   0.425         0.281
H12u: β2² + β6² > 0;  H12r: β2 ≥ 0, β6 ≥ 0 with at least one strict inequality.

Abbreviation: LRT, likelihood ratio test.

For comparison, we conducted both unrestricted and order-restricted hypothesis testing for the effects of cancer group and baseline values on the response variable, to see if any new findings can be obtained. Table 2 displays estimates of the fixed effects parameters for the two models. The estimated variance components are 0.24 and

$\hat{\Sigma} = \begin{pmatrix} 1.39 & 0.01 \\ 0.01 & 0.005 \end{pmatrix}.$

Table 3 shows the p-values associated with the unrestricted tests and the order-restricted tests. For the order-restricted tests, we consider both the bound method and the substitution method. We see that the order-restricted tests yield smaller p-values than the unrestricted tests, suggesting that incorporating the restrictions on the parameters does lead to more powerful tests. For example, there is stronger evidence on the difference between the cancer and control groups on the response over time, and the substitution method is even more powerful than the bound method, suggesting that the bound method may be too conservative. Although the significance of the unrestricted and restricted tests appears to be the same based on, say, the 5% level, the unrestricted tests produce p-values very close to the significance level
for testing the group effects, while the order-restricted tests produce p-values much smaller than the significance level, making the conclusions more convincing.
In summary, by using the proposed order-restricted tests, we have found more convincing evidence for the differences between the cancer and control groups than with the usual unrestricted tests.
4 SIMULATION STUDIES
We conduct simulation studies to demonstrate the advantages of order-restricted hypothesis testing over unrestricted testing under models for semicontinuous data, in situations where the order restrictions are reasonable. A main advantage of order-restricted inference is its power gain from incorporating reasonable restrictions on parameters. The objective of the simulation studies in this section is to confirm this advantage in the context of semicontinuous data models. We consider two simulation settings: one setting mimics the models in the work of Albert and Shen,7 as described in Section 1, and the other setting mimics the models in the data analysis in the previous section.
The first simulation study is based on a simplified version of the models for semicontinuous data considered in the work of Albert and Shen.7 Let $IL_i$ and $IH_i$, respectively, be the dummy variables for a light treatment (the medication plus sham acupuncture group) and a heavy treatment (the medication plus active acupuncture group). Let $U_{ij}$ denote the indicator of the volume from subject $i$ at time $t_{ij}$ being positive, and let $Y_{ij}^* = \log(\text{volume} + 1)$ be the transformed volume of subject $i$ at time $t_{ij}$. Similar to the work of Albert and Shen,7 we consider the following models in the simulation:

$\text{logit}\{\Pr(U_{ij} = 1 \mid b_{i1})\} = 2.3 - t_{ij} + \beta_1 IH_i + \beta_2 IL_i + b_{i1},$   (8)
$Y_{ij}^* = 5.5 - t_{ij} + \beta_3 IH_i + \beta_4 IL_i + b_{i2} + \epsilon_{ij},$   (9)

$\epsilon_{ij} \overset{\text{i.i.d.}}{\sim} N(0, 0.3), \qquad \begin{pmatrix} b_{i1} \\ b_{i2} \end{pmatrix} \overset{\text{i.i.d.}}{\sim} N\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0.26 & 0.0046 \\ 0.0046 & 0.02 \end{pmatrix} \right).$
TABLE 4 Comparison of type I error rates and powers for restricted and unrestricted tests

                   Wald^a                            LRT^a
Sample size   H1u     H1r (bd)   H1r (sub)     H1u     H1r (bd)   H1r (sub)
Type I error rates (in %)
30            5.25    1.45       3.80          6.00    1.70       4.30
45            5.55    1.30       3.25          6.30    1.50       3.40
60            5.50    1.35       3.25          5.70    1.15       3.20
75            5.90    1.30       3.15          5.55    1.35       3.45
90^b          4.60    1.50       3.60          4.45    1.55       3.60
120           6.20    1.55       3.35          5.85    1.50       3.65
150           5.00    1.50       3.40          5.05    1.35       3.40
Power (in %)
30            32      34         47            31      32         49
45            47      49         64            45      47         64
60            61      64         76            59      62         76
75            71      74         84            70      73         85
90^b          78      80         90            77      80         91
120           90      92         97            90      91         97
150           97      98         99            97      97         99

^a H1u, H1r (bd), and H1r (sub), respectively, denote the unrestricted alternative hypothesis, the restricted alternative hypothesis with the bound method, and the restricted alternative hypothesis with the substitution method.
^b Similar to the sample size in Albert and Shen.7
Abbreviation: LRT, likelihood ratio test.
For each simulated data set, we consider both the unrestricted hypotheses, $H_0: \beta_1 = \beta_2 = \beta_3 = \beta_4 = 0$ versus $H_1^u$, ie, not $H_0$ (ie, at least one $\beta_j \ne 0$), and the order-restricted alternative $H_1^r: \beta_1 \le \beta_2 \le 0$ and $\beta_3 \le \beta_4 \le 0$, with at least one inequality holding strictly. In the simulation, we first evaluate the type I error rates under the null hypothesis $H_0$. Then, we compare the powers at the alternatives $\beta_1 = -0.8$, $\beta_2 = -0.4$, $\beta_3 = -0.2$, and $\beta_4 = -0.1$, similar to those in the work of Albert and Shen.7 We consider various sample size choices.
For the order-restricted tests, the substitution method and the bound method are applied in both the Wald tests and the LRTs, and they are compared with their unrestricted counterparts. The simulation was repeated 2000 times so that the Monte Carlo errors associated with the empirical power or type I error probability are limited to at most 1.1%.
Table 4 shows the multivariate hypothesis testing results of the first simulation study. The null hypothesis $H_0$ is the same for all tests, while the alternative hypotheses are either unrestricted or restricted. The objective is to check whether the restricted alternatives offer substantial power advantages over the unrestricted alternatives. All tests keep the type I error probability close to or below the nominal 0.05 level, so we can compare the powers. Compared with the unrestricted alternative $H_1^u$, both the substitution method and the bound method for the restricted alternative yield substantially higher powers, for both the Wald and LRT tests. The substitution method shows a substantial power advantage over the bound method, which is expected since the bound method is conservative.
Table 5 shows the mean squared error results for parameter estimation. The use of order constraints yields estimates with slightly larger bias but lower standard errors and lower root mean squared errors (RMSEs). This is as expected, since the order constraints adjust unreasonable sample estimates to be reasonable estimates. For example, the constraints $\beta_2 \le 0$ and $\beta_4 \le 0$, which correspond to sham acupuncture on top of medication being at least as good as medication alone, will adjust positive estimates of $\beta_2$ and $\beta_4$ to zero. Such adjustments greatly reduce the variability of the estimators of $\beta_2$ and $\beta_4$ at the cost of a little more downward bias, leading to improved overall RMSE. This finding is consistent with the work of Davidov and Rosen,10 which shows that order constraints improve RMSE in the context of linear mixed effects models.
The second simulation study is based on the real-data analysis in the previous section, ie, on models (6) and (7), with true parameters close to the corresponding estimates from the data analysis results. We investigate various values of $(\beta_1, \beta_5)$ to mimic scenarios in which the true differences between the treatment group and the control group (ie, the effect sizes) do not exist, or are weaker than, equal to, or stronger than the estimates from the real-data example, respectively.
TABLE 5 Parameter estimation with 90 subjects

             Without order constraints       With order constraints        RMSE
Parameter    Bias     SE      RMSE           Bias     SE      RMSE         improvement (in %)
β1           0.017    0.337   0.337          0.034    0.322   0.324        4.0
β2           0.015    0.350   0.351          0.028    0.311   0.312        10.9
β3           0.004    0.080   0.080          0.000    0.076   0.076        4.8
β4           0.002    0.079   0.079          0.000    0.070   0.070        11.3

Abbreviations: SE, standard error; RMSE, root mean squared error.

TABLE 6 Comparison of type I error rates and powers in the second simulation study

                        Wald^a                            LRT^a
Effect size (β1, β5)   H1u     H1r (bd)   H1r (sub)   H1u     H1r (bd)   H1r (sub)
Type I error rates (in %)
(0, 0)                 5.35    2.95       5.00        5.70    3.20       5.15
Power (in %)
(0.50, 0.06)           28      34         42          29      34         42
(0.55, 0.07)^b         33      41         49          33      41         49
(0.60, 0.08)           38      45         55          38      46         55

^a H1u, H1r (bd), and H1r (sub), respectively, denote the unrestricted alternative hypothesis, the restricted alternative hypothesis with the bound method, and the restricted alternative hypothesis with the substitution method.
^b Equal to the estimates from the real-data example in Table 2.
Abbreviation: LRT, likelihood ratio test.

FIGURE 3 Median and interquartile range of p-value reductions benefiting from order restrictions in the second simulation study. LRT, likelihood ratio test
Table 6 displays the results of the second simulation study. We again have findings similar to those in the first simulation study, ie, the restricted tests outperform the unrestricted tests for both the Wald and LRT statistics. Figure 3 shows the median and interquartile range of the percent reductions in p-values resulting from the use of order-restricted tests. The substitution method shows over a 50% median reduction in p-values compared with the unrestricted tests, regardless of the effect size and the type of test statistic (LRT or Wald).

5 DISCUSSION
The use of order-restricted inference provides a formal framework to incorporate scientific insight about directional effects into statistical inference procedures. We demonstrate that statistical inference for models of semicontinuous longitudinal data incorporating order restrictions can improve power in multiparameter hypothesis testing and reduce mean squared errors in parameter estimation.

Order-restricted inference for mean parameters typically relies on asymptotic theory, such as the chi-bar-square distributions. When the sample size is small, the performance of the methods needs to be evaluated. Moreover, the asymptotic distributions usually involve unknown variance-covariance matrices. In this paper, we have used the bound method and the substitution method. The former can be quite conservative, while the latter requires large samples. Methodological developments for these issues are still needed. In addition, the variances are positive and the covariance matrices are positive definite. Incorporating such information in statistical inference may also lead to more efficient inference. Furthermore, inference for the variance components itself can be of interest, such as for choices of random effects (eg, if a random effect has variance close to zero, it may be removed from the model).

In addition to semicontinuous responses, longitudinal data may involve other complications such as missing data or censoring. In particular, informative censoring is common for human immunodeficiency virus (HIV) viral load data, due to lower detection limits of viral load (see the work of Wu18). Since anti-HIV therapies usually reduce virus levels among HIV-infected populations, incorporating such order restrictions may provide higher power to detect possible efficacies of HIV therapies.12 We are currently investigating such applications of order-restricted inference.

Although there has been an extensive literature on order-restricted inference over the past few decades (eg, since the work of Perlman9), the earlier literature mostly focused on multivariate normal distributions and theoretical developments. In practice, the use of order-restricted hypothesis testing still seems limited. Part of the reason may be that methods for practically important models, such as models for semicontinuous longitudinal data, are still unavailable. Another possible reason may be that software is still very limited in this area. We hope that the contributions in this paper fill some of the gaps and lead to more common use of order-restricted inference. We also plan to develop user-friendly software for the proposed methods.
ORCID
Guohai Zhou  https://orcid.org/0000-0002-8826-403X

REFERENCES
1. Neelon B, O'Malley AJ, Smith VA. Modeling zero-modified count and semicontinuous data in health services research part 1: background and overview. Statist Med. 2016;35:5070-5093.
2. Rodrigues-Motta M, Galvis Soto DM, Lachos VH, et al. A mixed-effect model for positive responses augmented by zeros. Statist Med. 2015;34:1761-1778.
3. Zhang B, Liu W, Zhang H, Chen Q, Zhang Z. Composite likelihood and maximum likelihood methods for joint latent class modeling of disease prevalence and high-dimensional semicontinuous biomarker data. Comput Stat. 2016;31:425-449.
4. Liu L. Joint modeling longitudinal semicontinuous data and survival, with application to longitudinal medical cost data. Statist Med. 2009;28:972-986.
5. Olsen MK, Schafer JL. A two-part random-effects model for semicontinuous longitudinal data. J Am Stat Assoc. 2001;96:730-745.
6. Smith VA, Neelon B, Maciejewski ML, Preisser JS. Two parts are better than one: modeling marginal means of semicontinuous data. Health Serv Outcomes Res Methodol. 2017;17:198-218.
7. Albert PS, Shen J. Modelling longitudinal semicontinuous emesis volume data with serial correlation in an acupuncture clinical trial. J R Stat Soc Ser C Appl Stat. 2005;54:707-720.
8. Silvapulle MJ, Sen PK. Constrained Statistical Inference: Order, Inequality, and Shape Constraints. Hoboken, NJ: John Wiley & Sons; 2004.
9. Perlman MD. One-sided testing problems in multivariate analysis. Ann Math Stat. 1969;40:549-567.
10. Davidov O, Rosen S. Constrained inference in mixed-effects models for longitudinal data with application to hearing loss. Biostatistics. 2010;12:327-340.
11. Davis KA, Park CG, Sinha SK. Testing for generalized linear mixed models with cluster correlated data under linear inequality constraints. Can J Stat. 2012;40:243-258.
12. Wang T, Wu L. Multivariate one-sided tests for nonlinear mixed-effects models. Can J Stat. 2013;41:453-465.
13. Joe H. Accuracy of Laplace approximation for discrete response mixed models. Comput Stat Data Anal. 2008;52:5066-5074.
14. Pinheiro JC, Bates DM. Unconstrained parametrizations for variance-covariance matrices. Stat Comput. 1996;6:289-296.
15. Powell MJ. The BOBYQA Algorithm for Bound Constrained Optimization Without Derivatives. Cambridge NA Report 2009/06. Cambridge, UK: University of Cambridge; 2009.
16. Zhou G, Wu L, Brant R, Ansermino JM. A likelihood-based approach for multivariate one-sided tests with missing data. J Appl Stat. 2017;44:2000-2016.
17. Ishizumi T, McWilliams A, MacAulay C, Gazdar A, Lam S. Natural history of bronchial preinvasive lesions. Cancer Metastasis Rev. 2010;29:5-14.
18. Wu L. Mixed Effects Models for Complex Data. Boca Raton, FL: CRC Press; 2009.
19. Zhou G. Multivariate one-sided tests for multivariate normal and mixed effects regression models with missing data, semicontinuous data and censored data [PhD thesis]. Vancouver, Canada: University of British Columbia; 2017.
APPENDIX A1
REGULARITY CONDITIONS

The following regularity conditions are applicable to

$l_i(\theta \mid y_i) = \log \int \exp(l_{1i}) \exp(l_{2i} \mid b_i) f(b_i)\, db_i,$

the log-likelihood based on subject $i$, with $y_i$ being the response vector for subject $i$.

C0: Different values of $\theta$ correspond to different distributions of $y_i$ (identifiability).
C1: For each $\theta \in \Theta$, where $\Theta$ denotes the overall parameter space, $l_i(\theta \mid y_i)$ is differentiable up to order three with respect to $\theta$ at almost all $y_i$ (the exceptional set is common for all $\theta$ and is a zero-probability event).
C2: There exist real-valued functions $h_i(y_i)$ with $\int h_i(y_i)\, dy_i < \infty$ such that, for each $\theta \in \Theta$, all first- and second-order partial derivatives of $l_i(\theta \mid y_i)$ are bounded by $h_i(y_i)$ in absolute value.
C3: There exist real-valued functions $H_i(y_i)$ and a closed-ball neighborhood of $\theta_0$, denoted $B(\theta_0)$, where $\theta_0$ denotes the true parameter vector, such that, for each $\theta \in B(\theta_0)$, all third-order partial derivatives of $l_i(\theta \mid y_i)$ are bounded by $H_i(y_i)$ in absolute value. In addition, $\lim_{n\to\infty} n^{-1} \sum_{i=1}^{n} H_i(y_i) = O_p(1)$.
C4: For each $\theta \in \Theta$, $I_i(\theta) = E\left[ \frac{\partial l_i(\theta \mid y_i)}{\partial \theta} \frac{\partial l_i(\theta \mid y_i)}{\partial \theta^T} \right]$ has finite entries and is positive definite. Let $I_i(\theta)[j_1, j_2]$ denote the $(j_1, j_2)$ entry of $I_i(\theta)$.
C5: For each $\theta \in \Theta$, $\lim_{n\to\infty} n^{-1} \sum_{i=1}^{n} I_i(\theta) = I(\theta)$ exists and is positive definite. $I(\theta)$ can be interpreted as the average information matrix.
C6: There exist a $\delta_1 > 0$ and a finite $M$ such that, for the $h_i(y_i)$ in C2,

$E\left\{ \left[ \frac{h_i(y_i)}{I_i(\theta)[h, l]} \right]^{1+\delta_1} \right\} \le M,$

for all $i$, $h$, and $l$ and for each $\theta \in \Theta$.
C7: There exists a $\delta_2 > 0$ such that Lyapunov's condition is satisfied for any linear combination $a^T \partial l_i(\theta \mid y_i) / \partial \theta$. That is, for any $a \ne 0$,

$\frac{1}{\left[ n\, a^T I(\theta)\, a \right]^{(2+\delta_2)/2}} \sum_{i=1}^{n} E\left| a^T \frac{\partial l_i(\theta \mid y_i)}{\partial \theta} \right|^{2+\delta_2} \to 0, \quad \text{as } n \to \infty.$

C8: There exists a function $\bar{l}(\theta)$ such that $\sup_{\theta \in \Theta} \left| n^{-1} \sum_{i=1}^{n} l_i(\theta \mid y_i) - \bar{l}(\theta) \right| \overset{p}{\to} 0$ as $n \to \infty$, and $\sup_{\theta:\, \|\theta - \theta_0\| \ge \epsilon} \bar{l}(\theta) < \bar{l}(\theta_0)$ for any $\epsilon > 0$.

Under these regularity conditions, the theorem in Section 2.3 can be derived by applying a second-order Taylor expansion to $n^{-1} \sum_{i=1}^{n} l_i(\theta \mid y_i)$. Specific details can be found in the work of Zhou.19

APPENDIX A2
COMPUTATION OF THE CHI-BAR PROBABILITY WEIGHTS

Following Section 3.5 in the work of Silvapulle and Sen,8 the following Monte Carlo simulation can be used to compute $w_i(r, V)$, where $V = R\, I^{-1}(\theta_0)\, R^T$.

(a) Generate $X \sim N(0, V)$.
(b) Compute

$\tilde{V}(p; O) = \arg\min_{b \in O} (X - b)^T V^{-1} (X - b),$   (A1)

where $O = \{ b = (b_1, \ldots, b_r)^T : b_k \ge 0 \text{ for } k = 1, \ldots, r \}$.
(c) Repeat steps (a) and (b) $N$ times, say, $N = 10\,000$.

Then, for $i = 0, \ldots, r$, we have

$w_i(r, V) \approx \frac{n_i}{N},$

where $n_i$ is the number of times in which $\tilde{V}(p; O)$ in (b) has exactly $i$ positive components.
The solution to (A1) is a standard quadratic programming problem that is well coded, say, in the R packages quadprog or nloptr.
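A compact R sketch of this Monte Carlo procedure, using quadprog for the projection in (A1):

library(MASS)      # mvrnorm
library(quadprog)  # solve.QP
chibar_weights <- function(V, N = 10000, tol = 1e-8) {
  r      <- nrow(V)
  Vinv   <- solve(V)
  counts <- integer(r + 1)
  for (s in 1:N) {
    X   <- mvrnorm(1, rep(0, r), V)
    # (A1): minimize (X - b)' V^{-1} (X - b) subject to b >= 0
    sol <- solve.QP(Dmat = Vinv, dvec = drop(Vinv %*% X),
                    Amat = diag(r), bvec = rep(0, r))$solution
    k   <- sum(sol > tol)                 # number of strictly positive components
    counts[k + 1] <- counts[k + 1] + 1
  }
  counts / N                              # estimated weights w_0, ..., w_r
}
chibar_weights(diag(2))  # approximately (0.25, 0.50, 0.25) for an identity V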
APPENDIX A3
COMPUTER CODE
The code for the simulation studies can be found at github.com/GuohaiZhou/SemiOrd.
