In this paper, we suggest a unit root test for a system of equations using a spectral variance decomposition method based on the Maximal Overlap Discrete Wavelet Transform. We obtain the limiting distribution of the test statistic and study its small sample properties using Monte Carlo simulations. We find that, for multiple time series of small lengths, the wavelet-based method is robust to size distortions in the presence of cross-sectional dependence. The wavelet-based test is also more powerful than the cross-sectionally augmented IPS unit root test (Pesaran, M. H. 2007. "A Simple Panel Unit Root Test in the Presence of Cross-section Dependence." Journal of Applied Econometrics 22 (2): 265-312.) for time series with between 20 and 100 observations, using systems of 5 and 10 equations. We demonstrate the usefulness of the test through an application evaluating the Purchasing Power Parity theory for the Group of 7 countries and find support for the theory, whereas the test by Pesaran (2007) finds no such support.
This article introduces two different non-parametric wavelet-based panel unit-root tests in the presence of unknown structural breaks and cross-sectional dependencies in the data. These tests are compared with a previously suggested non-parametric wavelet test, the parametric Im, Pesaran and Shin (IPS) test and a Wald type of test. The results from the Monte Carlo simulations clearly show that the new wavelet-ratio tests are superior to the traditional tests in terms of both size and power in panel unit-root testing, because of their robustness to cross-sectional dependency and structural breaks. Based on an empirical Central American panel application, we can, in contrast to previous research (where bias due to structural breaks is simply disregarded), find strong, clear-cut support for purchasing power parity (PPP) in this developing region.
We investigate the importance of ethnic origin and local labour market conditions for self-employment propensities in Sweden. In line with previous research, we find differences in the self-employment rate between different immigrant groups as well as between different immigrant cohorts. We use a multilevel regression approach to quantify the roles of ethnic background, time of immigration and local market conditions in order to further understand differences in self-employment rates between different ethnic groups. We arrive at the following: the self-employment decision is to a major extent guided by factors unobservable in register data, such as individual entrepreneurial ability and access to financial capital. The individual’s ethnic background and time of immigration play a smaller role in the self-employment decision but are more important than local labour market conditions.
Part-time work is one of the most well-known "atypical" working time arrangements. In contrast to previous studies focusing on the supply side, the originality of our research is to investigate the demand side of part-time work and to examine how and why companies use it. Based on a large and unique sample of European firms operating in 21 member states, we use multilevel multinomial modeling in a Bayesian environment. Our results suggest that the variation in the extent of part-time work at the establishment level is determined more by country-specific features than by industry-specific factors.
This article tests the home-country self-employment hypothesis on immigrants in Sweden. The results show that self-employment rates vary between different immigrant groups, but we find no support for the home-country self-employment hypothesis using traditional estimation methods. However, when applying a quantile regression method we find such evidence at the 90th quantile. This indicates that home-country self-employment traditions are important for the self-employment decision among immigrant groups with high self-employment rates in Sweden. Furthermore, the result underlines the importance of utilizing robust estimation methods when the home-country self-employment hypothesis is tested.
In this paper we generalize four tests of a multivariate linear hypothesis to panel data unit root testing. The test statistics are invariant to certain linear transformations of the data, and therefore simulated critical values may conveniently be used. It is demonstrated that all four tests remain well behaved in cases where there are heterogeneous alternatives and cross-correlations between marginal variables. A Monte Carlo simulation is included to compare and contrast the tests with two well-established ones.
In this paper, a short background of the Jarque and McKenzie (JM) test for non-normality is given, and the small sample properties of the test are examined with regard to robustness, size and power. The investigation has been performed using Monte Carlo simulations in which factors such as the number of equations, nominal sizes and degrees of freedom have been varied.
Generally, the JM test has been shown to have good power properties. The estimated size based on the asymptotic distribution is, however, not very encouraging. The slow rate of convergence to the asymptotic distribution suggests that empirical critical values should be used in small samples.
In addition, the experiment shows that the properties of the JM test may be disastrous when the disturbances are autocorrelated. Moreover, the simulations show that the distribution of the regressors may also have a substantial impact on the test, and that homogenised OLS residuals should be used when testing for non-normality in small samples.
Experimental studies often measure an individual’s quality of life before and after an intervention, with the data organized into a square table and analyzed using matched-pair modeling. However, it is not unusual to find missing data in either round (i.e., before and/or after) of such studies, and the use of multiple imputation with matched-pair modeling remains relatively unreported in the applied statistics literature. In this paper we introduce an approach which maintains the dependency of responses over time and aligns the models of the imputer and the analyst. We use ‘before’ and ‘after’ quality-of-life data from a randomized controlled trial to demonstrate how multiple imputation and matched-pair modeling can be congenially combined, avoiding a possible mismatch between imputation and analysis, and to derive a properly consolidated analysis of the quality-of-life data. We illustrate this strategy with a real-life example of one item from a quality-of-life study that evaluates the effectiveness of patients’ self-management of anticoagulation versus standard care as part of a randomized controlled trial.
Ridge regression is a variant of ordinary multiple linear regression whose goal is to circumvent the problem of collinearity among predictors. It replaces the Ordinary Least Squares (OLS) estimator as a method for estimating the parameters β of the multiple linear regression model y = Xβ + ε. Different methods of specifying the ridge parameter k have been proposed and are evaluated in terms of Mean Square Error (MSE) by simulation techniques. Comparison is made with other ridge-type estimators evaluated elsewhere. The new estimators of the ridge parameter are shown to have very good MSE properties compared with the other estimators of the ridge parameter and the OLS estimator. Based on our results from the simulation study, we may recommend the new ridge parameters to practitioners.
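As a minimal illustration of the idea, the sketch below fits ridge regression on simulated collinear data using the classical Hoerl–Kennard choice of k. The data-generating setup and this specific choice of k are illustrative assumptions, not the estimators evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate collinear predictors: the second column is the first plus small noise.
n, p = 100, 3
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=n), rng.normal(size=n)])
beta_true = np.ones(p)
y = X @ beta_true + rng.normal(size=n)

# OLS estimate: (X'X)^{-1} X'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Hoerl-Kennard ridge parameter: k = p * sigma^2 / (beta_ols' beta_ols)
resid = y - X @ beta_ols
sigma2 = resid @ resid / (n - p)
k = p * sigma2 / (beta_ols @ beta_ols)

# Ridge estimate: (X'X + kI)^{-1} X'y, which shrinks the OLS solution
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

print(k, np.linalg.norm(beta_ridge), np.linalg.norm(beta_ols))
```

For any k > 0 the ridge coefficient vector has a strictly smaller norm than the OLS one, which is the mechanism that trades a little bias for a large variance reduction under collinearity.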
This article analyzes the effects of multicollinearity on the maximum likelihood (ML) estimator for the Tobit regression model. Furthermore, a ridge regression (RR) estimator is proposed, since the mean squared error (MSE) of ML becomes inflated when the regressors are collinear. To investigate the performance of the traditional ML and the RR approaches we use Monte Carlo simulations where the MSE is used as the performance criterion. The simulated results indicate that the RR approach should always be preferred to the ML estimation method.
In this paper we generalize different approaches of estimating the ridge parameter k proposed by Muniz et al. (Comput Stat, 2011) to be applicable to logistic ridge regression (LRR). These new methods of estimating the ridge parameter in LRR are evaluated by means of Monte Carlo simulations, along with some other estimators of k that have already been evaluated by Månsson and Shukur (Commun Stat Theory Methods, 2010) and the traditional maximum likelihood (ML) approach. As a performance criterion we use the mean squared error (MSE). In the simulation study we also calculate the mean value and the standard deviation of k. The average value is interesting firstly in order to see which values of k are reasonable and secondly because, if several estimators have equal variance, the estimator that induces the smallest bias should be chosen. The standard deviation is interesting as a performance criterion because, if several estimators of k have the same MSE, then the most stable estimator (with the lowest standard deviation) should be chosen. The results from the simulation study show that LRR outperforms the ML approach. Furthermore, some of the newly proposed ridge estimators outperform those proposed by Månsson and Shukur (Commun Stat Theory Methods, 2010).
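The penalized iteratively reweighted least squares (IRLS) updates underlying LRR can be sketched as follows. This is a minimal numpy implementation assuming a fixed ridge parameter k, not one of the data-driven estimators of k compared in the paper; the simulated design is also an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated collinear design: the second regressor is correlated with the first.
n = 400
x1 = rng.normal(size=n)
X = np.column_stack([x1, 0.9 * x1 + 0.4 * rng.normal(size=n)])
beta_true = np.array([0.8, 0.5])
p_true = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = (rng.uniform(size=n) < p_true).astype(float)

def logistic_ridge_irls(X, y, k, n_iter=50):
    """Ridge-penalized IRLS for the logit model: each Newton step
    solves (X'WX + kI) beta = X'W z for the working response z."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)
        z = eta + (y - mu) / w                        # working response
        A = X.T @ (w[:, None] * X) + k * np.eye(X.shape[1])
        beta = np.linalg.solve(A, X.T @ (w * z))
    return beta

beta_ml = logistic_ridge_irls(X, y, k=0.0)    # k = 0 recovers plain ML
beta_lrr = logistic_ridge_irls(X, y, k=1.0)   # ridge-penalized fit

print(np.linalg.norm(beta_lrr), np.linalg.norm(beta_ml))
```

Because the penalty adds k to the curvature, the LRR coefficient vector is shrunk relative to the ML solution, which is what stabilizes the estimator when X'WX is nearly singular.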
This paper proposes several estimators of the ridge parameter k for the Poisson ridge regression (RR) model. These estimators have been evaluated by means of Monte Carlo simulations. As performance criteria, we have calculated the mean squared error (MSE), the mean value and the standard deviation of k. The first criterion is commonly used, while the other two have never been used when analyzing Poisson RR. However, these performance criteria are very informative because, if several estimators have an equal estimated MSE, then those with a low average value and standard deviation of k should be preferred. Based on the simulated results, we may recommend some biasing parameters that may be useful for practitioners in the fields of health, social and physical sciences.
The zero-inflated Poisson regression model is commonly used when analyzing economic data that come in the form of non-negative integers since it accounts for excess zeros and overdispersion of the dependent variable. However, a problem often encountered when analyzing economic data that has not been addressed for this model is multicollinearity. This paper proposes ridge regression (RR) estimators and some methods for estimating the ridge parameter k for a non-negative model. A simulation study has been conducted to compare the performance of the estimators. Both mean squared error and mean absolute error are considered as the performance criteria. The simulation study shows that some estimators are better than the commonly used maximum-likelihood estimator and some other RR estimators. Based on the simulation study and an empirical application, some useful estimators are recommended for practitioners.
In this paper, we use simulated data to investigate the power of different causality tests in a two-dimensional vector autoregressive (VAR) model. The data are presented in a nonlinear environment that is modelled using a logistic smooth transition autoregressive function. We use both linear and nonlinear causality tests to investigate the unidirectional causality relationship and compare the power of these tests. The linear test is the commonly used Granger causality F test. The nonlinear test is a non-parametric test based on Baek and Brock [A general test for non-linear Granger causality: Bivariate model. Tech. Rep., Iowa State University and University of Wisconsin, Madison, WI, 1992] and Hiemstra and Jones [Testing for linear and non-linear Granger causality in the stock price–volume relation, J. Finance 49(5) (1994), pp. 1639–1664]. When implementing the nonlinear test, we use separately the original data, the linear VAR filtered residuals, and the wavelet decomposed series based on wavelet multiresolution analysis. The VAR filtered residuals and the wavelet decomposed series are used to extract the nonlinear structure of the original data. The simulation results show that the non-parametric test based on the wavelet decomposed series (which is a model-free approach) has the highest power to detect the causality relationship in nonlinear models.
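The linear benchmark, the Granger causality F test, can be sketched in a few lines: compare the residual sum of squares of an autoregression of y with and without lags of x. The simulated bivariate system and the lag length below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def granger_f(y, x, lags=2):
    """F test of the null that lags of x do not help predict y:
    compare a restricted AR model for y with one augmented by lags of x."""
    T = len(y)
    Y = y[lags:]
    Z_r = np.column_stack([np.ones(T - lags)] +
                          [y[lags - j:T - j] for j in range(1, lags + 1)])
    Z_u = np.column_stack([Z_r] +
                          [x[lags - j:T - j] for j in range(1, lags + 1)])
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Z_r), rss(Z_u)
    df = T - lags - Z_u.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df)

# x Granger-causes y in the first system but not in the second.
T = 500
x = rng.normal(size=T)
e = rng.normal(size=T)
y_caused = np.zeros(T)
y_indep = np.zeros(T)
for t in range(1, T):
    y_caused[t] = 0.3 * y_caused[t - 1] + 0.8 * x[t - 1] + e[t]
    y_indep[t] = 0.3 * y_indep[t - 1] + e[t]

print(granger_f(y_caused, x), granger_f(y_indep, x))
```

The statistic is large in the first system, where lagged x enters the data-generating process, and close to its null distribution in the second.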
In this article, we propose a nonlinear Dickey-Fuller F test for a unit root against a first-order Logistic Smooth Transition Autoregressive (LSTAR(1)) model with time as the transition variable. The nonlinear Dickey-Fuller F test statistic is established under the null hypothesis of a random walk without drift, while the alternative model is a nonlinear LSTAR(1) model. The asymptotic distribution of the test is analytically derived, while the small sample distributions and the size and power properties of the test are investigated by Monte Carlo experiments. The results show that there is a serious size distortion for the test when GARCH errors appear in the Data Generating Process (DGP), leading to an over-rejection of the unit root null hypothesis. To solve this problem, we use the wavelet technique to counter the GARCH distortion and improve the size property of the test under GARCH errors. We also discuss the asymptotic distributions of the test statistics in GARCH and wavelet environments.
For testing a unit root in a single time series, most tests concentrate on the time domain. Recently, Fan and Gençay (Econom Theory 26:1305–1331, 2010) proposed a wavelet ratio test which takes advantage of information from the frequency domain by using a wavelet spectrum methodology. This test shows better power than many time domain based unit root tests, including the Dickey–Fuller (J Am Stat Assoc 74:427–431, 1979) type of test, in the univariate time series case. On the other hand, various unit root tests for multivariate time series have appeared since the pioneering work of Levin and Lin (Unit root test in panel data: new results, University of California at San Diego, Discussion Paper, 1993). Among them, the Im–Pesaran–Shin (IPS) (J Econ 115(1):53–74, 1997) test is widely used for its straightforward implementation and robustness to heterogeneity. The IPS test is a group mean test which uses the average of the test statistics for each single series. As the test statistic for each series can be chosen flexibly, this paper applies the wavelet ratio statistic and compares it with the test based on the Dickey–Fuller t statistic for the single series. Simulation results show a gain in power from employing the wavelet ratio test instead of the Dickey–Fuller t statistic in the panel data case. As the IPS test is sensitive to cross-sectional dependence, we further compare the robustness of both test statistics when there exists cross-sectional dependence among the units in the panel data. Finally, we apply a residual based wavestrapping methodology to reduce the oversize problem brought about by the cross correlation for both test statistics.
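The intuition behind the wavelet ratio statistic can be illustrated with a single-scale decimated Haar transform (a simplification of the wavelet spectrum methodology used in the actual test): under a unit root, almost all sample energy sits in the low-frequency scaling coefficients, while for white noise it splits roughly evenly between scales. A hedged numpy sketch with simulated series:

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_energy_ratio(y):
    """Share of sample energy captured by the unit-scale Haar
    scaling (low-frequency) coefficients versus the wavelet
    (high-frequency) coefficients."""
    y = y[:len(y) // 2 * 2]                   # truncate to even length
    s = (y[0::2] + y[1::2]) / np.sqrt(2.0)    # scaling coefficients
    w = (y[0::2] - y[1::2]) / np.sqrt(2.0)    # wavelet coefficients
    return np.sum(s ** 2) / (np.sum(s ** 2) + np.sum(w ** 2))

n = 1024
white_noise = rng.normal(size=n)
random_walk = np.cumsum(rng.normal(size=n))

# Near 0.5 for white noise; near 1 for a random walk, whose
# low-frequency variance dominates.
print(haar_energy_ratio(white_noise), haar_energy_ratio(random_walk))
```

A test statistic built on such a ratio therefore discriminates between stationary and unit root behaviour through the frequency domain rather than through an autoregressive fit.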
In this article, we use the wavelet technique to mitigate the over-rejection problem of the traditional Dickey-Fuller tests for a unit root when the data exhibit volatility such as a GARCH(1, 1) effect. The logic of this technique is based on the idea that the wavelet spectrum decomposition can separate out information of different frequencies in the data series. We prove that the asymptotic distribution of the test in the wavelet environment is still the same as that of the traditional Dickey-Fuller type of tests, while the finite sample properties are improved when the data suffer from GARCH errors. The investigation of the size property and the finite sample distribution of the test is carried out by Monte Carlo experiments. An empirical example with data on the net immigration to Sweden during the period 1950-2000 is used to illustrate the performance of the wavelet-improved test under GARCH errors. The results reveal that the traditional Dickey-Fuller type of tests reject the unit root hypothesis, while our wavelet-improved test does not, as it is more robust to GARCH errors in finite samples.
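For reference, the plain Dickey–Fuller t statistic that the wavelet correction builds on can be sketched as below, without GARCH errors or wavelet filtering; the simulated series are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def df_t_stat(y):
    """Dickey-Fuller t statistic without constant or trend:
    regress the first difference of y on the lagged level and
    return the t ratio of the slope (rho - 1)."""
    dy = np.diff(y)
    ylag = y[:-1]
    phi = (ylag @ dy) / (ylag @ ylag)         # estimate of rho - 1
    resid = dy - phi * ylag
    s2 = resid @ resid / (len(dy) - 1)
    se = np.sqrt(s2 / (ylag @ ylag))
    return phi / se

n = 500
random_walk = np.cumsum(rng.normal(size=n))   # unit root: t statistic near zero
stationary = np.zeros(n)                      # AR(1) with rho = 0.2: large negative t
eps = rng.normal(size=n)
for t in range(1, n):
    stationary[t] = 0.2 * stationary[t - 1] + eps[t]

print(df_t_stat(random_walk), df_t_stat(stationary))
```

Under a unit root the statistic follows the non-standard Dickey–Fuller distribution rather than a t distribution, which is why simulated critical values are used in practice.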
In ridge regression, the estimation of the ridge parameter is an important issue. This article generalizes some methods for estimating the ridge parameter for the probit ridge regression (PRR) model based on the work of Kibria et al. (2011). The performance of these new estimators is judged by calculating the mean squared error (MSE) using Monte Carlo simulations. In the design of the experiment, we chose to vary the sample size and the number of regressors. Furthermore, we generate explanatory variables that are linear combinations of other regressors, which is a common situation in economics. In an empirical application regarding Swedish job search data, we also illustrate the benefits of the new method.
In this paper we propose ridge regression estimators for probit models, since the commonly applied maximum likelihood (ML) method is sensitive to multicollinearity. An extensive Monte Carlo study is conducted in which the performance of the ML method and the probit ridge regression (PRR) is investigated when the data are collinear. In the simulation study we evaluate a number of methods of estimating the ridge parameter k that have recently been developed for use in linear regression analysis. The results from the simulation study show that there is at least one group of estimators of k that regularly has a lower MSE than the ML method in all of the different situations that have been evaluated. Finally, we show the benefit of the new method using the classical Dehejia and Wahba (1999) dataset, which is based on a labor market experiment.
This paper investigates the effect of spillover (i.e. causality in variance) on Johansen's tests for cointegration by conducting a Monte Carlo experiment in which 16 different data generating processes (DGPs) are used and a number of factors that might affect the properties of Johansen's cointegration tests are varied. The results from the simulation study clearly show that the spillover effect leads to an over-rejection of the true null hypothesis. Hence, in the presence of spillover it becomes very hard to make inferential statements, since it will often lead to erroneous claims that cointegration relationships exist.
In this paper we review some existing estimators and propose some new estimators of the ridge parameter. In all, 19 different estimators have been studied. The investigation has been carried out using Monte Carlo simulations. A large number of different models have been investigated, where the variance of the random error, the number of variables included in the model, the correlations among the explanatory variables, the sample size and the unknown coefficient vector were varied. For each model we have performed 2000 replications and presented the results in both figures and tables. Based on the simulation study, we found that increasing the number of correlated variables, the variance of the random error and the correlation between the independent variables has a negative effect on the mean squared error. When the sample size increases, the mean squared error decreases even when the correlation between the independent variables and the variance of the random error are large. In all situations, the proposed estimators have a smaller mean squared error than the ordinary least squares and other existing estimators.
In this article, we propose a restricted Liu regression estimator (RLRE) for estimating the parameter vector, β, in the presence of multicollinearity, when the dependent variable is binary and it is suspected that β may belong to a linear subspace defined by Rβ = r. First, we investigate the mean squared error (MSE) properties of the new estimator and compare them with those of the restricted maximum likelihood estimator (RMLE). Then we suggest some estimators of the shrinkage parameter, and a simulation study is conducted to compare the performance of the different estimators. Finally, we show the benefit of using RLRE instead of RMLE when estimating how changes in price affect consumer demand for a specific product.
This paper suggests some new estimators of the ridge parameter for binary choice models that may be applied in the presence of a multicollinearity problem. These new ridge parameters are functions of other estimators of the ridge parameter that have been shown to work well in previous research. Using a simulation study, we investigate the mean square error (MSE) properties of these new ridge parameters and compare them with the best performing estimators from previous research. The results indicate that we may improve the MSE properties of the ridge regression estimator by applying the estimators proposed in this paper, especially when there is high multicollinearity between the explanatory variables and when many explanatory variables are included in the regression model. The benefit of the approach is then shown using health-related data in which the effect of some risk factors on the probability of developing diabetes is investigated.
This paper introduces a shrinkage estimator for the logit model which is a generalization of the estimator proposed by Liu (1993) for linear regression. This new estimation method is suggested since the mean squared error (MSE) of the commonly used maximum likelihood (ML) method becomes inflated when the explanatory variables of the regression model are highly correlated. Using the MSE, the optimal value of the shrinkage parameter is derived and some methods of estimating it are proposed. It is shown by means of Monte Carlo simulations that the estimated MSE and mean absolute error (MAE) are lower for the proposed Liu estimator than for the ML estimator in the presence of multicollinearity. Finally, the benefit of the Liu estimator is shown in an empirical application where different economic factors are used to explain the probability that municipalities have a net increase of inhabitants.
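The original Liu (1993) estimator for the linear model, which is the estimator being generalized to the logit case here, can be sketched as follows; the simulated design matrix and the value d = 0.5 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Collinear design: the second column is the first plus small noise.
n, p = 100, 3
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=n), rng.normal(size=n)])
y = X @ np.ones(p) + rng.normal(size=n)

XtX = X.T @ X
beta_ols = np.linalg.solve(XtX, X.T @ y)

# Liu (1993) estimator: beta_d = (X'X + I)^{-1} (X'X + d I) beta_ols,
# with shrinkage parameter 0 < d < 1 (d = 1 recovers OLS).
d = 0.5
beta_liu = np.linalg.solve(XtX + np.eye(p), (XtX + d * np.eye(p)) @ beta_ols)

print(np.linalg.norm(beta_liu), np.linalg.norm(beta_ols))
```

Each eigen-component of the OLS solution is scaled by (λ + d)/(λ + 1) < 1, so the Liu estimator shrinks most strongly along the directions with small eigenvalues λ, exactly where collinearity inflates the OLS variance.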
In this article, we propose some new estimators for the shrinkage parameter d of the weighted Liu estimator, along with the traditional maximum likelihood (ML) estimator, for the logit regression model. A simulation study has been conducted to compare the performance of the proposed estimators. The mean squared error is considered as the performance criterion. The average value and standard deviation of the shrinkage parameter d are investigated. In an application, we analyze the effect of the usage of cars, motorcycles, and trucks on the probability that pedestrians are killed in different counties in Sweden. In the example, the benefits of using the weighted Liu estimator are shown. Both the results from the simulation study and the empirical application show that all proposed shrinkage estimators outperform the ML estimator. The proposed D9 estimator performed best and is recommended for practitioners.
This paper suggests some Liu-type shrinkage estimators for the dynamic ordinary least squares (DOLS) estimator that may be used to combat the multicollinearity problem. DOLS is an estimator suggested to solve the finite sample bias of OLS caused by endogeneity when estimating regression models based on cointegrated variables. In this paper it is shown, using simulation techniques, that multicollinearity and non-normality of the error term are problems in finite samples for the DOLS model. The merits of the proposed Liu-type estimator are shown by means of a Monte Carlo simulation study and an empirical application.
This paper introduces shrinkage estimators (Ridge DOLS) for the dynamic ordinary least squares (DOLS) cointegration estimator, which extends the model for use in the presence of multicollinearity between the explanatory variables in the cointegration vector. Both analytically and by using simulation techniques, we conclude that our new Ridge DOLS approach exhibits lower mean square errors (MSE) than the traditional DOLS method. Therefore, based on the MSE performance criteria, our Monte Carlo simulations demonstrate that our new method outperforms the DOLS under empirically relevant magnitudes of multicollinearity. Moreover, we show the advantages of this new method by more accurately estimating the environmental Kuznets curve (EKC), where the income and squared income are related to carbon dioxide emissions. Furthermore, we also illustrate the practical use of the method when augmenting the EKC curve with energy consumption. In summary, regardless of whether we use analytical, simulation-based, or empirical approaches, we can consistently conclude that it is possible to estimate these types of relationships in a considerably more accurate manner using our newly suggested method.
A new shrinkage estimator for the Poisson model is introduced in this paper. The method is a generalization of the Liu (1993) estimator, originally developed for the linear regression model, and is proposed here as an alternative to the classical maximum likelihood (ML) method in the presence of multicollinearity, since the mean squared error (MSE) of ML becomes inflated in that situation. Furthermore, this paper derives the optimal value of the shrinkage parameter and, based on this value, suggests some methods for estimating the shrinkage parameter. Using Monte Carlo simulations where the MSE and mean absolute error (MAE) are calculated, it is shown that the Liu estimator, applied with these proposed estimators of the shrinkage parameter, always outperforms the ML.