{ "instances": [ { "instance_id": "R108331xR108307", "comparison_id": "R108331", "paper_id": "R108307", "text": "Knowledge modelling in weakly\u2010structured business processes In this paper we present a new approach for integrating knowledge management and business process management. We focus on the modelling of weakly\u2010structured knowledge\u2010intensive business processes. We develop a framework for modelling this type of processes that explicitly considers knowledge\u2010related tasks and knowledge objects and present a workflow tool that is an implementation of our theoretical meta\u2010model. As an example, we sketch one case study, the process for granting full old age pension as it is performed in the Greek Social Security Institution. Finally we briefly describe some related approaches and compare them to our work and draw the main conclusions and further research directions." }, { "instance_id": "R108331xR108321", "comparison_id": "R108331", "paper_id": "R108321", "text": "Modelling knowledge transfer: A knowledge dynamics perspective The increasing complexity in design activities leads designers to collaborate and share knowledge within distributed teams. This makes designers use systems such as knowledge management systems to reach their goal. In this article, our aim is to investigate on improving the use of knowledge management systems by defining a framework for modelling knowledge transfer in such context. The proposed framework is partly based on reuse of existing models found in the literature and on a participant observation methodology. Then, we tested this framework through several case studies presented in this article. These investigations enable us to observe, define and model more finely the knowledge dynamics that occur between knowledge workers and knowledge management systems." }, { "instance_id": "R108358xR108156", "comparison_id": "R108358", "paper_id": "R108156", "text": "Potential Use of Airborne Hyperspectral AVIRIS-NG Data for Mapping Proterozoic Metasediments in Banswara, India Airborne Visible InfraRed Imaging Spectrometer \u2014 Next Generation (AVIRIS-NG) data with high spectral and spatial resolutions are used for mapping metasediments in parts of Banswara district, Rajasthan, India. The AVIRIS\u2014NG image spectra of major metasedimentary rocks were compared with their respective laboratory spectra to identify few diagnostic spectral features or absorption features of the rocks. These spectral features were translated from laboratory to image and consistently present in the image spectra of these rocks across the area. After ensuring the persistency of absorption features from sample to image pixels, three AVIRIS\u2014NG based spectral indices is proposed to delineate calcareous (dolomite), siliceous (quartzite) and argillaceous (phyllite) metasedimentary rocks. The index image composite was compared with the reference lithological map of Geological Survey of India and also was validated in the field. The study demonstrates the efficiency of AVIRIS \u2014 NG data for mapping metasedimentary units from the Aravalli Supergroup that are known to host strata bound mineral deposits." }, { "instance_id": "R108358xR108144", "comparison_id": "R108358", "paper_id": "R108144", "text": "Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques Remote sensing techniques have emerged as an asset for various geological studies. 
Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) dataset have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members and classified mineral maps have been produced using Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area which shows positive agreement with the image processing outputs. Thus, this study concludes that the band ratios and image processing in combination play significant role in demarcation of alteration zones which may provide pathfinders for mineral prospecting studies. Keywords\u2014Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper." }, { "instance_id": "R108358xR108153", "comparison_id": "R108358", "paper_id": "R108153", "text": "Comparative analysis of mineral mapping for hyperspectral and multispectral imagery The traditional approaches of mineral-mapping are time consuming and expensive process. Remote sensing is a tool to map the minerals precisely using their physical, chemical and optical properties. In the present study, Tirunelveli district in Tamil Nadu is selected to extract the abundant mineral such as Limestone using Hyperion and Landsat-8 OLI imageries. The chemical composition of the mineral is identified using scanning electron microscope (SEM) and energy dispersive X-ray spectroscopy (EDS) analysis. The spectral reflectance of minerals is characterized using analytical spectral device (ASD) field spectroradiometer. The minerals showed deep absorption in short wave infrared region from 1800 to 2500 nm. The mineral mapping in hyperspectral data is performed using various preliminary processing such as bad band removal, vertical strip removal, radiance and reflectance generation and postprocessing steps such as data dimensional reduction, endmember extraction and classification. To improve the classification accuracy, the vertical strip removal process is performed using a local destriping algorithm. Absolute reflectance of Hyperion and Landsat-8 OLI (Operational Land Imager) imageries is carried out using the FLAASH (fast line-of-sight atmospheric analysis of hypercubes) module. Spectral data reduction techniques in reflectance bands performed using minimum noise fraction method. The noiseless reflectance bands spatial data reduced by the Pixel Purity Index method in the threshold limit of 2.5 under 10,000 repetitions. The obtained reflectance imagery spectra compared with the spectral libraries such as USGS (United States Geological Survey), JPL (Jet Propulsion Laboratory) and field spectra. Endmembers of minerals are carried out using high probability score obtained from the various methods such as SAM (spectral angle mapper), SFF (spectral feature fitting) and BE (binary encoding). The mineral mapping of both imageries is carried out using a supervised classification approach. 
The results showed that hyperspectral remote sensing performed good results as compared to multispectral data." }, { "instance_id": "R108358xR108132", "comparison_id": "R108358", "paper_id": "R108132", "text": "Analysis of spectral absorption features in hyperspectral imagery Abstract Spectral reflectance in the visible and near-infrared wavelengths provides a rapid and inexpensive means for determining the mineralogy of samples and obtaining information on chemical composition. Absorption-band parameters such as the position, depth, width, and asymmetry of the feature have been used to quantitatively estimate composition of samples from hyperspectral field and laboratory reflectance data. The parameters have also been used to develop mapping methods for the analysis of hyperspectral image data. This has resulted in techniques providing surface mineralogical information (e.g., classification) using absorption-band depth and position. However, no attempt has been made to prepare images of the absorption-band parameters. In this paper, a simple linear interpolation technique is proposed in order to derive absorption-band position, depth and asymmetry from hyperspectral image data. AVIRIS data acquired in 1995 over the Cuprite mining area (Nevada, USA) are used to demonstrate the technique and to interpret the data in terms of the known alteration phases characterizing the area. A sensitivity analysis of the methods proposed shows that good results can be obtained for estimating the absorption wavelength position, however the estimated absorption-band-depth is sensitive to the input parameters chosen. The resulting parameter images (depth, position, asymmetry of the absorption) when carefully examined and interpreted by an experienced remote sensing geologist provide key information on surface mineralogy. The estimates of depth and position can be related to the chemistry of the samples and thus allow to bridge the gap between field geochemistry and remote sensing." }, { "instance_id": "R109612xR108803", "comparison_id": "R109612", "paper_id": "R108803", "text": "Dinitrogen fixation rates in the Bay of Bengal during summer monsoon Abstract Biological dinitrogen (N 2 ) fixation exerts an important control on oceanic primary production by providing bioavailable form of nitrogen (such as ammonium) to photosynthetic microorganisms. N 2 fixation is dominant in nutrient poor and warm surface waters. The Bay of Bengal is one such region where no measurements of phototrophic N 2 fixation rates exist. The surface water of the Bay of Bengal is generally nitrate-poor and warm due to prevailing stratification and thus, could favour N 2 fixation. We commenced the first N 2 fixation study in the photic zone of the Bay of Bengal using 15 N 2 gas tracer incubation experiment during summer monsoon 2018. We collected seawater samples from four depths (covering the mixed layer depth of up to 75 m) at eight stations. N 2 fixation rates varied from 4 to 75 \u03bc mol N m \u22122 d \u22121 . The contribution of N 2 fixation to primary production was negligible (<1%). However, the upper bound of observed N 2 fixation rates is higher than the rates measured in other oceanic regimes, such as the Eastern Tropical South Pacific, the Tropical Northwest Atlantic, and the Equatorial and Southern Indian Ocean." }, { "instance_id": "R109612xR109396", "comparison_id": "R109612", "paper_id": "R109396", "text": "No nitrogen fixation in the Bay of Bengal? Abstract. 
The Bay of Bengal (BoB) has long stood as a biogeochemical enigma, with subsurface waters containing extremely low, but persistent, concentrations of oxygen in the nanomolar range which \u2013 for some, yet unconstrained, reason \u2013 are prevented from becoming anoxic. One reason for this may be the low productivity of the BoB waters due to nutrient limitation and the resulting lack of respiration of organic material at intermediate waters. Thus, the parameters determining primary production are key in understanding what prevents the BoB from developing anoxia. Primary productivity in the sunlit surface layers of tropical oceans is mostly limited by the supply of reactive nitrogen through upwelling, riverine flux, atmospheric deposition, and biological dinitrogen (N2) fixation. In the BoB, a stable stratification limits nutrient supply via upwelling in the open waters, and riverine or atmospheric fluxes have been shown to support only less than one-quarter of the nitrogen for primary production. This leaves a large uncertainty for most of the BoB's nitrogen input, suggesting a potential role of N2 fixation in those waters. Here, we present a survey of N2 fixation and carbon fixation in the BoB during the winter monsoon season. We detected a community of N2 fixers comparable to other oxygen minimum zone (OMZ) regions, with only a few cyanobacterial clades and a broad diversity of non-phototrophic N2 fixers present throughout the water column (samples collected between 10 and 560 m water depth). While similar communities of N2 fixers were shown to actively fix N2 in other OMZs, N2 fixation rates were below the detection limit in our samples covering the water column between the deep chlorophyll maximum and the OMZ. Consistent with this, no N2 fixation signal was visible in \u03b415N signatures. We suggest that the absence of N2 fixation may be a consequence of a micronutrient limitation or of an O2 sensitivity of the OMZ diazotrophs in the BoB. Exploring how the onset of N2 fixation by cyanobacteria compared to non-phototrophic N2 fixers would impact on OMZ O2 concentrations, a simple model exercise was carried out. We observed that both photic-zone-based and OMZ-based N2 fixation are very sensitive to even minimal changes in water column stratification, with stronger mixing increasing organic matter production and export, which can exhaust remaining O2 traces in the BoB." }, { "instance_id": "R109612xR109579", "comparison_id": "R109612", "paper_id": "R109579", "text": "Nitrogen fixation rates in the eastern Arabian Sea Abstract The Arabian Sea experiences bloom of the diazotroph Trichodesmium during certain times of the year when optimal sea surface temperature and oligotrophic condition favour their growth. We measured nitrogen fixation rates in the euphotic zone during one such event in the Eastern Arabian Sea using 15 N 2 tracer gas dissolution method. The measured rates varied between 0.8 and 225 \u03bcmol N m \u22123 d \u22121 and were higher than those reported from most other oceanic regions. The highest rates (1739 \u03bcmol N m \u22122 d \u22121 ; 0\u201310 m) coincided with the growth phase of Trichodesmium and led to low \u03b4 15 N ( Trichodesmium bloom nitrogen fixation rates were low (0.9\u20131.5 \u03bcmol N m \u22123 d \u22121 ). Due to episodic events of diazotroph bloom, contribution of N 2 fixation to the total nitrogen pool may vary in space and time." 
}, { "instance_id": "R109904xR109894", "comparison_id": "R109904", "paper_id": "R109894", "text": "A Hybrid Approach Toward Research Paper Recommendation Using Centrality Measures and Author Ranking The volume of research articles in digital repositories is increasing. This spectacular growth of repositories makes it rather difficult for researchers to obtain related research papers in response to their queries. The problem becomes worse when a researcher with insufficient knowledge of searching research articles uses these repositories. In the traditional recommendation approaches, the results of the query miss many high-quality papers, in the related work section, which are either published recently or have low citation count. To overcome this problem, there needs to be a solution which considers not only structural relationships between the papers but also inspects the quality of authors publishing those articles. Many research paper recommendation approaches have been implemented which includes collaborative filtering-based, content-based, and citation analysis-based techniques. The collaborative filtering-based approaches primarily use paper-citation matrix for recommendations, whereas the content-based approaches only consider the content of the paper. The citation analysis considers the structure of the network and focuses on papers citing or cited by the paper of interest. It is therefore very difficult for a recommender system to recommend high-quality papers without a hybrid approach that incorporates multiple features, such as citation information and author information. The proposed method creates a multilevel citation and relationship network of authors in which the citation network uses the structural relationship between the papers to extract significant papers, and authors\u2019 collaboration network finds key authors from those papers. The papers selected by this hybrid approach are then recommended to the user. The results have shown that our proposed method performs exceedingly well as compared with the state-of-the-art existing systems, such as Google scholar and multilevel simultaneous citation network." }, { "instance_id": "R109904xR109860", "comparison_id": "R109904", "paper_id": "R109860", "text": "Applying weighted PageRank to author citation networks This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956\u20132008 were collected from Web of Science. Weighted PageRank with citation and publication as weighted vectors were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures. \u00a9 2011 Wiley Periodicals, Inc." }, { "instance_id": "R109904xR109878", "comparison_id": "R109904", "paper_id": "R109878", "text": "Betweenness and diversity in journal citation networks as measures of interdisciplinarity\u2014A tribute to Eugene Garfield Journals were central to Eugene Garfield\u2019s research interests. Among other things, journals are considered as units of analysis for bibliographic databases such as the Web of Science and Scopus. 
In addition to providing a basis for disciplinary classifications of journals, journal citation patterns span networks across boundaries to variable extents. Using betweenness centrality (BC) and diversity, we elaborate on the question of how to distinguish and rank journals in terms of interdisciplinarity. Interdisciplinarity, however, is difficult to operationalize in the absence of an operational definition of disciplines; the diversity of a unit of analysis is sample-dependent. BC can be considered as a measure of multi-disciplinarity. Diversity of co-citation in a citing document has been considered as an indicator of knowledge integration, but an author can also generate trans-disciplinary\u2014that is, non-disciplined\u2014variation by citing sources from other disciplines. Diversity in the bibliographic coupling among citing documents can analogously be considered as diffusion or differentiation of knowledge across disciplines. Because the citation networks in the cited direction reflect both structure and variation, diversity in this direction is perhaps the best available measure of interdisciplinarity at the journal level. Furthermore, diversity is based on a summation and can therefore be decomposed; differences among (sub)sets can be tested for statistical significance. In the appendix, a general-purpose routine for measuring diversity in networks is provided." }, { "instance_id": "R109904xR109866", "comparison_id": "R109904", "paper_id": "R109866", "text": "Influence of co-authorship networks in the research impact: Ego network analyses from Microsoft Academic Search The main objective of this study is to analyze the relationship between research impact and the structural properties of co-author networks. A new bibliographic source, Microsoft Academic Search, is introduced to test its suitability for bibliometric analyses. Citation counts and 500 one-step ego networks were extracted from this engine. Results show that tiny and sparse networks \u2013 characterized by a high Betweenness centrality and a high Average path length \u2013 achieved more citations per document than dense and compact networks \u2013 described by a high Clustering coefficient and a high Average degree. According to disciplinary differences, Mathematics, Social Sciences and Economics & Business are the disciplines with more sparse and tiny networks; while Physics, Engineering and Geosciences are characterized by dense and crowded networks. This suggests that in sparse ego networks, the central author have more control on their collaborators being more selective in their recruitment and concluding that this behaviour has positive implications in the research impact." }, { "instance_id": "R111045xR111023", "comparison_id": "R111045", "paper_id": "R111023", "text": "Access to divalent lanthanide NHC complexes by redox-transmetallation from silver and CO2 insertion reactions
Divalent NHC\u2013lanthanide complexes were obtained by redox-transmetallation. Treatment with CO2 led to insertion reactions without oxidation of the metal centre.
" }, { "instance_id": "R111045xR110993", "comparison_id": "R111045", "paper_id": "R110993", "text": "Anilido-oxazoline-ligated rare-earth metal complexes: synthesis, characterization and highly cis-1,4-selective polymerization of isopreneAnilido-oxazoline-ligated rare-earth metal complexes show strong fluorescence emissions and good catalytic performance on isoprene polymerization with high
" }, { "instance_id": "R25900xR25880", "comparison_id": "R25900", "paper_id": "R25880", "text": "Palladium nanoparticles supported on mpg-C3N4 as active catalyst for semihydrogenation of phenylacetylene under mild conditions Pd-nanoparticles supported on mesoporous graphitic carbon nitride is found to be an effective, heterogeneous catalyst for the liquid-phase semihydrogenation of phenylacetylenes under mild conditions." }, { "instance_id": "R25900xR25872", "comparison_id": "R25900", "paper_id": "R25872", "text": "Metal-Ligand Core-Shell Nanocomposite Catalysts for the Selective Semihydrogenation of Alkynes In recent years, hybrid nanocomposites with core\u2013shell structures have increasingly attracted enormous attention in many important research areas such as quantum dots, optical, magnetic, and electronic devices, and catalysts. In the catalytic applications of core\u2013shell materials, core-metals having magnetic properties enable easy separation of the catalysts from the reaction mixtures by a magnet. The core-metals can also affect the active shell-metals, delivering significant improvements in their activities and selectivities. However, it is difficult for core-metals to act directly as the catalytic active species because they are entirely covered by the shell. Thus, few successful designs of core\u2013shell nanocomposite catalysts having active metal species in the core have appeared to date. Recently, we have demonstrated the design of a core\u2013shell catalyst consisting of active metal nanoparticles (NPs) in the core and closely assembled oxides with nano-gaps in the shell, allowing the access of substrates to the core-metal. The shell acted as a macro ligand (shell ligand) for the core-metal and the core\u2013shell structure maximized the metal\u2013ligand interaction (ligand effect), promoting highly selective reactions. The design concept of core\u2013shell catalysts having core-metal NPs with a shell ligand is highly useful for selective organic transformations owing to the ideal structure of these catalysts for maximizing the ligand effect, leading to superior catalytic performances compared to those of conventional supported metal NPs. Semihydrogenation of alkynes is a powerful tool to synthesize (Z)-alkenes which are important building blocks for fine chemicals, such as bioactive molecules, flavors, and natural products. In this context, the Lindlar catalyst (Pd/ CaCO3 treated with Pb(OAc)2) has been widely used. [13] Unfortunately, the Lindlar catalyst has serious drawbacks including the requirement of a toxic lead salt and the addition of large amounts of quinoline to suppress the over-hydrogenation of the product alkenes. Furthermore, the Lindlar catalyst has a limited substrate scope; terminal alkynes cannot be converted selectively into terminal alkenes because of the rapid over-hydrogenation of the resulting alkenes to alkanes. Aiming at the development of environmentally benign catalyst systems, a number of alternative lead-free catalysts have been reported. 15] Recently, we also developed a leadfree catalytic system for the selective semihydrogenation consisting of SiO2-supported Pd nanoparticles (PdNPs) and dimethylsulfoxide (DMSO), in which the addition of DMSO drastically suppressed the over-hydrogenation and isomerization of the alkene products even after complete consumption of the alkynes. This effect is due to the coordination of DMSO to the PdNPs. 
DMSO adsorbed on the surface of PdNPs inhibits the coordination of alkenes to the PdNPs, while alkynes can adsorb onto the PdNPs surface because they have a higher coordination ability than DMSO. This phenomenon inspired us to design PdNPs coordinated with a DMSO-like species in a solid matrix. If a core\u2013shell structured nanocomposite involving PdNPs encapsulated by a shell having a DMSO-like species could be constructed, it would act as an efficient and functional solid catalyst for the selective semihydrogenation of alkynes. Herein, we successfully synthesized core\u2013shell nanocomposites of PdNPs covered with a DMSO-like matrix on the surface of SiO2 (Pd@MPSO/SiO2). The shell, consisting of an alkyl sulfoxide network, acted as a macroligand and allowed the selective access of alkynes to the active center of the PdNPs, promoting the selective semihydrogenation of not only internal but also terminal alkynes without any additives. Moreover, these catalysts were reusable while maintaining high activity and selectivity. Pd@MPSO/SiO2 catalysts were synthesized as follows. Pd/SiO2 prepared according to our procedure [16] was stirred in n-heptane with small amounts of 3,5-di-tert-butyl-4-hydroxytoluene (BHT) and water at room temperature. Next, methyl 3-trimethoxysilylpropylsulfoxide (MPSO) was added to the mixture and the mixture was heated. The slurry obtained was collected by filtration, washed, and dried in vacuo, affording Pd@MPSO/SiO2 as a gray powder. Altering the molar ratios of MPSO to Pd gave two kinds of catalysts: Pd@MPSO/SiO2-1 (MPSO:Pd = 7:1), and Pd@MPSO/SiO2-2 (MPSO:Pd = 100:1)." }, { "instance_id": "R25900xR25888", "comparison_id": "R25900", "paper_id": "R25888", "text": "Formation and Characterization of PdZn Alloy: A Very Selective Catalyst for Alkyne Semihydrogenation The formation of a PdZn alloy from a 4.3% Pd/ZnO catalyst was characterized by combined in situ high-resolution X-ray diffraction (HRXRD) and X-ray absorption spectroscopy (XAS). Alloy formation started already at around 100 \u00b0C, likely at the surface, and reached the bulk with increasing temperature. The structure of the catalyst was close to the bulk value of a 1:1 PdZn alloy with a L1o structure (RPd\u2212Pd = 2.9 A, RPd\u2212Zn = 2.6 A, CNPd\u2212Zn = 8, CNPd\u2212Pd = 4) after reduction at 300 \u00b0C and above. The activity of the gas-phase hydrogenation of 1-pentyne decreased with the formation of the PdZn alloy. In contrast to Pd/SiO2, no full hydrogenation occurred over Pd/ZnO. Over time, only slight decomposition of the alloy occurred under reaction conditions." }, { "instance_id": "R25900xR25861", "comparison_id": "R25900", "paper_id": "R25861", "text": "One-step Synthesis of Core-Gold/Shell-Ceria Nanomaterial and Its Catalysis for Highly Selective Semi-hydrogenation of Alkynes We report a facile synthesis of new core-Au/shell-CeO2 nanoparticles (Au@CeO2) using a redox-coprecipitation method, where the Au nanoparticles and the nanoporous shell of CeO2 are simultaneously formed in one step. The Au@CeO2 catalyst enables the highly selective semihydrogenation of various alkynes at ambient temperature under additive-free conditions.
The core-shell structure plays a crucial role in providing the excellent selectivity for alkenes through the selective dissociation of H2 in a heterolytic manner by maximizing interfacial sites between the core-Au and the shell-CeO2." }, { "instance_id": "R25999xR25995", "comparison_id": "R25999", "paper_id": "R25995", "text": "Information theoretic analysis of postal address fields for automatic address interpretation This paper concerns a study of information content in postal address fields for automatic address interpretation. Information provided by a combination of address components and information interaction among components is characterized in terms of Shannon's entropy. The efficiency of assignment strategies for determining a delivery point code can be compared by the propagation of uncertainty in address components. The quantity of redundancy between components can be computed from the information provided by these components. This information is useful in developing a strategy for selecting a useful component for recovering the value of an uncertain component. The uncertainty of a component based on another known component can be measured by conditional entropy. By ranking the uncertainty quantity, the effective processing flow for determining the value of a candidate component can be constructed." }, { "instance_id": "R26063xR26059", "comparison_id": "R26063", "paper_id": "R26059", "text": "Strength of adhesive joints with adherend yielding: I. Analytical model A sandwich element can be isolated in all two-dimensional adhesive joints, thereby simplifying the analysis of strain and stress. An adhesive sandwich model has been developed that accommodates arbitrary loading, a bilinear adherend stress-strain response, and any form of nonlinear adhesive behavior. The model accounts for both the bending deformation and the shear deformation of the adherends. Stress and strain distributions in the adhesive were obtained by solving a system of six differential equations using a finite-difference method. For a sample adhesive sandwich, the adhesive strains and stresses from the new model were compared with those of other models. Finally, the model was coupled with an analytical solution for the detached section of an adhesive joint in peel. The stress and strain distributions in the adhesive and the root curvature of the peel adherend were then compared with finite element results. An accompanying article in this issue uses the model with experimental peel data to investigate the suitability of various adhesive failure criteria." }, { "instance_id": "R26063xR26035", "comparison_id": "R26063", "paper_id": "R26035", "text": "Stresses in Adhesively Bonded Joints: A Closed-Form Solution In this paper the general plane strain problem of adhesively bonded struc tures which consist of two different orthotropic adherends is considered. Assuming that the thicknesses of the adherends are constant and are small in relation to the lateral dimensions of the bonded region, the adherends are treated as plates. Also, assuming that the thickness of the adhesive is small compared to that of the adherends, the thickness variation of the stresses in the adhesive layer is neglected. However, the transverse shear effects in the adherends and the in-plane normal strain in the adhesive are taken into ac count. The problem is reduced to a system of differential equations for the adhesive stresses which is solved in closed form. 
A single lap joint and a stiffened plate under various loading conditions are considered as examples. To verify the basic trend of the solutions obtained from the plate theory and to give some idea about the validity of the plate assumption itself, a sample problem is solved by using the finite element method and by treating the adherends and the adhesive as elastic continua. It is found that the plate theory used in the analysis not only predicts the correct trend for the adhesive stresses but also gives rather surprisingly accurate results. The solution is obtained by assuming linear stress-strain relations for the adhesive. In the Appendix the problem is formulated by using a nonlinear material for the adhesive and by following two different approaches." }, { "instance_id": "R26063xR26051", "comparison_id": "R26063", "paper_id": "R26051", "text": "Analysis of Adhesive-Bonded Joints, Square-End, and Spew-Fillet\u2014High-Order Theory Approach The analysis of adhesive-bonded joints using a closed-form high-order theory (CFHO theory) is presented, and its capabilities are demonstrated numerically for the case of single lap joints with and without a \u201cspew-fillet.\u201d The governing equations based on the CFHO theory are presented along with the appropriate boundary/continuity conditions at the free edges. The joints consist of two metallic or composite laminated adherents that are interconnected through equilibrium and compatibility requirements by a 2D linear elastic adhesive layer. The CFHO theory predicts that the distributions of the displacements through the thickness of the adhesive layer are nonlinear in general (high-order effects) and are a result of not presumed displacement patterns. The spew-fillet is modeled through an equivalent tensile bar, which enables quantification of the effects of the spew-fillet size on the stress fields. Satisfactory comparisons with two-parameter elastic foundation solution (Goland-Reissner type) results and finite-element results are presented." }, { "instance_id": "R26063xR26023", "comparison_id": "R26063", "paper_id": "R26023", "text": "Two Dimensional Displacement-Stress Distributions in Adhesive Bonded Composite Structures Abstract Computerized analysis of composite structures formed by the adhesive bonding of materials is presented. The adhesive is considered to be a part of a linearly elastic system whose components are individually characterized by two bulk property elastic constants. Solution is obtained by finite difference minimization of the internal energy distribution in a discretized, piecewise homogeneous continuum. The plane-stress, plane-strain problems are considered, and yield displacement and stress distributions for the composite system. Displacement and/or stress boundary conditions are allowed. Acute contour angles are not allowed. This is the only restriction for otherwise arbitrary plane geometries. Results are presented for typical lap shear specimens as well as for a particular case of a butt joint in which a void exists in the adhesive layer." }, { "instance_id": "R26063xR26027", "comparison_id": "R26063", "paper_id": "R26027", "text": "The efficient design of adhesive bonded joints Abstract A concise method of analysis is used to study the numerous parameters influencing the stress distribution within the adhesive of a single lap joint. The formulation includes transverse shear and normal strain deformations. Both isotropic or anisotropic material systems of similar or dissimilar adherends are analysed.
Results indicate that the primary Young's modulus of the adherend, the overlap length, and the adhesive's material properties are the parameters most influential in optimizing the design of a single lap joint." }, { "instance_id": "R26107xR26084", "comparison_id": "R26107", "paper_id": "R26084", "text": "Thermal comfort in residential buildings \u2013 Failure to predict by Standard model Abstract A field study, conducted in 189 dwellings in winter and 205 dwellings in summer, included measurement of hygro-thermal conditions and documentation of occupant responses and behavior patterns. Both samples included both passive and actively space-conditioned dwellings. Predicted mean votes (PMV) computed using Fanger's model yielded significantly lower-than-reported thermal sensation (TS) values, especially for the winter heated and summer air-conditioned groups. The basic model assumption of a proportional relationship between thermal response and thermal load proved to be inadequate, with actual thermal comfort achieved at substantially lower loads than predicted. Survey results also refuted the model's second assumption that symmetrical responses in the negative and positive directions of the scale represent similar comfort levels. Results showed that the model's curve of predicted percentage of dissatisfied (PPD) substantially overestimated the actual percentage of dissatisfied within the partial group of respondents who voted TS > 0 in winter as well as within the partial group of respondents who voted TS" }, { "instance_id": "R26107xR26095", "comparison_id": "R26107", "paper_id": "R26095", "text": "Field study on occupant comfort and the office thermal environment in rooms with displacement ventilation UNLABELLED A field survey of occupants' response to the indoor environment in 10 office buildings with displacement ventilation was performed. The response of 227 occupants was analyzed. About 24% of the occupants in the survey complained that they were daily bothered by draught, mainly at the lower leg. Vertical air temperature difference measured between head and feet levels was less than 3 degrees C at all workplaces visited. Combined local discomfort because of draught and vertical temperature difference does not seem to be a serious problem in rooms with displacement ventilation. Almost one half (49%) of the occupants reported that they were daily bothered by an uncomfortable room temperature. Forty-eight per cent of the occupants were not satisfied with the air quality. PRACTICAL IMPLICATIONS The PMV and the Draught Rating indices as well as the specifications for local discomfort because of the separate impact of draught and vertical temperature difference, as defined in the present standards, are relevant for the design of a thermal environment in rooms with displacement ventilation and for its assessment in practice. Increasing the supply air temperature in order to counteract draught discomfort is a measure that should be considered carefully; even if the desired stratification of pollution in the occupied zone is preserved, an increase of the inhaled air temperature may have a negative effect on perceived air quality." }, { "instance_id": "R26107xR26099", "comparison_id": "R26107", "paper_id": "R26099", "text": "Linking indoor environment conditions to job satisfaction: a field study Physical and questionnaire data were collected from 95 workstations at an open-plan office building in Michigan, US. 
The physical measurements encompassed thermal, lighting, and acoustic variables, furniture dimensions, and an assessment of potential exterior view. Occupants answered a detailed questionnaire concerning their environmental and job satisfaction, and aspects of well-being. These data were used to test, via mediated regression, a model linking the physical environment, through environmental satisfaction, to job satisfaction and other related measures. In particular, a significant link was demonstrated between overall environmental satisfaction and job satisfaction, mediated by satisfaction with management and with compensation. Analysis of physical data was limited to the lighting domain. Results confirmed the important role of window access at the desk in satisfaction with lighting, particularly through its effect on satisfaction with outside view." }, { "instance_id": "R26127xR26111", "comparison_id": "R26127", "paper_id": "R26111", "text": "Underground activity and institutional change: Productive, protective and predatory behavior in transition economies This paper examines why some transitions are more successful than others by focusing attention on the role of productive, protective and predatory behaviors from the perspective of the new institutional economics. Many transition economies are characterized by a fundamental inconsistency between formal and informal institutions. When formal and informal rules clash, noncompliant behaviors proliferate, among them, tax evasion, corruption, bribery, organized criminality, and theft of government property.
These wealth redistributing protective and predatory behaviors absorb resources that could otherwise be used for wealth production resulting in huge transition costs. Noncompliant behaviors--evasion, avoidance, circumvention, abuse, and/or corruption of institutional rules--comprise what can be termed underground economies. A variety of underground economies can be differentiated according to the types of rules violated by the noncompliant behaviors. The focus of the new institutional economics is on the consequences of institutions--the rules that structure and constrain economic activity--for economic outcomes. Underground economics is concerned with instances in which the rules are evaded, circumvented, and violated. It seeks to determine the conditions likely to foster rule violations, and to understand the various consequences of noncompliance with institutional rules. Noncompliance with \u201cbad\u201d rules may actually foster development whereas noncompliance with \u201cgood\u201d rules will hinder development. Since rules differ, both the nature and consequences of rule violations will therefore depend on the particular rules violated. Institutional economics and underground economics are therefore highly complementary. The former examines the rules of the game, the latter the strategic responses of individuals and organizations to those rules. Economic performance depends on both the nature of the rules and the extent of compliance with them. Institutions therefore do affect economic performance, but it is not always obvious which institutional rules dominate. Where formal and informal institutions are coherent and consistent, the incentives produced by the formal rules will affect economic outcomes. Under these circumstances, the rule of law typically secures property rights, reduces uncertainty, and lowers transaction costs. In regimes of discretionary authority where formal institutions conflict with informal norms, noncompliance with the formal rules becomes pervasive, and underground economic activity is consequential for economic outcomes." }, { "instance_id": "R26146xR26140", "comparison_id": "R26146", "paper_id": "R26140", "text": "Integrating the unofficial economy into the dynamics of post-socialist economies: A framework of analysis and evidence Over a third of economic activity in the former Soviet countries was estimated to occur in the unofficial economy by the mid-1990s; in Central and Eastern Europe, the average is close to one-quarter. Intraregional variations are great: in some countries 10 to 15 percent of economic activity is unofficial, and in some more than half of it. The growth of unofficial activity in most post-socialist countries, and its mitigating effect on the decline in official output during the early stages of the transition, have been marked. In this paper, the authors challenge the conventional view of how post-socialist economies function by incorporating the unofficial economy into an analysis of the full economy. Then they advance a simple framework for understanding the evolution of the unofficial economy, and the links between both economies, highlighting the main characteristics of \"officialdom,\" contrasting conventional notions of \"informal\" or \"shadow\" economies, and focusing on what determines the decision to cross over from one segment to another. The initial empirical results seem to support hypothetical explanations of what determines the dynamics of the unofficial economy.
The authors emphasize the speedy liberalization of markets, macro stability, and a stable and moderate tax regime. Although widespread, most\"unofficialdom\"in the region is found to be relatively shallow--subject to reversal by appropriate economic policies. The framework and evidence presented here have implications for measurement, forecasting, and policymaking--calling for even faster liberalization and privatization than already advocated. And the lessons in social protection and taxation policy differ from conventional advice." }, { "instance_id": "R26194xR26173", "comparison_id": "R26194", "paper_id": "R26173", "text": "An Integrated Inventory Allocation and Vehicle Routing Problem We address the problem of distributing a limited amount of inventory among customers using a fleet of vehicles so as to maximize profit. Both the inventory allocation and the vehicle routing problems are important logistical decisions. In many practical situations, these two decisions are closely interrelated, and therefore, require a systematic approach to take into account both activities jointly. We formulate the integrated problem as a mixed integer program and develop a Lagrangian-based procedure to generate both good upper bounds and heuristic solutions. Computational results show that the procedure is able to generate solutions with small gaps between the upper and lower bounds for a wide range of cost structures." }, { "instance_id": "R26194xR26167", "comparison_id": "R26194", "paper_id": "R26167", "text": "An Allocation and Distribution Model for Perishable Products This paper presents an allocation model for a perishable product, distributed from a regional center to a given set of locations with random demands. We consider the combined problem of allocating the available inventory at the center while deciding how these deliveries should be performed. Two types of delivery patterns are analyzed: the first pattern assumes that all demand points receive individual deliveries; the second pattern subsumes the frequently occurring case in which deliveries are combined in multistop routes traveled by a fleet of vehicles. Computational experience is reported." }, { "instance_id": "R26262xR26244", "comparison_id": "R26262", "paper_id": "R26244", "text": "A branch-and-cut algorithm for a vendor-managed inventory-routing problem We consider a distribution problem in which a product has to be shipped from a supplier to several retailers over a given time horizon. Each retailer defines a maximum inventory level. The supplier monitors the inventory of each retailer and determines its replenishment policy, guaranteeing that no stockout occurs at the retailer (vendor-managed inventory policy). Every time a retailer is visited, the quantity delivered by the supplier is such that the maximum inventory level is reached (deterministic order-up-to level policy). Shipments from the supplier to the retailers are performed by a vehicle of given capacity. The problem is to determine for each discrete time instant the quantity to ship to each retailer and the vehicle route. We present a mixed-integer linear programming model and derive new additional valid inequalities used to strengthen the linear relaxation of the model. We implement a branch-and-cut algorithm to solve the model optimally. We then compare the optimal solution of the problem with the optimal solution of two problems obtained by relaxing in different ways the deterministic order-up-to level policy. 
Computational results are presented on a set of randomly generated problem instances." }, { "instance_id": "R26262xR26222", "comparison_id": "R26262", "paper_id": "R26222", "text": "Deterministic Order-Up-To Level Policies in an Inventory Routing Problem We consider a distribution problem in which a set of products has to be shipped from a supplier to several retailers in a given time horizon. Shipments from the supplier to the retailers are performed by a vehicle of given capacity and cost. Each retailer determines a minimum and a maximum level of the inventory of each product, and each must be visited before its inventory reaches the minimum level. Every time a retailer is visited, the quantity of each product delivered by the supplier is such that the maximum level of the inventory is reached at the retailer. The problem is to determine for each discrete time instant the retailers to be visited and the route of the vehicle. Various objective functions corresponding to different decision policies, and possibly to different decision makers, are considered. We present a heuristic algorithm and compare the solutions obtained with the different objective functions on a set of randomly generated problem instances." }, { "instance_id": "R26262xR26201", "comparison_id": "R26262", "paper_id": "R26201", "text": "An interactive, computer-aided ship scheduling system Abstract This paper is concerned with a fleet scheduling and inventory resupply problem faced by an international chemical operation. The firm uses a fleet of small ocean-going tankers to deliver bulk fluid to warehouses all over the world. The scheduling problem centers around decisions on routes, arrival/departure times, and inventory replenishment quantities. An interactive computer system was developed and implemented at the firm, and was successfully used to address daily scheduling issues as well as longer range planning problems. The purpose of this paper is to first present how the underlying decision problem was analyzed using both a network flow model and a mixed integer programming model, and then to describe the components of the decision support system developed to generate schedules. The use of the system in various decision making applications is also described." }, { "instance_id": "R26262xR26228", "comparison_id": "R26262", "paper_id": "R26228", "text": "A Periodic Inventory Routing Problem at a Supermarket Chain Albert Heijn, BV, a supermarket chain in the Netherlands, faces a vehicle routing and delivery scheduling problem once every three to six months. Given hourly demand forecasts for each store, travel times and distances, cost parameters, and various transportation constraints, the firm seeks to determine a weekly delivery schedule specifying the times when each store should be replenished from a central distribution center, and to determine the vehicle routes that service these requirements at minimum cost. We describe the development and implementation of a system to solve this problem at Albert Heijn. The system resulted in savings of 4% of distribution costs in its first year of implementation and is expected to yield 12%-20% savings as the firm expands its usage. It also has tactical and strategic advantages for the firm, such as in assessing the cost impact of various logistics and marketing decisions, in performance measurement, and in competing effectively through reduced lead time and increased frequency of replenishment." 
}, { "instance_id": "R26262xR26209", "comparison_id": "R26262", "paper_id": "R26209", "text": "Solving An Integrated Logistics Problem Arising In Grocery Distribution AbstractA complex allocation-routing problem arising in grocery distribution is described. It is solved by means of a heuristic that alternates between these two components. Tests on real and artificial data confirm the efficiency and the robustness of the proposed approach." }, { "instance_id": "R26352xR26292", "comparison_id": "R26352", "paper_id": "R26292", "text": "Minimization of logistic costs with given frequencies We study the problem of shipping products from one origin to several destinations, when a given set of possible shipping frequencies is available. The objective of the problem is the minimization of the transportation and inventory costs. We present different heuristic algorithms and test them on a set of randomly generated problem instances. The heuristics are based upon the idea of solving, in a first phase, single link problems, and of locally improving the solution in subsequent phases." }, { "instance_id": "R26352xR26346", "comparison_id": "R26352", "paper_id": "R26346", "text": "Replenishment routing problems between a single supplier and multiple retailers with direct delivery We consider the replenishment routing problems of one supplier who can replenish only one of multiple retailers per period, while different retailers need different periodical replenishment. For simple cases satisfying certain conditions, we obtain the simple routing by which the supplier can replenish each retailer periodically so that shortage will not occur. For complicated cases, using number theory, especially the Chinese remainder theorem, we present an algorithm to calculate a feasible routing so that the supplier can replenish the selected retailers on the selected periods without shortages." }, { "instance_id": "R26352xR26272", "comparison_id": "R26352", "paper_id": "R26272", "text": "On the Effectiveness of Direct Shipping Strategy for the One-Warehouse Multi-Retailer R-Systems We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of one depot and many geographically dispersed retailers. All stock enters the system through the depot and is distributed to the retailers by vehicles of limited constant capacity. We assume that each one of the retailers faces a constant, retailer specific, demand rate and that inventory is charged only at the retailers but not at the depot. We provide a lower bound on the long run average cost over all inventory-routing strategies. We use this lower bound to show that the effectiveness of direct shipping over all inventory-routing strategies is at least 94% whenever the Economic Lot Size of each of the retailers is at least 71% of vehicle capacity. The effectiveness deteriorates as the Economic Lot Sizes become smaller. These results are important because they provide useful guidelines as to when to embark into the much more difficult task of finding cost-effective routes. Additional advantages of direct shipping are lower in-transit inventory and ease of coordination." 
}, { "instance_id": "R26352xR26290", "comparison_id": "R26352", "paper_id": "R26290", "text": "Direct shipping and the dynamic single-depot/ multi-retailer inventory system In this paper we study a single-depot/multi-retailer system with independent stochastic stationary demands, linear inventory costs, and backlogging at the retailers over an infinite horizon. In addition, we also consider the transportation cost between the depot and the retailers. Orders are placed each period by the depot. The orders arrive at the depot and are allocated and delivered to the retailers. No inventory is held at the depot. We consider a specific policy of direct shipments. That is, a lower bound on the long run average cost per period for the system over all order/delivery strategies is developed. The simulated long term average cost per period of the delivery strategy of direct shipping with fully loaded trucks is examined via comparison to the derived lower bound. Simulation studies demonstrate that very good results can be achieved by a direct shipping policy." }, { "instance_id": "R26352xR26324", "comparison_id": "R26352", "paper_id": "R26324", "text": "Modeling inventory routing problems in supply chains of high consumption products Given a distribution center and a set of sales-points with their demand rates, the objective of the inventory routing problem (IRP) is to determine a distribution plan that minimizes fleet operating and average total distribution and inventory holding costs without causing a stock-out at any of the sales-points during a given planning horizon. We propose a new model for the long-term IRP when demand rates are stable and economic order quantity-like policies are used to manage inventories of the sales-points. The proposed model extends the concept of vehicle routes (tours) to vehicle multi-tours. To solve the nonlinear mixed integer formulation of this problem, a column generation based approximation method is suggested. The resulting sub-problems are solved using a savings-based approximation method. The approach is tested on randomly generated problems with different settings of some critical factors to compare our model using multi-tours as basic constructs to the model using simple tours as basic constructs." }, { "instance_id": "R26352xR26313", "comparison_id": "R26352", "paper_id": "R26313", "text": "The Stochastic Inventory Routing Problem with Direct Deliveries Vendor managed inventory replenishment is a business practice in which vendors monitor their customers' inventories, and decide when and how much inventory should be replenished. The inventory routing problem addresses the coordination of inventory management and transportation. The ability to solve the inventory routing problem contributes to the realization of the potential savings in inventory and transportation costs brought about by vendor managed inventory replenishment. The inventory routing problem is hard, especially if a large number of customers is involved. We formulate the inventory routing problem as a Markov decision process, and we propose approximation methods to find good solutions with reasonable computational effort. Computational results are presented for the inventory routing problem with direct deliveries." 
}, { "instance_id": "R26352xR26330", "comparison_id": "R26352", "paper_id": "R26330", "text": "On the Interactions Between Routing and Inventory-Management Policies in a One-WarehouseN-Retailer Distribution System This paper examines the interactions between routing and inventory-management decisions in a two-level supply chain consisting of a cross-docking warehouse and N retailers. Retailer demand is normally distributed and independent across retailers and over time. Travel times are fixed between pairs of system sites. Every m time periods, system inventory is replenished at the warehouse, whereupon an uncapacitated vehicle departs on a route that visits each retailer once and only once, allocating all of its inventory based on the status of inventory at the retailers who have not yet received allocations. The retailers experience newsvendor-type inventory-holding and backorder-penalty costs each period; the vehicle experiences in-transit inventory-holding costs each period. Our goal is to determine a combined system inventory-replenishment, routing, and inventory-allocation policy that minimizes the total expected cost/period of the system over an infinite time horizon. Our analysis begins by examining the determination of the optimal static route, i.e., the best route if the vehicle must travel the same route every replenishment-allocation cycle. Here we demonstrate that the optimal static route is not the shortest-total-distance (TSP) route, but depends on the variance of customer demands, and, if in-transit inventory-holding costs are charged, also on mean customer demands. We then examine dynamic-routing policies, i.e., policies that can change the route from one system-replenishment-allocation cycle to another, based on the status of the retailers' inventories. Here we argue that in the absence of transportation-related cost, the optimal dynamic-routing policy should be viewed as balancing management's ability to respond to system uncertainties (by changing routes) against system uncertainties that are induced by changing routes. We then examine the performance of a change-revert heuristic policy. Although its routing decisions are not fully dynamic, but determined and fixed for a given cycle at the time of each system replenishment, simulation tests with N = 2 and N = 6 retailers indicate that its use can substantially reduce system inventory-related costs even if most of the time the chosen route is the optimal static route." }, { "instance_id": "R26352xR26350", "comparison_id": "R26352", "paper_id": "R26350", "text": "A practical solution approach for the cyclic inventory routing problem Vendor managed inventory (VMI) is an example of effective cooperation and partnering practices between up- and downstream stages in a supply chain. In VMI, the supplier takes the responsibility for replenishing his customers' inventories based on their consumption data, with the aim of optimizing the over all distribution and inventory costs throughout the supply chain. This paper discusses the challenging optimization problem that arises in this context, known as the inventory routing problem (IRP). The objective of this IRP problem is to determine a distribution plan that minimizes average distribution and inventory costs without causing any stock-out at the customers. Deterministic constant customer demand rates are assumed and therefore, a long-term cyclical approach is adopted, integrating fleet sizing, vehicle routing, and inventory management. 
Further, realistic side-constraints such as limited storage capacities, driving time restrictions and constant replenishment intervals are taken into account. A heuristic solution approach is proposed, analyzed and evaluated against a comparable state-of-the-art heuristic." }, { "instance_id": "R26352xR26343", "comparison_id": "R26352", "paper_id": "R26343", "text": "Scenario Tree-Based Heuristics for Stochastic Inventory-Routing Problems In vendor-managed inventory replenishment, the vendor decides when to make deliveries to customers, how much to deliver, and how to combine shipments using the available vehicles. This gives rise to the inventory-routing problem in which the goal is to coordinate inventory replenishment and transportation to minimize costs. The problem tackled in this paper is the stochastic inventory-routing problem, where stochastic demands are specified through general discrete distributions. The problem is formulated as a discounted infinite-horizon Markov decision problem. Heuristics based on finite scenario trees are developed. Computational results confirm the efficiency of these heuristics." }, { "instance_id": "R26421xR26405", "comparison_id": "R26421", "paper_id": "R26405", "text": "Purification, Characterization, and Gene Analysis of a Chitosanase (ChoA) from Matsuebacter chitosanotabidus 3001 ABSTRACT The extracellular chitosanase (34,000 Mr) produced by a novel gram-negative bacterium Matsuebacter chitosanotabidus 3001 was purified. The optimal pH of this chitosanase was 4.0, and the optimal temperature was between 30 and 40\u00b0C. The purified chitosanase was most active on 90% deacetylated colloidal chitosan and glycol chitosan, both of which were hydrolyzed in an endosplitting manner, but this did not hydrolyze chitin, cellulose, or their derivatives. Among potential inhibitors, the purified chitosanase was only inhibited by Ag+. Internal amino acid sequences of the purified chitosanase were obtained. A PCR fragment corresponding to one of these amino acid sequences was then used to screen a genomic library for the entire choA gene encoding chitosanase. Sequencing of the choA gene revealed an open reading frame encoding a 391-amino-acid protein. The N-terminal amino acid sequence had an excretion signal, but the sequence did not show any significant homology to other proteins, including known chitosanases. The 80-amino-acid excretion signal of ChoA fused to green fluorescent protein was functional in Escherichia coli. Taken together, these results suggest that we have identified a novel, previously unreported chitosanase." }, { "instance_id": "R26421xR26391", "comparison_id": "R26421", "paper_id": "R26391", "text": "Biochemical and Genetic Properties of Paenibacillus Glycosyl Hydrolase Having Chitosanase Activity and Discoidin Domain Cells of \u201cPaenibacillus fukuinensis\u201d D2 produced chitosanase into surrounding medium, in the presence of colloidal chitosan or glucosamine. The gene of this enzyme was cloned, sequenced, and subjected to site-directed mutation and deletion analyses. The nucleotide sequence indicated that the chitosanase was composed of 797 amino acids and its molecular weight was 85,610. Unlike conventional family 46 chitosanases, the enzyme has family 8 glycosyl hydrolase catalytic domain, at the amino-terminal side, and discoidin domain at the carboxyl-terminal region. Expression of the cloned gene in Escherichia coli revealed \u03b2-1,4-glucanase function, besides chitosanase activity. 
Analyses by zymography and immunoblotting suggested that the active enzyme was, after removal of signal peptide, produced from inactive 81-kDa form by proteolysis at the carboxyl-terminal region. Replacements of Glu115 and Asp176, highly conserved residues in the family 8 glycosylase region, with Gln and Asn caused simultaneous loss of chitosanase and glucanase activities, suggesting that these residues formed part of the catalytic site. Truncation experiments demonstrated indispensability of an amino-terminal region spanning 425 residues adjacent to the signal peptide." }, { "instance_id": "R26421xR26399", "comparison_id": "R26421", "paper_id": "R26399", "text": "Production of Two Chitosanases from a Chitosan-Assimilating Bacterium, Acinetobacter sp. Strain CHB101. A bacterial strain capable of utilizing chitosan as a sole carbon source was isolated from soil and was identified as a member of the genus Acinetobacter. This strain, designated CHB101, produced extracellular chitosan-degrading enzymes in the absence of chitosan. The chitosan-degrading activity in the culture fluid increased when cultures reached the early stationary phase, although the level of activity was low in the exponential growth phase. Two chitosanases, chitosanases I and II, which had molecular weights of 37,000 and 30,000, respectively, were purified from the culture fluid. Chitosanase I exhibited substrate specificity for chitosan that had a low degree of acetylation (10 to 30%), while chitosanase II degraded colloidal chitin and glycol chitin, as well as chitosan that had a degree of acetylation of 30%. Rapid decreases in the viscosities of chitosan solutions suggested that both chitosanases catalyzed an endo type of cleavage reaction; however, chitosan oligomers (molecules smaller than pentamers) were not produced after a prolonged reaction." }, { "instance_id": "R26421xR26379", "comparison_id": "R26421", "paper_id": "R26379", "text": "Purification and characterization of an extracellular chitosanase produced by Amycolatopsis sp. CsO-2 Abstract Extracellular chitosanase produced by Amycolatopsis sp. CsO-2 was purified to homogeneity by precipitation with ammonium sulfate followed by cation exchange chromatography. The molecular weight of the chitosanase was estimated to be about 27,000 using SDS-polyacrylamide gel electrophoresis and gel filtration. The maximum velocity of chitosan degradation by the enzyme was attained at 55\u00b0C when the pH was maintained at 5.3. The enzyme was stable over a temperature range of 0\u201350\u00b0C and a pH range of 4.5\u20136.0. About 50% of the initial activity remained after heating at 100\u00b0C for 10 min, indicating a thermostable nature of the enzyme. The isoelectric point of the enzyme was about 8.8. The enzyme degraded chitosan with a range of deacetylation degree from 70% to 100%, but not chitin or CM-cellulose. The most susceptible substrate was 100% deacetylated chitosan. The enzyme degraded glucosamine tetramer to dimer, and pentamer to dimer and trimer, but did not hydrolyze glucosamine dimer and trimer." }, { "instance_id": "R26421xR26417", "comparison_id": "R26421", "paper_id": "R26417", "text": "Purification and Mode of Action of a Chitosanase from Penicillium islandicum Penicillium islandicum produced an inducible extracellular chitosanase when grown on chitosan. Large-scale production of the enzyme was obtained using Rhizopus rhizopodiformis hyphae as substrate. 
Chitosanase was purified 38-fold to homogeneity by ammonium sulphate fractionation and sequential chromatography on DEAE-Biogel A, Biogel P60 and hydroxyl-apatite. Crude enzyme was unstable at 37\u00b0C, but was stabilized by 1\u00b70 mM-Ca2+. The pH optimum for activity was broad and dependent on the solubility of the chitosan substrate. Various physical and chemical properties of the purified enzyme were determined. Penicillium islandicum chitosanase cleaved chitosan in an endo-splitting manner with maximal activity on polymers of 30 to 60% acetylation. No activity was found on chitin (100% acetylated chitosan) or trimers and tetramers of N-acetylglucosamine. The latter two oligomers and all small oligomers of glucosamine inhibited the activity of chitosanase on 30% acetylated chitosan. The pentamer of N-acetylglucosamine and glucosamine oligomers were slowly cleaved by the enzyme. Analysis of the reaction products from 30% acetylated chitosan indicated that the major oligomeric product was a trimer; with 60% acetylated chitosan as substrate a dimer was also found. The new terminal reducing groups produced by chitosanase hydrolysis of 30% acetylated chitosan were reduced by sodium boro[3H]hydride. The new end residues were found to be N-acetylglucosamine. The analyses strongly indicated that P. islandicum chitosanase cleaved chitosan between N-acetylglucosamine and glucosamine. Both residues were needed for cleavage, and polymers containing equal proportions of acetylated and non-acetylated sugars were optimal for chitosanase activity. The products of reaction depended on the degree of acetylation of the polymer." }, { "instance_id": "R26421xR26411", "comparison_id": "R26421", "paper_id": "R26411", "text": "In Vitro Suppression of Mycelial Growth of Fusarium oxysporum by Extracellular Chitosanase of Sphingobacterium multivorum and Cloning of the Chitosanase Gene csnSM1 A chitosan-degrading bacterium, isolated from field soil that had been amended with chitin, was identified as Sphingobacterium multivorum KST-009 on the basis of its bacteriological characteristics. The extracellular chitosanase (SM1) secreted by KST-009 was a 34-kDa protein and could be purified through ammonium sulfate precipitation, gel permeation column chromatography and SDS polyacrylamide gel electrophoresis. A chitosanase gene (csnSM1) was isolated from genomic DNA of the bacteria, and the entire nucleotide sequence of the gene and the partial N-terminal amino acid sequence of the purified SM1 were determined. The csnSM1 gene was found to encode 383 amino acids, 72 N-terminal amino acid residues were processed to produce the mature enzyme during the secretion process. Germinated microconidia of four formae speciales (lycopersici, radicis-lycopersici, melonis, and fragariae) of Fusarium oxysporum were treated with SM1. Chitosanase treatment caused morphological changes, such as swelling of hyphal cells or indistinctness of hyphal cell tips and cessation or reduction of mycelial elongation." }, { "instance_id": "R26421xR26407", "comparison_id": "R26421", "paper_id": "R26407", "text": "The bifunctional enzyme chitosanase-cellulase produced by the gram-negative microorganism Myxobacter sp. AL-1 is highly similar to Bacillus subtilis endoglucanases Abstract The gram-negative bacterium Myxobacter sp. AL-1 produces chitosanase-cellulase activity that is maximally excreted during the stationary phase of growth. 
Carboxymethylcellulase zymogram analysis revealed that the enzymatic activity was correlated with two bands of 32 and 35 kDa. Ion-exchange-chromatography-enriched preparations of the 32-kDa enzyme were capable of degrading the cellulose fluorescent derivatives 4-methylumbelliferyl-\u03b2-d-cellobioside and 4-methylumbelliferyl-\u03b2-d-cellotrioside. These enzymatic preparations also showed a greater capacity at 70\u00b0C than at 42\u00b0C to degrade chitosan oligomers of a minimum size of six units. Conversely, the \u03b2-1,4 glucanolytic activity was more efficient at attacking carboxymethylcellulose and methylumbelliferyl-cellotrioside at 42\u00b0C than at 70\u00b0C. The 32-kDa enzyme was purified more than 800-fold to apparent homogeneity by a combination of ion-exchange and molecular-exclusion chromatography. Amino-terminal sequencing indicated that mature chitosanase-cellulase shares more than 70% identity with endocellulases produced by strains DLG, PAP115, and 168 of the gram-positive microorganism Bacillus subtilis." }, { "instance_id": "R26550xR26538", "comparison_id": "R26550", "paper_id": "R26538", "text": "Use of Chitosan as Coagulant to Treat Wastewater from Milk Processing Plant Chitosan is a natural high molecular polymer made from crab, shrimp and lobster shells. When used as coagulant in water treatment, not like aluminum and synthetic polymers, chitosan has no harmful effect on human health, and the disposal of waste from seafood processing industry can also be solved. In this study the wastewater from the system of cleaning in place (CIP) containing high content of fat and protein was coagulated using chitosan, and the fat and the protein can be recycled. Chitosan is a natural material, the sludge cake from the coagulation after dehydrated could be used directly as feed supplement, therefore not only saving the spent on waste disposal but also recycling useful material. The result shows that the optimal result was reached under the condition of pH 7 with the coagulant dosage of 25 mg/l. The analysis of cost-effective shows that no extra cost to use chitosan as coagulant in the wastewater treatment, and it is an expanded application for chitosan." }, { "instance_id": "R26550xR26444", "comparison_id": "R26550", "paper_id": "R26444", "text": "Colon-specific delivery of peptide drugs and antiinflammatory drugs using chitosan capsules We studied the colon-specific delivery of peptide and anti-inflammatory drugs using chitosan capsules. A low in vitro release of 5(6)-carboxyfluorescein was observed in phosphate buffer. However, the release was markedly increased in the presence of the micro-organisms that are abundant in the colon. The intestinal absorption of insulin, chosen as a model peptide drug, was evaluated by measuring plasma insulin levels and their hypoglycemic effect after oral administration of chitosan capsules. A marked increase in insulin and a corresponding decrease in blood glucose levels were observed after oral administration of capsules containing 20 IU of insulin together with sodium glycocholate (pharmacological availability: 3.49%) compared with capsules containing 20 IU of insulin alone (pharmacological availability: 1.62%). The hypoglycemic effect started 8 h after administration of the chitosan capsules, when the capsules enter the colon. 
We also studied the colon-specific release of a drug active against colonic ulcers, 5-aminosalicylic acid (5-ASA), from chitosan capsules, in order to accelerate the healing of colitis induced in rats by the sodium salt of 2,4,6-trinitrobenzene sulfonic acid. The 5-ASA concentrations in the mucosa of the large intestine after administration were higher than those obtained from a suspension in carboxymethyl cellulose (CMC). Moreover, the therapeutic effect of 5-ASA was significantly improved by the use of 5-ASA-loaded chitosan capsules compared with the 5-ASA suspension in CMC. These results suggest that chitosan capsules can be useful carriers for the colon-specific delivery of peptide drugs, including insulin, as well as of anti-inflammatory drugs, including 5-ASA." }, { "instance_id": "R26550xR26522", "comparison_id": "R26550", "paper_id": "R26522", "text": "Chitin Biotechnology Applications This review article describes the current status of the production and consumption of chitin and chitosan, and their current practical applications in biotechnology with some attempted uses. The applications include: 1) cationic agents for polluted waste-water treatment, 2) agricultural materials, 3) food and feed additives, 4) hypocholesterolemic agents, 5) biomedical and pharmaceutical materials, 6) wound-healing materials, 7) blood anticoagulant, antithrombogenic and hemostatic materials, 8) cosmetic ingredients, 9) textile, paper, film and sponge sheet materials, 10) chromatographic and immobilizing media, and 11) analytical reagents." }, { "instance_id": "R26550xR26519", "comparison_id": "R26550", "paper_id": "R26519", "text": "Effects of natural products on soil organisms and plant health enhancement TerraPy, Magic Wet and Chitosan are soil and plant revitalizers based on natural renewable raw materials. These products stimulate microbial activity in the soil and promote plant growth. Their importance to practical agriculture can be seen in their ability to improve soil health, especially where intensive cultivation has shifted the biological balance in the soil ecosystem to high numbers of plant pathogens. The objective of this study was to investigate the plant beneficial capacities of TerraPy, Magic Wet and Chitosan and to evaluate their effect on bacterial and nematode communities in soils. Tomato seedlings (Lycopersicum esculentum cv. Hellfrucht Fr\u00fchstamm) were planted into pots containing a sand/soil mixture (1:1, v/v) and were treated with TerraPy, Magic Wet and Chitosan at 200 kg/ha. At 0, 1, 3, 7 and 14 days after inoculation the following soil parameters were evaluated: soil pH, bacterial and fungal population density (cfu/g soil), total number of saprophytic and plant-parasitic nematodes. At the final sampling date tomato shoot and root fresh weight as well as Meloidogyne infestation was recorded. Plant growth was lowest and nematode infestation was highest in the control. Soil bacterial population densities increased within 24 hours after treatment between 4-fold (Magic Wet) and 19-fold (Chitosan). Bacterial richness and diversity were not significantly altered. 
Dominant bacterial genera were Acinetobacter (41%) and Pseudomonas (22%) for TerraPy, Pseudomonas (30%) and Acinetobacter (13%) for Magic Wet, Acinetobacter (8.9%) and Pseudomonas (81%) for Chitosan and Bacillus (42%) and Pseudomonas (32%) for the control. Increased microbial activity also was associated with higher numbers of saprophytic nematodes. The results demonstrated the positive effects of natural products in stimulating soil microbial activity and thereby the antagonistic potential in soils leading to a reduction in nematode infestation and improved plant growth." }, { "instance_id": "R26550xR26483", "comparison_id": "R26550", "paper_id": "R26483", "text": "Chitosan: A New Hemostatic Chitosan is a deacetylated derivative of arthropod chitin. We found that it formed a coagulum in contact with defibrinated blood, heparinized blood, and washed red cells. When knitted DeBakey grafts were treated with chitosan, they were impermeable to blood. Examination of these grafts at 24 hours revealed no rebleeding. Examination at one, two, three, and four months showed the grafts to be encased in smooth muscle with a living endothelial lining and an abundant vasa vasorum. Control grafts showed the usual fibrous healing." }, { "instance_id": "R26550xR26533", "comparison_id": "R26550", "paper_id": "R26533", "text": "The antimicrobial activity of cotton fabrics treated with different crosslinking agents and chitosan Abstract Cotton fabrics were treated with two different crosslinking agents [butanetetracarboxylic acid (BTCA) and Arcofix NEC (low formaldehyde content)] in the presence of chitosan to provide the cotton fabrics a durable press finishing and antimicrobial properties by chemical linking of chitosan to the cellulose structure. Both type and concentration of finishing agent in the presence of chitosan as well as the treatment conditions significantly affected the performance properties and antimicrobial activity of treated cotton fabrics. The treated cotton fabrics showed broad-spectrum antimicrobial activity against gram-positive and gram-negative bacteria and fungi tested. Treatment of cotton fabrics with BTCA in the presence of chitosan strengthened the antimicrobial activity more than the fabrics treated with Arcofix NEC. The maximum antimicrobial activity was obtained when the cotton fabrics were treated with 0.5\u20130.75% chitosan of molecular weight 1.5\u20135 kDa, and cured at 160 \u00b0C for 2\u20133 min. Application of different metal ions to cotton fabrics treated with finishing agent and chitosan showed a negligible effect on the antimicrobial activity. Partial replacement of Arcofix NEC with BTCA enhanced antimicrobial activity of the treated fabrics in comparison with that of Arcofix NEC alone. Transmission electron microscopy showed that the exposure of bacteria and yeast to chitosan treated fabrics resulted in deformation and shrinkage of cell membranes. The site of chitosan action is probably the microbial membrane and subsequent death of the cell." }, { "instance_id": "R26550xR26450", "comparison_id": "R26550", "paper_id": "R26450", "text": "Chitosan as a novel nasal delivery system for vaccines A variety of different types of nasal vaccine systems has been described to include cholera toxin, microspheres, nanoparticles, liposomes, attenuated virus and cells and outer membrane proteins (proteosomes). The present review describes our work on the use of the cationic polysaccharide, chitosan as a delivery system for nasally administered vaccines. 
Several animal studies have been carried out on influenza, pertussis and diphtheria vaccines with good results. After nasal administration of the chitosan-antigen nasal vaccines it was generally found that the nasal formulation induced significant serum IgG responses similar to and secretory IgA levels superior to what was induced by a parenteral administration of the vaccine. Animals vaccinated via the nasal route with the various chitosan-antigen vaccines were also found to be protected against the appropriate challenge. So far the nasal chitosan vaccine delivery system has been tested for vaccination against influenza in human subjects. The results of the study showed that the nasal chitosan influenza vaccine was both effective and protective according to the CPMP requirements. The mechanism of action of the chitosan nasal vaccine delivery system is also discussed." }, { "instance_id": "R26550xR26541", "comparison_id": "R26550", "paper_id": "R26541", "text": "Low-cost adsorbents for heavy metals uptake from contaminated water: a review In this article, the technical feasibility of various low-cost adsorbents for heavy metal removal from contaminated water has been reviewed. Instead of using commercial activated carbon, researchers have worked on inexpensive materials, such as chitosan, zeolites, and other adsorbents, which have high adsorption capacity and are locally available. The results of their removal performance are compared to that of activated carbon and are presented in this study. It is evident from our literature survey of about 100 papers that low-cost adsorbents have demonstrated outstanding removal capabilities for certain metal ions as compared to activated carbon. Adsorbents that stand out for high adsorption capacities are chitosan (815, 273, 250 mg/g of Hg(2+), Cr(6+), and Cd(2+), respectively), zeolites (175 and 137 mg/g of Pb(2+) and Cd(2+), respectively), waste slurry (1030, 560, 540 mg/g of Pb(2+), Hg(2+), and Cr(6+), respectively), and lignin (1865 mg/g of Pb(2+)). These adsorbents are suitable for inorganic effluent treatment containing the metal ions mentioned previously. It is important to note that the adsorption capacities of the adsorbents presented in this paper vary, depending on the characteristics of the individual adsorbent, the extent of chemical modifications, and the concentration of adsorbate." }, { "instance_id": "R26550xR26432", "comparison_id": "R26550", "paper_id": "R26432", "text": "Basic study for stabilization of w/o/w emulsion and its application to transcatheter arterial embolization therapy Stabilization of w/o/w emulsion and its application to transcatheter arterial embolization (TAE) therapy are reviewed. W/o/w emulsion was stabilized by making inner aqueous phase hypertonic, addition of chitosan in inner phase, and techniques of phase-inversion with porous membrane. Lipiodol w/o/w emulsion for TAE therapy was prepared by using a two-step pumping emulsification procedure. The procedure is so easy that the emulsion could be prepared even during the surgical operation. The deposition after hepatic arterial administration of the emulsion was detected by an X-ray CT scanner. The concentration of epirubicin hydrochloride (EPI) in liver was increased and its residence was prolonged by encapsulating it in the w/o/w emulsion. The toxic effects of EPI and lipiodol on the normal hepatic cells were reduced. The w/o/w emulsion prepared by us is a suitable formulation for the TAE therapy." 
}, { "instance_id": "R26550xR26471", "comparison_id": "R26550", "paper_id": "R26471", "text": "Antidiabetic Effects of Chitosan Oligosaccharides in Neonatal Streptozotocin-Induced Noninsulin-Dependent Diabetes Mellitus in Rats The antidiabetic effect of chitosan oligosaccharide (COS) was investigated in neonatal streptozotocin (STZ)-induced noninsulin-dependent diabetes mellitus rats. The fasting glucose level was reduced by about 19% in diabetic rats after treatment with 0.3% COS. Glucose tolerance was lower in the diabetic group compared with the normal group. After diabetic rats had been treated with 0.3% COS for 4 weeks, glucose tolerance increased significantly versus the diabetic control group, and glucose-inducible insulin expression increased significantly. In addition, fed-triglyceride (TG) levels in diabetic rats drinking 0.3% COS were reduced by 49% compared with those in diabetic control rats. The cholesterol levels of animals treated with COS were reduced by about 10% in fed or fasting conditions versus the corresponding controls, although the difference was not statistically significant. It was found that COS has a TG-lowering effect in diabetic rats, and that COS reduces signs of diabetic cardiomyopathy such as vacuolation of mitochondria and the separation and degeneration of myofibrils. In conclusion, these results indicate that COS can be used as an antidiabetic agent because it increases glucose tolerance and insulin secretion and decreases TG." }, { "instance_id": "R26550xR26441", "comparison_id": "R26550", "paper_id": "R26441", "text": "Transdermal permeation enhancement of N-trimethyl chitosan for testosterone The aim of this study was to evaluate the transdermal permeation enhancement of N-trimethyl chitosan (TMC) with different degrees of quaternization (DQ). TMCs with DQ of 40 and 60% (TMC40 and TMC60) were synthesized and characterized by (1)H NMR. Testosterone (TS) used as an effective drug, four different gels were prepared without enhancer, with 5% TMC40, 5% TMC60 or 2% Azone, respectively as enhancer. The effect of TMC60 on the stratum corneum was studied by Attenuated Total Reflection-Fourier Transform Infrared Spectroscopy (ATR-FTIR) combined with the technique of deconvolution. The results showed that TMC60 could significantly affect the secondary structure of keratin in stratum corneum. In vitro permeation studies were carried out using Franz-diffusion cells and in vivo studies were performed in rabbits. Both in vitro and in vivo permeation studies suggested the transdermal permeation enhancement of TMCs. Compared to the TS gel without enhancer, TS gels with enhancers all showed significant enhancing effect on transdermal permeation of TS (P<0.05). Meanwhile, compared to 2% Azone, 5% TMC60 had a stronger enhancement (P<0.05) while 5% TMC40 had a similar effect (P>0.05). The results suggested that the enhancement of TMCs increased with the increase of DQ." }, { "instance_id": "R26654xR26643", "comparison_id": "R26654", "paper_id": "R26643", "text": "Energy-Driven Adaptive Clustering Hierarchy (EDACH) for Wireless Sensor Networks Wireless sensor network consists of small battery powered sensors. Therefore, energy consumption is an important issue and several schemes have been proposed to improve the lifetime of the network. In this paper we propose a new approach called energy-driven adaptive clustering hierarchy (EDACH), which evenly distributes the energy dissipation among the sensor nodes to maximize the network lifetime. 
This is achieved by using proxy node replacing the cluster-head of low battery power and forming more clusters in the region relatively far from the base station. Comparison with the existing schemes such as LEACH (Low-Energy Adaptive Clustering Hierarchy) and PEACH (Proxy-Enabled Adaptive Clustering Hierarchy) reveals that the proposed EDACH approach significantly improves the network lifetime." }, { "instance_id": "R26654xR26646", "comparison_id": "R26654", "paper_id": "R26646", "text": "A clustering method for energy efficient routing in wireless sensor networks Low-Energy Adaptive Clustering Hierarchy (LEACH) is one of the most popular distributed cluster-based routing protocols in wireless sensor networks. Clustering algorithm of the LEACH is simple but offers no guarantee about even distribution of cluster heads over the network. And it assumes that each cluster head transmits data to sink over a single hop. In this paper, we propose a new method for selecting cluster heads to evenly distribute cluster heads. It avoids creating redundant cluster heads within a small geographical range. Simulation results show that our scheme reduces energy dissipation and prolongs network lifetime as compared with LEACH." }, { "instance_id": "R26654xR26628", "comparison_id": "R26654", "paper_id": "R26628", "text": "The Concentric Clustering Scheme for Efficient Energy Consumption in the PEGASIS The wireless sensor network is a type of the wireless ad-hoc networks. It is composed of a collection of sensor nodes. Sensor nodes collect and deliver necessary data in response to user's specific requests. It is expected to apply the wireless sensor network technology to various application areas such as the health, military and home. However, because of several limitations of sensor nodes, the routing protocols used in the wireless ad-hoc network are not suitable for the wireless sensor networks. For this reasons, many novel routing protocols for the wireless sensor networks are proposed recently. One of these protocols, the PEGASIS (power-efficient gathering in sensor information systems) protocol is a chain-based protocol. In general, the PEGASIS protocol presents twice or more performance in comparison with the LEACH (low energy adaptive clustering hierarchy) protocol. However, the PEGASIS protocol causes the redundant data transmission since one of nodes on the chain is selected as the head node regardless of the base station's location. In this paper, we propose the enhanced PEGASIS protocol based on the concentric clustering scheme to solve this problem. The main idea of the concentric clustering scheme is to consider the location of the base station to enhance its performance and to prolong the lifetime of the wireless sensor networks. As simulation results, the enhanced PEGASIS protocol using the concentric clustering scheme performs better than the current PEGASIS protocol by about 35%." }, { "instance_id": "R26654xR26649", "comparison_id": "R26654", "paper_id": "R26649", "text": "An Adaptive Data Dissemination Strategy for Wireless Sensor Networks Future large-scale sensor networks may comprise thousands of wirelessly connected sensor nodes that could provide an unimaginable opportunity to interact with physical phenomena in real time. However, the nodes are typically highly resource-constrained. Since the communication task is a significant power consumer, various attempts have been made to introduce energy-awareness at different levels within the communication stack. 
Clustering is one such attempt to control energy dissipation for sensor data dissemination in a multihop fashion. The Time-Controlled Clustering Algorithm (TCCA) is proposed to realize a network-wide energy reduction. A realistic energy dissipation model is derived probabilistically to quantify the sensor network's energy consumption using the proposed clustering algorithm. A discrete-event simulator is developed to verify the mathematical model and to further investigate TCCA in other scenarios. The simulator is also extended to include the rest of the communication stack to allow a comprehensive evaluation of the proposed algorithm." }, { "instance_id": "R26729xR26711", "comparison_id": "R26729", "paper_id": "R26711", "text": "EACLE : Energy-Aware Clustering Scheme with Transmission Power Control for Sensor Networks In this paper, we propose a new energy efficient clustering scheme with transmission power control named \u201cEACLE\u201d (Energy-Aware CLustering scheme with transmission power control for sEnsor networks) for wireless sensor networks, which are composed of the following three components; \u201cEACLE clustering\u201d is a distributed clustering method by means of transmission power control, \u201cEACLE routing\u201d builds a tree rooted at a sink node and sets the paths from sensor nodes taking energy saving into consideration, and \u201cEACLE transmission timing control\u201d changes the transmission timing with different levels of transmission power to avoid packet collisions and facilitates packet binding.With an indoor wireless channel model which we obtained from channel measurement campaigns in rooms and corridors and an energy consumption model which we obtained from a measurement of a chipset, we performed computer simulations to investigate the performance of EACLE in a realistic environment. Our simulation results indicate that EACLE outperforms a conventional scheme such as EAD (Energy-Aware Data-centric routing) in terms of communication success rate and energy consumption. Furthermore, we fully discuss the impact of transmission power and timing control on the performance of EACLE." }, { "instance_id": "R26729xR26679", "comparison_id": "R26729", "paper_id": "R26679", "text": "Distributed clustering with directional antennas for wireless sensor networks This paper proposes a decentralized algorithm for organizing an ad hoc sensor network into clusters with directional antennas. The proposed autonomous clustering scheme aims to reduce the sensing redundancy and maintain sufficient sensing coverage and network connectivity in sensor networks. With directional antennas, random waiting timers, and local criterions, cluster performance may be substantially improved and sensing redundancy can be drastically suppressed. The simulation results show that the proposed scheme achieves connected coverage and provides efficient network topology management." }, { "instance_id": "R26729xR26715", "comparison_id": "R26729", "paper_id": "R26715", "text": "Mobility-based clustering protocol for wireless sensor networks with mobile nodes In this study, the authors propose a mobility-based clustering (MBC) protocol for wireless sensor networks with mobile nodes. In the proposed clustering protocol, a sensor node elects itself as a cluster-head based on its residual energy and mobility. A non-cluster-head node aims at its link stability with a cluster head during clustering according to the estimated connection time. 
Each non-cluster-head node is allocated a timeslot for data transmission in ascending order in a time division multiple address (TDMA) schedule based on the estimated connection time. In the steady-state phase, a sensor node transmits its sensed data in its timeslot and broadcasts a joint request message to join in a new cluster and avoid more packet loss when it has lost or is going to lose its connection with its cluster head. Simulation results show that the MBC protocol can reduce the packet loss by 25% compared with the cluster-based routing (CBR) protocol and 50% compared with the low-energy adaptive clustering hierarchy-mobile (LEACH-mobile) protocol. Moreover, it outperforms both the CBR protocol and the LEACH-mobile protocol in terms of average energy consumption and average control overhead, and can better adapt to a highly mobile environment." }, { "instance_id": "R26775xR26742", "comparison_id": "R26775", "paper_id": "R26742", "text": "An energy-efficient distributed unequal clustering protocol for wireless sensor networks Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called \u201chot spot\u201d or \u201cenergy hole\u201d problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly." }, { "instance_id": "R26775xR26757", "comparison_id": "R26775", "paper_id": "R26757", "text": "Unequal clustering scheme based leach for wireless sensor networks Clustering technique is an effective topology control approach which can improve the scalability and lifetime in wireless sensor networks (WSNs). LEACH is a classical clustering algorithm for low energy scheme, however, it still have some deficiencies. This paper studies LEACH protocol, and put an Improved LEACH protocol which has more reasonable set-up phase. In the cluster heads election phase, we put the energy ratio and competition distance as two elements to join the cluster head election. Simulation results demonstrate that Improved LEACH algorithm has better energy balance and prolong network lifetime." }, { "instance_id": "R26775xR26751", "comparison_id": "R26775", "paper_id": "R26751", "text": "Luca: an energy-efficient unequal clustering algorithm using location information for wireless sensor networks Over the last several years, various clustering algorithms for wireless sensor networks have been proposed to prolong network lifetime. Most clustering algorithms provide an equal cluster size using node\u2019s ID, degree and etc. However, many of these algorithms heuristically determine the cluster size, even though the cluster size significantly affects the energy consumption of the entire network. In this paper, we present a theoretical model and propose a simple clustering algorithm called Location-based Unequal Clustering Algorithm (LUCA), where each cluster has a different cluster size based on its location information which is the distance between a cluster head and a sink. 
In LUCA, in order to minimize the energy consumption of entire network, a cluster has a larger cluster size as increasing distance from the sink. Simulation results show that LUCA achieves better performance than conventional equal clustering algorithm for energy efficiency." }, { "instance_id": "R26775xR26763", "comparison_id": "R26775", "paper_id": "R26763", "text": "UHEED - An Unequal Clustering Algorithm for Wireless Sensor Networks Prolonging the lifetime of wireless sensor networks has always been a determining factor when designing and deploying such networks. Clustering is one technique that can be used to extend the lifetime of sensor networks by grouping sensors together. However, there exists the hot spot problem which causes an unbalanced energy consumption in equally formed clusters. In this paper, we propose UHEED, an unequal clustering algorithm which mitigates this problem and which leads to a more uniform residual energy in the network and improves the network lifetime. Furthermore, from the simulation results presented, we were able to deduce the most appropriate unequal cluster size to be used." }, { "instance_id": "R26850xR26846", "comparison_id": "R26850", "paper_id": "R26846", "text": "A deterministic tabu search algorithm for the fleet size and mix vehicle routing problem The fleet size and mix vehicle routing problem consists of defining the type, the number of vehicles of each type, as well as the order in which to serve the customers with each vehicle when a company has to distribute goods to a set of customers geographically spread, with the objective of minimizing the total costs. In this paper, a heuristic algorithm based on tabu search is proposed and tested on several benchmark instances. The computational results show that the proposed algorithm produces high quality results within a reasonable computing time. Some new best solutions are reported for a set of test problems used in the literature." }, { "instance_id": "R26850xR26841", "comparison_id": "R26850", "paper_id": "R26841", "text": "A column generation approach to the heterogeneous fleet vehicle routing problem We consider a vehicle routing problem with a heterogeneous fleet of vehicles having various capacities, fixed costs and variable costs. An approach based on column generation (CG) is applied for its solution, hitherto successful only in the vehicle routing problem with time windows. A tight integer programming model is presented, the linear programming relaxation of which is solved by the CG technique. A couple of dynamic programming schemes developed for the classical vehicle routing problem are emulated with some modifications to efficiently generate feasible columns. With the tight lower bounds thereby obtained, the branch-and-bound procedure is activated to obtain an integer solution. Computational experience with the benchmark test instances confirms that our approach outperforms all the existing algorithms both in terms of the quality of solutions generated and the solution time." }, { "instance_id": "R26850xR26778", "comparison_id": "R26850", "paper_id": "R26778", "text": "A comparison of techniques for solving the fleet size and mix vehicle routing problem SummaryIn the fleet size and mix vehicle routing problem, one decides upon the composition and size of a possibly heterogeneous fleet of vehicles so as to minimize the sum of fixed vehicle acquisition costs and routing costs for customer deliveries. 
This paper reviews some existing heuristics for this problem as well as a lower bound procedure. Based on the latter, a new heuristic is presented. Computational results are provided for a number of benchmark problems in order to compare the performance of the different solution methods." }, { "instance_id": "R26850xR26848", "comparison_id": "R26850", "paper_id": "R26848", "text": "An effective genetic algorithm for the fleet size and mix vehicle routing problems This paper studies the fleet size and mix vehicle routing problem (FSMVRP), in which the fleet is heterogeneous and its composition to be determined. We design and implement a genetic algorithm (GA) based heuristic. On a set of twenty benchmark problems it reaches the best-known solution 14 times and finds one new best solution. It also provides a competitive performance in terms of average solution." }, { "instance_id": "R26850xR26780", "comparison_id": "R26850", "paper_id": "R26780", "text": "The fleet size and mix vehicle routing problem Abstract In this paper, we address the problem of routing a fleet of vehicles from a central depot to customers with known demand. Routes originate and terminate at the central depot and obey vehicle capacity restrictions. Typically, researchers assume that all vehicles are identical. In this work, we relax the homogeneous fleet assumption. The objective is to determine optimal fleet size and mix by minimizing a total cost function which includes fixed cost and variable cost components. We describe several efficient heuristic solution procedures as well as techniques for generating a lower bound and an underestimate of the optimal solution. Finally, we present some encouraging computational results and suggestions for further study." }, { "instance_id": "R26850xR26784", "comparison_id": "R26850", "paper_id": "R26784", "text": "A new heuristic for determining fleet size and composition In the fleet size and composition vehicle routing problem (FSCVRP), one decides upon the composition of a possibly heterogeneous fleet of vehicles so as to minimize the sum of fixed vehicle acquisition costs and routing costs for customer deliveries. In this note, we build upon a previously-described lower bound procedure for the FSCVRP in order to present a new heuristic. Computational results to date have been encouraging." }, { "instance_id": "R26850xR26791", "comparison_id": "R26850", "paper_id": "R26791", "text": "Adaptation of some vehicle fleet mix heuristics Standard models used for the combined vehicle routing and vehicle fleet composition problem use the same value for the unit running cost across vehicles. In practice such a parameter depends on several factors and particularly on the capacity of the vehicle. 
The purpose of this paper is to show simple modifications of some well known methods to allow for variable running costs; and also to assess the effect of neglecting such variability. Interesting numerical results, measured in terms of changes in total cost or/and fleet configuration, are found at no extra computational effort." }, { "instance_id": "R26850xR26805", "comparison_id": "R26850", "paper_id": "R26805", "text": "A tabu search heuristic for the heterogeneous fleet vehicle routing problem Abstract The Heterogeneous Fleet Vehicle Routing Problem (HVRP) is a variant of the classical Vehicle Routing Problem in which customers are served by a heterogeneous fleet of vehicles with various capacities, fixed costs, and variable costs. This article describes a tabu search heuristic for the HVRP. On a set of benchmark instances, it consistently produces high-quality solutions, including several new best-known solutions. Scope and purpose In distribution management, it is often necessary to determine a combination of least cost vehicle routes through a set of geographically scattered customers, subject to side constraints. The case most frequently studied is where all vehicles are identical. This article proposes a solution methodology for the case where the vehicle fleet is heterogeneous. It describes an efficient tabu search heuristic capable of producing high-quality solutions on a series of benchmark test problems." }, { "instance_id": "R26850xR26816", "comparison_id": "R26850", "paper_id": "R26816", "text": "Tabu search variants for the mix fleet vehicle routing problem The Mix Fleet Vehicle Routing Problem (MFVRP) involves the design of a set of minimum cost routes, originating and terminating at a central depot, for a fleet of heterogeneous vehicles with various capacities, fixed costs and variable costs to service a set of customers with known demands. This paper develops new variants of a tabu search meta-heuristic for the MFVRP. These variants use a mix of different components, including reactive tabu search concepts; variable neighbourhoods, special data memory structures and hashing functions. The reactive concept is used in a new way to trigger the switch between simple moves for intensification and more complex ones for diversification of the search strategies. The special data structures are newly introduced to efficiently search the various neighbourhood spaces. The combination of data structures and strategic balance between intensification and diversification generates an efficient and robust implementation, which is very competitive with other algorithms in the literature on a set of benchmark instances for which some new best-known solutions are provided." }, { "instance_id": "R26881xR26861", "comparison_id": "R26881", "paper_id": "R26861", "text": "A threshold accepting metaheuristic for the heterogeneous fixed fleet vehicle routing problem Abstract The purpose of this paper is to present a new metaheuristic, termed the backtracking adaptive threshold accepting algorithm, for solving the heterogeneous fixed fleet vehicle routing problem (HFFVRP). The HFFVRP is a variant of the classical vehicle routing problem (VRP) and has attracted much less attention in the operational research (OR) literature than the classical VRP. It involves the design of a set of minimum cost routes, originating and terminating at a depot, for a fleet with fixed number of vehicles of each type, with various capacities, and variable costs to service a set of customers with known demands. 
The numerical results show that the proposed algorithm is robust and efficient. New best solutions are reported over a set of published benchmark problems." }, { "instance_id": "R26881xR26875", "comparison_id": "R26881", "paper_id": "R26875", "text": "A flexible adaptive memory-based algorithm for real-life transportation operations: Two case studies from dairy and construction sector Abstract Effective routing of vehicles remains a focal goal of all modern enterprises, thriving for excellence in project management with minimal investment and operational costs. This paper proposes a metaheuristic methodology for solving a practical variant of the well-known Vehicle Routing Problem, called Heterogeneous Fixed Fleet VRP (HFFVRP). Using a two-phase construction heuristic, called GEneralized ROute Construction Algorithm (GEROCA), the proposed metaheuristic approach enhances its flexibility to easily adopt various operational constraints. Via this approach, two real-life distribution problems faced by a dairy and a construction company were tackled and formulated as HFFVRP. Computational results on the aforementioned case studies show that the proposed metaheuristic approach (a) consistently outperforms previous published metaheuristic approaches we have developed to solve the HFFVRP, and (b) substantially improves upon the current practice of the company. The key result that impressed both companies\u2019 management was the improvement over the bi-objective character of their problems: the minimization of the total distribution cost as well as the minimization of the number of the given heterogeneous number of vehicles used." }, { "instance_id": "R26881xR26869", "comparison_id": "R26881", "paper_id": "R26869", "text": "A heuristic for the routing and carrier selection problem We consider the problem of simultaneously selecting customers to be served by external carriers and routing a heterogeneous internal fleet. Very little attention was devoted to this problem. A recent paper proposed a heuristic solution procedure. Our paper shows that better results can be obtained by a simple method and corrects some erroneous results presented in the previous paper." }, { "instance_id": "R26918xR26896", "comparison_id": "R26918", "paper_id": "R26896", "text": "A hybrid simulated annealing for capacitated vehicle routing problems with the independent route length This paper presents a linear integer model of capacitated vehicle routing problems (VRP) with the independent route length to minimize the heterogeneous fleet cost and maximize the capacity utilization. In the proposed model, the fleet cost is independent on the route length and there is a hard time window over depot. In some real-world situations, the cost of routes is independent on their length, but it is dependent to type and capacity of vehicles allocated to routes where the fleet is mainly heterogeneous. In this case, the route length or travel time is expressed as restriction, that is implicated a hard time window in depot. The proposed model is solved by a hybrid simulated annealing (SA) based on the nearest neighborhood. It is shown that the proposed model enables to establish routes to serve all given customers by the minimum number of vehicles and the maximum capacity used. Also, the proposed heuristic can find good solutions in reasonable time. A number of small and large-scale problems in little and large scale are solved and the associated results are reported." 
}, { "instance_id": "R26918xR26883", "comparison_id": "R26918", "paper_id": "R26883", "text": "Lagrangian Relaxation Methods for Solving the Minimum Fleet Size Multiple Traveling Salesman Problem with Time Windows We consider the problem of finding the minimum number of vehicles required to visit once a set of nodes subject to time window constraints, for a homogeneous fleet of vehicles located at a common depot. This problem can be formulated as a network flow problem with additional time constraints. The paper presents an optimal solution approach using the augmented Lagrangian method. Two Lagrangian relaxations are studied. In the first one, the time constraints are relaxed producing network subproblems which are easy to solve, but the bound obtained is weak. In the second relaxation, constraints requiring that each node be visited are relaxed producing shortest path subproblems with time window constraints and integrality conditions. The bound produced is always excellent. Numerical results for several actual school busing problems with up to 223 nodes are discussed. Comparisons with a set partitioning formulation solved by column generation are given." }, { "instance_id": "R26918xR26900", "comparison_id": "R26918", "paper_id": "R26900", "text": "Economic Heuristic Optimization for Heterogeneous Fleet VRPHESTW A three-step local search algorithm based on a probabilistic variable neighborhood search is presented for the vehicle routing problem with a heterogeneous fleet of vehicles and soft time windows (VRPHESTW). A generation mechanism based on a greedy randomized adaptive search procedure, a diversification procedure using an extinctive selection evolution strategy, and a postoptimization method based on a threshold algorithm with restarts are considered to solve the problem. The results show the convenience of using an economic objective function to analyze the influence of the changes in the economic environment on the transportation average profit of vehicle routing problems. Near real-world vehicle routing problems need (1) an economic objective function to measure the quality of the solutions as well as (2) an appropriate guide function, which may be different from the economic objective function, for each heuristic method and for each economic scenario." }, { "instance_id": "R26918xR26913", "comparison_id": "R26918", "paper_id": "R26913", "text": "A well-scalable metaheuristic for the fleet size and mix vehicle routing problem with time windows This paper presents an efficient and well-scalable metaheuristic for fleet size and mix vehicle routing with time windows. The suggested solution method combines the strengths of well-known threshold accepting and guided local search metaheuristics to guide a set of four local search heuristics. The computational tests were done using the benchmarks of [Liu, F.-H., & Shen, S.-Y. (1999). The fleet size and mix vehicle routing problem with time windows. Journal of the Operational Research Society, 50(7), 721-732] and 600 new benchmark problems suggested in this paper. The results indicate that the suggested method is competitive and scales almost linearly up to instances with 1000 customers." }, { "instance_id": "R26982xR26938", "comparison_id": "R26982", "paper_id": "R26938", "text": "A decision support system for vehicle fleet planning Abstract A decision support system (DSS) is developed to solve the fleet planning problem. The system can be used by fleet managers to plan fleet size and mix. 
The decision support system was designed to assist managers in every step of the planning process: (i) To forecast demand; (ii) to determine relevant criteria; (iii) to generate alternative plans; (iv) to assess alternative plans with respect to the criteria determined in ii); and (v) to choose \u2018the best\u2019 plan. Emphasis of the decision support system is on flexibility. Another important feature of the decision support system is that it uses both a multicriteria approach to evaluate alternative plans and a stochastic programming model to generate plans. The system can be used to answer a wide variety of \u2018What if\u2019 questions with potentially significant cost impacts. The example provided shows how the DSS can be useful to improve vehicle fleet planning." }, { "instance_id": "R26982xR26955", "comparison_id": "R26982", "paper_id": "R26955", "text": "Global and Local Moves in Tabu Search: A Real-Life Mail Collecting Application The problem we deal with is the optimization of mail collecting at several customer sites that are scattered around an urban area. It involves the design of a set of minimum cost routes, originating and terminating at a central depot, for a fleet of vehicles that service those customer sites with known demands. We develop a Tabu Search approach where at each iteration the best move is selected among a large variety of possible moves. This new version of the metaheuristic Tabu leads us to determine a good vehicle fleet mix (cheapest cost incorporating routing and fixed vehicle costs) without violating constraints such as time restrictions and capacity." }, { "instance_id": "R26982xR26952", "comparison_id": "R26982", "paper_id": "R26952", "text": "The multi-trip vehicle routing problem The basic vehicle routing problem is concerned with the design of a set of routes to serve a given number of customers, minimising the total distance travelled. In that problem, each vehicle is assumed to be used only once during a planning period, which is typically a day, and therefore is unrepresentative of many practical situations, where a vehicle makes several journeys during a day. The present authors have previously published an algorithm which outperformed an experienced load planner working on the complex, real-life problems of Burton's Biscuits, where vehicles make more than one trip each day. This present paper uses a simplified version of that general algorithm, in order to compare it with a recently published heuristic specially designed for the theoretical multi-trip vehicle routing problem." }, { "instance_id": "R26982xR26962", "comparison_id": "R26982", "paper_id": "R26962", "text": "A robust optimization model for a cross-border logistics problem with fleet composition in an uncertain environment Since the implementation of the open-door policy in China, many Hong Kong-based manufacturers' production lines have moved to China to take advantage of the lower production cost, lower wages, and lower rental costs, and thus, the finished products must be transported from China to Hong Kong. It has been discovered that logistics management often encounters uncertainty and noisy data. In this paper, a robust optimization model is proposed to solve a cross-border logistics problem in an environment of uncertainty. 
By adjusting penalty parameters, decision-makers can determine an optimal long-term transportation strategy, including the optimal delivery routes and the optimal vehicle fleet composition to minimize total expenditure under different economic growth scenarios. We demonstrate the robustness and effectiveness of our model using the example of a Hong Kong-based manufacturing company. The analysis of the trade-off between model robustness and solution robustness is also presented." }, { "instance_id": "R26982xR26946", "comparison_id": "R26982", "paper_id": "R26946", "text": "A Tabu Search Approach for Delivering Pet Food and Flour in Switzerland In this paper, we consider a real-life vehicle routeing problem that occurs in a major Swiss company producing pet food and flour. In contrast with usual hypothetical problems, a large variety of restrictions has to be considered. The main constraints are relative to the accessibility and the time windows at customers, the carrying capacities of vehicles, the total duration of routes and the drivers' breaks. To find good solutions to this problem, we propose two heuristic methods: a fast straightforward insertion procedure and a method based on tabu search techniques. Next, the produced solutions are compared with the routes actually covered by the company. Our outcomes indicate that the total distance travelled can be reduced significantly when such methods are used." }, { "instance_id": "R27039xR27021", "comparison_id": "R27039", "paper_id": "R27021", "text": "Sizing the US destroyer fleet Abstract For the US Navy to be successful, it must make good investments in combatant ships. Historically a vital component in these decisions is expert opinion. This paper illustrates that the use of quantitative methods in conjunction with expert opinion can add considerable insight. We use the analytic hierarchy process (AHP) to gather expert opinions. Then, distributions are derived based on these expert opinions, and integrated into a mixed integer programming model to derive a distribution for the \u201ceffectiveness\u201d of a fleet with a particular mix of ships. These ideas are applied to the planning scenario for the 2015 conflict on the Korean Peninsula, one of the two key scenarios that the Department of Defense uses for planning." }, { "instance_id": "R27039xR27037", "comparison_id": "R27039", "paper_id": "R27037", "text": "Model Integrating Fleet Design and Ship Routing Problems for Coal Shipping In this paper, an integrated optimization model is developed to improve the efficiency of coal shipping. The objective is (1) to determine the types of ships and the number of each type, (2) to optimize the ship routing, therefore, to minimize the total coal shipping cost. Meanwhile, an algorithm based on two-phase tabu search is designed to solve the model. Numerical tests show that the proposed method can decrease the unit shipping cost and the average ship delay, and improve the reliability of the coal shipping system." }, { "instance_id": "R27039xR27027", "comparison_id": "R27039", "paper_id": "R27027", "text": "Ship Routing and Scheduling: Status and Perspectives The objective of this paper is to review the current status of ship routing and scheduling. We focus on literature published during the last decade. Because routing and scheduling problems are closely related to many other fleet planning problems, we have divided this review into several parts. 
We start at the strategic fleet planning level and discuss the design of fleets and sea transport systems. We continue with the tactical and operational fleet planning level and consider problems that comprise various ship routing and scheduling aspects. Here, we separately discuss the different modes of operations: industrial, tramp, and liner shipping. Finally, we take a glimpse at naval applications and other related problems that do not naturally fall into these categories. The paper also presents some perspectives regarding future developments and use of optimization-based decision-support systems for ship routing and scheduling. Several of the trends indicate both accelerating needs for and benefits from such systems and, hopefully, this paper will stimulate further research in this area." }, { "instance_id": "R27039xR26987", "comparison_id": "R27039", "paper_id": "R26987", "text": "An Industrial Ocean-Cargo Shipping Problem This paper reports the modeling and solution of an industrial ocean-cargo shipping problem. The problem involves the delivery of bulk products from an overseas port to transshipment ports on the Atlantic Coast, and then over land to customers. The decisions made include the number and the size of ships to charter in each time period during the planning horizon, the number and location of transshipment ports to use, and transportation from ports to customers. The complexity of this problem is compounded by the cost structure, which includes fixed charges in both ship charters and port operations. Such a large scale, dynamic, and stochastic problem is reduced to a solvable stationary, deterministic, and cyclical model. The process of modeling the problem and the solution of the resultant mixed integer program are described in detail. Recommendations from this study have been implemented." }, { "instance_id": "R27039xR26998", "comparison_id": "R27039", "paper_id": "R26998", "text": "Modeling the Increased Complexity of New York City's Refuse Marine Transport System The New York City Department of Sanitation operates the world's largest refuse marine transport system. Waste trucks unload their cargo at land-based transfer stations where refuse is placed in barges and then towed by tugboats to the Fresh Kills Landfill in Staten Island. In the early 1980s, the city commissioned the development of a computer-based model for use in fleet sizing and operations planning. As a result of the complexities introduced by environmental regulation and technological innovation, the marine transport system operations changed and the existing model became obsolete. Based on the success achieved with the first model in 1993, the city commissioned the development of a new model. In this paper, we present a PC-based model developed to meet the increased complexity of the system. Analysis performed for validation and calibration of the model demonstrates that it tracks well the operations of the real system. We illustrate through a detailed design exercise how to use the model to configure the system in a way that meets the requirements of the refuse marine transport system." }, { "instance_id": "R27061xR27049", "comparison_id": "R27061", "paper_id": "R27049", "text": "Smart City Components Architicture The research is essentially to modularize the structure of utilities and develop a system for following up the activities electronically on the city scale. 
The GIS operational platform will be the base for managing the infrastructure development components with the systems interoperability for the available city infrastructure related systems. The concentration will be on the available utility networks in order to develop comprehensive, common, standardized geospatial data models. The construction operations for the utility networks such as electricity, water, gas, district cooling, irrigation, sewerage and communication networks need to be fully monitored on a daily basis, in order to utilize the huge resources and manpower involved. These resources are allocated only to convey the operational status to the construction and execution sections that carry out the required maintenance. A system that serves the decision makers in following up these activities, with a proper geographical representation, will definitely reduce the long-term operational cost." }, { "instance_id": "R27061xR27059", "comparison_id": "R27061", "paper_id": "R27059", "text": "Using cloud technologies for large-scale house data in smart city In the smart city environment, a wide variety of data are collected from sensors and devices to achieve value-added services. In this paper, we especially focus on data taken from smart houses in the smart city, and propose a platform, called Scallop4SC, that stores and processes the large-scale house data. The house data is classified into log data or configuration data. Since the amount of the log is extremely large, we introduce Hadoop/MapReduce with a multi-node cluster. On top of this, we use the HBase key-value store to manage heterogeneous log data in a schemaless manner. On the other hand, to manage the configuration data, we choose MySQL to process various queries to the house data efficiently. We propose practical data models of the log data and the configuration data on HBase and MySQL, respectively. We then show how Scallop4SC works as an efficient data platform for smart city services. We implement a prototype with 12 Linux servers. We conduct an experimental evaluation to calculate device-wise energy consumption, using actual house logs recorded for one year in our smart house. Based on the result, we discuss the applicability of Scallop4SC to city-scale data processing." }, { "instance_id": "R27061xR27057", "comparison_id": "R27061", "paper_id": "R27057", "text": "Smart City Development: A Business Process-centric Conceptualisation Smart city development has been proposed as a response to urbanisation challenges and changing citizen needs in the cities. It allows the city as a complex system of systems to be efficient and integrated, in order to work as a whole, and provide effective services to citizens through its inter-connected sectors. This research attempts to conceptualise the smart city by looking at its requirements and components from a process change perspective, not merely as a technology-led innovation within a city. In view of that, the research also gains benefits from the principles of smart city development such as the systems thinking approach, the city as a system of systems, and the necessity of systems integration. The outcome of this study emphasises the significance of considering a city as a system of systems and the necessity of city systems integration and city process change for smart city development. Consequently, the research offers a city process-centric conceptualisation of smart city." 
}, { "instance_id": "R27235xR27206", "comparison_id": "R27235", "paper_id": "R27206", "text": "Exchange-rate volatility, exchange-rate regime, and trade volume: evidence from the UK\u2013US export function (1889\u20131999) Abstract This paper investigated the impact of exchange-rate volatility and exchange-rate regime on the British exports to the United States using data for the period 1889\u20131999. The empirical findings suggest that neither exchange-rate volatility nor the different exchange-rate regimes that spanned the last century had an effect on export volume." }, { "instance_id": "R27235xR27168", "comparison_id": "R27235", "paper_id": "R27168", "text": "Exchange Rate Volatility and International Prices We examine how exchange rate volatility affects exporter's pricing decisions in the presence of optimal forward covering. By taking account of forward covering, we are able to derive an expression for the risk premium in the foreign exchange market, which is then estimated as a generalized ARCH model to obtain the time-dependent variance of the exchange rate. Our theory implies a connection between the estimated risk premium equation, and the influence of exchange rate volatility on export prices. In particular, we argue that if there is no risk premium, then exchange rate variance can only have a negative impact on export prices. In the presence of a risk premium, however, the effect of exchange rate variance on export prices is ambiguous, and may be statistically insignificant with aggregate data. These results are supported using data on aggregate U.S. imports and exchange rates of the dollar against the pound, yen and mark." }, { "instance_id": "R27235xR27188", "comparison_id": "R27235", "paper_id": "R27188", "text": "The Impact of Exchange Rate Volatility on International Trade: Reduced Form Estimates using the GARCH-in-mean Model Abstract In this paper, we use a multivariate GARCH-in-mean model of the reduced form of multilateral exports to examine the relationship between nominal exchange rate volatility and export flows and prices. The model imposes rationality on perceived exchange rate volatility, unlike conventional, two-step strategies. Tests are performed for five industrialized countries over the post-Bretton Woods era. We find that the GARCH conditional variance has a statistically significant impact on the reduced form equations for all countries. For most of the countries, the magnitude of the effect is stronger for export prices than quantities. In addition, the estimated magnitude of the impact of volatility on exports is not robust to using the conventional estimation strategy. (JEL F41, F31)." }, { "instance_id": "R27235xR27180", "comparison_id": "R27235", "paper_id": "R27180", "text": "Unanticipated exchange rate variability and the growth of international trade Abstract This paper investigates the often-cited hypothesis that exchange-rate variability has inhibited the growth of international trade. In contrast to previous work, a two-equation model is formulated and estimated. The first equation estimates the determinants of real exchange-rate variability in order to distinguish between the anticipated and unanticipated components of this variability. The second is a reduced-form equation for the determinants of real export growth. 
This equation is used to test the hypothesis that only the unanticipated variability of real exchange rates significantly affects the growth of real exports. The results confirm this hypothesis: unanticipated real exchange-rate variability has inhibited export growth, whereas anticipated variability has had no effect." }, { "instance_id": "R27235xR27190", "comparison_id": "R27235", "paper_id": "R27190", "text": "Does Exchange Rate Volatility Depress Trade Flows? Evidence from Error-Correction Models This paper examines the impact of exchange rate volatility on the trade flows of the G-7 countries in the context of a multivariate error-correction model. The error-correction models do not show any sign of parameter instability. The results indicate that the exchange rate volatility has a significant negative impact on the volume of exports in each of the G-7 countries. Assuming market participants are risk averse, these results imply that exchange rate uncertainty causes them to reduce their activities, change prices, or shift sources of demand and supply in order to minimize their exposure to the effects of exchange rate volatility. This, in turn, can change the distribution of output across many sectors in these countries. 
It is quite possible that the surprisingly weak relationship between trade flows and exchange rate volatility reported in several previous studies is due to insufficient attention to the stochastic properties of the relevant time series. Copyright 1993 by MIT Press." }, { "instance_id": "R27235xR27153", "comparison_id": "R27235", "paper_id": "R27153", "text": "Exchange rate uncertainty and foreign trade Abstract This paper starts with reviewing the existing literature on exchange rate uncertainty and trade flows. It then argues that the potential costs of medium term uncertainty in exchange rates and competitiveness are likely to be much larger than those of exchange risk, which has been the focus of the existing literature. Two measures of medium term exchange rate uncertainty are constructed. One is a weighted function of the magnitude of past movements in nominal exchange rates and the current deviation of the exchange rate from \u2018equilibrium\u2019, while the second depends on both the duration and the amplitude of misalignment from \u2018equilibrium\u2019 exchange rates. The empirical evidence reported in the paper suggests that when exchange rate uncertainty is defined over a medium term period it does adversely affect the trade flows of the industrial countries under review, with the notable exception of the United States." }, { "instance_id": "R27235xR27195", "comparison_id": "R27235", "paper_id": "R27195", "text": "The impact of exchange rate volatility on German-US trade flows Abstract This paper analyses the effect of exchange rate volatility on Germany-US bilateral trade flows for the period 1973:4\u20131992:9. ARCH models are used to generate a measure of exchange rate volatility and are then tested against Germany's exports to, and imports from, the US. This paper differs from many papers previously published as the effects of volatility are found to be positive and statistically significant for the period under review. The debate over the use of real or nominal exchange rate data in the derivation of volatility estimation is also addressed." }, { "instance_id": "R27235xR27144", "comparison_id": "R27235", "paper_id": "R27144", "text": "Exchange Rate Risk, Exchange Rate Regime and the Volume of International Trade The authors examine the effect of exchange-rate regimes on the volume of international trade. Bilateral trade flows among countries with floating exchange rates are higher than those among countries with fixed rates. While exchange-rate risk does reduce the volume of trade among countries regardless of the nature of their exchange-rate regime, the greater risk faced by traders in floating exchange-rate countries is more than offset by the trade-reducing effects of restrictive commercial policies imposed by fixed exchange rate countries. Copyright 1988 by WWZ and Helbing & Lichtenhahn Verlag AG" }, { "instance_id": "R27235xR27149", "comparison_id": "R27235", "paper_id": "R27149", "text": "Real Exchange Rate Volatility and U.S. Bilateral Trade: A VAR Approach This paper uses VAR models to investigate the impact of real exchange rate volatility on U.S. bilateral imports from the United Kingdom, France, Germany, Japan and Canada. The VAR systems include U.S. and foreign macro variables, and are estimated separately for each country. 
The major results suggest that the effect of volatility on imports is weak, although permanent shocks to volatility do have a negative impact on this measure of trade, and those effects are relatively more important over the flexible rate period. Copyright 1989 by MIT Press." }, { "instance_id": "R27235xR27131", "comparison_id": "R27235", "paper_id": "R27131", "text": "Exchange-rate variability and trade performance: evidence for the big seven industrial countries Abstract This paper presents empirical results on the relationship between exchange-rate variability and the trade of the seven major OECD countries. In contrast to other studies, the influence of the real export earnings of the oil-producing countries on the exports of these seven countries is taken into account. In addition, foreign income is measured at both high and low levels of the dollar exchange rate, to ensure that the results are not biased by the particular exchange-rate level chosen for the dollar. 
Finally, both the contemporaneous and the lagged effects of exchange-rate variability on exports are tested. The results indicate that exchange-rate variability did not adversely affect the exports of any of the seven major countries during the period of flexible exchange rates." }, { "instance_id": "R27264xR27238", "comparison_id": "R27264", "paper_id": "R27238", "text": "Middleware for Robotics: A Survey The field of robotics relies heavily on various technologies such as mechatronics, computing systems, and wireless communication. Given the fast growing technological progress in these fields, robots can offer a wide range of applications. However, real-world integration and application development for such a distributed system composed of many robotic modules and networked robotic devices is very difficult. Therefore, middleware services provide a novel approach offering many possibilities and drastically enhancing the application development for robots. This paper surveys the current state of middleware approaches in this domain. It discusses middleware challenges in these systems and presents some representative middleware solutions specifically designed for robots. The selection of the studied methods tries to cover most of the middleware platforms, objectives and approaches that have been proposed by researchers in this field." }, { "instance_id": "R27264xR27251", "comparison_id": "R27264", "paper_id": "R27251", "text": "An introduction to robot component model for OPRoS (Open Platform for Robotic Services) The OPRoS (Open Platform for Robotic Service) is a platform for network based intelligent robots supported by the IT R&D program of the Ministry of Knowledge Economy of KOREA. The OPRoS technology aims at establishing a component based standard software platform for the robot which enables complicated functions to be developed easily by using standardized COTS components. The OPRoS provides a software component model for supporting reusability and compatibility of the robot software component in the heterogeneous communication network. In this paper, we will introduce the OPRoS component model and its background." }, { "instance_id": "R27380xR27281", "comparison_id": "R27380", "paper_id": "R27281", "text": "On the Changes in Residual Stress Produced by Plastic Torsion Due to Repeated Stressing The setting process is often practiced on coil springs in order to improve their fatigue resistance and prevent their creep deflection. Torsional residual stresses are produced by this process, and it is generally understood that these stresses would play a role in improving the fatigue properties. In this experiment, round bar specimens of the spring steel SUP2 were used, and after being twisted by the torsional moment 25% beyond that corresponding to the yield point, they were subjected to the fatigue test in alternating torsion. The distribution of residual stresses was measured by the etching method, by measuring the angle of torsion during the etching process. Three stress levels were employed in repeated stressing and the number of stress cycles was made to be the same in each stress level. 
As a new attempt, we studied the fading of residual stresses under repeated stressing at two successive stress levels. The results obtained are summarized as follows: (1) Residual stresses produced by plastic torsion are of the thermal stress type near the surface, being negative at the surface layers. (2) Residual stresses subjected to repeated stressing fade noticeably in the first stage of fading and then gradually with the repetition of stress cycles. In the second stage of fading, the relation obtained between the ratio of surface residual stresses \u03c4r/\u03c4o (\u03c4r is the current value and \u03c4o is the initial value of surface residual stress) and the logarithm of the cycle ratio n/N formed straight lines, and experimental formulas concerning the fading of residual stresses were established. (3) In repeated stressing under two successive stress levels, the fading of residual stresses is larger in the case of descending stressing than in the case of ascending stressing, when the same numbers of stress cycles are given to each stress level, respectively. Hardness also shows the same tendency as the residual stress." }, { "instance_id": "R27380xR27316", "comparison_id": "R27380", "paper_id": "R27316", "text": "Modelling of the Shot Peening Residual Stress Relaxation in Steel Structure Under Cyclic Loading With the help of a new description of the material cyclic softening law [1] and the elastoplastic calculation method proposed by Zarka et al [2], a theoretical model is developed for calculating shot peening residual stress relaxation under cyclic loadings. This model can take into account the modification of material mechanical properties due to shot peening, material cyclic softening and real local loading conditions. An application of this model to a shot peened plate in the steel SEA4135 subjected to repeated plane bending is presented. The calculated results can well predict the experimental ones obtained by X-ray diffractometer." }, { "instance_id": "R27380xR27359", "comparison_id": "R27380", "paper_id": "R27359", "text": "Residual Stress Relaxation and Fatigue Strength of AISI 4140 under Torsional Loading after Conventional Shot Peening, Stress Peening and Warm Peening Cylindrical rods of 450\u00b0C quenched and tempered AISI 4140 were conventionally shot peened, stress peened and warm peened while rotating in the peening device. Warm peening at Tpeen = 310\u00b0C was conducted using a modified air blast shot peening machine with an electric air flow heater system. To perform stress peening using a torsional pre-stress, a device was conceived which allowed rotating pre-stressed samples without having material of the pre-loading gadget between the shot and the samples. Thus, the same peening conditions were ensured for all peening procedures. The residual stress distributions present after the different peening procedures were evaluated and compared with results obtained after peening of flat material of the same steel. The differently peened samples were subjected to torsional pulsating stresses (R = 0) at different loadings to investigate their residual stress relaxation behavior. Additionally, the pulsating torsional strengths for the differently peened samples were determined." 
}, { "instance_id": "R27380xR27283", "comparison_id": "R27380", "paper_id": "R27283", "text": "X-ray diffraction study of residual macrostresses in shot-peened and fatigued 4130 steel A study has been made of the effects of shot peening and fatigue cycling on the residual macrostresses determined by X-ray methods in an austenitized and tempered AISI 4130 steel (150\u2013170 ksi). The results show that the effect of shot peening is to produce a residual compressive macrostress layer 0.014-in. deep. The residual-stress profile (stress vs. depth) exhibits a small negative stress gradient at and near the surface and a large positive stress gradient in the interior. Stress relaxation (due to fatigue cycling) which occurred early in the fatigue history of the specimen was found greater at the surface than in the subsurface layers. Stress gradients of the stress profile increased with continued cycling and varied with depth. A correlation appears to exist between stress relaxation and stress gradients at the surface." }, { "instance_id": "R27380xR27378", "comparison_id": "R27380", "paper_id": "R27378", "text": "High temperature fatigue behavior and residual stress stability of laser-shock peened and deep rolled austenitic steel AISI 304 Abstract In this paper, we investigate how laser-shock peening and deep rolling affect the cyclic deformation and S/N-behavior of austenitic stainless steel AISI 304 at elevated temperatures (up to 600 \u00b0C). The results demonstrate that laser shock peening can produce similar amounts of lifetime enhancements as deep rolling. The cycle, stress amplitude and temperature-dependent relaxation of compressive residual stresses is more pronounced than the decrease of near-surface work hardening." }, { "instance_id": "R27380xR27285", "comparison_id": "R27380", "paper_id": "R27285", "text": "Compressive Residual Stress on Fatigue Fractured Surface X-ray fractography is a technique for analysing the cause and mechanism of fracture from the information obtained by X-ray irradiation on the fractured surface. It has been shown that a good correlation exists between the residual stress or the half value breadth of the diffraction profile and the stress intensity factor that had caused the fracture. X-ray fractography has been successfully applied to the in-service failure of many types of fracture. However, in some cases the residual stresses on the fatigue fractured surface in service are compressive, which have not been found in the laboratory experiments so far. In the present study, fatigue experiments were carried out on 0.5% carbon steel to investigate the stress condition that produces compressive residual stress on the fractured surface. The specimen was a centre-notched rectangular plate 8 mm thick, and a wide range of stress ratios R = \u03c3min/\u03c3max was applied, from tensile to compressive, namely R = 0.50, 0.25, 0.20, 0.00, -1.67, -2.33, -2.40, and -3.00. From the results of the experiments, it was found that, when the stress ratio was -3.00 and the minimum stress was -150 MPa, the residual stress on the fractured surface became compressive. Since the minimum stress was far smaller than the compressive yield stress, the cause of the compressive residual stress was considered to be the result of crack closure. In this case, the crack opening ratio U = (\u03c3max - \u03c3op)/\u0394\u03c3, where \u03c3op is the crack opening stress, was about 0.3 and almost constant." 
}, { "instance_id": "R27380xR27357", "comparison_id": "R27380", "paper_id": "R27357", "text": "Experimental measurement and finite element simulation of the interaction between residual stresses and mechanical loading Abstract Residual stresses, which can be produced during the manufacturing process, play an important role in an industrial environment. Residual stresses can and do change in service. In this paper, measurements of the statistical distribution of the initial residual stress in shot blast bars of En15R steel are presented. Also measured was the relaxation of the residual stresses after simple tensile and cyclic tension\u2013compression loading. Results from an elastic\u2013plastic finite element (FE) analysis of the interaction between residual stresses and mechanical loading are given. Two material hardening models were used in an FE analyses: simple linear kinematic hardening and multilinear hardening. It is shown that residual stress relaxation occurs when the applied strains are below the elastic limit. Furthermore, the results from the simulations were found to depend on the type of material model. Using the complex multilinear model led to greater residual stress relaxation compared to the simple linear model. Agreement between measurements and predictions was poor for cyclic loading, and good for simple tensile loading." }, { "instance_id": "R27461xR27413", "comparison_id": "R27461", "paper_id": "R27413", "text": "Approximate Correlations for Chevron-Type Plate Heat Exchangers There exists very little useful data representing the performance of industrial plate heat exchangers (PHEs) in the open literature. As a result, it has been difficult to arrive at any generalized correlations. While every PHE manufacturer is believed to have a comprehensive set of performance curves for their own designs, there exists the need to generate an approximate set of generalized correlations for the heat-transfer community. Such correlations can be used for preliminary designs and analytical studies. This paper attempts to develop such a set of generalized correlations to quantify the heat-transfer and pressure-drop performance of chevron-type PHEs. For this purpose, the experimental data reported by Heavner et al. were used for the turbulent region. For the laminar region, a semi-theoretical approach was used to express, for example, the friction factor as a function of the Reynolds number and the chevron angle. Asymptotic curves were used for the transitional region. Physical explanations are provided for the trends shown by the generalized correlations. The correlations are compared against the open-literature data, where appropriate. These correlations are expected to be improved in the future when more data become available." }, { "instance_id": "R27461xR27447", "comparison_id": "R27461", "paper_id": "R27447", "text": "The effect of the corrugation inclination angle on the thermohydraulic performance of plate heat exchangers Abstract It is well established that the inclination angle between plate corrugations and the overall flow direction is a major parameter in the thermohydraulic performance of plate heat exchangers. Application of an improved flow visualization technique has demonstrated that at angles up to about 80\u00b0 the fluid flows mainly along the furrows on each plate. A secondary, swirling motion is imposed on the flow along a furrow when its path is crossed by streams flowing along furrows on the opposite wall. 
Through the use of the electrochemical mass transfer analogue, it is proved that this secondary motion determines the transfer process; as a consequence of this motion the transfer is fairly uniformly distributed across the width of the plates. The observed maximum transfer rate at an angle of about 80\u00b0 is explained from the observed flow patterns. At higher angles the flow pattern becomes less effective for transfer; in particular at 90\u00b0 marked flow separation is observed." }, { "instance_id": "R27620xR27514", "comparison_id": "R27620", "paper_id": "R27514", "text": "Energy consumption, employment and causality in Japan: a multivariate approach Using Hsiao's version of Granger causality and cointegration, this study finds that employment (EP), energy consumption (EC), Real GNP (RGNP) and capital are not cointegrated. EC is found to negatively cause EP whereas EP and RGNP are found to directly cause EC. It is also found that capital negatively Granger-causes EP while RGNP and EP are found to strongly influence EC. The findings of this study seem to suggest that a policy of energy conservation may not be detrimental to a country such as Japan. In addition, the finding that energy and capital are substitutes implies that energy conservation will promote capital formation, given output constant." }, { "instance_id": "R27620xR27602", "comparison_id": "R27620", "paper_id": "R27602", "text": "Energy consumption and economic growth: a causality analysis for Greece This paper investigates the causal relationship between aggregated and disaggregated levels of energy consumption and economic growth for Greece for the period 1960-2006 through the application of a later development in the methodology of time series proposed by Toda and Yamamoto (1995). At aggregated levels of energy consumption empirical findings suggest the presence of a uni-directional causal relationship running from total energy consumption to real GDP. At disaggregated levels empirical evidence suggests that there is a bi-directional causal relationship between industrial and residential energy consumption and real GDP, but this is not the case for transport energy consumption, with a causal relationship being identified in neither direction. The importance of these findings lies in their policy implications and their adoption in structural policies affecting energy consumption in Greece, suggesting that in order to address energy import dependence and environmental concerns without hindering economic growth, emphasis should be put on the demand side and energy efficiency improvements." }, { "instance_id": "R27620xR27491", "comparison_id": "R27620", "paper_id": "R27491", "text": "The relationship between energy and GNP: further results This paper reexamines the causality between GNP and energy consumption by using updated US data for the period 1947\u20131979. As a secondary contribution, we investigate the causal relationship between energy consumption and employment. Applying Sims' technique, we find no causal relationship between GNP and energy consumption. We find further that there is a slight unidirectional flow running from employment to energy consumption. Economic interpretations of the empirical results are also presented." 
}, { "instance_id": "R27620xR27558", "comparison_id": "R27620", "paper_id": "R27558", "text": "The impact of energy consumption on economic growth: evidence from linear and nonlinear models in Taiwan This paper considers the possibility of both a linear effect and nonlinear effect of energy consumption on economic growth, using data for the period 1955\u20132003 in Taiwan. We find evidence of a level-dependent effect between the two variables. Allowing for a nonlinear effect of energy consumption growth sheds new light on the explanation of the characteristics of the energy-growth link. We also provide evidence that the relationship between energy consumption and economic growth in Taiwan is characterized by an inverse U-shape. Some previous studies support the view that energy consumption may promote economic growth. However, the conclusion drawn from the empirical findings suggests that such a relationship exists only where there is a low level of energy consumption in Taiwan. We show that a threshold regression provides a better empirical model than the standard linear model and that policy-makers should seek to capture economic structures associated with different stages of economic growth. It is also worth noting that the energy consumption threshold was reached in the case of Taiwan in the world energy crises periods of 1979 and 1982." }, { "instance_id": "R27620xR27503", "comparison_id": "R27620", "paper_id": "R27503", "text": "Energy and economic growth in the USA. A multivariate approach Abstract This paper examines the causal relationship between GDP and energy use for the period 1947-90 in the USA. The relationship between energy use and economic growth has been examined by both biophysical and neoclassical economists. In particular, several studies have tested for the presence of a causal relationship (in the Granger sense) between energy use and economic growth. However, these tests do not allow a direct test of the relative explanatory powers of the neoclassical and biophysical models. A multivariate adaptation of the test-vector autoregression (VAR) does allow such a test. A VAR of GDP, energy use, capital stock and employment is estimated and Granger tests for causal relationships between the variables are carried out. Although there is no evidence that gross energy use Granger causes GDP, a measure of final energy use adjusted for changing fuel composition does Granger cause GDP." }, { "instance_id": "R27620xR27541", "comparison_id": "R27620", "paper_id": "R27541", "text": "Energy use and output growth in Canada: a multivariate cointegration analysis Using a neo-classical one-sector aggregate production technology where capital, labor and energy are treated as separate inputs, this paper develops a vector error-correction (VEC) model to test for the existence and direction of causality between output growth and energy use in Canada. Using the Johansen cointegration technique, the empirical findings indicate that the long-run movements of output, labor, capital and energy use in Canada are related by two cointegrating vectors. Then using a VEC specification, the short-run dynamics of the variables indicate that Granger-causality is running in both directions between output growth and energy use. Hence, an important policy implication of the analysis is that energy can be considered as a limiting factor to output growth in Canada." 
}, { "instance_id": "R27620xR27566", "comparison_id": "R27620", "paper_id": "R27566", "text": "Energy consumption and economic activities in Iran Abstract The causal relationship between overall GDP, industrial and agricultural value added and consumption of different kinds of energy is investigated using a vector error correction model for the case of Iran within 1967\u20132003. A long-run unidirectional relationship from GDP to total energy and a bidirectional relationship between GDP and gas as well as GDP and petroleum products consumption for the whole economy was discovered. Causality runs from value added to total energy, electricity, gas and petroleum products consumption and from gas consumption to value added in the industrial sector. The long-run bidirectional relations hold between value added and total energy, electricity and petroleum products consumption in the agricultural sector. The short-run causality runs from GDP to total energy and petroleum products consumption, and also from industrial value added to total energy and petroleum products consumption in this sector." }, { "instance_id": "R27620xR27560", "comparison_id": "R27620", "paper_id": "R27560", "text": "Sectoral energy consumption by source and economic growth in Turkey This paper provides a detailed analysis of the energy consumption in Turkey during the last 40 years. It investigates the causal relationships between income and energy consumption in two ways: first, the relationship is studied at the aggregate level; then, we focus on the industrial sector. Previous findings suggest that, in the case of Turkey, there is a unidirectional causality running from energy consumption to growth. However, our findings suggest that in the long run, income and energy consumption appear to be neutral with respect to each other both at the aggregate and at the industrial level. We also find strong evidence of instantaneous causality, which means that contemporaneous values of energy consumption and income are correlated. Furthermore, a descriptive analysis is conducted in order to reveal the differences in the use of energy resources. We conclude that energy conservation policies are necessary for environmental concerns and our empirical results imply that such policies would not impede economic growth in the long term." }, { "instance_id": "R27620xR27507", "comparison_id": "R27620", "paper_id": "R27507", "text": "An investigation of co-integration and causality between energy consumption and economic activity in Taiwan Applying Hsiao's version of the Granger causality method, this paper examines the causality between energy and GNP and energy and employment by applying recently developed techniques of co-integration and Hsiao's version of the Granger causality to Taiwanese data for the 1955\u20131993 period. The Phillips-Perron tests reveal that the series, with the exception of GNP, are not stationary and therefore differencing is performed to secure stationarity. The study finds causality running from GDP to energy consumption without feedback in Taiwan. It is also found that causality runs from GDP to energy but not vice versa." }, { "instance_id": "R27620xR27543", "comparison_id": "R27620", "paper_id": "R27543", "text": "Causal relationship between energy consumption and GDP: the case of Korea 1970-1999 Abstract The causal relationship between energy consumption and economic growth is investigated applying a multivariate model of capital, labor, energy and GDP. 
Usual BTU energy aggregate is substituted with a Divisia aggregate in an attempt to mitigate aggregation bias. To test for Granger causality in the presence of cointegration among the variables, we employ a vector error correction model rather than a vector autoregressive model. Empirical results for Korea over the period 1970\u20131999 suggest a long run bidirectional causal relationship between energy and GDP, and short run unidirectional causality running from energy to GDP. The source of causation in the long run is found to be the error correction terms in both directions." }, { "instance_id": "R27620xR27568", "comparison_id": "R27620", "paper_id": "R27568", "text": "Energy consumption and GDP in Turkey: is there a co-integration relationship? Energy consumption and GDP are expected to grow by 5.9% and 7% annually until 2025 in Turkey. This paper tries to unfold the linkage between energy consumption and GDP by undertaking a co-integration analysis for Turkey with annual data over the period 1970-2003. The analysis shows that energy consumption and GDP are co-integrated. This means that there is a (possibly bi-directional) causality relationship between the two. We establish that there is a unidirectional causality running from GDP to energy consumption indicating that energy saving would not harm economic growth in Turkey. In addition, we find that energy consumption keeps on growing as long as the economy grows in Turkey." }, { "instance_id": "R27705xR27659", "comparison_id": "R27705", "paper_id": "R27659", "text": "The electricity consumption and GDP nexus dynamic Fiji Islands Fiji is a small open island economy dependent on energy for its growth and development; hence, the relationship between energy consumption and economic growth is crucial for Fiji's development. In this paper, we investigate the nexus between electricity consumption and economic growth for Fiji within a multivariate framework through including the labour force variable. We use the bounds testing approach to cointegration and find that electricity consumption, GDP and labour force are only cointegrated when GDP is the endogenous variable. We use the Granger causality F-test and find that in the long-run causality runs from electricity consumption and labour force to GDP, implying that Fiji is an energy dependent country and thus energy conservation policies will have an adverse effect on Fiji's economic growth." }, { "instance_id": "R27705xR27653", "comparison_id": "R27705", "paper_id": "R27653", "text": "Causality relationship between electricity consumption and GDP in Bangladesh In this paper, we examine the causal relationship between the per capita electricity consumption and the per capita GDP for Bangladesh using cointegration and vector error correction model. Our results show that there is unidirectional causality from per capita GDP to per capita electricity consumption. However, the per capita electricity consumption does not cause per capita GDP in case of Bangladesh. The finding has significant implications from the point of view of energy conservation, emission reduction and economic development." }, { "instance_id": "R27705xR27664", "comparison_id": "R27705", "paper_id": "R27664", "text": "Disaggregated energy consumption and GDP in Taiwan: a threshold co-integration analysis Energy consumption growth is much higher than economic growth for Taiwan in recent years, worsening its energy efficiency. 
This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test developed with asymmetric dynamic adjusting processes proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integrations between GDP and disaggregated energy consumptions are confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment process of energy consumption toward equilibrium is highly persistent when an appropriate threshold is reached. There is mean-reverting behavior when the threshold is reached, making aggregate and disaggregated energy consumptions grow faster than GDP in Taiwan." }, { "instance_id": "R27705xR27674", "comparison_id": "R27705", "paper_id": "R27674", "text": "Electricity consumption and economic growth in Nigeria: evidence from cointegration and co-feature analysis The paper investigates the causality relationship between energy consumption and economic growth for Nigeria during the period 1980-2006. The results of our estimation show that real gross domestic product (rGDP) and electricity consumption (ele) are cointegrated and there is only unidirectional Granger causality running from electricity consumption (ele) to (rGDP). Then we applied the Hodrick-Prescott (HP) filter to decompose the trend and the fluctuation components of the rGDP and electricity consumption (ele) series. The estimation results show that there is cointegration between the trend and the cyclical components of the two series, which seems to suggest that the Granger causality is possibly related with the business cycle. The paper suggests that investing more and reducing inefficiency in the supply and use of electricity can further stimulate economic growth in Nigeria. The results should, however, be interpreted with caution because of the possibility of loss in power associated with the small sample size and the danger of omitted variable bias that could result from the use of bi-variate analysis." }, { "instance_id": "R27705xR27690", "comparison_id": "R27705", "paper_id": "R27690", "text": "Co-integration and causality relationship between energy consumption and economic growth: further empirical evidence for Nigeria The paper re-examined the co\u2010integration and causality relationship between energy consumption and economic growth for Nigeria using data covering the period 1970 to 2005. Unlike previous related studies for Nigeria, different proxies of energy consumption (electricity demand, domestic crude oil consumption and gas utilization) were used for the estimation. It also included government activities proxied by health expenditure and monetary policy proxied by broad money supply, though the emphasis was on energy consumption. Using the Johansen co\u2010integration technique, it was found that there existed a long run relationship among the series. It was also found that all the variables used for the study were I(1). Furthermore, unidirectional causality was established between electricity consumption and economic growth, domestic crude oil production and economic growth as well as between gas utilization and economic growth in Nigeria. 
While causality runs from electricity consumption to economic growth as well as from gas utilization to economic growth, it was found that causality runs from economic growth to domestic crude oil production. Therefore, conservation policy regarding electricity consumption and gas utilization would harm economic growth in Nigeria while energy conservation policy as regards domestic crude oil consumption would not. Summary: The relationship and causality between energy consumption and economic growth in Nigeria are examined using statistical data for 1970\u20132005. In contrast to earlier Nigerian studies, new measures of energy consumption are chosen (electricity demand, domestic crude oil consumption and gas utilization). The paper also takes into account the state's social and monetary policy, which reflects national welfare. Applying the Johansen cointegration method, unidirectional causal relationships were found between the energy consumption indicators and economic growth. It is concluded that restricting electricity and gas use would hinder Nigeria's economic growth, whereas reducing domestic crude oil consumption would not affect the country's further development." }, { "instance_id": "R27705xR27670", "comparison_id": "R27705", "paper_id": "R27670", "text": "Electricity consumption and economic growth, the case of Lebanon In this paper we investigate the causal relationship between electricity consumption and economic growth for Lebanon, using monthly data for Lebanon covering the period January 1995 to December 2005. Empirical results of the study confirm the absence of a long-term equilibrium relationship between electricity consumption and economic growth in Lebanon but the existence of unidirectional causality running from electricity consumption to economic growth when examined in a bivariate vector autoregression framework with change in temperature and relative humidity as exogenous variables. Thus, the policy makers in Lebanon should place priority in early stages of reconstruction on building capacity additions and infrastructure development of the electric power sector of Lebanon, as this would propel the economic growth of the country." }, { "instance_id": "R27705xR27679", "comparison_id": "R27705", "paper_id": "R27679", "text": "Electricity consumption and economic growth in South Africa: a trivariate causality test In this paper we examine the causal relationship between electricity consumption and economic growth in South Africa. We incorporate the employment rate as an intermittent variable in the bivariate model between electricity consumption and economic growth--thereby creating a simple trivariate causality framework. Our empirical results show that there is a distinct bidirectional causality between electricity consumption and economic growth in South Africa. In addition, the results show that employment in South Africa Granger-causes economic growth. The results apply irrespective of whether the causality is estimated in the short-run or in the long-run formulation. The study, therefore, recommends that policies geared towards the expansion of the electricity infrastructure should be intensified in South Africa in order to cope with the increasing demand exerted by the country's strong economic growth and rapid industrialisation programme. 
This will certainly enable the country to avoid unprecedented power outages similar to those experienced in the country in mid-January 2008." }, { "instance_id": "R27705xR27698", "comparison_id": "R27705", "paper_id": "R27698", "text": "Electricity consumption and economic growth nexus in Portugal using cointegration and causality approaches The aim of this paper is to re-examine the relationship between electricity consumption, economic growth, and employment in Portugal using the cointegration and Granger causality frameworks. This study covers the sample period from 1971 to 2009. We examine the presence of a long-run equilibrium relationship using the bounds testing approach to cointegration within the Unrestricted Error-Correction Model (UECM). Moreover, we examine the direction of causality between electricity consumption, economic growth, and employment in Portugal using the Granger causality test within the Vector Error-Correction Model (VECM). As a summary of the empirical findings, we find that electricity consumption, economic growth, and employment in Portugal are cointegrated and there is bi-directional Granger causality between the three variables in the long-run. With the exception of the relationship between electricity consumption and economic growth, the remaining pairs of variables also exhibit bi-directional Granger causality in the short-run. Furthermore, we find that there is unidirectional Granger causality running from economic growth to electricity consumption, but no evidence of reverse causality." }, { "instance_id": "R27835xR27767", "comparison_id": "R27835", "paper_id": "R27767", "text": "What Do Students Learn When Collaboratively Using A Computer Game in the Study of Historical Disease Epidemics, and Why? The use of computer games and virtual environments has been shown to engage and motivate students and can provide opportunities to visualize the historical period and make sense of complex visual information. This article presents the results of a study in which university students were asked to collaboratively solve inquiry-based problems related to historical disease epidemics using game-based learning. A multimethod approach to the data collection was used. Initial results indicated that students attended to visual information with more specificity than text-based information when using a virtual environment. Models of student\u2019s decision-making processes when interacting with the world confirmed that students were making decisions related to these visual elements, and not the inquiry process. Building on theories from the learning sciences, such as learning from animations/visualizations and computer-supported collaborative learning, in this article, the authors begin to answer the question of why students learned what they did about historical disease epidemics." }, { "instance_id": "R27835xR27739", "comparison_id": "R27835", "paper_id": "R27739", "text": "Digital Game-Based Learning in high school Computer Science education: Impact on educational effectiveness and student motivation The aim of this study was to assess the learning effectiveness and motivational appeal of a computer game for learning computer memory concepts, which was designed according to the curricular objectives and the subject matter of the Greek high school Computer Science (CS) curriculum, as compared to a similar application, encompassing identical learning objectives and content but lacking the gaming aspect. 
The study also investigated potential gender differences in the game's learning effectiveness and motivational appeal. The sample was 88 students, who were randomly assigned to two groups, one of which used the gaming application (Group A, N=47) and the other one the non-gaming one (Group B, N=41). A Computer Memory Knowledge Test (CMKT) was used as the pretest and posttest. Students were also observed during the interventions. Furthermore, after the interventions, students' views on the application they had used were elicited through a feedback questionnaire. Data analyses showed that the gaming approach was both more effective in promoting students' knowledge of computer memory concepts and more motivational than the non-gaming approach. Despite boys' greater involvement with, liking of and experience in computer gaming, and their greater initial computer memory knowledge, the learning gains that boys and girls achieved through the use of the game did not differ significantly, and the game was found to be equally motivational for boys and girls. The results suggest that within high school CS, educational computer games can be exploited as effective and motivational learning environments, regardless of students' gender." }, { "instance_id": "R27835xR27804", "comparison_id": "R27835", "paper_id": "R27804", "text": "Outdoor natural science learning with an RFID-supported immersive ubiquitous learning environment Despite their successful use in many conscientious studies involving outdoor learning applications, mobile learning systems still have certain limitations. For instance, because students cannot obtain real-time, contextaware content in outdoor locations such as historical sites, endangered animal habitats, and geological landscapes, they are unable to search, collect, share, and edit information by using information technology. To address such concerns, this work proposes an environment of ubiquitous learning with educational resources (EULER) based on radio frequency identification (RFID), augmented reality (AR), the Internet, ubiquitous computing, embedded systems, and database technologies. EULER helps teachers deliver lessons on site and cultivate student competency in adopting information technology to improve learning. To evaluate its effectiveness, we used the proposed EULER for natural science learning at the Guandu Nature Park in Taiwan. The participants were elementary school teachers and students. The analytical results revealed that the proposed EULER improves student learning. Moreover, the largely positive feedback from a post-study survey confirms the effectiveness of EULER in supporting outdoor learning and its ability to attract the interest of students." }, { "instance_id": "R27835xR27755", "comparison_id": "R27835", "paper_id": "R27755", "text": "The effects of computer games on primary school students\u2019 achievement and motivation in geography learning The implementation of a computer game for learning about geography by primary school students is the focus of this article. Researchers designed and developed a three-dimensional educational computer game. Twenty four students in fourth and fifth grades in a private school in Ankara, Turkey learnt about world continents and countries through this game for three weeks. The effects of the game environment on students' achievement and motivation and related implementation issues were examined through both quantitative and qualitative methods. 
An analysis of pre and post achievement tests showed that students made significant learning gains by participating in the game-based learning environment. When comparing their motivations while learning in the game-based learning environment and in their traditional school environment, it was found that students demonstrated statistically significant higher intrinsic motivations and statistically significant lower extrinsic motivations learning in the game-based environment. In addition, they had decreased focus on getting grades and they were more independent while participating in the game-based activities. These positive effects on learning and motivation, and the positive attitudes of students and teachers suggest that computer games can be used as an ICT tool in formal learning environments to support students in effective geography learning." }, { "instance_id": "R27835xR27753", "comparison_id": "R27835", "paper_id": "R27753", "text": "International Evaluation of a Localized Geography Educational Software A report on the implementation and evaluation of an intelligent learning system; the multimedia geography tutor and game software titled Lainos World SM was localized into English, French, Spanish, German, Portuguese, Russian and Simplified Chinese. Thereafter, multilingual online surveys were setup to which High school students were globally invited via mails to schools, targeted adverts and recruitment on Facebook, Google, etc. 1125 respondents from selected nations completed both the initial and final surveys. The effect of the software on students\u2019 geographical knowledge was analyzed through pre and post achievement test scores. In general, the mean score were higher after exposure to the educational software for fifteen days and it was established that the score differences were statistically significant. This positive effect and other qualitative data show that the localized software from students\u2019 perspective is a widely acceptable and effective educational tool for learning geography in an interactive and gaming environment.." }, { "instance_id": "R27835xR27781", "comparison_id": "R27835", "paper_id": "R27781", "text": "Gameplaying for maths learning: cooperative or not? This study investigated the effects of gameplaying on fifth-graders\u2019 maths performance and attitudes. One hundred twenty five fifth graders were recruited and assigned to a cooperative Teams-Games-Tournament (TGT), interpersonal competitive or no gameplaying condition. A state standards-based maths exam and an inventory on attitudes towards maths were used for the pretest and posttest. The students\u2019 gender, socio-economic status and prior maths ability were examined as the moderating variables and covariate. Multivariate analysis of covariance (MANCOVA) indicated that gameplaying was more effective than drills in promoting maths performance, and cooperative gameplaying was most effective for promoting positive maths attitudes regardless of students\u2019 individual differences." }, { "instance_id": "R27835xR27772", "comparison_id": "R27835", "paper_id": "R27772", "text": "The virtual playground: an educational virtual reality environment for evaluating interactivity and conceptual learning The research presented in this paper aims at investigating user interaction in immersive virtual learning environments, focusing on the role and the effect of interactivity on conceptual learning. The goal has been to examine if the learning of young users improves through interacting in (i.e. 
exploring, reacting to, and acting upon) an immersive virtual environment (VE) compared to non-interactive or non-immersive environments. Empirical work was carried out with more than 55 primary school students between the ages of 8 and 12, in different between-group experiments: an exploratory study, a pilot study, and a large-scale experiment. The latter was conducted in a virtual environment designed to simulate a playground. In this \u201cVirtual Playground,\u201d each participant was asked to complete a set of tasks designed to address arithmetical \u201cfractions\u201d problems. Three different conditions, two experimental virtual reality (VR) conditions and a non-VR condition, that varied the levels of activity and interactivity, were designed to evaluate how children accomplish the various tasks. Pre-tests, post-tests, interviews, video, audio, and log files were collected for each participant, and analysed both quantitatively and qualitatively. This paper presents a selection of case studies extracted from the qualitative analysis, which illustrate the variety of approaches taken by children in the VEs in response to visual cues and system feedback. Results suggest that the fully interactive VE aided children in problem solving but did not provide strong evidence of conceptual change as expected; rather, it was the passive VR environment, where activity was guided by a virtual robot, that seemed to support student reflection and recall, leading to indications of conceptual change." }, { "instance_id": "R27835xR27779", "comparison_id": "R27835", "paper_id": "R27779", "text": "The Effect of Using Exercise-Based Computer Games during the Process of Learning on Academic Achievement among Education Majors The aim of this study is to define whether using exercise-based games increases the performance of learning. For this reason, two basic questions were addressed in the study. First, is there any difference in learning between the group that was given exercise-based games and the group that was not? Second, is there any difference in learning between the group that used exercise-based games at the end of the process of learning and the group that did not use them but was given the exercise questions from the game material? This research was conducted within the subject of Testing and Evaluation in the program of Kocaeli University Primary Maths Teacher\u2019s College. An experimental design with a pre-test/post-test control group was used in this study. The experimental process based on the game material was applied for 120 minutes at the end of a 3-week teaching period. The reliability values (KR-20) of the two tests used to evaluate learning level were found to be .79 and .71. The study reached the conclusion that game materials used at the end of the learning process increased the learning levels of teacher candidates. However, similar learning levels were observed among students who were given printed exercises instead of the learning game method to reinforce the traditional learning. This means that, when teaching games are applied in addition to traditional teaching, there is no difference in learning efficiency between the students who answered the questions based on competition and fun and the group who only answered the questions. This study is expected to contribute to defining in which situations games are effective." 
}, { "instance_id": "R27835xR27764", "comparison_id": "R27835", "paper_id": "R27764", "text": "Mobile game-based learning in secondary education: engagement, motivation and learning in a mobile city game Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular project-based instruction. No significant differences were found between the two groups with respect to motivation for History or the Middle Ages. The impact of location-based technology and game-based learning on pupil knowledge and motivation are discussed along with suggestions for future research." }, { "instance_id": "R27835xR27757", "comparison_id": "R27835", "paper_id": "R27757", "text": "Combining software games with education: Evaluation of its educational effectiveness Computer games are very popular among children and adolescents. In this respect, they could be exploited by educational software designers to render educational software more attractive and motivating. However, it remains to be explored what the educational scope of educational software games is. In this paper, we explore several issues concerning the educational effectiveness, appeal and scope of educational software games through an evaluation study of an Intelligent Tutoring System (ITS) that operates as a virtual reality educational game. The results of the evaluation show that educational virtual reality games can be very motivating while retaining or even improving the educational effects on students. Moreover, one important finding of the study was that the educational effectiveness of the game was particularly high for students who used to have poor performance in the domain taught prior to their learning experience with the game." }, { "instance_id": "R27835xR27745", "comparison_id": "R27835", "paper_id": "R27745", "text": "Successful implementation of user- centered game based learning in higher education: An example from civil engineering Goal: The use of an online game for learning in higher education aims to make complex theoretical knowledge more approachable. Permanent repetition will lead to a more in-depth learning. Objective: To gain insight into whether and to what extent, online games have the potential to contribute to student learning in higher education. Experimental setting: The online game was used for the first time during a lecture on Structural Concrete at Master's level, involving 121 seventh semester students. Methods: Pre-test/post-test experimental control group design with questionnaires and an independent online evaluation. Results: The minimum learning result of playing the game was equal to that achieved with traditional methods. 
A factor called ''joy'' was introduced, according to [Nielsen, J. (2002): User empowerment and the fun factor. In Jakob Nielsen's Alertbox, July 7, 2002. Available from http://www.useit.com/alertbox/20020707.html.], which was amazingly high. Conclusion: The experimental findings support the efficacy of game playing. Students enjoyed this kind of e-learning." }, { "instance_id": "R27835xR27800", "comparison_id": "R27835", "paper_id": "R27800", "text": "Principles underlying the design of \u201cThe Number Race\u201d, an adaptive computer game for remediation of dyscalculia Abstract Background Adaptive game software has been successful in remediation of dyslexia. Here we describe the cognitive and algorithmic principles underlying the development of similar software for dyscalculia. Our software is based on current understanding of the cerebral representation of number and the hypotheses that dyscalculia is due to a \"core deficit\" in number sense or in the link between number sense and symbolic number representations. Methods \"The Number Race\" software trains children on an entertaining numerical comparison task, by presenting problems adapted to the performance level of the individual child. We report full mathematical specifications of the algorithm used, which relies on an internal model of the child's knowledge in a multidimensional \"learning space\" consisting of three difficulty dimensions: numerical distance, response deadline, and conceptual complexity (from non-symbolic numerosity processing to increasingly complex symbolic operations). Results The performance of the software was evaluated both by mathematical simulations and by five weeks of use by nine children with mathematical learning difficulties. The results indicate that the software adapts well to varying levels of initial knowledge and learning speeds. Feedback from children, parents and teachers was positive. A companion article [1] describes the evolution of number sense and arithmetic scores before and after training. Conclusion The software, open-source and freely available online, is designed for learning disabled children aged 5\u20138, and may also be useful for general instruction of normal preschool children. The learning algorithm reported is highly general, and may be applied in other domains." }, { "instance_id": "R28099xR27906", "comparison_id": "R28099", "paper_id": "R27906", "text": "A revisit to cost aggregation in stereo matching: How far can we reduce its computational redundancy? This paper presents a novel method for performing an efficient cost aggregation in stereo matching. The cost aggregation problem is re-formulated with a perspective of a histogram, and it gives us a potential to reduce the complexity of the cost aggregation significantly. Different from the previous methods which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy which exists among the search range, caused by a repeated filtering for all disparity hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The trade-off between accuracy and complexity is extensively investigated into parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity. This work provides new insights into complexity-constrained stereo matching algorithm design." 
}, { "instance_id": "R28099xR27857", "comparison_id": "R28099", "paper_id": "R27857", "text": "Adaptive support-weight approach for correspondence search In this paper, we present a new area-based method for visual correspondence search that focuses on the dissimilarity computation. Local and area-based matching methods generally measure the similarity (or dissimilarity) between the image pixels using local support windows. In this approach, an appropriate support window should be selected adaptively for each pixel to make the measure reliable and certain. Finding the optimal support window with an arbitrary shape and size is, however, very difficult and generally known as an NP-hard problem. For this reason, unlike the existing methods that try to find an optimal support window, we adjusted the support-weight of each pixel in a given support window. The adaptive support-weight of a pixel is computed based on the photometric and geometric relationship with the pixel under consideration. Dissimilarity is then computed using the raw matching costs and support-weights of both support windows, and the correspondence is finally selected by the WTA (winner-takes-all) method. The experimental results for the rectified real images show that the proposed method successfully produces piecewise smooth disparity maps while preserving sharp depth discontinuities accurately." }, { "instance_id": "R28099xR27896", "comparison_id": "R28099", "paper_id": "R27896", "text": "Real-time disparity estimation algorithm for stereo camera systems This paper proposes a real-time stereo matching algorithm using GPU programming. The likelihood model is implemented using GPU programming for real-time operation. And the prior model is proposed to improve the accuracy of disparity estimation. First, the likelihood matching based on rank transform is implemented in GPU programming. The shared memory handling in graphic hardware is introduced in calculating the likelihood model. The prior model considers the smoothness of disparity map and is defined as a pixel-wise energy function using adaptive interaction among neighboring disparities. The disparity is determined by minimizing the joint energy function which combines the likelihood model with prior model. These processes are performed in the multi-resolution approach. The disparity map is interpolated using the reliability of likelihood model and color-based similarity in the neighborhood. This paper evaluates the proposed approach with the Middlebury stereo images. According to the experiments, the proposed algorithm shows good estimation accuracy over 30 frames/second for 640\u00d7480 image and 60 disparity range. The proposed disparity estimation algorithm is applied to real-time stereo camera system such as 3-D image display, depth-based object extraction, 3-D rendering, and so on." }, { "instance_id": "R28099xR27947", "comparison_id": "R28099", "paper_id": "R27947", "text": "A non-local cost aggregation method for stereo matching Matching cost aggregation is one of the oldest and still popular methods for stereo correspondence. While effective and efficient, cost aggregation methods typically aggregate the matching cost by summing/averaging over a user-specified, local support region. This is obviously only locally-optimal, and the computational complexity of the full-kernel implementation usually depends on the region size. In this paper, the cost aggregation problem is re-examined and a non-local solution is proposed. 
The matching cost values are aggregated adaptively based on pixel similarity on a tree structure derived from the stereo image pair to preserve depth edges. The nodes of this tree are all the image pixels, and the edges are all the edges between the nearest neighboring pixels. The similarity between any two pixels is decided by their shortest distance on the tree. The proposed method is non-local as every node receives supports from all other nodes on the tree. As can be expected, the proposed non-local solution outperforms all local cost aggregation methods on the standard (Middlebury) benchmark. Besides, it has great advantage in extremely low computational complexity: only a total of 2 addition/subtraction operations and 3 multiplication operations are required for each pixel at each disparity level. It is very close to the complexity of unnormalized box filtering using integral image which requires 6 addition/subtraction operations. Unnormalized box filter is the fastest local cost aggregation method but blurs across depth edges. The proposed method was tested on a MacBook Air laptop computer with a 1.8 GHz Intel Core i7 CPU and 4 GB memory. The average runtime on the Middlebury data sets is about 90 milliseconds, and is only about 1.25\u00d7 slower than unnormalized box filter. A non-local disparity refinement method is also proposed based on the non-local cost aggregation method." }, { "instance_id": "R28099xR28093", "comparison_id": "R28099", "paper_id": "R28093", "text": "A stereo matching approach based on particle filters and scattered control landmarks In robot localization, particle filtering can estimate the position of a robot in a known environment with the help of sensor data. In this paper, we present an approach based on particle filtering, for accurate stereo matching. The proposed method consists of three parts. First, we utilize multiple disparity maps in order to acquire a very distinctive set of features called landmarks, and then we use segmentation as a grouping technique. Secondly, we apply scan line particle filtering using the corresponding landmarks as a virtual sensor data to estimate the best disparity value. Lastly, we reduce the computational redundancy of particle filtering in our stereo correspondence with a Markov chain model, given the previous scan line values. More precisely, we assist particle filtering convergence by adding a proportional weight in the predicted disparity value estimated by Markov chains. In addition to this, we optimize our results by applying a plane fitting algorithm along with a histogram technique to refine any outliers. This work provides new insights into stereo matching methodologies by taking advantage of global geometrical and spatial information from distinctive landmarks. Experimental results show that our approach is capable of providing high-quality disparity maps comparable to other well-known contemporary techniques. Highlights: A stereo matching approach motivated by the particle filter framework in robot localization. Highly accurate GCPs, acquired by the computation of multiple cost efficient disparity maps. A Markov chain model has been introduced in the process to reduce the computational complexity of particle filtering. Application of RANSAC algorithm along with a histogram technique to refine any outliers." 
}, { "instance_id": "R28099xR27942", "comparison_id": "R28099", "paper_id": "R27942", "text": "Real-time stereo matching based on fast belief propagation In this paper, a global optimum stereo matching algorithm based on improved belief propagation is presented which is demonstrated to generate high quality results while maintaining real-time performance. These results are achieved using a foundation based on the hierarchical belief propagation architecture combined with a novel asymmetric occlusion handling model, as well as parallel graphical processing. Compared to the other real-time methods, the experimental results on Middlebury data show the efficiency of our approach." }, { "instance_id": "R28099xR28016", "comparison_id": "R28099", "paper_id": "R28016", "text": "Domain Transformation-Based Efficient Cost Aggregation for Local Stereo Matching Binocular stereo matching is one of the most important algorithms in the field of computer vision. Adaptive support-weight approaches, the current state-of-the-art local methods, produce results comparable to those generated by global methods. However, excessive time consumption is the main problem of these algorithms since the computational complexity is proportionally related to the support window size. In this paper, we present a novel cost aggregation method inspired by domain transformation, a recently proposed dimensionality reduction technique. This transformation enables the aggregation of 2-D cost data to be performed using a sequence of 1-D filters, which lowers computation and memory costs compared to conventional 2-D filters. Experiments show that the proposed method outperforms the state-of-the-art local methods in terms of computational performance, since its computational complexity is independent of the input parameters. Furthermore, according to the experimental results with the Middlebury dataset and real-world images, our algorithm is currently one of the most accurate and efficient local algorithms." }, { "instance_id": "R28099xR27928", "comparison_id": "R28099", "paper_id": "R27928", "text": "Real-time stereo on GPGPU using progressive multi-resolution adaptive windows We introduce a new GPGPU-based real-time dense stereo matching algorithm. The algorithm is based on a progressive multi-resolution pipeline which includes background modeling and dense matching with adaptive windows. For applications in which only moving objects are of interest, this approach effectively reduces the overall computation cost quite significantly, and preserves the high definition details. Running on an off-the-shelf commodity graphics card, our implementation achieves a 36 fps stereo matching on 1024x768 stereo video with a fine 256 pixel disparity range. This is effectively same as 7200M disparity evaluations per second. For scenes where the static background assumption holds, our approach outperforms all published alternative algorithms in terms of the speed performance, by a large margin. We envision a number of potential applications such as real-time motion capture, as well as tracking, recognition and identification of moving objects in multi-camera networks." 
}, { "instance_id": "R28099xR28088", "comparison_id": "R28099", "paper_id": "R28088", "text": "Real-time stereo to multi-view conversion system based on adaptive meshing The stereo to multi-view conversion technology plays an important role in the development and promotion of three-dimensional television, which can provide adequate supply of high-quality 3D content for autostereoscopic displays. This paper focuses on a real-time implementation of the stereo to multi-view conversion system, the major parts of which are adaptive meshing, sparse stereo correspondence, energy equation construction and virtual-view rendering. To achieve the real-time performance, we make three main contributions. First, we introduce adaptive meshing to reduce the computational complexity at the expense of slight decrease in quality. Second, we use a simple and effective method based on block matching algorithm to generate the sparse disparity map. Third, for the module of block-saliency calculation, sparse stereo correspondence and view synthesis, novel parallelization strategies and fine-grained optimization techniques based on graphic processing units are used to accelerate the executing speed. Experimental results show that the system can achieve real-time and semi-real-time performance when rendering 8 views with the image resolution of 1280 \u00d7 720 and 1920 \u00d7 1080 on Tesla K20. The images and videos presented finally are both visually realistic and comfortable." }, { "instance_id": "R28099xR27909", "comparison_id": "R28099", "paper_id": "R27909", "text": "Real-time stereo matching using memory-efficient Belief Propagation for high-definition 3D telepresence systems Highlights? A real-time and high-definition stereo matching algorithm is presented. ? The proposal is an improved Belief Propagation algorithm with pixel classification. ? It also includes a message compression technique that reduces memory traffic. ? The total memory traffic reduction is about 90%. ? The algorithm improves the overall performance by more than 6%. New generations of telecommunications systems will include high-definition 3D video that provides a telepresence feeling. These systems require high-quality depth maps to be generated in a very short time (very low latency, typically about 40ms). Classical Belief Propagation algorithms (BP) generate high-quality depth maps but they require huge memory bandwidths that limit low-latency implementations of stereo-vision systems with high-definition images.This paper proposes a real-time (latency inferior to 40ms) high-definition (1280i?720) stereo matching algorithm using Belief Propagation with good immersive feeling (80 disparity levels). There are two main contributions. The first is an improved BP algorithm with pixel classification that outperforms classical BP while reducing the number of memory accesses. The second is an adaptive message compression technique with a low performance penalty that greatly reduces the memory traffic. The combination of these techniques outperforms classical BP by about 6.0% while reducing the memory traffic by more than 90%." }, { "instance_id": "R28099xR27924", "comparison_id": "R28099", "paper_id": "R27924", "text": "Disparity map refinement and 3D surface smoothing via directed anisotropic diffusion We propose a new binocular stereo algorithm and 3D reconstruction method from multiple disparity images. First, we present an accurate binocular stereo algorithm. 
In our algorithm, we use neither color segmentation nor plane fitting methods, which are common techniques among many algorithms nominated in the Middlebury ranking. These methods assume that the 3D world consists of a collection of planes and that each segment of a disparity map obeys a plane equation. We exclude these assumptions and introduce a Directed Anisotropic Diffusion technique for refining a disparity map. Second, we show a method to fill some holes in a distance map and smooth the reconstructed 3D surfaces by using another type of Anisotropic Diffusion technique. The evaluation results on the Middlebury datasets show that our stereo algorithm is competitive with other algorithms that adopt plane fitting methods. We present an experiment that shows the high accuracy of a reconstructed 3D model using our method, and the effectiveness and practicality of our proposed method in a real environment." }, { "instance_id": "R28099xR27902", "comparison_id": "R28099", "paper_id": "R27902", "text": "On building an accurate stereo matching system on graphics hardware This paper presents a GPU-based stereo matching system with good performance in both accuracy and speed. The matching cost volume is initialized with an AD-Census measure, aggregated in dynamic cross-based regions, and updated in a scanline optimization framework to produce the disparity results. Various errors in the disparity results are effectively handled in a multi-step refinement process. Each stage of the system is designed with parallelism considerations such that the computations can be accelerated with CUDA implementations. Experimental results demonstrate the accuracy and the efficiency of the system: currently it is the top performer in the Middlebury benchmark, and the results are achieved on GPU within 0.1 seconds. We also provide extra examples on stereo video sequences and discuss the limitations of the system." }, { "instance_id": "R28099xR28070", "comparison_id": "R28099", "paper_id": "R28070", "text": "Acceleration of stereomatching on multi-core CPU and GPU This paper presents an accelerated version of a dense stereo-correspondence algorithm for two different parallelism enabled architectures, multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot-head in the context of the CloPeMa research project. This research project focuses on the conception of a new clothes folding robot with real-time and high resolution requirements for the vision system. The performance analysis shows that the parallelised stereo-matching algorithm has been significantly accelerated, maintaining 12\u00d7 and 176\u00d7 speed-up respectively for multi-core CPU and GPU, compared with SISD (Single Instruction, Single Data) single-thread CPU. To analyse the origin of the speed-up and gain deeper understanding about the choice of the optimal hardware, the algorithm was broken into key sub-tasks and the performance was tested for four different hardware architectures." }, { "instance_id": "R28099xR28059", "comparison_id": "R28099", "paper_id": "R28059", "text": "Fast and Accurate Stereo Vision System on FPGA In this article, we present a fast and high quality stereo matching algorithm on FPGA using cost aggregation (CA) and fast locally consistent (FLC) dense stereo. In many software programs, global matching algorithms are used in order to obtain accurate disparity maps. 
Although their error rates are considerably low, their processing speeds are far from that required for real-time processing because of their complex processing sequences. In order to realize real-time processing, many hardware systems have been proposed to date. They have achieved considerably high processing speeds; however, their error rates are not as good as those of software programs, because simple local matching algorithms have been widely used in those systems. In our system, sophisticated local matching algorithms (CA and FLC) that are suitable for FPGA implementation are used to achieve low error rate while maintaining the high processing speed. We evaluate the performance of our circuit on Xilinx Virtex-6 FPGAs. Its error rate is comparable to that of top-level software algorithms, and its processing speed is nearly 2 clock cycles per pixel, which reaches 507.9 fps for 640\u00d7480 pixel images." }, { "instance_id": "R28099xR27960", "comparison_id": "R28099", "paper_id": "R27960", "text": "Efficient Disparity Estimation Using Hierarchical Bilateral Disparity Structure Based Graph Cut Algorithm With a Foreground Boundary Refinement Mechanism The disparity estimation problem is commonly solved using graph cut (GC) methods, in which the disparity assignment problem is transformed to one of minimizing a global energy function. Although such an approach yields an accurate disparity map, the computational cost is relatively high. Accordingly, this paper proposes a hierarchical bilateral disparity structure (HBDS) algorithm in which the efficiency of the GC method is improved without any loss in the disparity estimation performance by dividing all the disparity levels within the stereo image hierarchically into a series of bilateral disparity structures of increasing fineness. To address the well-known foreground fattening effect, a disparity refinement process is proposed comprising a fattening foreground region detection procedure followed by a disparity recovery process. The efficiency and accuracy of the HBDS-based GC algorithm are compared with those of the conventional GC method using benchmark stereo images selected from the Middlebury dataset. In addition, the general applicability of the proposed approach is demonstrated using several real-world stereo images." }, { "instance_id": "R28099xR28074", "comparison_id": "R28099", "paper_id": "R28074", "text": "Hardware implementation of a full HD real-time disparity estimation algorithm Disparity estimation is a common task in stereo vision and usually requires a high computational effort. High resolution disparity maps are necessary to provide a good image quality on autostereoscopic displays which deliver stereo content without the need for 3D glasses. In this paper, an FPGA architecture for a disparity estimation algorithm is proposed, that is capable of processing high-definition content in real-time. The resulting architecture is efficient in terms of power consumption and can be easily scaled to support higher resolutions." }, { "instance_id": "R28099xR27884", "comparison_id": "R28099", "paper_id": "R27884", "text": "Accurate and Efficient Cost Aggregation Strategy for Stereo Correspondence Based on Approximated Joint Bilateral Filtering Recent local state-of-the-art stereo algorithms based on variable cost aggregation strategies allow for inferring disparity maps comparable to those yielded by algorithms based on global optimization schemes. 
Unfortunately, though these results are excellent, they are obtained at the expense of high computational requirements that are comparable to or even higher than those required by global approaches. In this paper, we propose a cost aggregation strategy based on joint bilateral filtering and incremental calculation schemes that allow for efficient and accurate inference of disparity maps. Experimental comparison with state-of-the-art techniques shows the effectiveness of our proposal." }, { "instance_id": "R28099xR28044", "comparison_id": "R28099", "paper_id": "R28044", "text": "A modified census transform based on the neighborhood information for stereo matching algorithm Census transform is a non-parametric local transform. Its weakness is that the results rely too much on the center pixel. This paper proposes a modified Census transform based on the neighborhood information for stereo matching. By improving the classic Census transform, the new technique utilizes more bits to represent the differences between the pixel and its neighborhood information. The result image of the modified Census transform has more detailed information at depth discontinuities. After stereo correspondence, sub-pixel interpolation and disparity refinement, a better dense disparity map can be obtained. The experiments show that the proposed algorithm has a simple mechanism and strong robustness. It can improve the accuracy of matching and is applicable to hardware systems." }, { "instance_id": "R28140xR28129", "comparison_id": "R28140", "paper_id": "R28129", "text": "Gangliocytic Paraganglioma: Case Report and Review of the Literature Gangliocytic paraganglioma is a rare tumor, which occurs nearly exclusively in the second portion of the duodenum. Generally, this tumor has a benign clinical course, although rarely, it may recur or metastasize to regional lymph nodes. Only one case with distant metastasis has been reported. We present a case of duodenal gangliocytic paraganglioma treated first by local resection followed by pylorus-preserving pancreaticoduodenectomy. Examination of the first specimen revealed focal nuclear pleomorphism and mitotic activity, in addition to the presence of three characteristic histologic components: epithelioid, ganglion, and spindle cell. In the subsequent pancreaticoduodenectomy specimen, there was no residual tumor identified in the periampullary area, but metastatic gangliocytic paraganglioma was present in two of seven lymph nodes. This case report confirms the malignant potential of this tumor. We review the published literature on gangliocytic paragangliomas pursuing a malignant course. We conclude that surgical therapy of these neoplasms should not be limited to local resection, as disease recurrence, lymph node involvement, and rarely distant metastasis may occur." }, { "instance_id": "R28140xR28104", "comparison_id": "R28140", "paper_id": "R28104", "text": "A Metastatic Endocrine-Neurogenic Tumor of the Ampulla of Vater with Multiple Endocrine Immunoreaction Malignant Paraganglioma? The present case report demonstrates the history of a 50-year-old man with a mixed endocrine-neurogenous tumor of the ampulla of Vater. The tumor was localized endoscopically after an attack of melena. There were no signs of endocrinopathy. A local resection with suturing of the pancreatic duct was performed. 
Morphologically, there were two different tissue types (neurogenous and carcinoid-like) with numerous cells and nerve fibers reacting immunohistochemically with somatostatin and neurotensin antisera: some immunoreactivity to PP-antibodies was observed. Still, after 20 months, the patient seems to have been cured by local resection." }, { "instance_id": "R28140xR28132", "comparison_id": "R28140", "paper_id": "R28132", "text": "An unusual case of duodenal obstruction-gangliocytic paraganglioma Gangliocytic paragangliomas are rare tumors located in the gastrointestinal tract that are considered to be benign. They are composed of spindle-shaped cells, epithelioid cells, and ganglion-like cells. They usually present with abdominal pain, and/or gastrointestinal bleeding, and occasionally with obstructive jaundice. We report a case of obstruction in a 17-year-old female, which on histology was found to be a gangliocytic paraganglioma, with an extremely unusual presentation. Intraoperatively, the patient was found to have local tumor extension and regional lymph node invasion, and so she underwent a pylorus-preserving pancreaticoduodenectomy, with local lymph node clearance. We discuss the management of this unusual case and review the literature." }, { "instance_id": "R28140xR28124", "comparison_id": "R28140", "paper_id": "R28124", "text": "Paraganglioma of the ampulla of vater: a potentially malignant neoplasm Paragangliomas are rare tumours originating from neuroectodermic remnants and are usually considered as benign. We present two cases of paraganglioma of the ampulla of Vater that were treated surgically by pancreaticoduodenectomy. In one case, histopathology revealed malignant characteristics of the tumour with invasion of the pancreas and simultaneous duodenal lymph\u2010node metastases. Both patients had a favourable outcome without disease recurrence at 40 and 44 months postoperatively. Only 21 cases of ampullary paraganglioma have been reported in the literature, 7 of them with malignant characteristics. In conclusion, paragangliomas of the ampulla of Vater have malignant potential. Surgical therapy of these tumours should not be limited to local resection, as disease recurrence and lymph node involvement have been reported. We propose that paragangliomas of the ampulla of Vater should be operated by cephalic pancreaticoduodenectomy, which allows long\u2010term and disease\u2010free survival." }, { "instance_id": "R28191xR28179", "comparison_id": "R28191", "paper_id": "R28179", "text": "An optimal containership slot allocation for liner shipping revenue management In the competitive liner shipping market, carriers may utilize revenue management systems to increase profits by using slot allocation and pricing. In this paper, related research on revenue management for transportation industries is reviewed. A conceptual model for liner shipping revenue management (LSRM) is proposed and a slot allocation model is formulated through mathematical programming to maximize freight contribution. We illustrate this slot allocation model with a case study of a Taiwan liner shipping company and the results show the applicability and better performances than the previous allocation used in practice." 
}, { "instance_id": "R28191xR28144", "comparison_id": "R28191", "paper_id": "R28144", "text": "A frequency-based maritime container assignment model This paper transfers the classic frequency-based transit assignment method of Spiess and Florian to containers demonstrating its promise as the basis for a global maritime container assignment model. In this model, containers are carried by shipping lines operating strings (or port rotations) with given service frequencies. An origin-destination matrix of full containers is assigned to these strings to minimize sailing time plus container dwell time at the origin port and any intermediate transhipment ports. This necessitated two significant model extensions. The first involves the repositioning of empty containers so that a net outflow of full containers from any port is balanced by a net inflow of empty containers, and vice versa. As with full containers, empty containers are repositioned to minimize the sum of sailing and dwell time, with a facility to discount the dwell time of empty containers in recognition of the absence of inventory. The second involves the inclusion of an upper limit to the maximum number of container moves per unit time at any port. The dual variable for this constraint provides a shadow price, or surcharge, for loading or unloading a container at a congested port. Insight into the interpretation of the dual variables is given by proposition and proof. Model behaviour is illustrated by a simple numerical example. The paper concludes by considering the next steps toward realising a container assignment model that can, amongst other things, support the assessment of supply chain vulnerability to maritime disruptions." }, { "instance_id": "R28191xR28166", "comparison_id": "R28191", "paper_id": "R28166", "text": "Seasonal slot allocation planning for a container liner shipping service This research addresses a slot allocation planning problem of the container shipping company for satisfying the estimated seasonal demands on a liner service. We explore in detail the influenced factors of planning and construct a quantitative model for the optimum allocation of the ship\u2019s slot spaces. An integer programming model is formulated to maximize the potential profits per round trip voyage for a liner company, and a real life example of an eastern Asia short sea service has been studied. Analysis results reveal that containers with the higher contributions like reefers and 40 feet dry containers have priorities to be allocated more than others, but not all because of satisfying necessary operational constraints. Our model is not only providing a higher space utilization rate and more detailed allocation results, but also helpful for the ship size assessment in long-term planning." }, { "instance_id": "R28191xR28185", "comparison_id": "R28191", "paper_id": "R28185", "text": "Robust optimization model for resource allocation of container shipping lines Abstract The operating efficiency of container shipping lines depends on proper resource allocation of container shipping. A deterministic model was developed for shipping lines based on the equilibrium principle. The objective was to optimize the resource allocation for container lines considering ship size, container deployment, and slot allocation. The deterministic model was then expanded to a robust optimization model accounting for the uncertain factors, while ship size was treated as the design variable and slot allocation as the control variable. 
The effectiveness of the proposed model is demonstrated using a pendulum shipping line as an example. The results indicate that infeasible solutions will increase and the model robustness will be enhanced by an increased penalty coefficient and the solution robustness will be enhanced by increasing the preference coefficient. The optimization model simultaneously considers demand uncertainty, model robustness, and risk preference of the decision maker to agree better with actual practices." }, { "instance_id": "R28235xR28205", "comparison_id": "R28235", "paper_id": "R28205", "text": "Maritime repositioning of empty containers under uncertain port disruptions This paper addresses the problem of repositioning empty containers in maritime networks under possible port disruptions. Since drastically different futures may occur, the decision making process for dealing with this problem cannot ignore the uncertain nature of its parameters. In this paper, we consider the uncertainty of relevant problem data by a stochastic programming approach, in which different scenarios are included in a multi-scenario optimization model and linked by non-anticipativity conditions. Numerical experiments show that the multi-scenario solutions provide a hedge against uncertainty when compared to deterministic decisions and exhibit some forms of robustness, which mitigate the risks of not meeting empty container demand." }, { "instance_id": "R28235xR28220", "comparison_id": "R28235", "paper_id": "R28220", "text": "Empty Container Management in Cyclic Shipping Routes This paper considers the empty container management problem in a cyclic shipping route. The objective is to seek the optimal empty container repositioning policy in a dynamic and stochastic situation by minimising the expected total costs consisting of inventory holding costs, demand lost-sale costs, lifting-on and lifting-off charges, and container transportation costs. A three-phase threshold control policy is developed to reposition empty containers in cyclic routes. The threshold values are determined based on the information of average customer demands and their variability. A non-repositioning policy and three other heuristic policies are introduced for purposes of comparison. Simulation is used to evaluate the performance of empty repositioning policies. A range of numerical examples with different demand patterns; degrees of uncertainty, and fleet sizes demonstrate that the threshold policy significantly outperforms the heuristic policies." }, { "instance_id": "R28235xR28202", "comparison_id": "R28235", "paper_id": "R28202", "text": "The effect of multi-scenario policies on empty container repositioning This study addresses a repositioning problem where some ports impose several restrictions on the storage of empty containers, sailing distances are short, information becomes available close to decision times and decisions are made in a highly uncertain environment. Although point forecasts are available and probabilistic distributions can be derived from the historical database, specific changes in the operational environment may give rise to the realization of parameters that were never observed in the past. Since historical statistics are useless for decision-making processes, we propose a time- extended multi-scenario optimization model in which scenarios are generated by shipping company opinions. We then show the importance of adopting multi-scenario policies compared to standard deterministic ones." 
}, { "instance_id": "R28235xR28226", "comparison_id": "R28235", "paper_id": "R28226", "text": "Cargo routing and empty container repositioning in multiple shipping service routes This paper considers the problem of joint cargo routing and empty container repositioning at the operational level for a shipping network with multiple service routes, multiple deployed vessels and multiple regular voyages. The objective is to minimize the total relevant costs in the planning horizon including: container lifting on/off costs at ports, customer demand backlog costs, the demurrage (or waiting) costs at the transhipment ports for temporarily storing laden containers, the empty container inventory costs at ports, and the empty container transportation costs. The laden container routing from the original port to the destination port is limited with at most three service routes. Two solution methods are proposed to solve the optimization problem. The first is a two-stage shortest-path based integer programming method, which combines a cargo routing algorithm with an integer programming of the dynamic system. The second is a two-stage heuristic-rules based integer programming method, which combines an integer programming of the static system with a heuristic implementation algorithm in dynamic system. The two solution methods are applied to two case studies with 30 different scenarios and compared with a practical policy. The results show that two solution methods perform substantially better than the practical policy. The shortest-path based method is preferable for relatively small-scale problems as it yields slightly better solution than the heuristic-rules based method. However, the heuristic-rules based method has advantages in its applicability to large-scale realistic systems while producing good performance, to which the shortest-path based method may be computationally inapplicable. Moreover, the heuristic-rules based method can also be applied to stochastic situations because its second stage is rule-based and dynamical." }, { "instance_id": "R28333xR28313", "comparison_id": "R28333", "paper_id": "R28313", "text": "Liner ship route schedule design with sea contingency time and port time uncertainty This paper deals with a tactical-level liner ship route schedule design problem which aims to determine the arrival time of a ship at each portcall on a ship route and the sailing speed function on each voyage leg by taking into account time uncertainties at sea and at port. It first derives the optimality condition for the sailing speed function with sea contingency and subsequently demonstrates the convexity of the bunker consumption function. A mixed-integer non-linear stochastic programming model is developed for the proposed liner ship route schedule design problem by minimizing the ship cost and expected bunker cost while maintaining a required transit time service level. In view of the special structure of the model, an exact cutting-plane based solution algorithm is proposed. Numerical experiments on real data provided by a global liner shipping company demonstrate that the proposed algorithm can efficiently solve real-case problems." }, { "instance_id": "R28333xR28250", "comparison_id": "R28333", "paper_id": "R28250", "text": "Container vessel scheduling with bi-directional flows We consider a strongly NP-hard container vessel scheduling problem with bi-directional flows. We show that a special case of it is solvable as a linear program. 
This property is then used to design a heuristic for the general case." }, { "instance_id": "R28333xR28320", "comparison_id": "R28333", "paper_id": "R28320", "text": "Bunker consumption optimization methods in shipping: A critical review and extensions It is crucial nowadays for shipping companies to reduce bunker consumption while maintaining a certain level of shipping service in view of the high bunker price and concerned shipping emissions. After introducing the three bunker consumption optimization contexts: minimization of total operating cost, minimization of emission and collaborative mechanisms between port operators and shipping companies, this paper presents a critical and timely literature review on mathematical solution methods for bunker consumption optimization problems. Several novel bunker consumption optimization methods are subsequently proposed. The applicability, optimality, and efficiency of the existing and newly proposed methods are also analyzed. This paper provides technical guidelines and insights for researchers and practitioners dealing with the bunker consumption issues." }, { "instance_id": "R28333xR28260", "comparison_id": "R28333", "paper_id": "R28260", "text": "Analysis of an exact algorithm for the vessel speed optimization problem Increased fuel costs together with environmental concerns have led shipping companies to consider the optimization of vessel speeds. Given a fixed sequence of port calls, each with a time window, and fuel cost as a convex function of vessel speed, we show that optimal speeds can be found in quadratic time. \u00a9 2013 Wiley Periodicals, Inc. NETWORKS, 2013" }, { "instance_id": "R28333xR28272", "comparison_id": "R28333", "paper_id": "R28272", "text": "Liner shipping service network design with empty container repositioning This paper proposes a liner shipping service network design problem with combined hub-and-spoke and multi-port-calling operations and empty container repositioning. It first introduces a novel concept - segment - defined as a pair of ordered ports served by one shipping line and subsequently develops a mixed-integer linear programming model for the proposed problem. Extensive numerical experiments based on realistic Asia-Europe-Oceania shipping operations show that the proposed model can be efficiently solved by CPLEX for real-case problems. They also demonstrate the potential for large cost-savings over pure hub-and-spoke or pure multi-port-calling network, or network without considering empty container repositioning." }, { "instance_id": "R28333xR28280", "comparison_id": "R28333", "paper_id": "R28280", "text": "Ship assignment with hub and spoke constraints As the shipping industry enters the future, an increasing number of technological developments are being introduced into this market. This has led to a significant change in business operations, such as the innovative design of hub and spoke systems, resulting in cargo consolidation and a better use of the ship's capacity. In the light of this new scenario, the authors present a successful application of integer linear programming to support the decision-making process of assigning ships to previously defined voyages \u2014 the rosters. The tool used to build the final models was the MS-Excel Solver (Microsoft\u00ae Excel 97 SR-2, 1997), a package that enabled the real case studies addressed to be solved. 
The results of the experiment prompted the authors to favour the assignment of very small fleets, as opposed to the existing high number of ships employed in such real trades," }, { "instance_id": "R28333xR28317", "comparison_id": "R28333", "paper_id": "R28317", "text": "Containership scheduling with transit-time-sensitive container shipment demand This paper examines the optimal containership schedule with transit-time-sensitive demand that is assumed to be a decreasing continuous function of transit time. A mixed-integer nonlinear non-convex optimization model is first formulated to maximize the total profit of a ship route. In view of the problem structure, a branch-and-bound based holistic solution method is developed. It is rigorously demonstrated that this solution method can obtain an e-optimal solution in a finite number of iterations for general forms of transit-time-sensitive demand. Computational results based on a trans-Pacific liner ship route demonstrate the applicability and efficiency of the solution method." }, { "instance_id": "R28333xR28266", "comparison_id": "R28333", "paper_id": "R28266", "text": "Tactical planning models for managing container flow and ship deployment This paper addresses two practical problems from a liner shipping company, i.e. the container flow management problem and the ship deployment problem, at the tactical planning level. A sequential model and a joint optimisation model are formulated to solve the problems. Our results show that the company should implement the joint optimisation model at the tactical planning level to improve the shipping capacity utilisation rather than the sequential model used in the current practice. Repositioning empty containers also need to be considered jointly with the nonempty container flow at the tactical planning level. Some important managerial insights into the operational and business processes are gained." }, { "instance_id": "R28333xR28298", "comparison_id": "R28333", "paper_id": "R28298", "text": "Ship scheduling and cost analysis for route planning in liner shipping Liner shipping companies can benefit significantly by improving ship scheduling and cost analysis in service route planning by systematic methods. This paper proposes a dynamic programming (DP) model for ship scheduling and identifies cost items relevant to the planning of a service route, which can help planners make better scheduling decisions under berth time-window constraints, as well as estimate more accurately voyage fixed costs and freight variable costs in liner service route planning. The proposed model pursues an optimal scheduling strategy including cruising speed and quay crane dispatching decisions, vis \u00e0 vis tentative and rough schedule arrangements. Additionally, the model can be extended to cases of integrating one company\u2019s \u2013 or strategic alliance \u2013 partners\u2019 service networks, in order to gain more efficient hub-and-spoke operations, tighter transshipment and better level-of-service." }, { "instance_id": "R28369xR28364", "comparison_id": "R28369", "paper_id": "R28364", "text": "Container shipping on the Northern Sea Route Since the beginning of the 20th century, the principal commercial maritime routes have changed very little. With global warming, the Northern Sea Route (NSR) has opened up as a possible avenue of trade in containerized products between Asia and Europe. This paper verifies the technical and economic feasibility of regular container transport along the NSR. 
By adopting a model schedule between Shanghai and Hamburg, we are able to analyze the relative costs of various axes in the Asia\u2013Europe transport network, including the NSR. While shipping through the Suez Canal is still by far the least expensive option, the NSR and Trans-Siberian Railway appear to be roughly equivalent second-tier alternatives." }, { "instance_id": "R28369xR28361", "comparison_id": "R28369", "paper_id": "R28361", "text": "Studying port selection on liner routes: An approach from logistics perspective The research aims to study the port selection in liner shipping. The central work is to set up a model to deal with port choice decisions. The model solves three matters: ports on a ship\u2019s route; the order of selected ports and loading/unloading ports for each shipment. Its objective is to minimize total cost including ship cost, port tariff, inland transport cost and inventory cost. The model has been applied in real data, with cargo flows between the USA and Northern Europe. Afterwards, two sensitive analyses are considered. The first assesses the impact of a number of port calls on the total cost which relates closely to the viability of two service patterns: multi ports and hub & spoke. The second analyzes the efficiency of large vessels in the scope of a logistics network. The overriding result of this research is to indicate the influence of logistics factors in the decision of port choice. The research emphasizes the necessity to combine different factors when dealing with this topic, or else a result can be one-sided." }, { "instance_id": "R28369xR28346", "comparison_id": "R28369", "paper_id": "R28346", "text": "Planning the route of container ships: A fuzzy genetic approach Nowadays, liner shipping has become a constant operation model for shipping companies, and scheduling is an important issue for operation. It is well-known that a nice plan for route of container ships will bring long-term profit to companies. In the earlier works, the market demand is assumed to be crisp. However, the market demand could be uncertain in real world. Fuzzy sets theory is frequently used to deal with the uncertainty problem. On the other hand, genetic algorithm owns powerful multi-objective searching capability and it can extensively find optimal solutions through continuous copy, crossover, and mutation. Due to these advantages, in this paper, a fuzzy genetic algorithm for liner shipping planning is proposed. This algorithm not only takes market demand, shipping and berthing time of container ships into account simultaneously but also is capable of finding the most suitable route of container ships." }, { "instance_id": "R28369xR28344", "comparison_id": "R28369", "paper_id": "R28344", "text": "Designing container shipping network under changing demand and freight rates This paper focuses on the optimization of container shipping network and its operations under changing cargo demand and freight rates. The problem is formulated as a mixed integer non-linear programming problem (MINP) with an objective of maximizing the average unit ship-slot profit at three stages using analytical methodology. The issues such as empty container repositioning, ship-slot allocating, ship sizing, and container configuration are simultaneously considered based on a series of the matrices of demand for a year. To solve the model, a bi-level genetic algorithm based method is proposed. Finally, numerical experiments are provided to illustrate the validity of the proposed model and algorithms. 
The obtained results show that the suggested model can provide a more realistic solution to the issues on the basis of changing demand and freight rates and arrange a more effective approach to the optimization of container shipping network structures and operations than does the model based on the average demand." }, { "instance_id": "R28407xR28403", "comparison_id": "R28407", "paper_id": "R28403", "text": "Study on a Liner Shipping Network Design Considering Empty Container Reposition Empty container allocation problems arise due to imbalance on trades. Imbalanced trade is a common fact in liner shipping, creating the necessity of repositioning empty containers from import-dominant ports to export-dominant ports in an economic and efficient way. The present work configures a liner shipping network, by performing the routes assignment and their integration to maximize the profit for a liner shipping company. The empty container repositioning problem is expressly taken into account in the whole process. By considering the empty container repositioning problem in the network design, the choice of routes will also be influenced by the empty container flow, resulting in an optimum network, both for loaded and empty cargo. The Liner Shipping Network Design Program (LS-NET program) will define the best set of routes among a set of candidate routes, the best composition of the fleet for the network and configure the empty container repositioning network. Further, a network of Asian ports was studied and the results obtained show that considering the empty container allocation problem in the designing process can influence the final configuration of the network." }, { "instance_id": "R28407xR28383", "comparison_id": "R28407", "paper_id": "R28383", "text": "A Base Integer Programming Model and Benchmark Suite for Liner-Shipping Network Design The liner-shipping network design problem is to create a set of nonsimple cyclic sailing routes for a designated fleet of container vessels that jointly transports multiple commodities. The objective is to maximize the revenue of cargo transport while minimizing the costs of operation. The potential for making cost-effective and energy-efficient liner-shipping networks using operations research (OR) is huge and neglected. The implementation of logistic planning tools based upon OR has enhanced performance of airlines, railways, and general transportation companies, but within the field of liner shipping, applications of OR are scarce. We believe that access to domain knowledge and data is a barrier for researchers to approach the important liner-shipping network design problem. The purpose of the benchmark suite and the paper at hand is to provide easy access to the domain and the data sources of liner shipping for OR researchers in general. We describe and analyze the liner-shipping domain applied to network design and present a rich integer programming model based on services that constitute the fixed schedule of a liner shipping company. We prove the liner-shipping network design problem to be strongly NP-hard. A benchmark suite of data instances to reflect the business structure of a global liner shipping network is presented. The design of the benchmark suite is discussed in relation to industry standards, business rules, and mathematical programming. The data are based on real-life data from the largest global liner-shipping company, Maersk Line, and supplemented by data from several industry and public stakeholders. 
Computational results yielding the first best known solutions for six of the seven benchmark instances are provided using a heuristic combining tabu search and heuristic column generation." }, { "instance_id": "R28407xR28394", "comparison_id": "R28407", "paper_id": "R28394", "text": "Planning and scheduling for efficiency in liner shipping Analysis of the capacity required to serve a specific trade route, with application to Australia-North American West Coast trade." }, { "instance_id": "R28407xR28380", "comparison_id": "R28407", "paper_id": "R28380", "text": "A Matheuristic for the Liner Shipping Network Design Problem with Transit Time Restrictions We present a mathematical model for the liner shipping network design problem with transit time restrictions on the cargo flow. We extend an existing matheuristic for the liner shipping network design problem to consider transit time restrictions. The matheuristic is an improvement heuristic, where an integer program is solved iteratively as a move operator in a large-scale neighborhood search. To assess the effects of insertions/removals of port calls, flow and revenue changes are estimated for relevant commodities along with an estimation of the change in the vessel cost. Computational results on the benchmark suite LINER-LIB are reported, showing profitable networks for most instances. We provide insights on causes for rejecting demand and the average speed per vessel class in the solutions obtained." }, { "instance_id": "R28446xR28428", "comparison_id": "R28446", "paper_id": "R28428", "text": "From multi-porting to a hub port configuration: the South African container port system in transition This paper addresses the tension that exists between multi-porting and a hub configuration in the South African container port system. We apply a generalised cost model to two alternative network configurations: the actual situation of multi-porting and an alternative hub port configuration. The results demonstrate that South African import and export flows are likely to face small cost increases when the port system moves to a hub port configuration. However, from a ship operator's perspective, the hub configuration is more attractive given considerable cost reductions in marine charges, port dues and ship costs. The paper concludes by underlining Transnet's pivotal role in the attractiveness of the hub option and the need for a wider Sub-Saharan strategy in view of making the hub port concept work." }, { "instance_id": "R28446xR28439", "comparison_id": "R28446", "paper_id": "R28439", "text": "Dynamic programming of port position and scale in the hierarchized container ports network A hierarchized container ports network, with several super hubs and many multilevel hub ports, will be established, mainly serving transshipment and carrying out most of its business in the hub-spoke mode. This paper sums up a programming model, in which the elementary statistic units, cost and expense of every phase of any shipment are the straight objects, and the minimum cost of the whole network is taken as the objective. This is established based on a dynamic system to make out the hierarchical structure of the container ports network, i.e. the trunk hub and feeder hubs can be planned in an economic zone, then the optimal scale vector can also be obtained for all container ports concerned with the network. The vector is a standard measurement to decide a port's position and their scale distribution in the whole network." 
}, { "instance_id": "R28446xR28421", "comparison_id": "R28446", "paper_id": "R28421", "text": "The impact of hub and spoke networks in the Mediterranean perculiarity The tendency towards consolidation of the liner shipping companies requires the development of the 'hub and spokes' model in the Mediterranean as well. In this framework, we argue that the network model will be reinforced as compared to the 'point to point' model. However, only a few studies are available to confirm the advantageousness of this choice. We therefore propose a methodological analysis of transport costs in a 'hub and spokes' system in comparison to a 'point to point' system, both in general terms and in the specific framework of the Mediterranean. The relative costs of the two alternatives were simulated utilizing the experience of the Mediterranean hub port of Gioia Tauro and with reference to the most recent literature on this subject." }, { "instance_id": "R28487xR28464", "comparison_id": "R28487", "paper_id": "R28464", "text": "A Containerized Liner Routing in Eastern Asia New partnerships has been made in containerized liner services. This likely results in drastic changes in ship size and hub location in Eastern Asia. In this study we address strategies of the containerized liner services by using a mathematical programming with two objectives of shipping company and customer." }, { "instance_id": "R28487xR28472", "comparison_id": "R28487", "paper_id": "R28472", "text": "Multi-port vs. Hub-and-Spoke port calls by containerships This paper addresses the design of container liner shipping networks taking into consideration container management issues including empty container repositioning. We examine two typical service networks with different ship sizes: multi-port calling by conventional ship size and hub-and-spoke by mega-ship. The entire solution process is performed in two phases: the service network design and container distribution. A wide variety of numerical experiments are conducted for the Asia-Europe and Asia-North America trade lanes. In most scenarios the multi-port calling is superior in terms of total cost, while the hub-and-spoke is more advantageous in the European trade for a costly shipping company." }, { "instance_id": "R28487xR28481", "comparison_id": "R28487", "paper_id": "R28481", "text": "The containership feeder network design problem: the new Izmir port as hub in the Black sea Global containership liners design their transportation service as hub-and-spoke networks to increase the market linkages and reduce the average operational costs by using indirect connections. These indirect connections from the hub ports to the feeder ports called feeder networks are serviced by feeder ships. The feeder network design (FND) problem determines the smallest feeder ship fleet size with routes to minimize operational costs. Therefore, this problem could be described as capacitated vehicle routing problem with simultaneous pick-ups and deliveries with time limit. In our investigation, a perturbation based variable neighborhood search (PVNS) approach is developed to solve the FND problem which determines the fleet mix and sequence of port calls. The proposed model implementation has been tested using a case study from the Black Sea region with the new Izmir port (Candarli port) as hub. Moreover, a range of scenarios and parameter values are used in order to test the robustness of the approach through sensitivity analyses. 
Numerical results show that the new Izmir port has great potential as hub port in the Black Sea region." }, { "instance_id": "R28487xR28475", "comparison_id": "R28487", "paper_id": "R28475", "text": "Containership routing with time deadlines and simultaneous deliveries and pick-ups In this paper we seek to determine optimal routes for a containership fleet performing pick-ups and deliveries between a hub and several spoke ports. A capacitated vehicle routing problem with pick-ups, deliveries and time deadlines is formulated and solved using a hybrid genetic algorithm for establishing routes for a dedicated containership fleet. Results on the performance of the algorithm and the feasibility of the approach show that a relatively small fleet of containerships could provide efficient services within deadlines. Moreover, through sensitivity analysis we discuss performance robustness and consistency of the developed algorithm under a variety of problem settings and parameters values." }, { "instance_id": "R28614xR28535", "comparison_id": "R28614", "paper_id": "R28535", "text": "Undifferentiated (embryonal) sarcoma of the liver.Report of 31 cases Thirty\u2010one cases of undifferentiated (embryonal) sarcoma of the liver are presented. The tumor is found predominantly in the pediatric age group, the majority of patients (51.6%) being between 6 and 10 years of age. An abdominal mass and pain are the usual presenting symptoms. Radiographic examination is nonspecific except to demonstrate a space\u2010occupying lesion of the liver. The tumors are large, single, usually globular and well demarcated, and have multiple cystic areas of hemorrhage, necrosis, and gelatinous degeneration. Histologic examination shows a pseudocapsule partially separating the normal liver from undifferentiated sarcomatous cells that, near the periphery of the tumor, surround entrapped hyperplastic or degenerating bile duct\u2010like structures. Eosinophilic globules that are PAS positive are usually found within and adjacent to tumor cells. Areas of necrosis and hemorrhage are prominent. The prognosis is poor, with a median survival of less than 1 year following diagnosis." }, { "instance_id": "R28614xR28574", "comparison_id": "R28614", "paper_id": "R28574", "text": "Undifferentiated sarcoma of the liver in childhood Undifferentiated (embryonal) sarcoma of the liver (UESL) is a rare childhood hepatic tumor, and it is generally considered an aggressive neoplasm with an unfavorable prognosis." }, { "instance_id": "R28614xR28591", "comparison_id": "R28614", "paper_id": "R28591", "text": "Pregnancy and Delivery in a Patient With Metastatic Embryonal Sarcoma of the Liver BACKGROUND Embryonal sarcoma of the liver is a rare undifferentiated mesenchymal neoplasm with a grave prognosis. CASE We report a spontaneous uneventful pregnancy in a young woman with recurrent metastatic embryonal sarcoma of the liver after hepatectomy (twice) and radiochemotherapy. The patient chose to continue pregnancy despite her general condition and possible hazards to maternal and fetal well-being. She gave birth to a full-term healthy infant. Her disease recurred shortly after the delivery. CONCLUSION According to a computerized search of the National Library of Medicine database, from 1966 to present, using the search terms \u201cembryonal (or undifferentiated) sarcoma\u201d and \u201cliver\u201d without language restrictions, this is the first reported case of pregnancy and delivery in a patient with embryonal sarcoma of the liver. 
It illustrates the clinical and ethical dilemmas associated with this complicated condition." }, { "instance_id": "R28614xR28583", "comparison_id": "R28614", "paper_id": "R28583", "text": "Hepatic Undifferentiated (Embryonal) Sarcoma Arising in a Mesenchymal Hamartoma We report the case of a hepatic undifferentiated (embryonal) sarcoma (UES) arising within a mesenchymal hamartoma (MH) in a 15-year-old girl. Mapping of the tumor demonstrated a typical MH transforming gradually into a UES composed of anaplastic stromal cells. When evaluated by flow cytometry, the MH was diploid and the UES showed a prominent aneuploid peak. Karyotypic analysis of the UES showed structural alterations of chromosome 19, which have been implicated as a potential genetic marker of MH. The histogenesis of MH and UES is still debated, and reports of a relationship between them, although suggested on the basis of histomorphologic similarities, have never been convincing. The histologic, flow cytometric, and cytogenetic evidence reported herein suggests a link between these two hepatic tumors of the pediatric population." }, { "instance_id": "R28614xR28593", "comparison_id": "R28614", "paper_id": "R28593", "text": "Undifferentiated (embryonal) sarcoma of the liver in middle-aged adults: Smooth muscle differentiation determined by immunohistochemistry and electron microscopy Undifferentiated (embryonal) sarcoma of the liver (UESL) is a rare pediatric liver malignancy that is extremely uncommon in middle-aged individuals. We studied 2 cases of UESL in middle-aged adults (1 case in a 49-year-old woman and the other in a 62-year-old man) by histology, immunohistochemistry, and electron microscopy to clarify the cellular characteristics of this peculiar tumor. One tumor showed a mixture of spindle cells, polygonal cells, and multinucleated giant cells within a myxoid matrix and also revealed focal areas of a storiform pattern in a metastatic lesion. The other tumor was composed mainly of anaplastic large cells admixed with few fibrous or spindle-shaped components and many multinucleated giant cells. In both cases, some tumor cells contained eosinophilic hyaline globules that were diastase resistant and periodic acid-Schiff positive. Immunohistochemically, the tumor cells showed positive staining for smooth muscle markers, such as desmin, alpha-smooth muscle actin, and muscle-specific actin, and also for histiocytic markers, such as alpha-1-antitrypsin, alpha-1-antichymotrypsin, and CD68. Electron microscope examination revealed thin myofilaments with focal densities and intermediate filaments in the cytoplasm of tumor cells. Our studies suggest that UESL exhibits at least a partial smooth muscle phenotype in middle-aged adults, and this specific differentiation may be more common in this age group than in children. Tumor cells of UESL with smooth muscle differentiation in middle-aged adults show phenotypic diversity comparable to those of malignant fibrous histiocytoma with myofibroblastic differentiation." }, { "instance_id": "R28614xR28609", "comparison_id": "R28614", "paper_id": "R28609", "text": "Undifferentiated embryonal sarcoma of the liver mimicking acute appendicitis. Case report and review of the literature Abstract Background Undifferentiated embryonal sarcoma (UES) of liver is a rare malignant neoplasm, which affects mostly the pediatric population, accounting for 13% of pediatric hepatic malignancies; a few cases have been reported in adults. 
Case presentation We report a case of undifferentiated embryonal sarcoma of the liver in a 20-year-old Caucasian male. The patient was referred to us for further investigation after a laparotomy in a district hospital for spontaneous abdominal hemorrhage, which was due to a liver mass. After a through evaluation with computed tomography scan and magnetic resonance imaging of the liver and taking into consideration the previous history of the patient, it was decided to surgically explore the patient. Resection of I\u2013IV and VIII hepatic lobe. Patient developed disseminated intravascular coagulation one day after the surgery and died the next day. Conclusion It is a rare, highly malignant hepatic neoplasm, affecting almost exclusively the pediatric population. The prognosis is poor but recent evidence has shown that long-term survival is possible after complete surgical resection with or without postoperative chemotherapy." }, { "instance_id": "R28614xR28554", "comparison_id": "R28614", "paper_id": "R28554", "text": "Hepatic undifferentiated (embryonal) sarcoma and rhabdomyosarcoma in children. Results of therapy From July 1972 through September 1984, 8 of 44 children diagnosed as having primary malignant hepatic tumors, who were treated at St. Jude Children's Research Hospital, had undifferentiated (embryonal) sarcoma (five patients) or rhabdomyosarcoma (three patients). The natural history and response to multimodal therapy of these rare tumors are described. The pathologic material was reviewed and evidence for the differentiating potential of undifferentiated (embryonal) sarcoma is presented. At diagnosis, disease was restricted to the right lobe of the liver in three patients, was bilobar in four patients, and extended from the left lobe into the diaphragm in one patient. Lung metastases were present in two patients at diagnosis. All three patients with rhabdomyosarcoma had intrahepatic lesions without involvement of the biliary tree. Survival ranged from 6 to 73 months from diagnosis (median, 19.5 months); two patients are surviving disease\u2010free for 55+ and 73+ months, and one patient recently underwent resection of a recurrent pulmonary nodule 22 months from initial diagnosis. Three patients died of progressive intrahepatic and extrahepatic abdominal tumors, and two patients, who died of progressive pulmonary tumor, also had bone or brain metastasis but no recurrence of intra\u2010abdominal tumor. Six patients had objective evidence of response to chemotherapy. The authors suggest an aggressive multimodal approach to the treatment of these rare tumors in children." }, { "instance_id": "R28614xR28542", "comparison_id": "R28614", "paper_id": "R28542", "text": "Primary sarcoma of the liver Abstract A case of primary undifferentiated sarcoma of the liver in a 66-year-old woman is reported. The similarity of the presenting symptoms to those of a hepatic abscess is emphasized. The patient's response to palliative treatment by means of hepatic artery ligation and chemotherapy was excellent. We believe that this is the first report of such a response in a sarcoma localized to the liver. This therapy may well be useful in symptomatic patients in whom partial hepatectomy is not feasible." }, { "instance_id": "R28889xR28853", "comparison_id": "R28889", "paper_id": "R28853", "text": "Faster Fault Finding at Google using Multi-Objective Regression Test Optimisation Companies such as Google tend to develop products from one continually evolving core of code. 
Software is neither shipped, nor released in the traditional sense. It is simply made available, with dramatically compressed release cycles. This large scale rapid release environment creates challenges for the application of regression test optimisation techniques. This paper reports initial results from a partnership between Google and the CREST centre at UCL aimed at transferring techniques from the regression test optimisation literature into industrial practice. The results illustrate the industrial potential for these techniques: regression test time can be reduced by between 33%\u201382%, while retaining fault detection capability. Our experience also highlights the importance of a multi objective approach: optimising for coverage and time alone is insufficient; we have, at least, to additionally prioritise historical fault revelation." }, { "instance_id": "R28889xR28875", "comparison_id": "R28889", "paper_id": "R28875", "text": "Multiobjective Simulation Optimisation in Software Project Management Traditionally, simulation has been used by project managers in optimising decision making. However, current simulation packages only include simulation optimisation which considers a single objective (or multiple objectives combined into a single fitness function). This paper aims to describe an approach that consists of using multiobjective optimisation techniques via simulation in order to help software project managers find the best values for initial team size and schedule estimates for a given project so that cost, time and productivity are optimised. Using a System Dynamics (SD) simulation model of a software project, the sensitivity of the output variables regarding productivity, cost and schedule using different initial team size and schedule estimations is determined. The generated data is combined with a well-known multiobjective optimisation algorithm, NSGA-II, to find optimal solutions for the output variables. The NSGA-II algorithm was able to quickly converge to a set of optimal solutions composed of multiple and conflicting variables from a medium size software project simulation model. Multiobjective optimisation and SD simulation modeling are complementary techniques that can generate the Pareto front needed by project managers for decision making. Furthermore, visual representations of such solutions are intuitive and can help project managers in their decision making process." }, { "instance_id": "R28889xR28619", "comparison_id": "R28889", "paper_id": "R28619", "text": "The Multi-Objective Next Release Problem This paper is concerned with the Multi-Objective Next Release Problem (MONRP), a problem in search-based requirements engineering. Previous work has considered only single objective formulations. In the multi-objective formulation, there are at least two (possibly conflicting) objectives that the software engineer wishes to optimize. It is argued that the multi-objective formulation is more realistic, since requirements engineering is characterised by the presence of many complex and conflicting demands, for which the software engineer must find a suitable balance. The paper presents the results of an empirical study into the suitability of weighted and Pareto optimal genetic algorithms, together with the NSGA-II algorithm, presenting evidence to support the claim that NSGA-II is well suited to the MONRP. The paper also provides benchmark data to indicate the size above which the MONRP becomes non-trivial." 
}, { "instance_id": "R28889xR28870", "comparison_id": "R28889", "paper_id": "R28870", "text": "Software Project Portfolio Optimization with Advanced Multiobjective Evolutionary Algorithms Large software companies have to plan their project portfolio to maximize potential portfolio return and strategic alignment, while balancing various preferences, and considering limited resources. Project portfolio managers need methods and tools to find a good solution for complex project portfolios and multiobjective target criteria efficiently. However, software project portfolios are challenging to describe for optimization in a practical way that allows efficient optimization. In this paper we propose an approach to describe software project portfolios with a set of multiobjective criteria for portfolio managers using the COCOMO II model and introduce a multiobjective evolutionary approach, mPOEMS, to find the Pareto-optimal front efficiently. We evaluate the new approach with portfolios choosing from a set of 50 projects that follow the validated COCOMO II model criteria and compare the performance of the mPOEMS approach with state-of-the-art multiobjective optimization evolutionary approaches. Major results are as follows: the portfolio management approach was found usable and useful; the mPOEMS approach outperformed the other approaches." }, { "instance_id": "R28889xR28861", "comparison_id": "R28889", "paper_id": "R28861", "text": "A Multi-Objective Software Quality Classification Model Using Genetic Programming A key factor in the success of a software project is achieving the best-possible software reliability within the allotted time & budget. Classification models which provide a risk-based software quality prediction, such as fault-prone & not fault-prone, are effective in providing a focused software quality assurance endeavor. However, their usefulness largely depends on whether all the predicted fault-prone modules can be inspected or improved by the allocated software quality-improvement resources, and on the project-specific costs of misclassifications. Therefore, a practical goal of calibrating classification models is to lower the expected cost of misclassification while providing a cost-effective use of the available software quality-improvement resources. This paper presents a genetic programming-based decision tree model which facilitates a multi-objective optimization in the context of the software quality classification problem. The first objective is to minimize the \"Modified Expected Cost of Misclassification\", which is our recently proposed goal-oriented measure for selecting & evaluating classification models. The second objective is to optimize the number of predicted fault-prone modules such that it is equal to the number of modules which can be inspected by the allocated resources. Some commonly used classification techniques, such as logistic regression, decision trees, and analogy-based reasoning, are not suited for directly optimizing multi-objective criteria. In contrast, genetic programming is particularly suited for the multi-objective optimization problem. 
An empirical case study of a real-world industrial software system demonstrates the promising results, and the usefulness of the proposed model" }, { "instance_id": "R28889xR28629", "comparison_id": "R28889", "paper_id": "R28629", "text": "Today/Future Importance Analysis SBSE techniques have been widely applied to requirements selection and prioritization problems in order to ascertain a suitable set of requirements for the next release of a system. Unfortunately, it has been widely observed that requirements tend to be changed as the development process proceeds and what is suitable for today, may not serve well into the future. Though SBSE has been widely applied to requirements analysis, there has been no previous work that seeks to balance the requirements needs of today with those of the future. This paper addresses this problem. It introduces a multi-objective formulation of the problem which is implemented using multi-objective Pareto optimal evolutionary algorithms. The paper presents the results of experiments on both synthetic and real world data." }, { "instance_id": "R28889xR28859", "comparison_id": "R28889", "paper_id": "R28859", "text": "Evolutionary Algorithms for the Multi-objective Test Data Generation Problem Automatic test data generation is a very popular domain in the field of search\u2010based software engineering. Traditionally, the main goal has been to maximize coverage. However, other objectives can be defined, such as the oracle cost, which is the cost of executing the entire test suite and the cost of checking the system behavior. Indeed, in very large software systems, the cost spent to test the system can be an issue, and then it makes sense by considering two conflicting objectives: maximizing the coverage and minimizing the oracle cost. This is what we did in this paper. We mainly compared two approaches to deal with the multi\u2010objective test data generation problem: a direct multi\u2010objective approach and a combination of a mono\u2010objective algorithm together with multi\u2010objective test case selection optimization. Concretely, in this work, we used four state\u2010of\u2010the\u2010art multi\u2010objective algorithms and two mono\u2010objective evolutionary algorithms followed by a multi\u2010objective test case selection based on Pareto efficiency. The experimental analysis compares these techniques on two different benchmarks. The first one is composed of 800 Java programs created through a program generator. The second benchmark is composed of 13 real programs extracted from the literature. In the direct multi\u2010objective approach, the results indicate that the oracle cost can be properly optimized; however, the full branch coverage of the system poses a great challenge. Regarding the mono\u2010objective algorithms, although they need a second phase of test case selection for reducing the oracle cost, they are very effective in maximizing the branch coverage. Copyright \u00a9 2011 John Wiley & Sons, Ltd." }, { "instance_id": "R28889xR28880", "comparison_id": "R28889", "paper_id": "R28880", "text": "Single and Multi Objective Genetic Programming for Software Development Effort Estimation The idea of exploiting Genetic Programming (GP) to estimate software development effort is based on the observation that the effort estimation problem can be formulated as an optimization problem. Indeed, among the possible models, we have to identify the one providing the most accurate estimates. 
To this end a suitable measure to evaluate and compare different models is needed. However, in the context of effort estimation there does not exist a unique measure that allows us to compare different models but several different criteria (e.g., MMRE, Pred(25), MdMRE) have been proposed. Aiming at getting an insight on the effects of using different measures as fitness function, in this paper we analyzed the performance of GP using each of the five most used evaluation criteria. Moreover, we designed a Multi-Objective Genetic Programming (MOGP) based on Pareto optimality to simultaneously optimize the five evaluation measures and analyzed whether MOGP is able to build estimation models more accurate than those obtained using GP. The results of the empirical analysis, carried out using three publicly available datasets, showed that the choice of the fitness function significantly affects the estimation accuracy of the models built with GP and the use of some fitness functions allowed GP to get estimation accuracy comparable with the ones provided by MOGP." }, { "instance_id": "R28889xR28633", "comparison_id": "R28889", "paper_id": "R28633", "text": "A Multiobjective Optimization Approach to the Software Release Planning with Undefined Number of Releases and Interdependent Requirements Release Planning is an important and complex activity in software development. It involves several aspects related to which functionalities are going to be developed in each release of the system. Consistent planning must meet the customers\u2019 needs and comply with existing constraints. Optimization techniques have been successfully applied to solve problems in the Software Engineering field, including the Software Release Planning Problem. In this context, this work presents an approach based on multiobjective optimization for the problem when the number of releases is not known a priori or when the number of releases is a value expected by stakeholders. The strategy regards on the stakeholders\u2019 satisfaction, business value and risk management, as well as provides ways for handling requirements interdependencies. Experiments show the feasibility of the proposed approach." }, { "instance_id": "R28889xR28654", "comparison_id": "R28889", "paper_id": "R28654", "text": "A Multiobjective Module-Order Model for Software Quality Enhancement The knowledge, prior to system operations, of which program modules are problematic is valuable to a software quality assurance team, especially when there is a constraint on software quality enhancement resources. A cost-effective approach for allocating such resources is to obtain a prediction in the form of a quality-based ranking of program modules. Subsequently, a module-order model (MOM) is used to gauge the performance of the predicted rankings. From a practical software engineering point of view, multiple software quality objectives may be desired by a MOM for the system under consideration: e.g., the desired rankings may be such that 100% of the faults should be detected if the top 50% of modules with highest number of faults are subjected to quality improvements. Moreover, the management team for the same system may also desire that 80% of the faults should be accounted if the top 20% of the modules are targeted for improvement. 
Existing work related to MOM(s) use a quantitative prediction model to obtain the predicted rankings of program modules, implying that only the fault prediction error measures such as the average, relative, or mean square errors are minimized. Such an approach does not provide a direct insight into the performance behavior of a MOM. For a given percentage of modules enhanced, the performance of a MOM is gauged by how many faults are accounted for by the predicted ranking as compared with the perfect ranking. We propose an approach for calibrating a multiobjective MOM using genetic programming. Other estimation techniques, e.g., multiple linear regression and neural networks cannot achieve multiobjective optimization for MOM(s). The proposed methodology facilitates the simultaneous optimization of multiple performance objectives for a MOM. Case studies of two industrial software systems are presented, the empirical results of which demonstrate a new promise for goal-oriented software quality modeling." }, { "instance_id": "R28889xR28805", "comparison_id": "R28889", "paper_id": "R28805", "text": "Software Module Clustering as a Multi-Objective Search Problem Software module clustering is the problem of automatically organizing software units into modules to improve program structure. There has been a great deal of recent interest in search-based formulations of this problem in which module boundaries are identified by automated search, guided by a fitness function that captures the twin objectives of high cohesion and low coupling in a single-objective fitness function. This paper introduces two novel multi-objective formulations of the software module clustering problem, in which several different objectives (including cohesion and coupling) are represented separately. In order to evaluate the effectiveness of the multi-objective approach, a set of experiments was performed on 17 real-world module clustering problems. The results of this empirical study provide strong evidence to support the claim that the multi-objective approach produces significantly better solutions than the existing single-objective approach." }, { "instance_id": "R28889xR28647", "comparison_id": "R28889", "paper_id": "R28647", "text": "Software Requirements Selection using Quantum-inspired Elitist Multi- objective Evolutionary Algorithm This paper presents a Quantum-inspired Multi-objective Differential Evolution Algorithm (QMDEA) for the selection of software requirements, an issue in Requirements engineering phase of software development life cycle. Generally the software development process is iterative or incremental in nature, as request for new requirements keep coming from the customers from time to time for inclusion in the next release of the software. Due to the feasibility reasons it is not possible for a company to incorporate all the requirements in the software product. Consequently, it becomes a challenging task for the company to select a subset of the requirements to be included, by keeping the business goals in view. The problem is to identify a set of requirements to be included in the next release of the product, by minimizing the cost and maximizing the customer satisfaction. As minimizing the cost and maximizing the customer satisfaction are contradictory objectives, the problem is multi-objective and is also NP-hard in nature. Therefore it cannot be solved efficiently using traditional optimization techniques especially for the large problem instances. 
QMDEA combines the preeminent features of Differential Evolution and Quantum Computing. The features of QMDEA help in achieving quality Pareto-optimal front solutions with faster convergence. The performance of QMDEA is tested on six benchmark problems derived from the literature. The comparison of the obtained results indicates superior performance over the other methods reported in the literature." }, { "instance_id": "R28889xR28637", "comparison_id": "R28889", "paper_id": "R28637", "text": "Simulating and Optimising Design Decisions in Quantitative Goal Models Making decisions among a set of alternative system designs is an essential activity of requirements engineering. It involves evaluating how well each alternative satisfies the stakeholders' goals and selecting one alternative that achieves some optimal tradeoffs between possibly conflicting goals. Quantitative goal models support such activities by describing how alternative system designs \u2014 expressed as alternative goal refinements and responsibility assignments \u2014 impact on the levels of goal satisfaction specified in terms of measurable objective functions. Analyzing large numbers of alternative designs in such models is an expensive activity for which no dedicated tool support is currently available. This paper takes a first step towards providing such support by presenting automated techniques for (i) simulating quantitative goal models so as to estimate the levels of goal satisfaction contributed by alternative system designs and (ii) optimising the system design by applying a multi-objective optimisation algorithm to search through the design space. These techniques are presented and validated using a quantitative goal model for a well-known ambulance service system." }, { "instance_id": "R28889xR28790", "comparison_id": "R28889", "paper_id": "R28790", "text": "Optimal Web Service Selection based on Multi-Objective Genetic Algorithm Considering that there are three aspects of constraints in the service selection process, such as control structure within a composition plan, relationship between concrete services, and tradeoff among multiple QoS indexes, a QoS based optimal Web services selection method by multi-objective genetic algorithm is presented. First we design a chromosome coding method to represent a feasible service selection solution, and then develop genetic operators and strategies for maintaining diversity of population and avoiding getting trapped in local optima. Experimental results show that within a finite number of evolving generations this algorithm can generate a set of nondominated Pareto optimal solutions which satisfy the user's QoS requirements." }, { "instance_id": "R28889xR28797", "comparison_id": "R28889", "paper_id": "R28797", "text": "Interactive, evolutionary search in upstream object-oriented class design Although much evidence exists to suggest that early life cycle software engineering design is a difficult task for software engineers to perform, current computational tool support for software engineers is limited. To address this limitation, interactive search-based approaches using evolutionary computation and software agents are investigated in experimental upstream design episodes for two example design domains. Results show that interactive evolutionary search, supported by software agents, appears highly promising. As an open system, search is steered jointly by designer preferences and software agents. 
Directly traceable to the design problem domain, a mass of useful and interesting class designs is arrived at which may be visualized by the designer with quantitative measures of structural integrity, such as design coupling and class cohesion. The class designs are found to be of equivalent or better coupling and cohesion when compared to a manual class design for the example design domains, and by exploiting concurrent execution, the runtime performance of the software agents is highly favorable." }, { "instance_id": "R28889xR28657", "comparison_id": "R28889", "paper_id": "R28657", "text": "Identifying \"Good\" Architectural Design Alternatives with Multi-Objective Optimization Strategies Architecture trade-off analysis methods are appropriate techniques to evaluate design decisions and design alternatives with respect to conflicting quality requirements. However, the identification of good design alternatives is a time consuming task, which is currently performed manually. To automate this task, this paper proposes to use evolutionary algorithms and multi-objective optimization strategies based on architecture refactorings to identify a sufficient set of design alternatives. This approach will reduce development costs and improve the quality of the final system, because an automated and systematic search will identify more and better design alternatives." }, { "instance_id": "R29012xR28984", "comparison_id": "R29012", "paper_id": "R28984", "text": "From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions." }, { "instance_id": "R29012xR28992", "comparison_id": "R29012", "paper_id": "R28992", "text": "XM2VTSDB: The extended M2VTS database Keywords: vision Reference EPFL-CONF-82502 URL: ftp://ftp.idiap.ch/pub/papers/vision/avbpa99.pdf Record created on 2006-03-10, modified on 2017-05-10" }, { "instance_id": "R29012xR28990", "comparison_id": "R29012", "paper_id": "R28990", "text": "Multi-PIE A close relationship exists between the advancement of face recognition algorithms and the availability of face databases varying factors that affect facial appearance in a controlled manner. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success the PIE database has several shortcomings: a limited number of subjects, a single recording session and only few expressions captured. To address these issues we collected the CMU Multi-PIE database. 
It contains 337 subjects, imaged under 15 view points and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE." }, { "instance_id": "R29012xR28988", "comparison_id": "R29012", "paper_id": "R28988", "text": "Comprehensive database for facial expression analysis Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive testbed to date for comparative studies of facial expression analysis." }, { "instance_id": "R29012xR28986", "comparison_id": "R29012", "paper_id": "R28986", "text": "The FERET evaluation methodology for face-recognition algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance." }, { "instance_id": "R29034xR29021", "comparison_id": "R29034", "paper_id": "R29021", "text": "Face alignment by coarse-to-fine shape searching We present a novel face alignment framework based on coarse-to-fine shape searching. Unlike the conventional cascaded regression approaches that start with an initial shape and refine the shape in a cascaded manner, our approach begins with a coarse search over a shape space that contains diverse shapes, and employs the coarse solution to constrain subsequent finer search of shapes. The unique stage-by-stage progressive and adaptive search i) prevents the final solution from being trapped in local optima due to poor initialisation, a common problem encountered by cascaded regression approaches; and ii) improves the robustness in coping with large pose variations. The framework demonstrates real-time performance and state-of-the-art results on various benchmarks including the challenging 300-W dataset." 
}, { "instance_id": "R29034xR28973", "comparison_id": "R29034", "paper_id": "R28973", "text": "Hyperface: A deep multi-task learning framework for face detection, land- mark localization, pose estimation, and gender recognition We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method called, HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and performs significantly better than many competitive algorithms for each of these four tasks." }, { "instance_id": "R29080xR29078", "comparison_id": "R29080", "paper_id": "R29078", "text": "Continuous conditional neural fields for structured regression An increasing number of computer vision and pattern recognition problems require structured regression techniques. Problems like human pose estimation, unsegmented action recognition, emotion prediction and facial landmark detection have temporal or spatial output dependencies that regular regression techniques do not capture. In this paper we present continuous conditional neural fields (CCNF) \u2013 a novel structured regression model that can learn non-linear input-output dependencies, and model temporal and spatial output relationships of varying length sequences. We propose two instances of our CCNF framework: Chain-CCNF for time series modelling, and Grid-CCNF for spatial relationship modelling. We evaluate our model on five public datasets spanning three different regression problems: facial landmark detection in the wild, emotion prediction in music and facial action unit recognition. Our CCNF model demonstrates state-of-the-art performance on all of the datasets used." }, { "instance_id": "R29080xR29036", "comparison_id": "R29080", "paper_id": "R29036", "text": "Locating facial features with an extended active shape model We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using two- instead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series." }, { "instance_id": "R29080xR29050", "comparison_id": "R29080", "paper_id": "R29050", "text": "Detector of facial landmarks learned by the structured output SVM In this paper we describe a detector of facial landmarks based on the Deformable Part Models. We treat the task of landmark detection as an instance of the structured output classification problem. 
We propose to learn the parameters of the detector from data by the Structured Output Support Vector Machines algorithm. In contrast to the previous works, the objective function of the learning algorithm is directly related to the performance of the resulting detector which is controlled by a user-defined loss function. The resulting detector is real-time on a standard PC, simple to implement and it can be easily modified for detection of a different set of landmarks. We evaluate performance of the proposed landmark detector on a challenging \u201cLabeled Faces in the Wild\u201d (LFW) database. The empirical results demonstrate that the proposed detector is consistently more accurate than two public domain implementations based on the Active Appearance Models and the Deformable Part Models. We provide an open-source implementation of the proposed detector and the manual annotation of the facial landmarks for all images in the LFW database." }, { "instance_id": "R29153xR29129", "comparison_id": "R29153", "paper_id": "R29129", "text": "A survey on the recent research literature on ERP systems The research literature on ERP systems has exponentially grown in recent years. In a domain, where new concepts and techniques are constantly introduced, it is therefore, of interest to analyze the recent trends of this literature, which is only partially included in the research papers published. Therefore, we have chosen to primarily analyze the literature of the last 2 years (2003 and 2004), on the basis of a classification according to six categories: implementation of ERP; optimisation of ERP; management through ERP; the ERP software; ERP for supply chain management; case studies. This survey confirms that the research on ERP systems is still a growing field, but has reached some maturity. Different research communities address this area from various points of view. Among the research axes that are now active, we can, especially, notice a growing interest on the post-implementation phase of the projects, on the customization of ERP systems, on the sociological aspects of the implementation, on the interoperability of the ERP with other systems and on the return on investment of the implementations." }, { "instance_id": "R29153xR29133", "comparison_id": "R29153", "paper_id": "R29133", "text": "Enterprise resource planning research: where are we now and where should we go from here? ABSTRACT The research related to Enterprise Resource Planning (ERP) has grown over the past several years. This growing body of ERP research results in an increased need to review this extant literature with the intent of identifying gaps and thus motivate researchers to close this breach. Therefore, this research was intended to critique, synthesize and analyze both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature, and then enumerates and discusses an agenda for future research efforts. To accomplish this, we analyzed 49 ERP articles published (1999-2004) in top Information Systems (IS) and Operations Management (OM) journals. We found an increasing level of activity during the 5-year period and a slightly biased distribution of ERP articles targeted at IS journals compared to OM. We also found several research methods either underrepresented or absent from the pool of ERP research. We identified several areas of need within the ERP literature, none more prevalent than the need to analyze ERP within the context of the supply chain. 
INTRODUCTION Davenport (1998) described the strengths and weaknesses of using Enterprise Resource Planning (ERP). He called attention to the growth of vendors like SAP, Baan, Oracle, and PeopleSoft, and defined this software as, \"...the seamless integration of all the information flowing through a company - financial and accounting information, human resource information, supply chain information, and customer information.\" (Davenport, 1998). Since the time of that article, there has been a growing interest among researchers and practitioners in how organizations implement and use ERP systems (Amoako-Gyampah and Salam, 2004; Bendoly and Jacobs, 2004; Gattiker and Goodhue, 2004; Lander, Purvis, McCray and Leigh, 2004; Luo and Strong, 2004; Somers and Nelson, 2004; Zoryk-Schalla, Fransoo and de Kok, 2004). This interest is a natural continuation of trends in Information Technology (IT), such as MRP II (Olson, 2004; Teltumbde, 2000; Toh and Harding, 1999) and in business practice improvement research, such as continuous process improvement and business process reengineering (Markus and Tanis, 2000; Ng, Ip and Lee, 1999; Reijers, Limam and van der Aalst, 2003; Toh and Harding, 1999). This growing body of ERP research results in an increased need to review this extant literature with the intent of \"identifying critical knowledge gaps and thus motivate researchers to close this breach\" (Webster and Watson, 2002). Also, as noted by Scandura & Williams (2000), in order for research to advance, the methods used by researchers must periodically be evaluated to provide insights into the methods utilized and thus the areas of need. These two interrelated needs provide the motivation for this paper. In essence, this research critiques, synthesizes and analyzes both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature and then enumerates and discusses an agenda for future research efforts. The remainder of the paper is organized as follows: Section 2 describes the approach to the analysis of the ERP research. Section 3 contains the results and a review of the literature. Section 4 discusses our findings and the needs relative to future ERP research efforts. Finally, section 5 summarizes the research. RESEARCH STUDY We captured the trends pertaining to (1) the number and distribution of ERP articles published in the leading journals, (2) methodologies employed in ERP research, and (3) emphasis relative to topic of ERP research. During the analysis of the ERP literature, we identified gaps and needs in the research and therefore enumerate and discuss a research agenda which allows the progression of research (Webster and Watson, 2002). In short, we sought to paint a representative landscape of the current ERP literature base in order to influence the direction of future research efforts relative to ERP. \u2026" }, { "instance_id": "R29153xR29140", "comparison_id": "R29153", "paper_id": "R29140", "text": "An Updated ERP Systems Annotated Bibliography: 2001-2005 This study provides an updated annotated bibliography of ERP publications published in the main IS conferences and journals during the period 2001-2005, categorizing them through an ERP lifecycle-based framework that is structured in phases. The first version of this bibliography was published in 2001 (Esteves and Pastor, 2001c). However, so far, we have extended the bibliography with a significant number of new publications in all the categories used in this paper. 
We also reviewed the categories and some incongruities were eliminated." }, { "instance_id": "R29153xR29114", "comparison_id": "R29153", "paper_id": "R29114", "text": "Enterprise Resource Planning Systems Research: An Annotated Bibliography Despite growing interest, publications on ERP systems within the academic Information Systems community, as reflected by contributions to journals and international conferences, is only now emerging. This article provides an annotated bibliography of the ERP publications published in the main Information Systems journals and conferences and reviews the state of the ERP art. The publications surveyed are categorized through a framework that is structured in phases that correspond to the different stages of an ERP system lifecycle within an organization. We also present topics for further research in each phase. ." }, { "instance_id": "R29184xR29159", "comparison_id": "R29184", "paper_id": "R29159", "text": "Planning for ERP systems: analysis and future trend The successful implementation of various enterprise resource planning (ERP) systems has provoked considerable interest over the last few years. Management has recently been enticed to look toward these new information technologies and philosophies of manufacturing for the key to survival or competitive edges. Although there is no shortage of glowing reports on the success of ERP installations, many companies have tossed millions of dollars in this direction with little to show for it. Since many of the ERP failures today can be attributed to inadequate planning prior to installation, we choose to analyze several critical planning issues including needs assessment and choosing a right ERP system, matching business process with the ERP system, understanding the organizational requirements, and economic and strategic justification. In addition, this study also identifies new windows of opportunity as well as challenges facing companies today as enterprise systems continue to evolve and expand." }, { "instance_id": "R29184xR29179", "comparison_id": "R29184", "paper_id": "R29179", "text": "The Future of ERP Systems: look backward before moving forward Abstract This paper explores the enterprise resource planning (ERP) systems literature in an attempt to elucidate knowledge to help us see the future of ERP systems\u2019 research. The main purpose of this research is to study the development of ERP systems and other related areas in order to reach the constructs of mainstream literature. The analysis of literature has helped us to reach the key constructs of an as-is scenario, those are: history and development of ERP systems, the implementation life cycle, critical success factors and project management, and benefits and costs. However, the to-be scenario calls for more up-to-date research constructs of ERP systems integrating the following constructs: social networks, cloud computing, enterprise 2.0, and decision 2.0. In the end, the conclusion section will establish the link between the as-is and to-be scenarios opening the door for more novel ERP research areas." }, { "instance_id": "R29184xR29161", "comparison_id": "R29184", "paper_id": "R29161", "text": "Enterprise resource planning (ERP) systems: a research agenda The continuing development of enterprise resource planning (ERP) systems has been considered by many researchers and practitioners as one of the major IT innovations in this decade. 
ERP solutions seek to integrate and streamline business processes and their associated information and work flows. What makes this technology more appealing to organizations is increasing capability to integrate with the most advanced electronic and mobile commerce technologies. However, as is the case with any new IT field, research in the ERP area is still lacking and the gap in the ERP literature is huge. Attempts to fill this gap by proposing a novel taxonomy for ERP research. Also presents the current status with some major themes of ERP research relating to ERP adoption, technical aspects of ERP and ERP in IS curricula. The discussion presented on these issues should be of value to researchers and practitioners. Future research work will continue to survey other major areas presented in the taxonomy framework." }, { "instance_id": "R29240xR29203", "comparison_id": "R29240", "paper_id": "R29203", "text": "Critical success factors in ERP implementation: a review ERP systems have become vital strategic tools in today\u2019s competitive business environment. This ongoing research study presents a review of recent research work in ERP systems. It attempts to identify the main benefits of ERP systems, the drawbacks and the critical success factors for implementation discussed in the relevant literature. The findings revealed that despite some organizations have faced challenges undertaking ERP implementations, many others have enjoyed the benefits that the systems have brought to the organizations. ERP system facilitates the smooth flow of common functional information and practices across the entire organization. In addition, it improves the performance of the supply chain and reduces the cycle times. However, without top management support, having appropriate business plan and vision, re-engineering business process, effective project management, user involvement and education and training, organizations can not embrace the full benefits of such complex system and the risk of failure might be at high level." }, { "instance_id": "R29240xR29224", "comparison_id": "R29240", "paper_id": "R29224", "text": "Comparing risk and success factors in ERP projects: a literature review Although research and practice has attributed considerable attention to Enterprise Resource Planning (ERP) projects their failure rate is still high. There are two main fields of research, which aim at increasing the success rate of ERP projects: Research on risk factors and research on success factors. Despite their topical relatedness, efforts to integrate these two fields have been rare. Against this background, this paper analyzes 68 articles dealing with risk and success factors and categorizes all identified factors into twelve categories. Though some topics are equally important in risk and success factor research, the literature on risk factors emphasizes topics which ensure achieving budget, schedule and functionality targets. In contrast, the literature on success factors concentrates more on strategic and organizational topics. We argue that both fields of research cover important aspects of project success. The paper concludes with the presentation of a possible holistic consideration to integrate both, the understanding of risk and success factors." 
}, { "instance_id": "R29240xR29210", "comparison_id": "R29240", "paper_id": "R29210", "text": "Identification and assessment of risks associated with ERP post-implementation in China Purpose \u2013 The purpose of this paper is to identify, assess and explore potential risks that Chinese companies may encounter when using, maintaining and enhancing their enterprise resource planning (ERP) systems in the post\u2010implementation phase.Design/methodology/approach \u2013 The study adopts a deductive research design based on a cross\u2010sectional questionnaire survey. This survey is preceded by a political, economic, social and technological analysis and a set of strength, weakness, opportunity and threat analyses, from which the researchers refine the research context and select state\u2010owned enterprises (SOEs) in the electronic and telecommunications industry in Guangdong province as target companies to carry out the research. The questionnaire design is based on a theoretical risk ontology drawn from a critical literature review process. The questionnaire is sent to 118 selected Chinese SOEs, from which 42 (84 questionnaires) valid and usable responses are received and analysed.Findings \u2013 The findings ident..." }, { "instance_id": "R29240xR29201", "comparison_id": "R29240", "paper_id": "R29201", "text": "Risk management in ERP project introduction: Review of the literature In recent years ERP systems have received much attention. However, ERP projects have often been found to be complex and risky to implement in business enterprises. The organizational relevance and risk of ERP projects make it important for organizations to focus on ways to make ERP implementation successful. We collected and analyzed a number of key articles discussing and analyzing ERP implementation. The different approaches taken in the literature were compared from a risk management point of view to highlight the key risk factors and their impact on project success. Literature was further classified in order to address and analyze each risk factor and its relevance during the stages of the ERP project life cycle." }, { "instance_id": "R29240xR29198", "comparison_id": "R29240", "paper_id": "R29198", "text": "ERP implementation: a compilation and analysis of critical success factors Purpose \u2013 To explore the current literature base of critical success factors (CSFs) of ERP implementations, prepare a compilation, and identify any gaps that might exist.Design/methodology/approach \u2013 Hundreds of journals were searched using key terms identified in a preliminary literature review. Successive rounds of article abstract reviews resulted in 45 articles being selected for the compilation. CSF constructs were then identified using content analysis methodology and an inductive coding technique. A subsequent critical analysis identified gaps in the literature base.Findings \u2013 The most significant finding is the lack of research that has focused on the identification of CSFs from the perspectives of key stakeholders. Additionally, there appears to be much variance with respect to what exactly is encompassed by change management, one of the most widely cited CSFs, and little detail of specific implementation tactics.Research limitations/implications \u2013 There is a need to focus future research efforts..." 
}, { "instance_id": "R29240xR29194", "comparison_id": "R29240", "paper_id": "R29194", "text": "Critical successful factors of ERP implementation: a review Recently e -business has become the focus of management interest both in academics and in business. Among the major components of e -business, ERP (Enterprise Resource Planning) is the backbone of other applications. Therefore more and more enterprises attempt to adopt this new application in order to improve their business competitiveness. Owing to the specific characteristics of ERP, its implementation is more difficult than that of traditional information systems. For this reason, how to implement ERP successfully becomes an important issue for both academics and practitioners. In this paper, a review on critical successful factors of ERP in important MIS publications will be presented. Additionally traditional IS implementatio n and ERP implementation will be compared and the findings will be served as the basis for further research." }, { "instance_id": "R29351xR29320", "comparison_id": "R29351", "paper_id": "R29320", "text": "ERP systems business value: a critical review of empirical literature The business value generated by information and communication technologies (ICT) has been for long time a major research topic. Recently there is a growing research interest in the business value generated by particular types of information systems (IS). One of them is the enterprise resource planning (ERP) systems, which are increasingly adopted by organizations for supporting and integrating key business and management processes. The current paper initially presents a critical review of the existing empirical literature concerning the business value of the ERP systems, which investigates the impact of ERP systems adoption on various measures of organizational performance. Then is critically reviewed the literature concerning the related topic of critical success factors (CSFs) in ERP systems implementation, which aims at identifying and investigating factors that result in more successful ERP systems implementation that generate higher levels of value for organizations. Finally, future directions of research concerning ERP systems business value are proposed." }, { "instance_id": "R29351xR29326", "comparison_id": "R29351", "paper_id": "R29326", "text": "Enterprise resource planning systems (ERP) and user performance: a literature review Organizations spend billions of dollars and countless hours implementing Enterprise Resources Planning systems (ERPs) to attain better performance. However, the failure rate of ERP implementation is very high, with subsequent research interests focussing mainly on understanding the failure factors. With the spotlight of prior research mainly focussed on success and failure factors other important aspects have not been given enough attention. This paper starts from the proposition that users can evaluate the benefits of the ERP systems and users can judge whether or not ERPs provide reasonable payoff and outcomes for organizations. This premise is based on the view that the user creates the benefits through the accomplishment of tasks leading to the achievement of goals. The study consists of comprehensive literature review bringing to light previous investigations on the impacts of ERP on user performance and presents how ERP research utilises IS theory to investigate ERP in different settings." 
}, { "instance_id": "R29351xR29310", "comparison_id": "R29351", "paper_id": "R29310", "text": "Understanding the impact of enterprise systems on management decision making: an agenda for future research Enterprise systems have been widely sold on the basis that they reduce costs through process efficiency and enhance decision making by providing accurate and timely enterprise wide information. Although research shows that operational efficiencies can be achieved, ERP systems are notoriously poor at delivering management information in a form that would support effective decision-making. Research suggests managers are not helped in their decision-making abilities simply by increasing the flow of information. This paper calls for a new approach to researching the impact of ERP implementations on global organizations by examining decision making processes at 3 levels in the organisation (corporate, core implementation team and local site)." }, { "instance_id": "R29351xR29344", "comparison_id": "R29351", "paper_id": "R29344", "text": "The effects of management information and ERP systems on strategic knowledge management and decision-making resources, to shorten delivery time, to increase quality and product variety, in other words, are obligated to develop \"an integrated information system\". Enterprise Resource Planning (ERP) systems help unleash the true potential of companies by integrating business and management processes. In this study, how and in what direction Enterprise Resource Planning Systems affect the decision of the upper and middle level managers of businesses together with the effects of ERP systems on strategic knowledge management to make enterprises more innovative and competitively advantaged, transformable, and decisions based on ERP systems will be investigated. As a result of literature review study, the role and the impacts of these systems on strategic information management and decisionmaking will be investigated with both global and local business application examples. 2013 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of the 9 th" }, { "instance_id": "R29351xR29298", "comparison_id": "R29351", "paper_id": "R29298", "text": "Potential impact of cultural differences on enterprise resource planning (ERP) projects Over the last ten years, there has been a dramatic growth in the acquisition of Enterprise Resource Planning (ERP) systems, where the market leader is the German company, SAP AG. However, more recently, there has been an increase in reported ERP failures, suggesting that the implementation issues are not just technical, but encompass wider behavioural factors." }, { "instance_id": "R29351xR29316", "comparison_id": "R29351", "paper_id": "R29316", "text": "Organisations and vanilla software: what do we know about ERP systems and competitive advantage? Enterprise Resource Planning (ERP) systems have become a de facto standard for integrating business functions. But an obvious question arises: if every business is using the same socalled \u201cVanilla\u201d software (e.g. an SAP ERP system) what happens to the competitive advantage from implementing IT systems? If we discard our custom-built legacy systems in favour of enterprise systems do we also jettison our valued competitive advantage from IT? While for some organisations ERPs have become just a necessity for conducting business, others want to exploit them to outperform their competitors. 
In the last few years, researchers have begun to study the link between ERP systems and competitive advantage. This link will be the focus of this paper. We outline a framework summarizing prior research and suggest two researchable questions. A future article will develop the framework with two empirical case studies from within part of the European food industry." }, { "instance_id": "R29351xR29332", "comparison_id": "R29351", "paper_id": "R29332", "text": "Taking knowledge management on the ERP road: a two-dimensional analysis In today's fierce business competition, companies face the tremendous challenge of expanding markets, improving their products, services and processes and exploiting their intellectual capital in a dynamic network of knowledge-intensive relations inside and outside their borders. In order to accomplish these objectives, more and more companies are turning to the Enterprise Resource Planning systems (ERP). On the other hand, Knowledge Management (KM) has received considerable attention in the last decade and is continuously gaining interest by industry, enterprises and academia. As we are moving into an era of \u201cknowledge capitalism\u201d, knowledge management will play a fundamental role in the success of today's businesses. This paper aims at throwing light on the role of KM in the ERP success first and on their possible integration second. A wide range of academic and practitioner literature related to KM and ERP is reviewed. On the basis of this review, the paper gives answers to specific research questions and analyses future research directions." }, { "instance_id": "R30476xR30284", "comparison_id": "R30476", "paper_id": "R30284", "text": "The Relationship between CO2 Emission, Energy Consumption, Urbanization and Trade Openness for Selected CEECs This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 selected Central and Eastern European Countries (CEECs), including Albania, Bulgaria, Croatia, Czech Republic, Macedonia, Hungary, Poland, Romania, Slovak Republic and Slovenia for the period of 1991\u20132011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a 1.0863% increase in CO2 emissions. Results for the existence and direction of panel Vector Error Correction Model (VECM) Granger causality method show that there is bidirectional causal relationship between CO2 emissions - real GDP and energy consumption - real GDP as well." }, { "instance_id": "R30476xR29900", "comparison_id": "R30476", "paper_id": "R29900", "text": "Environmental Kuznet\u2019s curve for India: evidence from tests for cointegration with unknown structural breaks This study revisits the cointegrating relationship between carbon emission, energy use, economic activity and trade openness for India using threshold cointegration tests with a view to testing the environmental Kuznet\u2019s curve hypothesis in the presence of possible regime shift in long run relationship of the variables for the period 1971 to 2008. The article confirms the existence of \u2018regime-shift\u2019 or \u2018threshold\u2019 cointegration among the variables and environmental Kuznet\u2019s curve for India. 
It challenges previous empirical works for India which fail to establish a cointegrating relationship among these variables and explains its logical and econometric reasons. The study finds that the carbon emission is highly elastic with respect to real per capita income and energy use in India. This finding is critical and warns successful design and execution of energy and environmental policy framework which would pave the low carbon sustainable growth path in India." }, { "instance_id": "R30476xR29404", "comparison_id": "R30476", "paper_id": "R29404", "text": "Richer and cleaner? A study on carbon dioxide emissions in developing countries The Climate Change debate has drawn attention to the problem of greenhouse gases emissions into the atmosphere. One of the most important issues in the policy debate is the role that should be played by developing countries in joining the commitment of developed countries to reduce GHG emissions, and particularly CO2 emissions. This debate calls into play the relationship between energy consumption, CO2 emissions and economic development. In this paper we use a panel data model for 110 world countries to estimate the relationship between CO2 emissions and GDP and to produce emission forecast. The paper contains three major results: (i) the empirical relationship between carbon dioxide and income is well described by non linear Gamma and Weibull specifications as opposed to more usual linear and log-linear functional forms; (ii) our single equation reduced form model is comparable in terms of forecasted emissions with other more complex, less data driven models; (iii) despite the decreasing marginal propensity to pollute, our forecasts show that future global emissions will rise. The average world growth of CO2 emissions between 2000 and 2020 is about 2.2% per year, while that of Non Annex 1 countries is posted at 3.3% per year." }, { "instance_id": "R30476xR30272", "comparison_id": "R30476", "paper_id": "R30272", "text": "Estimating the environmental Kuznets curve for Spain by considering fuel oil prices (1874\u20132011) We perform a structural analysis on an environmental Kuznets curve (EKC) for Spain by exploiting long time series (1874\u20132011) and by using real oil prices as an indicator of variations in fuel energy consumption. This empirical strategy allows us to both, capture the effect of the most pollutant energy on carbon dioxide (CO2) emissions and, at the same time, preclude potential endogeneity problems derived from the direct inclusion of fuel consumption in econometric specification. Knowing the extent to which oil prices affect CO2 emissions has a straightforward application for environmental policy. The dynamics estimates of the long and short-term relationships among CO2, economic growth and oil prices are built through an autoregressive distributed lag (ARDL) model. Our test results support the EKC hypothesis. Moreover, real oil prices are clearly revealed as a valuable indicator of pollutant energy consumption." }, { "instance_id": "R30476xR30189", "comparison_id": "R30476", "paper_id": "R30189", "text": "CO2 emissions, economic growth, energy consumption, trade and urbanization in new EU member and candidate countries: A panel data analysis This paper investigates the causal relationship between energy consumption, carbon dioxide emissions, economic growth, trade openness and urbanization for a panel of new EU member and candidate countries over the period 1992\u20132010. 
Panel unit root tests, panel cointegration methods and panel causality tests are used to investigate this relationship. The main results provide evidence supporting the Environmental Kuznets Curve hypothesis. Hence, there is an inverted U-shaped relationship between environment and income for the sampled countries. The results also indicate that there is a short-run unidirectional panel causality running from energy consumption, trade openness and urbanization to carbon emissions, from GDP to energy consumption, from GDP, energy consumption and urbanization to trade openness, from urbanization to GDP, and from urbanization to trade openness. As for the long-run causal relationship, the results indicate that estimated coefficients of lagged error correction term in the carbon dioxide emissions, energy consumption, GDP, and trade openness equations are statistically significant, implying that these four variables could play an important role in adjustment process as the system departs from the long-run equilibrium." }, { "instance_id": "R30476xR29984", "comparison_id": "R30476", "paper_id": "R29984", "text": "Fossil & renewable energy consumption, GHGs (greenhouse gases) and economic growth: Evidence from a panel of EU (European Union) countries Recently a great number of empirical research studies have been conducted on the relationship between certain indicators of environmental degradation and income. The EKC (Environmental Kuznets Curve) hypothesis has been tested for various types of environmental degradation. The EKC hypothesis states that the relationship between environmental degradation and income per capita takes the form of an inverted U shape. In this paper the EKC hypothesis was investigated with regards to the relationship between carbon emissions, income and energy consumption in 16 EU (European Union) countries. We conducted panel data analysis for the period of 1990\u20132008 by fixing the multicollinearity problem between the explanatory variables using their centered values. The main contribution of this paper is that the EKC hypothesis has been investigated by separating final energy consumption into renewable and fossil fuel energy consumption. Unfortunately, the inverted U-shape relationship (EKC) does not hold for carbon emissions in the 16 EU countries. The other important finding is that renewable energy consumption contributes around 1/2 less per unit of energy consumed than fossil energy consumption in terms of GHG (greenhouse gas) emissions in EU countries. This implies that a shift in energy consumption mix towards alternative renewable energy technologies might decrease the GHG emissions." }, { "instance_id": "R30476xR30151", "comparison_id": "R30476", "paper_id": "R30151", "text": "Public budgets for energy RD&D and the effects on energy intensity and pollution levels This study, based on the N-shaped cubic model of the environmental Kuznets curve, analyzes the evolution of per capita greenhouse gas emissions (GHGpc) using not just economic growth but also public budgets dedicated to energy-oriented research development and demonstration (RD&D) and energy intensity. The empirical evidence, obtained from an econometric model of fixed effects for 28 OECD countries during 1994\u20132010, suggests that energy innovations help reduce GHGpc levels and mitigate the negative impact of energy intensity on environmental quality. When countries develop active energy RD&D policies, they can reduce both the rates of energy intensity and the level of GHGpc emissions. 
This paper incorporates a moderating variable to the econometric model that emphasizes the effect that GDP has on energy intensity. It also adds a variable that reflects the difference between countries that have made a greater economic effort in energy RD&D, which in turn corrects the GHG emissions resulting from the energy intensity of each country." }, { "instance_id": "R30476xR30161", "comparison_id": "R30476", "paper_id": "R30161", "text": "Causal relationship between CO2 emissions, real GDP, energy consumption, financial development, trade openness, and urbanization in Tunisia The aim of this paper is to examine the causal relationship between CO2 emissions, real GDP, energy consumption, financial development, trade openness, and urbanization in Tunisia over the period of 1971\u20132012. The long-run relationship is investigated by the auto-regressive distributed lag (ARDL) bounds testing approach to cointegration and error correction method (ECM). The results of the analysis reveal a positive sign for the coefficient of financial development, suggesting that the financial development in Tunisia has taken place at the expense of environmental pollution. The Tunisian case also shows a positive monotonic relationship between real GDP and CO2 emissions. This means that the results do not support the validity of environmental Kuznets curve (EKC) hypothesis. In addition, the paper explores causal relationship between the variables by using Granger causality models and it concludes that financial development plays a vital role in the Tunisian economy." }, { "instance_id": "R30476xR30426", "comparison_id": "R30476", "paper_id": "R30426", "text": "The impact of financial development and trade on environmental quality in Iran Undesirable changes in the environment such as global warming and emissions of greenhouse gases have elicited worldwide attention in recent decades. Environmental problems emanating from economic activities targeted at achieving higher economic growth rate have become a controversial issue. In this study, the effects of financial development and trade on environmental quality in Iran were investigated. To this purpose, statistical data collected between the periods of 1970 and 2011 were used. In addition to using the autoregressive distributed lag model (ARDL), the short-term and long-term relationships between the variables were estimated and analyzed. Moreover, the environmental Kuznets curve (EKC) hypothesis was evaluated using various pollutants. The results show that financial development accelerates the degradation of the environment; however, an increase in trade openness reduces the damage to the environment in Iran. Furthermore, the results did not agree with the EKC hypothesis in Iran. Error correction coefficient showed that in each period, 49% of imbalances were justified and approached their long-run procedure. Structural stability tests showed that the estimated coefficients were stable over the period." }, { "instance_id": "R30476xR30280", "comparison_id": "R30476", "paper_id": "R30280", "text": "Estimating the relationship between economic growth and environmental quality for the brics economies - a dynamic panel data approach It has been forecasted by many economists that in the next couple of decades the BRICS economies are going to experience an unprecedented economic growth. 
This massive economic growth would definitely have a detrimental impact on the environment since these economies, like others, would extract their environmental and natural resource to a larger scale in the process of their economic growth. Therefore, maintaining environmental quality while growing has become a major challenge for these economies. However, the proponents of Environmental Kuznets Curve (EKC) Hypothesis - an inverted U shape relationship between income and emission per capita, suggest BRICS economies need not bother too much about environmental quality while growing because growth would eventually take care of the environment once a certain level of per capita income is achieved. In this backdrop, the present study makes an attempt to estimate EKC type relationship, if any, between income and emission in the context of the BRICS countries for the period 1997 to 2011. Therefore, the study first adopts fixed effect (FE) panel data model to control time constant country specific effects, and then uses Generalized Method of Moments (GMM) approach for dynamic panel data to address endogeneity of income variable and dynamism in emission per capita. Apart from income, we also include variables related to financial sector development and energy utilization to explain emission. The fixed effect model shows a significant EKC type relation between income and emission supporting the previous literature. However, GMM estimates for the dynamic panel model show the relationship between income and emission is actually U shaped with the turning point being out of sample. This out of sample turning point indicates that emission has been growing monotonically with growth in income. Factors like, net energy imports and share of industrial output in GDP are found to be significant and having detrimental impact on the environment in the dynamic panel model. However, these variables are found to be insignificant in FE model. Capital account convertibility shows significant and negative impact on the environment irrespective of models used. The monotonically increasing relationship between income and emission suggests the BRICS economies must adopt some efficiency oriented action plan so that they can grow without putting much pressure on the environment. These findings can have important policy implications as BRICS countries are mainly depending on these factors for their growth but at the same time they can cause serious threat to the environment." }, { "instance_id": "R30476xR30202", "comparison_id": "R30476", "paper_id": "R30202", "text": "Is there an environmental Kuznets curve for South Africa? A co-summability approach using a century of data There exists a huge international literature on the, so-called, Environmental Kuznets Curve (EKC) hypothesis, which in turn, postulates an inverted u-shaped relationship between environmental pollutants and output. The empirical literature on EKC has mainly used test for cointegration, based on polynomial relationships between pollution and income. Motivated by the fact that, measured in per capita CO2 equivalent emissions, South Africa is the world's most carbon-intensive non-oil-producing developing country, this paper aims to test the validity of the EKC for South Africa. For this purpose, we use a century of data (1911\u20132010), to capture the process of development better compared to short sample-based research; and the concept of co-summability, which is designed to analyze non-linear long-run relations among persistent processes. 
Our results, however, provide no support of the EKC for South Africa, both for the full-sample and sub-samples (determined by tests of structural breaks), implying that to reduce emissions without sacrificing growth, policies should be aimed at promoting energy efficiency." }, { "instance_id": "R30476xR29581", "comparison_id": "R30476", "paper_id": "R29581", "text": "Environment Kuznets curve for CO2 emissions: A cointegration analysis for China This study examines the long-run relationship between carbon emissions and energy consumption, income and foreign trade in the case of China by employing time series data of 1975-2005. In particular the study aims at testing whether environmental Kuznets curve (EKC) relationship between CO2 emissions and per capita real GDP holds in the long run or not. Auto regressive distributed lag (ARDL) methodology is employed for empirical analysis. A quadratic relationship between income and CO2 emission has been found for the sample period, supporting EKC relationship. The results of Granger causality tests indicate one way causality runs through economic growth to CO2 emissions. The results of this study also indicate that the carbon emissions are mainly determined by income and energy consumption in the long run. Trade has a positive but statistically insignificant impact on CO2 emissions." }, { "instance_id": "R30476xR30295", "comparison_id": "R30476", "paper_id": "R30295", "text": "CO2 emissions, real output, energy consumption, trade, urbanization and financial development: testing the EKC hypothesis for the USA This study aims to investigate the relationship between carbon dioxide (CO2) emissions, energy consumption, real output (GDP), the square of real output (GDP2), trade openness, urbanization, and financial development in the USA for the period 1960\u20132010. The bounds testing for cointegration indicates that the analyzed variables are cointegrated. In the long run, energy consumption and urbanization increase environmental degradation while financial development has no effect on it, and trade leads to environmental improvements. In addition, this study does not support the validity of the environmental Kuznets curve (EKC) hypothesis for the USA because real output leads to environmental improvements while GDP2 increases the levels of gas emissions. The results from the Granger causality test show that there is bidirectional causality between CO2 and GDP, CO2 and energy consumption, CO2 and urbanization, GDP and urbanization, and GDP and trade openness while no causality is determined between CO2 and trade openness, and gas emissions and financial development. In addition, we have enough evidence to support one-way causality running from GDP to energy consumption, from financial development to output, and from urbanization to financial development. In light of the long-run estimates and the Granger causality analysis, the US government should take into account the importance of trade openness, urbanization, and financial development in controlling for the levels of GDP and pollution. Moreover, it should be noted that the development of efficient energy policies likely contributes to lower CO2 emissions without harming real output." }, { "instance_id": "R30476xR29415", "comparison_id": "R30476", "paper_id": "R29415", "text": "An Exploration of the Conceptual and Empirical Basis of the Environmental Kuznets Curve We examine the conceptual and empirical basis of the environmental Kuznets curve. 
From both perspectives, the relationship lacks firm foundations. In particular, the empirical relationship is shown to be highly sensitive to the choice of pollutant, sample of countries and time period. This strongly suggests that there is an omitted variables problem. We find that two important omitted variables are education and inequality. Also, we show that the observed relationship is sensitive to the measure of income/welfare used. The paper concludes with a discussion of some policy implications of our findings." }, { "instance_id": "R30476xR29437", "comparison_id": "R30476", "paper_id": "R29437", "text": "The impact of population pressure on global carbon dioxide emissions, 1975\u20131996: evidence from pooled cross-country data In assessing and forecasting the impact of population change on carbon dioxide emissions, most previous studies have assumed a unitary elasticity of emissions with respect to population change, i.e. that a 1% increase in population results in a 1% increase in emissions. This study finds that global population change over the last two decades is more than proportionally associated with growth in carbon dioxide emissions, and that the impact of population change on emissions is much more pronounced in developing countries than in developed countries. The empirical findings are based on data for 93 countries over the period 1975\u20131996." }, { "instance_id": "R30476xR29384", "comparison_id": "R30476", "paper_id": "R29384", "text": "Are environmental Kuznets curves misleading us? The case of CO2 emissions Environmental Kuznets curve (EKC) analysis links changes in environmental quality to national economic growth. The reduced form models, however, do not provide insight into the underlying processes that generate these changes. We compare EKC models to structural transition models of per capita CO2 emissions and per capita GDP, and find that, for the 16 countries which have undergone such a transition, the initiation of the transition correlates not with income levels but with historic events related to the oil price shocks of the 1970s and the policies that followed them. In contrast to previous EKC studies of CO2, the transition away from positive emissions elasticities for these 16 countries is found to occur as a sudden, discontinuous transition rather than as a gradual change. We also demonstrate that the third order polynomial 'N' dependence of emissions on income is the result of data aggregation. We conclude that neither the 'U'- nor the 'N'-shaped relationship between CO2 emissions and income provides a reliable indication of future behaviour." }, { "instance_id": "R30476xR29976", "comparison_id": "R30476", "paper_id": "R29976", "text": "The environmental Kuznets curve and the role of coal consumption in India: cointegration and causality analysis in an open economy This study investigates the dynamic relationship between coal consumption, economic growth, trade openness and CO2 emissions for the Indian economy. In doing so, the Narayan and Popp structural break unit root test is applied to test the order of integration of the variables. Long run relationship between the variables is tested by applying ARDL bounds testing approach to cointegration developed by Pesaran et al. (2001). The results confirm the existence of cointegration for long run between coal consumption, economic growth, trade openness and CO2 emissions. 
Our empirical exercise indicates the presence of the Environmental Kuznets Curve (EKC) in the long run as well as the short run. Coal consumption as well as trade openness contributes to CO2 emissions. The causality results report the feedback hypothesis between economic growth and CO2 emissions and the same inference is drawn between coal consumption and CO2 emissions. Moreover, trade openness Granger causes economic growth, coal consumption and CO2 emissions." }, { "instance_id": "R30476xR29779", "comparison_id": "R30476", "paper_id": "R29779", "text": "Modeling the CO2 emissions, energy use, and economic growth in Russia This paper applies the co-integration technique and causality test to examine the dynamic relationships between pollutant emissions, energy use, and real output during the period between 1990 and 2007 for Russia. The empirical results show that in the long-run equilibrium, emissions appear to be energy use elastic and output inelastic. This elasticity suggests high energy use responsiveness to changes in emissions. The output exhibits a negative significant impact on emissions and does not support the EKC hypothesis. These indicate that both economic growth and energy conservation policies can reduce emissions and have no negative impact on economic development. The causality results indicate that there is a bidirectional strong Granger-causality running between output, energy use and emissions, and whenever a shock occurs in the system, each variable makes a short-run adjustment to restore the long-run equilibrium. The average speed of adjustment is as low as just over 0.26 years. Hence, in order to reduce emissions, the best environmental policy is to increase infrastructure investment to improve energy efficiency, and to step up energy conservation policies to reduce any unnecessary waste of energy. That is, energy conservation is expected to improve energy efficiency, thereby promoting economic growth." }, { "instance_id": "R30476xR29973", "comparison_id": "R30476", "paper_id": "R29973", "text": "The environmental Kuznets curve in Asia: the case of sulphur and carbon emissions The present study examines whether the Race to the Bottom and Revised EKC scenarios presented by Dasgupta and others (2002) are, with regard to the analytical framework of the Environmental Kuznets Curve (EKC), applicable in Asia to representative environmental indices, such as sulphur emissions and carbon emissions. To carry out this study, a generalized method of moments (GMM) estimation was made, using panel data of 19 economies for the period 1950-2009. The main findings of the analysis on the validity of EKC indicate that sulphur emissions follow the expected inverted U-shape pattern, while carbon emissions tend to increase in line with per capita income in the observed range. As for the Race to the Bottom and Revised EKC scenarios, the latter was verified in sulphur emissions, as their EKC trajectories represent a linkage of the later development of the economy with the lower level of emissions while the former one was present in neither sulphur nor carbon emissions." }, { "instance_id": "R30476xR30085", "comparison_id": "R30476", "paper_id": "R30085", "text": "Non-renewable and renewable energy consumption and CO2 emissions in OECD countries: A comparative analysis This paper attempts to explore the determinants of CO2 emissions using the STIRPAT model and data from 1980 to 2011 for OECD countries. 
The empirical results show that non-renewable energy consumption increases CO2 emissions, whereas renewable energy consumption decreases CO2 emissions. Further, the results support the existence of an environmental Kuznets curve between urbanisation and CO2 emissions, implying that at higher levels of urbanisation, the environmental impact decreases. Therefore, the overall evidence suggests that policy makers should focus on urban planning as well as clean energy development to make substantial contributions to both reducing non-renewable energy use and mitigating climate change." }, { "instance_id": "R30476xR29627", "comparison_id": "R30476", "paper_id": "R29627", "text": "The emissions, energy consumption, and growth nexus: Evidence from the commonwealth of independent states This study examines the causal relationship between carbon dioxide emissions, energy consumption, and real output within a panel vector error correction model for eleven countries of the Commonwealth of Independent States over the period 1992-2004. In the long-run, energy consumption has a positive and statistically significant impact on carbon dioxide emissions while real output follows an inverted U-shape pattern associated with the Environmental Kuznets Curve (EKC) hypothesis. The short-run dynamics indicate unidirectional causality from energy consumption and real output, respectively, to carbon dioxide emissions along with bidirectional causality between energy consumption and real output. In the long-run there appears to be bidirectional causality between energy consumption and carbon dioxide emissions." }, { "instance_id": "R30476xR29843", "comparison_id": "R30476", "paper_id": "R29843", "text": "An econometric study of carbon dioxide (CO2) emissions, energy consumption, and economic growth of Pakistan A hybrid of CdS/HCa2Nb3O10 ultrathin nanosheets with a tough heterointerface was successfully fabricated. Efficient interfacial charge transfer from CdS to HCa2Nb3O10 nanosheets was achieved to realize the enhanced photocatalytic H2 evolution activity.
" }, { "instance_id": "R46299xR46231", "comparison_id": "R46299", "paper_id": "R46231", "text": "Polymeric g-C3N4 coupled with NaNbO3 nanowires toward enhanced photocatalytic reduction of CO2 into renewable fuel Visible-light-responsive g-C3N4/NaNbO3 nanowires photocatalysts were fabricated by introducing polymeric g-C3N4 on NaNbO3 nanowires. The microscopic mechanisms of interface interaction, charge transfer and separation, as well as the influence on the photocatalytic activity of g-C3N4/NaNbO3 composite were systematic investigated. The high-resolution transmission electron microscopy (HR-TEM) revealed that an intimate interface between C3N4 and NaNbO3 nanowires formed in the g-C3N4/NaNbO3 heterojunctions. The photocatalytic performance of photocatalysts was evaluated for CO2 reduction under visible-light illumination. Significantly, the activity of g-C3N4/NaNbO3 composite photocatalyst for photoreduction of CO2 was higher than that of either single-phase g-C3N4 or NaNbO3. Such a remarkable enhancement of photocatalytic activity was mainly ascribed to the improved separation and transfer of photogenerated electron\u2013hole pairs at the intimate interface of g-C3N4/NaNbO3 heterojunctions, which originated from the..." }, { "instance_id": "R46299xR46223", "comparison_id": "R46299", "paper_id": "R46223", "text": "Constructing cubic-orthorhombic surface-phasejunctionsofNaNbO3 towardssigni\ufb01cantenhancementofCO2 photoreductionNaNbO3 with cubic\u2013orthorhombic surface-phase junctions were synthesized
Statistical models are useful for predicting events and outcomes from observed data. Here, the Crow-AMSAA method is explored to predict new cases of coronavirus disease 2019 (COVID-19). This method is currently used in engineering reliability design to predict failures and evaluate reliability growth. The author intends to use this model to predict COVID-19 cases using daily reported data from Michigan, New York City, the U.S.A and other countries. The piecewise Crow-AMSAA (CA) model fits the data very well for infected cases and deaths at the different phases during the start of the COVID-19 outbreak. The slope \u03b2 of the Crow-AMSAA line indicates the speed of the transmission or death rate. The traditional epidemiological model is based on the exponential distribution, but the Crow-AMSAA model is a Non-Homogeneous Poisson Process (NHPP), which can be used to model a complex problem like COVID-19, especially when various mitigation strategies such as social distancing, isolation and lockdowns were implemented by governments at different places.
This paper uses the piecewise Crow-AMSAA method to fit the confirmed COVID-19 cases in Michigan, New York City, the U.S.A and other countries.
Piecewise Crow-AMSAA method to fit the confirmed COVID-19 cases
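For reference, a minimal sketch of the power-law NHPP underlying the Crow-AMSAA fits, written in standard reliability-growth notation rather than quoted from the paper:

N(t) = \lambda t^{\beta}, \qquad \rho(t) = \frac{dN(t)}{dt} = \lambda \beta t^{\beta - 1}, \qquad \ln N(t) = \ln \lambda + \beta \ln t

Here N(t) is the expected cumulative number of events (confirmed cases or deaths) by time t and \rho(t) is the instantaneous rate. On a log-log plot the cumulative counts fall on a straight line with slope \beta, so \beta > 1 indicates an accelerating event rate, \beta = 1 a constant rate, and \beta < 1 a decelerating one; the piecewise version fits a separate (\lambda, \beta) pair between successive breakpoints.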
From the Crow-AMSAA analysis above, at the very beginning of the COVID-19 outbreak the infected cases did not follow the Crow-AMSAA prediction line, but once the outbreak started the confirmed cases did follow the CA line, and the slope \u03b2 indicates the pace of the transmission or death rate in each case. The piecewise Crow-AMSAA model describes the different phases of spreading, indicating that the speed of transmission can change with government intervention, social-distancing orders or other factors. Comparing the piecewise CA slopes in China (\u03b2: 1.683--0.834--0.092) and in the U.S.A (\u03b2: 5.138--10.48--5.259), the infection rate in the U.S.A is much higher than that in China. From the piecewise CA plots and summary table 1 of the CA slopes \u03b2, the spread of COVID-19 behaves differently at different places and in different countries, where governments implemented different policies to slow down the spreading.
From the analysis of the confirmed cases and deaths of COVID-19 in Michigan, New York City, the U.S.A, China and other countries, the piecewise Crow-AMSAA method can be used to model the spreading of COVID-19.
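A minimal sketch of how such a piecewise fit can be reproduced, assuming only a vector of cumulative daily confirmed cases is available; the counts and breakpoints below are made up for illustration and are not the paper's data or change points:

# Minimal piecewise Crow-AMSAA fit (illustrative sketch, not the paper's code).
import numpy as np

def fit_crow_amsaa(days, cum_counts):
    # Least-squares fit of ln N(t) = ln(lambda) + beta * ln(t); returns (lambda, beta).
    beta, ln_lambda = np.polyfit(np.log(days), np.log(cum_counts), 1)
    return np.exp(ln_lambda), beta

# Hypothetical cumulative confirmed cases (one value per day) and hand-picked breakpoints.
cum_cases = np.array([2, 5, 11, 25, 60, 140, 300, 620, 1100, 1800,
                      2600, 3400, 4100, 4700, 5100, 5400, 5600, 5750, 5850, 5900])
days = np.arange(1, len(cum_cases) + 1)
breaks = [0, 7, 14, len(cum_cases)]  # three phases: early growth, slowdown, plateau

for i in range(len(breaks) - 1):
    seg = slice(breaks[i], breaks[i + 1])
    lam, beta = fit_crow_amsaa(days[seg], cum_cases[seg])
    # beta > 1: cumulative curve still accelerating; beta < 1: curve flattening.
    print(f"phase {i + 1}: lambda = {lam:.3f}, beta = {beta:.3f}")

With these hypothetical numbers the fitted \u03b2 drops from well above 1 in the early growth phase to below 1 in the plateau phase, mirroring how the slope is interpreted above as the speed of transmission.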