{ "instances": [ { "instance_id": "R108331xR108301", "comparison_id": "R108331", "paper_id": "R108301", "text": "A notation for Knowledge-Intensive Processes Business process modeling has become essential for managing organizational knowledge artifacts. However, this is not an easy task, especially when it comes to the so-called Knowledge-Intensive Processes (KIPs). A KIP comprises activities based on acquisition, sharing, storage, and (re)use of knowledge, as well as collaboration among participants, so that the amount of value added to the organization depends on process agents' knowledge. The previously developed Knowledge Intensive Process Ontology (KIPO) structures all the concepts (and relationships among them) to make a KIP explicit. Nevertheless, KIPO does not include a graphical notation, which is crucial for KIP stakeholders to reach a common understanding about it. This paper proposes the Knowledge Intensive Process Notation (KIPN), a notation for building knowledge-intensive processes graphical models." }, { "instance_id": "R108331xR108312", "comparison_id": "R108331", "paper_id": "R108312", "text": "Rapid knowledge work visualization for organizations Purpose \u2013 The purpose of this contribution is to motivate a new, rapid approach to modeling knowledge work in organizational settings and to introduce a software tool that demonstrates the viability of the envisioned concept.Design/methodology/approach \u2013 Based on existing modeling structures, the KnowFlow toolset that aids knowledge analysts in rapidly conducting interviews and in conducting multi\u2010perspective analysis of organizational knowledge work is introduced.Findings \u2013 This article demonstrates how rapid knowledge work visualization can be conducted largely without human modelers by developing an interview structure that allows for self\u2010service interviews. 
Two application scenarios illustrate the pressing need for and the potentials of rapid knowledge work visualizations in organizational settings.Research limitations/implications \u2013 The efforts necessary for traditional modeling approaches in the area of knowledge management are often prohibitive. This contribution argues that future research needs ..." }, { "instance_id": "R108331xR108316", "comparison_id": "R108331", "paper_id": "R108316", "text": "Analyzing Knowledge Transfer Effectiveness--An Agent-Oriented Modeling Approach Facilitating the transfer of knowledge between knowledge workers represents one of the main challenges of knowledge management. Knowledge transfer instruments, such as the experience factory concept, represent means for facilitating knowledge transfer in organizations. As past research has shown, effectiveness of knowledge transfer instruments strongly depends on their situational context, on the stakeholders involved in knowledge transfer, and on their acceptance, motivation and goals. In this paper, we introduce an agent-oriented modeling approach for analyzing the effectiveness of knowledge transfer instruments in the light of (potentially conflicting) stakeholders' goals. We apply this intentional approach to the experience factory concept and analyze under which conditions it can fail, and how adaptations to the experience factory can be explored in a structured way" }, { "instance_id": "R108331xR108296", "comparison_id": "R108331", "paper_id": "R108296", "text": "B-KIDE: a framework and a tool for business process-oriented knowledge infrastructure development The need for an effective management of knowledge is gaining increasing recognition in today's economy. To acknowledge this fact, new promising and powerful technologies have emerged from industrial and academic research. 
With these innovations maturing, organizations are increasingly willing to adapt such new knowledge management technologies to improve their knowledge-intensive businesses. However, the successful application in given business contexts is a complex, multidimensional challenge and a current research topic. Therefore, this contribution addresses this challenge and introduces a framework for the development of business process-supportive, technological knowledge infrastructures. While business processes represent the organizational setting for the application of knowledge management technologies, knowledge infrastructures represent a concept that can enable knowledge management in organizations. The B-KIDE Framework introduced in this work provides support for the development of knowledge infrastructures that comprise innovative knowledge management functionality and are visibly supportive of an organization's business processes. The developed B-KIDE Tool eases the application of the B-KIDE Framework for knowledge infrastructure developers. Three empirical studies that were conducted with industrial partners from heterogeneous industry sectors corroborate the relevance and viability of the introduced concepts. Copyright \u00a9 2005 John Wiley & Sons, Ltd." }, { "instance_id": "R108331xR108292", "comparison_id": "R108331", "paper_id": "R108292", "text": "Process Oriented Knowledge Management: A Service Based Approach This paper introduces a new viewpoint in knowledge management by introducing KM-Services as a basic concept for Knowledge Management. This text discusses the vision of service oriented knowledge management (KM) as a realisation approach of process oriented knowledge management. In the following process oriented knowledge management as it was defined in the EU-project PROMOTE (IST-1999-11658) is presented and the KM-Service approach to realise process oriented knowledge management is explained. 
The last part is concerned with an implementation scenario that uses Web-technology to realise a service framework for a KM-system." }, { "instance_id": "R108331xR108325", "comparison_id": "R108331", "paper_id": "R108325", "text": "Modeling Knowledge Work for the Design of Knowledge Infrastructures During the last years, a large number of information and communication technologies (ICT) have been proposed to be supportive of knowledge management (KM). Several KM instruments have been developed and implemented in many organizations that require support by ICT. Recently, many of these technologies are bundled in the form of comprehensive, enterprise-wide knowledge infrastructures. The implementation of both, instruments and infrastructures, requires adequate modeling techniques that consider the specifics of modeling context in knowledge work. The paper studies knowledge work, KM instruments and knowledge infrastructures. Modeling techniques are reviewed, especially for business process management and activity theory. The concept of knowledge stance is discussed in order to relate functions from process models to actions from activity theory, thus detailing the context relevant for knowledge work." 
}, { "instance_id": "R108332xR108312", "comparison_id": "R108332", "paper_id": "R108312", "text": "Rapid knowledge work visualization for organizations Purpose \u2013 The purpose of this contribution is to motivate a new, rapid approach to modeling knowledge work in organizational settings and to introduce a software tool that demonstrates the viability of the envisioned concept.Design/methodology/approach \u2013 Based on existing modeling structures, the KnowFlow toolset that aids knowledge analysts in rapidly conducting interviews and in conducting multi\u2010perspective analysis of organizational knowledge work is introduced.Findings \u2013 This article demonstrates how rapid knowledge work visualization can be conducted largely without human modelers by developing an interview structure that allows for self\u2010service interviews. Two application scenarios illustrate the pressing need for and the potentials of rapid knowledge work visualizations in organizational settings.Research limitations/implications \u2013 The efforts necessary for traditional modeling approaches in the area of knowledge management are often prohibitive. This contribution argues that future research needs ..." }, { "instance_id": "R108332xR108301", "comparison_id": "R108332", "paper_id": "R108301", "text": "A notation for Knowledge-Intensive Processes Business process modeling has become essential for managing organizational knowledge artifacts. However, this is not an easy task, especially when it comes to the so-called Knowledge-Intensive Processes (KIPs). A KIP comprises activities based on acquisition, sharing, storage, and (re)use of knowledge, as well as collaboration among participants, so that the amount of value added to the organization depends on process agents' knowledge. The previously developed Knowledge Intensive Process Ontology (KIPO) structures all the concepts (and relationships among them) to make a KIP explicit. 
Nevertheless, KIPO does not include a graphical notation, which is crucial for KIP stakeholders to reach a common understanding about it. This paper proposes the Knowledge Intensive Process Notation (KIPN), a notation for building knowledge-intensive processes graphical models." }, { "instance_id": "R108332xR108328", "comparison_id": "R108332", "paper_id": "R108328", "text": "Modeling Techniques for Knowledge Management: Knowledge management is an umbrella concept for different management tasks and activities. Various modeling abstractions and techniques have been developed providing specialized support for different knowledge management tasks. This article gives an overview of modeling abstractions that are frequently discussed in the knowledge management literature as well as some promising techniques in a mature research state. Six groups of modeling techniques are presented and additionally evaluated with respect to their suitability for different fields of applications within the knowledge management domain." }, { "instance_id": "R108332xR108325", "comparison_id": "R108332", "paper_id": "R108325", "text": "Modeling Knowledge Work for the Design of Knowledge Infrastructures During the last years, a large number of information and communication technologies (ICT) have been proposed to be supportive of knowledge management (KM). Several KM instruments have been developed and implemented in many organizations that require support by ICT. Recently, many of these technologies are bundled in the form of comprehensive, enterprise-wide knowledge infrastructures. The implementation of both, instruments and infrastructures, requires adequate modeling techniques that consider the specifics of modeling context in knowledge work. The paper studies knowledge work, KM instruments and knowledge infrastructures. Modeling techniques are reviewed, especially for business process management and activity theory. 
The concept of knowledge stance is discussed in order to relate functions from process models to actions from activity theory, thus detailing the context relevant for knowledge work." }, { "instance_id": "R108332xR108316", "comparison_id": "R108332", "paper_id": "R108316", "text": "Analyzing Knowledge Transfer Effectiveness--An Agent-Oriented Modeling Approach Facilitating the transfer of knowledge between knowledge workers represents one of the main challenges of knowledge management. Knowledge transfer instruments, such as the experience factory concept, represent means for facilitating knowledge transfer in organizations. As past research has shown, effectiveness of knowledge transfer instruments strongly depends on their situational context, on the stakeholders involved in knowledge transfer, and on their acceptance, motivation and goals. In this paper, we introduce an agent-oriented modeling approach for analyzing the effectiveness of knowledge transfer instruments in the light of (potentially conflicting) stakeholders' goals. We apply this intentional approach to the experience factory concept and analyze under which conditions it can fail, and how adaptations to the experience factory can be explored in a structured way" }, { "instance_id": "R108332xR108292", "comparison_id": "R108332", "paper_id": "R108292", "text": "Process Oriented Knowledge Management: A Service Based Approach This paper introduces a new viewpoint in knowledge management by introducing KM-Services as a basic concept for Knowledge Management. This text discusses the vision of service oriented knowledge management (KM) as a realisation approach of process oriented knowledge management. In the following process oriented knowledge management as it was defined in the EU-project PROMOTE (IST-1999-11658) is presented and the KM-Service approach to realise process oriented knowledge management is explained. 
The last part is concerned with an implementation scenario that uses Web-technology to realise a service framework for a KM-system." }, { "instance_id": "R108358xR108141", "comparison_id": "R108358", "paper_id": "R108141", "text": "Iron Oxides Mapping from E0-1 Hyperion Data The spatial and spectral capabilities of hyperion were considered highly suitable for mineral potential mapping. This study aimed to map iron oxide minerals from hyperion data set by following a procedure involving data pre-processing, atmospheric calibration and image classification. By several steps, the noise in the spectral and spatial domains was removed. These include the angular shift, along-track de-striping to remove the vertical stripes from the data set and reducing low-frequency spectral effect (smile). The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm in combination with the radiance transfer code MODTRAN4 was applied for quantification and removal of the atmospheric effect and retrieval of the surface reflectance. The reflectance values were compared with the spectra obtained from USGS spectral library and with the spectra obtained from radiometric measurements. Results derived from the visible and near infrared and shortwave infrared bands of hyperion between 400 to 2500nm represented iron oxide minerals signature at 465, 650 and 850\u2013950 nm." }, { "instance_id": "R108358xR108138", "comparison_id": "R108358", "paper_id": "R108138", "text": "Mapping the wavelength position of deepest absorption features to explore mineral diversity in hyperspectral images A new method is presented for the exploratory analysis of hyperspectral OMEGA imagery of Mars. It involves mapping the wavelength position and depth of the deepest absorption feature in the range between 2.1 and 2.4 \u00b5m, where reflectance spectra of minerals such as phyllosilicates, carbonates and sulphates contain diagnostic absorption features. 
For each pixel of the image, the wavelength position maps display the wavelength position of the deepest absorption feature in color and its depth in intensity. This can be correlated with (groups of) minerals and their occurrences. To test the validity of the method, comparisons were made between wavelength position maps calculated from OMEGA images of the Nili Fossae area at two different spatial resolutions, of 0.95 and 2.2 km, and five CRISM images in targeted mode, at 18 m spatial resolution. The wavelength positions and their spatial patterns in the two OMEGA images were generally similar, except that the higher spatial resolution OMEGA image showed a larger diversity of wavelength positions and more spatial detail than the lower resolution OMEGA image. Patterns formed by groups of pixels with relatively deep absorption features between 2.250 and 2.350 \u00b5m in the OMEGA imagery were in agreement with the patterns calculated from the CRISM imagery. The wavelength positions of clusters of similar pixels in the wavelength position maps are consistent with groups of minerals that have been described elsewhere in the literature. We conclude that mapping the wavelength position of the deepest absorption features between 2.1 and 2.4 \u00b5m provides a useful method for exploratory analysis of the surface mineralogy of Mars with hyperspectral OMEGA imagery. The method provides a synoptic spatial view of the spectral diversity in one single image. It is complementary to the use of summary products, which many researchers have been using for assessment of the information content of OMEGA imagery. The results of the exploratory analysis can be used as input for the construction of surface mineralogical maps. The wavelength position mapping method itself is equally applicable to other terrestrial and planetary data sets and will be particularly useful in areas where field validation is sparse and with imagery containing shallow spectral features." 
}, { "instance_id": "R108358xR108150", "comparison_id": "R108358", "paper_id": "R108150", "text": "Improved k-means and spectral matching for hyperspectral mineral mapping Abstract Mineral mapping is an important step for the development and utilization of mineral resources. The emergence of remote sensing technology, especially hyperspectral imagery, has paved a new approach to geological mapping. The k-means clustering algorithm is a classical approach to classifying hyperspectral imagery, but the influence of mixed pixels and noise mean that it usually has poor mineral mapping accuracy. In this study, the mapping accuracy of the k-means algorithm was improved in three ways: similarity measurement methods that are insensitive to dimensions are used instead of the Euclidean distance for clustering; the spectral absorption features of minerals are enhanced; and the mineral mapping results are combined as the number of cluster centers (K) is incremented from 1. The improved algorithm is used with combined spectral matching to match the clustering results with a spectral library. A case study on Cuprite, Nevada, demonstrated that the improved k-means algorithm can identify most minerals with the kappa value of over 0.8, which is 46% and 15% higher than the traditional k-means and spectral matching technology. New mineral types are more likely to be found with increasing K. When K is much greater than the number of mineral types, the accuracy is improved, and the mineral mapping results are independent of the similarity measurement method. The improved k-means algorithm can also effectively remove speckle noise from the mineral mapping results and be used to identify other objects." 
}, { "instance_id": "R108358xR108135", "comparison_id": "R108358", "paper_id": "R108135", "text": "Mapping of hydrothermally altered rocks by the EO-1 Hyperion sensor, Northern Danakil Depression, Eritrea An EO\u20101 Hyperion scene was used to identify and map hydrothermally altered rocks and a Precambrian metamorphic sequence at and around the Alid volcanic dome, at the northern Danakil Depression, Eritrea. Mapping was coupled with laboratory analyses, including reflectance measurements, X\u2010ray diffraction, and petrographic examination of selected rock samples. Thematic maps were compiled from the dataset, which was carefully pre\u2010processed to evaluate and to correct interferences in the data. Despite the difficulties, lithological mapping using narrow spectral bands proved possible. A spectral signature attributed to ammonium was detected in the laboratory measurements of hydrothermally altered rocks from Alid. This was expressed as spectral absorption clues in the atmospherically corrected cube, at the known hydrothermally altered areas. The existence of ammonium in hydrothermally altered rocks within the Alid dome has been confirmed by previous studies. Spectral information of endmember's mineralogy found in the area (e.g. dolomite) enables a surface mineral map to be produced that stands in good agreement with the known geology along the overpass. These maps are the first hyperspectral overview of the surface mineralogy in this arid terrain and may be used as a base for future studies of remote areas such as the Danakil." }, { "instance_id": "R108358xR108147", "comparison_id": "R108358", "paper_id": "R108147", "text": "Potential of airborne hyperspectral data for geo-exploration over parts of different geological/metallogenic provinces in India based on AVIRIS-NG observations Satadru Bhattacharya*, Hrishikesh Kumar, Arindam Guha, Aditya K. Dagar, Sumit Pathak, Komal Rani (Pasricha), S. Mondal, K. Vinod Kumar, William Farrand, Snehamoy Chatterjee, S. 
Ravi, A. K. Sharma and A. S. Rajawat Space Applications Centre, Indian Space Research Organisation, Ahmedabad 380 015, India National Remote Sensing Centre, Indian Space Research Organisation, Hyderabad 500 042, India Department of Geophysics, Indian Institute of Technology (ISM), Dhanbad 826 004, India Space Science Institute, Boulder, Colorado 80301, USA Department of Geological and Mining Engineering and Sciences, Michigan Technological University, Houghton, Michigan 49931, USA Geological Survey of India Training Institute, Bandlaguda, Hyderabad 500 068, India" }, { "instance_id": "R108358xR108126", "comparison_id": "R108358", "paper_id": "R108126", "text": "The Performance of the Satellite-borne Hyperion Hyperspectral VNIR-SWIR Imaging System for Mineral Mapping at Mount Fitton, South Australia Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. 
In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations." }, { "instance_id": "R108358xR108129", "comparison_id": "R108358", "paper_id": "R108129", "text": "Comparison of Airborne Hyperspectral Data and EO-1 Hyperion for Mineral Mapping Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4\u20132.5 \u00b5m range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. 
Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS." }, { "instance_id": "R109612xR109573", "comparison_id": "R109612", "paper_id": "R109573", "text": "N2 Fixation in the Eastern Arabian Sea: Probable Role of Heterotrophic Diazotrophs Biogeochemical implications of global imbalance between the rates of marine dinitrogen (N2) fixation and denitrification have spurred us to understand the former process in the Arabian Sea, which contributes considerably to the global nitrogen budget. Heterotrophic bacteria have gained recent appreciation for their major role in marine N budget by fixing a significant amount of N2. Accordingly, we hypothesize a probable role of heterotrophic diazotrophs from the 15N2 enriched isotope labelling dark incubations that witnessed rates comparable to the light incubations in the eastern Arabian Sea during spring 2010. Maximum areal rates (8 mmol N m-2 d-1) were the highest ever observed anywhere in world oceans. Our results suggest that the eastern Arabian Sea gains ~92% of its new nitrogen through N2 fixation. 
Our results are consistent with the observations made in the same region in the preceding year, i.e., during the spring of 2009." }, { "instance_id": "R109612xR109583", "comparison_id": "R109612", "paper_id": "R109583", "text": "Severe phosphate limitation on nitrogen fixation in the Bay of Bengal Abstract Several anticyclonic (ACE) and cyclonic (CE) eddies constitute the circulation in the Bay of Bengal (BoB) and are associated with the downwelling and upwelling processes leading to oligotrophic and eutrophic conditions respectively. In this study, the nitrogen (N2) fixation rates and controlling factors are estimated through deck incubation experiments in the BoB using the enriched N2 gas dissolution method. We observed measurable concentrations of dissolved inorganic nitrogen (DIN) and phosphate in the CE and close to detection limits in the ACE in the mixed layer. Photic zone integrated N2 fixation rates ranged between 53.3 and 194.1 \u03bcmol m\u22122 d\u22121 with lower rates in the ACE (91 \u00b1 18 \u03bcmol m\u22122 d\u22121) than the CE (162 \u00b1 28 \u03bcmol m\u22122 d\u22121) and no-eddy regions (NE; 138 \u00b1 27 \u03bcmol m\u22122 d\u22121). The photic zone integrated N2 fixation rates are linearly correlated with photic zone integrated chlorophyll a and the mean phosphate concentrations in the photic zone, suggesting that phosphate controls N2 fixation in the BoB. The observed high N:P ratio (25 \u00b1 3) also indicates severe phosphate limitation in the BoB. This is further confirmed by a 1.2- to 8-fold increase in N2 fixation rates upon artificial phosphate addition relative to the rates at in situ phosphate concentrations. This study suggests that although conditions are conducive for N2 fixation in the BoB, the removal of dissolved phosphate within the estuaries opening into the BoB, weaker subsurface inputs due to stratification, and lower inputs from atmospheric dust may limit N2 fixation in the BoB." 
}, { "instance_id": "R109612xR109569", "comparison_id": "R109612", "paper_id": "R109569", "text": "An extensive bloom of the N2-fixing cyanobacterium Trichodesmium erythraeum in the central Arabian Sea We encountered an extensive surface bloom of the N2-fixing cyanobacterium Trichodesmium erythraeum in the central basin of the Arabian Sea during the spring intermonsoon of 1995. The bloom, which occurred during a period of calm winds and relatively high atmospheric iron content, was metabolically active. Carbon fixation by the bloom represented about one-quarter of water column primary productivity, while input by N2 fixation could account for a major fraction of the estimated 'new' N demand of primary production. Isotopic measurements of the N in surface suspended material confirmed a direct contribution of N2 fixation to the organic nitrogen pools of the upper water column. Retrospective analysis of NOAA-12 AVHRR imagery indicated that blooms covered up to 2 \u00d7 10^6 km^2, or 20% of the Arabian Sea surface, during the period from 22 to 27 May 1995. In addition to their biogeochemical impact, surface blooms of this extent may have secondary effects on sea surface albedo and light penetration as well as heat and gas exchange across the air-sea interface. A preliminary extrapolation based on our observed, non-bloom rates of N2 fixation from our limited sampling in the spring intermonsoon, including a conservative estimate of the input by blooms, suggests N2 fixation may account for an input of about 1 Tg N yr\u22121. This is substantial, but relatively minor compared to current estimates of the removal of N through denitrification in the basin. However, N2 fixation may also occur in the central basin through the mild winter monsoon, be considerably greater during the fall intermonsoon than we observed during the spring intermonsoon, and may also occur at higher levels in the chronically oligotrophic southern basin. 
Ongoing satellite observations will help to determine more accurately the distribution and density of Trichodesmium in this and other tropical oceanic basins, as well as resolving the actual frequency and duration of bloom occurrence." }, { "instance_id": "R109612xR109577", "comparison_id": "R109612", "paper_id": "R109577", "text": "Nitrogen Uptake Dynamics in a Tropical Eutrophic Estuary (Cochin, India) and Adjacent Coastal Waters Quantification of nitrogen (N) transformation rates in tropical estuarine-coastal water coupled systems undergoing anthropogenic disturbances is scant. A thorough understanding of these metabolic rates is required to evolve a mitigation strategy to save such systems from further degradation. Here, we report the first measurements of ammonium (NH4+) and nitrate (NO3\u2212) uptake along with N2 fixation rates in the Cochin estuary, a tropical eutrophic ecosystem along the west coast of India, and two transects (off Cochin and off Mangalore) in the coastal Arabian Sea. In general, the Cochin estuary sustained higher uptake rates of NH4+ (0.32\u20130.91 \u03bcmol N l\u22121 h\u22121) and NO3\u2212 (0.01\u20130.38 \u03bcmol N l\u22121 h\u22121) compared to coastal waters. The N uptake in the nearshore waters of Cochin transect (NH4+ : 0.34 \u03bcmol N l\u22121 h\u22121 and NO3\u2212 : 0.18 \u03bcmol N l\u22121 h\u22121) was influenced more by estuarine discharge than was the Mangalore transect (NH4+ : 0.02 \u03bcmol N l\u22121 h\u22121 and NO3\u2212 : 0.03 \u03bcmol N l\u22121 h\u22121). Despite high dissolved inorganic nitrogen (DIN) concentrations, the Cochin estuary also showed higher N2 fixation rates (0.59\u20131.31 nmol N l\u22121 h\u22121) than the coastal waters (0.33\u20130.55 nmol N l\u22121 h\u22121). NH4+ was the preferred substrate for phytoplankton growth, both in the Cochin estuary and coastal waters, indicating the significance of regenerative processes in primary production. 
A significant negative correlation between the total nitrogen (TN) to total phosphorus (TP) ratio and NH4+ uptake (as well as N2 fixation) rates in the estuary suggests that nutrient stoichiometry plays a major role in modulating N transformation rates in the Cochin estuary." }, { "instance_id": "R109612xR109575", "comparison_id": "R109612", "paper_id": "R109575", "text": "Surplus supply of bioavailable nitrogen through N2 fixation to primary producers in the eastern Arabian Sea during autumn Abstract Diazotrophs have received recent appreciation as a major source of bioavailable nitrogen in the global oceans. They mostly flourish in warm, stratified, calm and nutrient-depleted conditions in the ocean. Such conditions prevail during the spring and autumn seasons in the Arabian Sea. Some previous experimental studies conducted during spring have suggested the highest rates of N2 fixation among the world's oceans in the eastern Arabian Sea, but there are no such records during autumn. In addition, modelling studies have suggested high rates of annual N2 fixation in the Arabian Sea. In this study, we conducted isotope labeling incubation experiments in the eastern Arabian Sea in autumn 2010 to estimate N2 fixation rates and primary production. Unlike previous studies conducted in this region, we did not witness any diazotrophic bloom, but our N2 fixation rates (1300\u20132500 \u03bcmol N m\u22122 d\u22121) were still comparable to the rates reported in previous studies conducted in spring, and among the highest rates observed in the global oceans. Our data suggest an important role of excess phosphate in sustaining N2 fixation during autumn. Most intriguingly, our study shows that N2 fixation supplies a surplus amount of the bioavailable nitrogen required for primary producers during autumn in this region." 
}, { "instance_id": "R109612xR109392", "comparison_id": "R109612", "paper_id": "R109392", "text": "First direct measurements of N2 fixation during a Trichodesmium bloom in the eastern Arabian Sea We report the first direct estimates of N2 fixation rates measured during spring 2009 using the 15N2 gas tracer technique in the eastern Arabian Sea, which is well known for significant loss of nitrogen due to intense denitrification. Carbon uptake rates are also concurrently estimated using the 13C tracer technique. The N2 fixation rates vary from 0.1 to 34 mmol N m\u22122 d\u22121 after correcting for the isotopic under-equilibrium with dissolved air in the samples. These higher N2 fixation rates are consistent with higher chlorophyll a and low \u03b415N of natural particulate organic nitrogen. Our estimates of N2 fixation are a useful step toward reducing the uncertainty in the nitrogen budget." }, { "instance_id": "R109612xR109394", "comparison_id": "R109612", "paper_id": "R109394", "text": "Heterotrophic bacteria as major nitrogen fixers in the euphotic zone of the Indian Ocean Diazotrophy in the Indian Ocean is poorly understood compared to that in the Atlantic and Pacific Oceans. We first examined the basin-scale community structure of diazotrophs and their nitrogen fixation activity within the euphotic zone during the northeast monsoon period along about 69\u00b0E from 17\u00b0N to 20\u00b0S in the oligotrophic Indian Ocean, where a shallow nitracline (49\u201359 m) prevailed widely and the sea surface temperature (SST) was above 25\u00b0C. Phosphate was detectable at the surface throughout the study area. The dissolved iron concentration and the ratio of iron to nitrate + nitrite at the surface were significantly higher in the Arabian Sea than in the equatorial and southern Indian Ocean. 
Nitrogen fixation in the Arabian Sea (24.6\u201347.1 \u03bcmol N m\u22122 d\u22121) was also significantly greater than that in the equatorial and southern Indian Ocean (6.27\u201316.6 \u03bcmol N m\u22122 d\u22121), indicating that iron could control diazotrophy in the Indian Ocean. Phylogenetic analysis of nifH showed that most diazotrophs belonged to the Proteobacteria and that cyanobacterial diazotrophs were absent in the study area except in the Arabian Sea. Furthermore, nitrogen fixation was not associated with light intensity throughout the study area. These results are consistent with nitrogen fixation in the Indian Ocean being largely performed by heterotrophic bacteria and not by cyanobacteria. The low cyanobacterial diazotrophy was attributed to the shallow nitracline, which is rarely observed in the Pacific and Atlantic oligotrophic oceans. Because the shallower nitracline favored enhanced upward nitrate flux, the competitive advantage of cyanobacterial diazotrophs over nondiazotrophic phytoplankton was not as significant as it is in other oligotrophic oceans." }, { "instance_id": "R109904xR109875", "comparison_id": "R109904", "paper_id": "R109875", "text": "A survey of Chinese interpreting studies: who influences who \u2026and why? This paper describes how scholars in Chinese Interpreting Studies (CIS) interact with each other and form discrete circles of influence. It also discusses what it means to be an influential scholar in the community and the relationship between an author\u2019s choice of research topic and his academic influence. The study examines an all-but-exhaustive collection of 59,303 citations from 1,289 MA theses, 32 doctoral dissertations and 2,909 research papers, combining traditional citation analysis with the newer Social Network Analysis to paint a panorama of CIS. 
It concludes that the community cannot be broadly divided into Liberal Arts and Empirical Science camps; rather, it comprises several distinct communities with various defining features. The analysis also reveals that the top Western influencers have an array of academic backgrounds and research interests across many different disciplines, whereas their Chinese counterparts are predominantly focused on Interpreting Studies. Last but not least, there is found to be a positive correlation between choosing non-mainstream research topics and having a high level of academic influence in the community." }, { "instance_id": "R109904xR109886", "comparison_id": "R109904", "paper_id": "R109886", "text": "Measuring the academic reputation through citation networks via PageRank The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of University Rankings have been proposed to quantify the excellence of different research institutions in the world. Albeit met with criticism in some cases, the relevance of university rankings is being increasingly acknowledged: indeed, rankings are having a major impact on the design of research policies, both at the institutional and governmental level. Yet, the debate on what exactly rankings are measuring is enduring. Here, we address the issue by measuring a quantitative and reliable proxy of the academic reputation of a given institution and by evaluating its correlation with different university rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field. 
Our results allow us to quantify the prestige of a set of institutions in a certain research field based only on hard bibliometric data. Given the volume of the data analysed, our findings are statistically robust and less prone to bias, at odds with ad hoc surveys often employed by ranking bodies in order to attain similar results. Because our findings correlate extremely well with the ARWU Subject rankings, the approach we propose in our paper may open the door to new academic ranking methodologies that go beyond current methods by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact." }, { "instance_id": "R109904xR109869", "comparison_id": "R109904", "paper_id": "R109869", "text": "Network Analysis and Indicators Networks have for a long time been used both as a metaphor and as a method for studying science. With the advent of very large data sets and the increase in computational power, network analysis became more prevalent in the studies of science in general and the studies of science indicators in particular. For the purposes of this chapter science indicators are broadly defined as \u201cmeasures of changes in aspects of science\u201d (Elkana et al., Toward a metric of science: The advent of science indicators, John Wiley & Sons, New York, 1978). The chapter covers network science-based indicators related to both the social and the cognitive aspects of science. Particular emphasis is placed on different centrality measures. Articles published in the journal Scientometrics over a 10-year period (2003\u20132012) were used to show how the indicators can be computed in coauthorship and citation networks." }, { "instance_id": "R109904xR109854", "comparison_id": "R109904", "paper_id": "R109854", "text": "A systematic metadata harvesting workflow for analysing scientific networks One of the disciplines behind the science of science is the study of scientific networks. 
This work focuses on scientific networks as a social network having different nodes and connections. Nodes can be represented by authors, articles or journals while connections by citation, co-citation or co-authorship. One of the challenges in creating scientific networks is the lack of a publicly available comprehensive data set. It limits the variety of analyses on the same set of nodes of different scientific networks. To supplement such analyses we have worked on publicly available citation metadata from Crossref and OpenCitations. Using this data a workflow is developed to create scientific networks. Analysis of these networks gives insights into academic research and scholarship. Different techniques of social network analysis have been applied in the literature to study these networks. These include centrality analysis, community detection, and clustering coefficient. We have used metadata of the Scientometrics journal, as a case study, to present our workflow. We did a sample run of the proposed workflow to identify prominent authors using centrality analysis. This work is not a bibliometric study of any field; rather, it presents replicable Python scripts to perform network analysis. With an increase in the popularity of open access and open metadata, we hypothesise that this workflow shall provide an avenue for understanding scientific scholarship in multiple dimensions." }, { "instance_id": "R109904xR109863", "comparison_id": "R109904", "paper_id": "R109863", "text": "Betweenness centrality as a driver of preferential attachment in the evolution of research collaboration networks We analyze whether preferential attachment in scientific coauthorship networks is different for authors with different forms of centrality. 
Using a complete database for the scientific specialty of research about \u201csteel structures,\u201d we show that betweenness centrality of an existing node is a significantly better predictor of preferential attachment by new entrants than degree or closeness centrality. During the growth of a network, preferential attachment shifts from (local) degree centrality to betweenness centrality as a global measure. An interpretation is that supervisors of PhD projects and postdocs broker between new entrants and the already existing network, and thus become focal to preferential attachment. Because of this mediation, scholarly networks can be expected to develop differently from networks which are predicated on preferential attachment to nodes with high degree centrality." }, { "instance_id": "R109904xR109872", "comparison_id": "R109904", "paper_id": "R109872", "text": "PageRank-Related Methods for Analyzing Citation Networks A central question in citation analysis is how the most important or most prominent nodes in a citation network can be identified. Many different approaches have been proposed to address this question. In this chapter, we focus on approaches that assess the importance of a node in a citation network based not just on the local structure of the network but instead on the network\u2019s global structure. For instance, rather than just counting the number of citations a journal has received, these approaches also take into account from which journals the citations originate and how often these citing journals have been cited themselves. The methods that we study are closely related to the well-known PageRank method for ranking web pages. We therefore start by discussing the PageRank method, and we then review the work that has been done in the field of citation analysis on similar types of methods. 
In the second part of the chapter, we provide a tutorial in which we demonstrate how PageRank calculations can be performed for citation networks constructed based on data from the Web of Science database. The Sci2 tool is used to construct citation networks, and MATLAB is used to perform PageRank calculations." }, { "instance_id": "R109904xR109882", "comparison_id": "R109904", "paper_id": "R109882", "text": "Predicting the research performance of early career scientists This paper examines how early career-related factors can predict the future research performance of computer and information scientists. Although a few bibliometric studies have previously investigated multiple factors relating to early career scientists that significantly predict their future research performance, there have been limited studies on early career-related factors affecting scientists in the fields of information science and computer science. This study analyzes 4102 scientists whose publishing careers started in the same year. The criteria used to quantify future research performance of the target scientists included the number of publications and citation counts of publications in a 4-year citation window to indicate future research productivity and research impact, respectively. These criteria were regressed on 13 early career-related factors. The results showed that these factors accounted for about 27% and 23% of the future productivity of the target scientists in terms of journal articles and conference papers, respectively; these 13 factors were also responsible for 19% of the future impact of target scientists\u2019 journal articles and 19% of the future impact of their conference papers. The factor that most contributed to explaining the future research performance (i.e. publication numbers) and future research impact (i.e. 
citation counts of publications) was the number of publications (both journal articles and conference papers) produced by the target scientists in their early career years." }, { "instance_id": "R109904xR109889", "comparison_id": "R109904", "paper_id": "R109889", "text": "Structure and evolution of Indian physics co-authorship networks We trace the evolution of Indian physics community from 1919 to 2013 by analyzing the co-authorship network constructed from papers published by authors in India in American Physical Society (APS) journals. We make inferences on India\u2019s contribution to different branches of Physics and identify the most influential Indian physicists at different time periods. The relative contribution of India to global physics publication (research) and its variation across subfields of physics is assessed. We extract the changing collaboration pattern of authors between Indian physicists through various network measures. We study the evolution of Indian physics communities and trace the mean life and stationarity of communities by size in different APS journals. We map the transition of authors between communities of different sizes from 1970 to 2013, capturing their birth, growth, merger and collapse. We find that Indian\u2013Foreign collaborations are increasing at a faster pace compared to the Indian\u2013Indian. We observe that the degree distribution of Indian collaboration networks follows the power law, with distinct patterns between Physical Review A, B and E, and high energy physics journals Physical Review C and D, and Physical Review Letters. In almost every measure, we observe strong structural differences between low-energy and high-energy physics journals." 
}, { "instance_id": "R111045xR110858", "comparison_id": "R111045", "paper_id": "R110858", "text": "Multinuclear rare-earth metal complexes supported by chalcogen-based 1,2,3-triazole Abstract The reaction of MCp3 (M = Y, Nd, Sm, Gd) with the 4,5-bis(diphenylselenophosphoranyl)-1,2,3-triazole (LH) (1) in the presence of stoichiometric amounts of H2O afforded the trinuclear rare-earth metal complexes [(MCp)3(\u03bc3-O)L4] [M = Y (3), Nd (4), Sm (5), Gd (6)]. The unforeseen formation of these multimetallic systems stems from the protonolysis reactions of the intermediate dicyclopentadienyl rare-earth complexes MCp2L with H2O. This was confirmed by the transformation of YCp2L (2) to (YCp)3(\u03bc3-O)L4 (3) under controlled conditions. X-ray diffraction studies reveal that 3\u20136 possess a trinuclear [M3(\u03bc3-O)] core with M\u2013Se contacts featuring M\u22efM interactions. The magnitude of the M\u22efM separations is controlled by the constrictions imposed on the planar [M3(\u03bc3-O)] core by the surrounding M2ON2 and M2ON ring systems. DFT calculations were performed on 3, which was used as a model compound for the heavier rare-earth metals, providing insight into the nature of the Y\u2013Se and Y\u2013N contacts around the M3(\u03bc3-O) core." }, { "instance_id": "R111045xR110972", "comparison_id": "R111045", "paper_id": "R110972", "text": "Mixed Methyl Aryloxy Rare-Earth-Metal Complexes Stabilized by a Superbulky Tris(pyrazolyl)borato Ligand Various mixed methyl aryloxide complexes TptBu,MeLnMe(OAr) (Ln = Y, Lu) were obtained in moderate to high yields according to distinct synthesis protocols dependent on the metal size and sterics of the phenolic proligand. The reaction of TptBu,MeLuMe2 and TptBu,MeYMe(AlMe4) via protonolysis with 1 or 2 equiv HOC6H2tBu2-2,6-Me-4 in n-hexane gave the desired complexes TptBu,MeLnMe(OAr). 
Corresponding treatment of TptBu,MeLuMe2 with the sterically less demanding HOC6H3Me2-2,6, HOC6H3iPr2-2,6 and HOC6H3(CF3)2-3,5 led to the formation of the bis(aryloxy) lutetium complexes TptBu,MeLu(OAr)2. Application of a salt-metathesis protocol employing TptBu,MeLnMe(AlMe4) and the potassium aryloxides KOAr made complexes TptBu,MeLnMe(OAr) accessible for the smaller aryloxy ligands as well. All complexes were analyzed by X-ray crystallography to compare the terminal Ln\u2013Me bond lengths and to evaluate the implication of the methyl/aryloxy coordination for the exact cone angles \u0398\u00b0 of the [TptBu,Me] ancillary ligand. Treatmen..." }, { "instance_id": "R111045xR111005", "comparison_id": "R111045", "paper_id": "R111005", "text": "White-light emission from discrete heterometallic lanthanide-directed self-assembled complexes in solution

Herein, we have developed a white-light-emitting system based on the formation of discrete lanthanide-based self-assembled complexes using a newly-designed ligand. We demonstrate that fine tuning of the lanthanide ions molar ratio in the self-assemblies combined with the intrinsic blue fluorescence of the ligand allows for the successful emission of pure white light with CIE coordinates of (0.33, 0.34).

" }, { "instance_id": "R111045xR110929", "comparison_id": "R111045", "paper_id": "R110929", "text": "Synthesis of heterometallic chalcogenides containing lanthanide and group 13\u201315 metal elements Abstract Heterometallic chalcogenides are no longer limited to only transition metals or main group metal chalcogenides. Recent research has showed that combinations of lanthanides and heavier group 13\u201315 metals can also generate a new class of heterometallic chalcogenide materials. These materials are typically synthesized under alkali metal polychalcogenide flux or solvothermal conditions at intermediate temperatures. Under these mild and soft conditions, various metal chalcogenide building-blocks can be bound to lanthanide ions (or lanthanide complexes) to yield a fascinating variety of heterometallic chalcogenides, the dimensionality of which can be influenced by the ionic radii of the different constituents. This review discusses the synthesis, crystal structures, and optical properties of heterometallic chalcogenides containing lanthanide and group 13\u201315 metal elements in the presence of alkali metal polychalcogenide salt or organic chelating amines as a reaction medium." }, { "instance_id": "R111045xR110913", "comparison_id": "R111045", "paper_id": "R110913", "text": "Multinuclear Lanthanide-Implanted Tetrameric Dawson-Type Phosphotungstates with Switchable Luminescence Behaviors Induced by Fast Photochromism A series of benzoate-decorated lanthanide (Ln)-containing tetrameric Dawson-type phosphotungstates [N(CH3)4]6H20[{(P2W17O61)Ln(H2O)3Ln(C6H5COO)(H2O)6]}{[(P2W17O61)Ln(H2O)3}]2Cl2\u00b798H2O [Ln = Sm (1), Eu (2), and Gd (3)] were made using a facile one-step assembly strategy and characterized by several techniques. 
Notably, the Ln-containing tetrameric Dawson-type polyoxoanions [{(P2W17O61)Ln(H2O)3Ln(C6H5COO)(H2O)6]}{[(P2W17O61)Ln(H2O)3}]2, which carry an overall 24\u2212 charge, are all established by four monolacunary Dawson-type [P2W17O61]10- segments, each encapsulating a Ln3+ ion, with two benzoates coordinating to the Ln3+ ions. 1-3 exhibit reversible photochromism, changing from intrinsic white to blue within 6 min upon UV irradiation; their colors gradually recover over 30 h in the dark. The solid-state photoluminescence spectra of 1 and 2 display characteristic emissions of Ln components based on 4f-4f transitions. Time-resolved emission spectra of 1 and 2 were also measured to authenticate the energy transfer from the phosphotungstate and organic chromophores to Eu3+. In particular, 1 shows an effectively switchable luminescence behavior induced by its fast photochromism." }, { "instance_id": "R112387xR76818", "comparison_id": "R112387", "paper_id": "R76818", "text": "App Review Analysis Via Active Learning: Reducing Supervision Effort without Compromising Classification Accuracy Automated app review analysis is an important avenue for extracting a variety of requirements-related information. Typically, a first step toward performing such analysis is preparing a training dataset, where developers (experts) identify a set of reviews and, manually, annotate them according to a given task. Having sufficiently large training data is important for both achieving a high prediction accuracy and avoiding overfitting. Given millions of reviews, preparing a training set is laborious. We propose to incorporate active learning, a machine learning paradigm, in order to reduce the human effort involved in app review analysis. Our app review classification framework exploits three active learning strategies based on uncertainty sampling. We apply these strategies to an existing dataset of 4,400 app reviews for classifying app reviews as features, bugs, rating, and user experience. 
We find that active learning, compared to a training dataset chosen randomly, yields a significantly higher prediction accuracy under multiple scenarios." }, { "instance_id": "R112387xR112044", "comparison_id": "R112387", "paper_id": "R112044", "text": "Can app changelogs improve requirements classification from app reviews?: an exploratory study [Background] Recent research on mining app reviews for software evolution indicated that the elicitation and analysis of user requirements can benefit from supplementing user reviews by data from other sources. However, only a few studies reported results of leveraging app changelogs together with app reviews. [Aims] Motivated by those findings, this exploratory experimental study looks into the role of app changelogs in the classification of requirements derived from app reviews. We aim at understanding if the use of app changelogs can lead to more accurate identification and classification of functional and non-functional requirements from app reviews. We also want to know which classification technique works better in this context. [Method] We did a case study on the effect of app changelogs on automatic classification of app reviews. Specifically, manual labeling, text preprocessing, and four supervised machine learning algorithms were applied to a series of experiments, varying in the number of app changelogs in the experimental data. [Results] We compared the accuracy of requirements classification from app reviews, by training the four classifiers with varying combinations of app reviews and changelogs. Among the four algorithms, Na\u00efve Bayes was found to be more accurate for categorizing app reviews. [Conclusions] The results show that official app changelogs did not contribute to more accurate identification and classification of requirements from app reviews. In addition, Na\u00efve Bayes seems to be more suitable for our further research on this topic." 
}, { "instance_id": "R112387xR111979", "comparison_id": "R112387", "paper_id": "R111979", "text": "\"What Parts of Your Apps are Loved by Users?\" (T) Recently, Begel et al. found that one of the most important questions software developers ask is \"what parts of software are used/loved by users.\" User reviews provide an effective channel to address this question. However, most existing review summarization tools treat reviews as bags-of-words (i.e., mixed review categories) and are limited to extract software aspects and user preferences. We present a novel review summarization framework, SUR-Miner. Instead of a bags-of-words assumption, it classifies reviews into five categories and extracts aspects for sentences which include aspect evaluation using a pattern-based parser. Then, SUR-Miner visualizes the summaries using two interactive diagrams. Our evaluation on seventeen popular apps shows that SUR-Miner summarizes more accurate and clearer aspects than state-of-the-art techniques, with an F1-score of 0.81, significantly greater than that of ReviewSpotlight (0.56) and Guzmans' method (0.55). Feedback from developers shows that 88% developers agreed with the usefulness of the summaries from SUR-Miner." }, { "instance_id": "R112387xR108208", "comparison_id": "R112387", "paper_id": "R108208", "text": "Facilitating developer-user interactions with mobile app review digests As users are interacting with a large of mobile apps under various usage contexts, user involvements in an app design process has become a critical issue. Despite this fact, existing apps or app store platforms only provide a limited form of user involvements such as posting app reviews and sending email reports. While building a unified platform for facilitating user involvements with various apps is our ultimate goal, we present our preliminary work on handling developers' information overload attributed to a large number of app comments. 
To address this issue, we first perform a simple content analysis on app reviews from the developer's standpoint. We then propose an algorithm that automatically identifies informative reviews reflecting user involvements. The preliminary evaluation results document the efficiency of our algorithm." }, { "instance_id": "R112387xR78371", "comparison_id": "R112387", "paper_id": "R78371", "text": "Automatic Classification of Non-Functional Requirements from Augmented App User Reviews Context: The leading App distribution platforms, Apple App Store, Google Play, and Windows Phone Store, have over 4 million Apps. Research shows that user reviews contain abundant useful information which may help developers to improve their Apps. Extracting and considering Non-Functional Requirements (NFRs), which describe a set of quality attributes wanted for an App and are hidden in user reviews, can help developers to deliver a product which meets users' expectations. Objective: Developers need to be aware of the NFRs from massive user reviews during software maintenance and evolution. Automatic user reviews classification based on an NFR standard provides a feasible way to achieve this goal. Method: In this paper, user reviews were automatically classified into four types of NFRs (reliability, usability, portability, and performance), Functional Requirements (FRs), and Others. We combined four classification techniques BoW, TF-IDF, CHI2, and AUR-BoW (proposed in this work) with three machine learning algorithms Naive Bayes, J48, and Bagging to classify user reviews. We conducted experiments to compare the F-measures of the classification results through all the combinations of the techniques and algorithms. Results: We found that the combination of AUR-BoW with Bagging achieves the best result (a precision of 71.4%, a recall of 72.3%, and an F-measure of 71.8%) among all the combinations. 
Conclusion: Our finding shows that augmented user reviews can lead to better classification results, and the machine learning algorithm Bagging is more suitable for NFRs classification from user reviews than Na\u00efve Bayes and J48." }, { "instance_id": "R112387xR112021", "comparison_id": "R112387", "paper_id": "R112021", "text": "Analyzing and automatically labelling the types of user issues that are raised in mobile app reviews Mobile app reviews by users contain a wealth of information on the issues that users are experiencing. For example, a review might contain a feature request, a bug report, and/or a privacy complaint. Developers, users and app store owners (e.g. Apple, Blackberry, Google, Microsoft) can benefit from a better understanding of these issues \u2013 developers can better understand users\u2019 concerns, app store owners can spot anomalous apps, and users can compare similar apps to decide which ones to download or purchase. However, user reviews are not labelled, e.g. we do not know which types of issues are raised in a review. Hence, one must sift through potentially thousands of reviews with slang and abbreviations to understand the various types of issues. Moreover, the unstructured and informal nature of reviews complicates the automated labelling of such reviews. In this paper, we study the multi-labelled nature of reviews from 20 mobile apps in the Google Play Store and Apple App Store. We find that up to 30 % of the reviews raise various types of issues in a single review (e.g. a review might contain a feature request and a bug report). We then propose an approach that can automatically assign multiple labels to reviews based on the raised issues with a precision of 66 % and recall of 65 %. 
Finally, we apply our approach to address three proof-of-concept analytics use case scenarios: (i) we compare competing apps to assist developers and users, (ii) we provide an overview of 601,221 reviews from 12,000 apps in the Google Play Store to assist app store owners and developers and (iii) we detect anomalous apps in the Google Play Store to assist app store owners and users." }, { "instance_id": "R112387xR78432", "comparison_id": "R112387", "paper_id": "R78432", "text": "Software Feature Request Detection in Issue Tracking Systems Communication about requirements is often handled in issue tracking systems, especially in a distributed setting. As issue tracking systems also contain bug reports or programming tasks, the software feature requests of the users are often difficult to identify. This paper investigates natural language processing and machine learning features to detect software feature requests in natural language data of issue tracking systems. It compares traditional linguistic machine learning features, such as \"bag of words\", with more advanced features, such as subject-action-object, and evaluates combinations of machine learning features derived from the natural language and features taken from the issue tracking system meta-data. Our investigation shows that some combinations of machine learning features derived from natural language and the issue tracking system meta-data outperform traditional approaches. We show that issues or data fields (e.g. descriptions or comments), which contain software feature requests, can be identified reasonably well, but hardly the exact sentence. Finally, we show that the choice of machine learning algorithms should depend on the goal, e.g. maximization of the detection rate or balance between detection rate and precision. In addition, the paper contributes a double coded gold standard and an open-source implementation to further pursue this topic." 
}, { "instance_id": "R112387xR111923", "comparison_id": "R112387", "paper_id": "R111923", "text": "Same App, Different App Stores: A Comparative Study To attract more users, implementing the same mobile app for different platforms has become a common industry practice. App stores provide a unique channel for users to share feedback on the acquired apps through ratings and textual reviews. However, each mobile platform has its own online store for distributing apps to users. To understand the characteristics of and discrepancies in how users perceive the same app implemented for and distributed through different platforms, we present a large-scale comparative study of cross-platform apps. We mine the characteristics of 80,000 app-pairs (160K apps in total) from a corpus of 2.4 million apps collected from the Apple and Google Play app stores. We quantitatively compare their app-store attributes, such as stars, versions, and prices. We measure the aggregated user-perceived ratings and find many discrepancies across the platforms. Further, we employ machine learning to classify 1.7 million textual user reviews obtained from 2,000 of the mined app-pairs. We analyze discrepancies and root causes of user complaints to understand cross-platform development challenges that impact cross-platform user-perceived ratings. We also follow up with the developers to understand the reasons behind identified discrepancies." }, { "instance_id": "R112387xR78466", "comparison_id": "R112387", "paper_id": "R78466", "text": "How can I improve my app? Classifying user reviews for software maintenance and evolution App Stores, such as Google Play or the Apple Store, allow users to provide feedback on apps by posting review comments and giving star ratings. These platforms constitute a useful electronic mean in which application developers and users can productively exchange information about apps. 
Previous research showed that users' feedback contains usage scenarios, bug reports and feature requests that can help app developers to accomplish software maintenance and evolution tasks. However, in the case of the most popular apps, the large amount of received feedback, its unstructured nature and varying quality can make the identification of useful user feedback a very challenging task. In this paper we present a taxonomy to classify app reviews into categories relevant to software maintenance and evolution, as well as an approach that merges three techniques: (1) Natural Language Processing, (2) Text Analysis and (3) Sentiment Analysis to automatically classify app reviews into the proposed categories. We show that the combined use of these techniques allows achieving better results (a precision of 75% and a recall of 74%) than results obtained using each technique individually (precision of 70% and a recall of 67%)." }, { "instance_id": "R112387xR111969", "comparison_id": "R112387", "paper_id": "R111969", "text": "User Feedback from Tweets vs App Store Reviews: An Exploratory Study of Frequency, Timing and Content Context: User feedback on apps is essential for gauging market needs and maintaining a competitive edge in the mobile apps development industry. App Store Reviews have been a primary resource for this feedback; however, recent studies have observed that Twitter is another potentially valuable source for this information. Objective: The objective of this study is to assess user feedback from Twitter in terms of timing as well as content and compare with the App Store reviews. Method: This study employs various text analysis and Natural Language Processing methods such as semantic analysis and Latent Dirichlet Allocation (LDA) to analyze tweets and App Store Reviews. Additionally, supervised learning classifiers are used to classify them as semantically similar tweet and App Store reviews. 
Results: In spite of a difference in the magnitude between tweets and App Store Review counts, frequency analysis shows that bug reports and feature requests are discussed mostly on Twitter first, as the number of Tweets during the reporting time reached the peak a few days earlier. Likewise, timing analysis on a set of 426 tweets and 2,383 reviews (which are bug reports and feature requests) shows that approximately 15% appear on Twitter first. Of these 15% tweets, 72% are related to functional or behavioural aspects of the mobile app. Content analysis shows that user feedback in tweets mostly focuses on critical issues related to feature failure and improper functionality. Conclusion: The results of this investigation show that Twitter is not only a strong contender for useful information but also a faster source of information for mobile app improvement." }, { "instance_id": "R112387xR78455", "comparison_id": "R112387", "paper_id": "R78455", "text": "App store mining is not enough for app improvement The rise in popularity of mobile devices has led to a parallel growth in the size of the app store market, intriguing several research studies and commercial platforms on mining app stores. App store reviews are used to analyze different aspects of app development and evolution. However, app users\u2019 feedback does not only exist on the app store. In fact, despite the large quantity of posts that are made daily on social media, the importance and value that these discussions provide remain mostly unused in the context of mobile app development. In this paper, we study how Twitter can provide complementary information to support mobile app development. By analyzing a total of 30,793 apps over a period of six weeks, we found strong correlations between the number of reviews and tweets for most apps. 
Moreover, through applying machine learning classifiers, topic modeling and subsequent crowd-sourcing, we successfully mined 22.4% additional feature requests and 12.89% additional bug reports from Twitter. We also found that 52.1% of all feature requests and bug reports were discussed on both tweets and reviews. In addition to finding common and unique information from Twitter and the app store, sentiment and content analysis were also performed for 70 randomly selected apps. From this, we found that tweets provided more critical and objective views on apps than reviews from the app store. These results show that app store review mining is indeed not enough; other information sources ultimately provide added value and information for app developers." }, { "instance_id": "R112387xR77177", "comparison_id": "R112387", "paper_id": "R77177", "text": "Mining User Requirements from Application Store Reviews Using Frame Semantics Context and motivation: Research on mining user reviews in mobile application (app) stores has noticeably advanced in the past few years. The majority of the proposed techniques rely on classifying the textual description of user reviews into different categories of technically informative user requirements and uninformative feedback. Question/Problem: Relying on the textual attributes of reviews often produces high dimensional models. This increases the complexity of the classifier and can lead to overfitting problems. Principal ideas/results: We propose a novel semantic approach for app review classification. The proposed approach is based on the notion of semantic role labeling, or characterizing the lexical meaning of text in terms of semantic frames. Semantic frames help to generalize from text (individual words) to more abstract scenarios (contexts). This reduces the dimensionality of the data and enhances the predictive capabilities of the classifier. Three datasets of user reviews are used to conduct our experimental analysis. 
Results show that semantic frames can be used to generate lower dimensional and more accurate models in comparison to text classification methods. Contribution: A novel semantic approach for extracting user requirements from app reviews. The proposed approach enables a more efficient classification process and reduces the chance of overfitting." }, { "instance_id": "R112387xR112015", "comparison_id": "R112387", "paper_id": "R112015", "text": "A Little Bird Told Me: Mining Tweets for Requirements and Software Evolution Twitter is one of the most popular social networks. Previous research found that users employ Twitter to communicate about software applications via short messages, commonly referred to as tweets, and that these tweets can be useful for requirements engineering and software evolution. However, due to their large number---in the range of thousands per day for popular applications---a manual analysis is unfeasible.In this work we present ALERTme, an approach to automatically classify, group and rank tweets about software applications. We apply machine learning techniques for automatically classifying tweets requesting improvements, topic modeling for grouping semantically related tweets and a weighted function for ranking tweets according to specific attributes, such as content category, sentiment and number of retweets. We ran our approach on 68,108 collected tweets from three software applications and compared its results against software practitioners' judgement. Our results show that ALERTme is an effective approach for filtering, summarizing and ranking tweets about software applications. ALERTme enables the exploitation of Twitter as a feedback channel for information relevant to software evolution, including end-user requirements." }, { "instance_id": "R114155xR113196", "comparison_id": "R114155", "paper_id": "R113196", "text": "A Needle in a Haystack: What Do Twitter Users Say about Software? 
Users of the Twitter microblogging platform share a vast amount of information about various topics through short messages on a daily basis. Some of these so-called tweets include information that is relevant for software companies and could, for example, help requirements engineers to identify user needs. Therefore, tweets have the potential to aid in the continuous evolution of software applications. Despite the existence of such relevant tweets, little is known about their number and content. In this paper we report on the results of an exploratory study in which we analyzed the usage characteristics, content and automatic classification potential of tweets about software applications by using descriptive statistics, content analysis and machine learning techniques. Although the manual search of relevant information within the vast stream of tweets can be compared to looking for a needle in a haystack, our analysis shows that tweets provide a valuable input for software companies. Furthermore, our results demonstrate that machine learning techniques have the capacity to identify and harvest relevant information automatically." }, { "instance_id": "R114155xR76126", "comparison_id": "R114155", "paper_id": "R76126", "text": "Crowd-centric Requirements Engineering Requirements engineering is a preliminary and crucial phase for the correctness and quality of software systems. Despite the agreement on the positive correlation between user involvement in requirements engineering and software success, current development methods employ a too narrow concept of that user and rely on a recruited set of users considered to be representative. Such approaches might not cater for the diversity and dynamism of the actual users and the context of software usage. This is especially true in new paradigms such as cloud and mobile computing. 
To overcome these limitations, we propose crowd-centric requirements engineering (CCRE) as a revised method for requirements engineering where users become primary contributors, resulting in higher-quality requirements and increased user satisfaction. CCRE relies on crowd sourcing to support a broader user involvement, and on gamification to motivate that voluntary involvement." }, { "instance_id": "R114155xR112434", "comparison_id": "R114155", "paper_id": "R112434", "text": "Users \u2014 The Hidden Software Product Quality Experts?: A Study on How App Users Report Quality Aspects in Online Reviews [Context and motivation] Research on eliciting requirements from a large number of online reviews using automated means has focused on functional aspects. Assuring the quality of an app is vital for its success. This is why user feedback concerning quality issues should be considered as well [Question/problem] But to what extent do online reviews of apps address quality characteristics? And how much potential is there to extract such knowledge through automation? [Principal ideas/results] By tagging online reviews, we found that users mainly write about \"usability\" and \"reliability\", but the majority of statements are on a subcharacteristic level, most notably regarding \"operability\", \"adaptability\", \"fault tolerance\", and \"interoperability\". A set of 16 language patterns regarding \"usability\" correctly identified 1,528 statements from a large dataset far more efficiently than our manual analysis of a small subset. [Contribution] We found that statements can especially be derived from online reviews about qualities by which users are directly affected, although with some ambiguity. Language patterns can identify statements about qualities with high precision, though the recall is modest at this time. Nevertheless, our results have shown that online reviews are an unused Big Data source for quality requirements." 
}, { "instance_id": "R114155xR76353", "comparison_id": "R114155", "paper_id": "R76353", "text": "Providing a User Forum is not enough: First Experiences of a Software Company with CrowdRE Crowd-based requirements engineering (CrowdRE) is promising to derive requirements by gathering and analyzing information from the crowd. Setting up CrowdRE in practice seems challenging, although first solutions to support CrowdRE exist. In this paper, we report on a German software company's experience on crowd involvement by using feedback communication channels and a monitoring solution for user-event data. In our case study, we identified several problem areas that a software company is confronted with to setup an environment for gathering requirements from the crowd. We conclude that a CrowdRE process cannot be implemented ad-hoc and that future work is needed to create and analyze a continuous feedback and monitoring data stream." }, { "instance_id": "R114155xR76818", "comparison_id": "R114155", "paper_id": "R76818", "text": "App Review Analysis Via Active Learning: Reducing Supervision Effort without Compromising Classification Accuracy Automated app review analysis is an important avenue for extracting a variety of requirements-related information. Typically, a first step toward performing such analysis is preparing a training dataset, where developers (experts) identify a set of reviews and, manually, annotate them according to a given task. Having sufficiently large training data is important for both achieving a high prediction accuracy and avoiding overfitting. Given millions of reviews, preparing a training set is laborious. We propose to incorporate active learning, a machine learning paradigm, in order to reduce the human effort involved in app review analysis. Our app review classification framework exploits three active learning strategies based on uncertainty sampling. 
We apply these strategies to an existing dataset of 4,400 app reviews for classifying app reviews as features, bugs, rating, and user experience. We find that active learning, compared to a training dataset chosen randomly, yields a significantly higher prediction accuracy under multiple scenarios." }, { "instance_id": "R114155xR113008", "comparison_id": "R114155", "paper_id": "R113008", "text": "Canary: Extracting Requirements-Related Information from Online Discussions Online discussions about software applications generate a large amount of requirements-related information. This information can potentially be usefully applied in requirements engineering; however currently, there are few systematic approaches for extracting such information. To address this gap, we propose Canary, an approach for extracting and querying requirements-related information in online discussions. The highlight of our approach is a high-level query language that combines aspects of both requirements and discussion in online forums. We give the semantics of the query language in terms of relational databases and SQL. We demonstrate the usefulness of the language using examples on real data extracted from online discussions. Our approach relies on human annotations of online discussions. We highlight the subtleties involved in interpreting the content in online discussions and the assumptions and choices we made to effectively address them. We demonstrate the feasibility of generating high-quality annotations by obtaining them from lay Amazon Mechanical Turk users." }, { "instance_id": "R114155xR113181", "comparison_id": "R114155", "paper_id": "R113181", "text": "Towards Crowd-Based Requirements Engineering A Research Preview [Context and motivation] Stakeholders who are highly distributed form a large, heterogeneous online group, the so-called \u201ccrowd\u201d. The rise of mobile, social and cloud apps has led to a stark increase in crowd-based settings. 
[Question/problem] Traditional requirements engineering (RE) techniques face scalability issues and require the co-presence of stakeholders and engineers, which cannot be realized in a crowd setting. While different approaches have recently been introduced to partially automate RE in this context, a multi-method approach to (semi-)automate all RE activities is still needed. [Principal ideas/results] We propose \u201cCrowd-based Requirements Engineering\u201d as an approach that integrates existing elicitation and analysis techniques and fills existing gaps by introducing new concepts. It collects feedback through direct interactions and social collaboration, and by deploying mining techniques. [Contribution] This paper describes the initial state of the art of our approach, and previews our plans for further research." }, { "instance_id": "R114155xR113067", "comparison_id": "R114155", "paper_id": "R113067", "text": "Mining User Rationale from Software Reviews Rationale refers to the reasoning and justification behind human decisions, opinions, and beliefs. In software engineering, rationale management focuses on capturing design and requirements decisions and on organizing and reusing project knowledge. This paper takes a different view on rationale written by users in online reviews. We studied 32,414 reviews for 52 software applications in the Amazon Store. Through a grounded theory approach and peer content analysis, we investigated how users argue and justify their decisions, e.g. about upgrading, installing, or switching software applications. We also studied the occurrence frequency of rationale concepts such as issues encountered or alternatives considered in the reviews and found that assessment criteria like performance, compatibility, and usability represent the most pervasive concept. We then used the truth set of manually labeled review sentences to explore how accurately we can mine rationale concepts from the reviews. 
Support Vector Classifier, Naive Bayes, and Logistic Regression, trained on the review metadata, syntax tree of the review text, and influential terms, achieved a precision around 80% for predicting sentences with alternatives and decisions, with top recall values of 98%. On the review level, precision was up to 13% higher with recall values reaching 99%. We discuss the findings and the rationale importance for supporting deliberation in user communities and synthesizing the reviews for developers." }, { "instance_id": "R114155xR113173", "comparison_id": "R114155", "paper_id": "R113173", "text": "Software Feature Request Detection in Issue Tracking Systems Communication about requirements is often handled in issue tracking systems, especially in a distributed setting. As issue tracking systems also contain bug reports or programming tasks, the software feature requests of the users are often difficult to identify. This paper investigates natural language processing and machine learning features to detect software feature requests in natural language data of issue tracking systems. It compares traditional linguistic machine learning features, such as \"bag of words\", with more advanced features, such as subject-action-object, and evaluates combinations of machine learning features derived from the natural language and features taken from the issue tracking system meta-data. Our investigation shows that some combinations of machine learning features derived from natural language and the issue tracking system meta-data outperform traditional approaches. We show that issues or data fields (e.g. descriptions or comments), which contain software feature requests, can be identified reasonably well, but hardly the exact sentence. Finally, we show that the choice of machine learning algorithms should depend on the goal, e.g. maximization of the detection rate or balance between detection rate and precision. 
In addition, the paper contributes a double coded gold standard and an open-source implementation to further pursue this topic." }, { "instance_id": "R114155xR113085", "comparison_id": "R114155", "paper_id": "R113085", "text": "Discovering Requirements through Goal-Driven Process Mining Software systems are designed to support their users in performing tasks that are parts of more general processes. Unfortunately, software designers often make invalid assumptions about the users' processes and therefore about the requirements to support such processes. Eliciting and validating such assumptions through manual means (e.g., through observations, interviews, and workshops) is expensive, time-consuming, and may fail to identify the users' real processes. Using process mining may reduce these problems by automating the monitoring and discovery of the actual processes followed by a crowd of users. The Crowd provides an opportunity to involve diverse groups of users to interact with a system and conduct their intended processes. This implicit feedback in the form of discovered processes can then be used to modify the existing system's functionalities and ensure whether or not a software product is used as initially designed. In addition, the analysis of user-system interactions may reveal lacking functionalities and quality issues. These ideas are illustrated on the GreenSoft personal energy management system." }, { "instance_id": "R114155xR113054", "comparison_id": "R114155", "paper_id": "R113054", "text": "A gradual approach to crowd-based requirements engineering: The case of conference online social networks This paper proposes a gradual approach to crowd-based requirements engineering (RE) for supporting the establishment of a more engaged crowd, hence, mitigating the low involvement risk in crowd-based RE. Our approach advocates involving micro-crowds (MCs), where in each micro-crowd, the population is relatively cohesive and familiar with each other. 
Using this approach, the evolving product is developed iteratively. At each iteration, a new MC can join the already established crowd to enhance the requirements for the next version, while adding terminology to an evolving folksonomy. We are currently using this approach in an on-going research project to develop an online social network (OSN) for academic researchers that will facilitate discussions and knowledge sharing around conferences." }, { "instance_id": "R114155xR113160", "comparison_id": "R114155", "paper_id": "R113160", "text": "Customer Rating Reactions Can Be Predicted Purely using App Features In this paper we provide empirical evidence that the rating that an app attracts can be accurately predicted from the features it offers. Our results, based on an analysis of 11,537 apps from the Samsung Android and BlackBerry World app stores, indicate that the rating of 89% of these apps can be predicted with 100% accuracy. Our prediction model is built by using feature and rating information from the existing apps offered in the App Store and it yields highly accurate rating predictions, using only a few (11-12) existing apps for case-based prediction. These findings may have important implications for requirements engineering in app stores: They indicate that app developers may be able to obtain (very accurate) assessments of the customer reaction to their proposed feature sets (requirements), thereby providing new opportunities to support the requirements elicitation process for app developers." }, { "instance_id": "R114155xR113204", "comparison_id": "R114155", "paper_id": "R113204", "text": "Mining Android App Descriptions for Permission Requirements Recommendation During the development or maintenance of an Android app, the app developer needs to determine the app's security and privacy requirements such as permission requirements. Permission requirements include two folds. 
First, what permissions (i.e., access to sensitive resources, e.g., location or contact list) the app needs to request. Second, how to explain the reason of permission usages to users. In this paper, we focus on the multiple challenges that developers face when creating permission-usage explanations. We propose a novel framework, CLAP, that mines potential explanations from the descriptions of similar apps. CLAP leverages information retrieval and text summarization techniques to find frequent permission usages. We evaluate CLAP on a large dataset containing 1.4 million Android apps. The evaluation results outperform existing state-of-the-art approaches, showing great promise of CLAP as a tool for assisting developers and permission requirements discovery." }, { "instance_id": "R114155xR111441", "comparison_id": "R114155", "paper_id": "R111441", "text": "Crowd Out the Competition MyERP is a fictional developer of an Enterprise Resource Planning (ERP) system. Driven by the competition, they face the challenge of losing market share if they fail to de-ploy a Software as a Service (SaaS) ERP system to the European market quickly, but with high quality product. This also means that the requirements engineering (RE) activities will have to be performed efficiently and provide solid results. An additional problem they face is that their (potential) stakeholders are phys-ically distributed, it makes sense to consider them a \"crowd\". This competition paper suggests a Crowd-based RE approach that first identifies the crowd, then collects and analyzes their feedback to derive wishes and needs, and validate the results through prototyping. For this, techniques are introduced that have so far been rarely employed within RE, but more \"traditional\" RE techniques, will also be integrated and/or adapted to attain the best possible result in the case of MyERP." 
}, { "instance_id": "R114155xR113213", "comparison_id": "R114155", "paper_id": "R113213", "text": "Modelling Users Feedback in Crowd-Based Requirements Engineering: An Empirical Study Most enterprises operate within a complex and ever-changing context. To ensure that requirements keep pace with changing context, users\u2019 feedback is advocated to ensure that the requirements knowledge is refreshed and reflects the degree to which the system meets its design objectives. The traditional approach to users\u2019 feedback, which is based on data mining and text analysis, is often limited, partly due to the ad-hoc nature of users\u2019 feedback and, also, the methods used to acquire it. To maximize the expressiveness of users\u2019 feedback and still be able to efficiently analyse it, we propose that feedback acquisition should be designed with that goal in mind. This paper contributes to that aim by presenting an empirical study that investigates users\u2019 perspectives on feedback constituents and how they could be structured. This will provide a baseline for modelling and customizing feedback for enterprise systems in order to maintain and evolve their requirements." }, { "instance_id": "R114155xR112416", "comparison_id": "R114155", "paper_id": "R112416", "text": "Using the crowds to satisfy unbounded requirements The Internet is a social space that is shaped by humans through the development of websites, the release of web services, the collaborative creation of encyclopedias and forums, the exchange of information through social networks, the provision of work through crowdsourcing platforms, etc. This landscape offers novel possibilities for software systems to satisfy their requirements, e.g., by retrieving and aggregating the information from Internet websites as well as by crowdsourcing the execution of certain functions. 
In this paper, we present a special type of functional requirements (called unbounded) that is not fully satisfiable and whose satisfaction is increased by gathering evidence from multiple sources. In addition to characterizing unbounded requirements, we explain how to maximize their satisfaction by asking and by combining opinions of multiple sources: people, services, information, and algorithms. We provide evidence of the existence of these requirements through examples by studying a modern Web application (Spotify) and from a traditional system (Microsoft Word)." }, { "instance_id": "R114155xR76118", "comparison_id": "R114155", "paper_id": "R76118", "text": "CrowdREquire: A Requirements Engineering Crowdsourcing Platform This paper describes CrowdREquire, a platform that supports requirements engineering using the crowdsourcing concept. The power of the crowd is in the diversity of talents and expertise available within the crowd and CrowdREquire specifies how requirements engineering can harness skills available in the crowd. In developing CrowdREquire, this paper designs a crowdsourcing business model and market strategy for crowdsourcing requirements engineering irrespective of the professions and areas of expertise of the crowd involved. This is also a specific application of crowdsourcing which establishes the general applicability and efficacy of crowdsourcing. The results obtained could be used as a reference for other crowdsourcing systems as well." }, { "instance_id": "R114155xR113137", "comparison_id": "R114155", "paper_id": "R113137", "text": "Mining Context-Aware User Requirements from Crowd Contributed Mobile Data Internetware is required to respond quickly to emergent user requirements or requirements changes by providing application upgrade or making context-aware recommendations. 
As user requirements in Internet computing environment are often changing fast and new requirements emerge more and more in a creative way, traditional requirements engineering approaches based on requirements elicitation and analysis cannot ensure the quick response of Internetware. In this paper, we propose an approach for mining context-aware user requirements from crowd contributed mobile data. The approach captures behavior records contributed by a crowd of mobile users and automatically mines context-aware user behavior patterns (i.e., when, where and under what conditions users require a specific service) from them using Apriori-M algorithm. Based on the mined user behaviors, emergent requirements or requirements changes can be inferred from the mined user behavior patterns and solutions that satisfy the requirements can be recommended to users. To evaluate the proposed approach, we conduct an experimental study and show the effectiveness of the requirements mining approach." }, { "instance_id": "R12250xR12245", "comparison_id": "R12250", "paper_id": "R12245", "text": "Estimation of the Transmission Risk of 2019-nCov and Its Implication for Public Health Interventions Since the emergence of the first cases in Wuhan, China, the novel coronavirus (2019-nCoV) infection has been quickly spreading out to other provinces and neighboring countries. Estimation of the basic reproduction number by means of mathematical modeling can be helpful for determining the potential and severity of an outbreak and providing critical information for identifying the type of disease interventions and intensity. A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and intervention measures. The estimations based on likelihood and model analysis show that the control reproduction number may be as high as 6.47 (95% CI 5.71\u20137.23). 
Sensitivity analyses show that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction adopted by Wuhan on 2019-nCoV infection in Beijing being almost equivalent to increasing quarantine by a 100 thousand baseline value. It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCoV infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since 23 January 2020) with a significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in seven days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction." }, { "instance_id": "R12250xR12237", "comparison_id": "R12250", "paper_id": "R12237", "text": "Preliminary estimation of the basic reproduction number of novel coronavirus (2019-nCoV) in China, from 2019 to 2020: A data-driven analysis in the early phase of the outbreak Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R 0 , of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate ( \u03b3 ), we estimated R 0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. 
Findings The early outbreak data largely follow exponential growth. We estimated that the mean R0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39), associated with an 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in the reporting rate substantially affect estimates of R0. Conclusion The mean estimate of R0 for 2019-nCoV ranges from 2.24 to 3.58 and is significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks." }, { "instance_id": "R12250xR12231", "comparison_id": "R12250", "paper_id": "R12231", "text": "Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions Abstract Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. Using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39\u20134.13); 58\u201376% of transmissions must be prevented to stop the epidemic from growing; Wuhan case ascertainment of 5.0% (3.6\u20137.4); 21022 (11090\u201333490) total infections in Wuhan from 1 to 22 January.
Changes to previous version: case data updated to include 22 Jan 2020; we did not use cases reported after this period, as cases were reported at the province level hereafter and large-scale control interventions were initiated on 23 Jan 2020; improved likelihood function, better accounting for the first 41 confirmed cases, and now using all infections (rather than just cases detected) in Wuhan for prediction of infection in international travellers; improved characterization of uncertainty in parameters, and calculation of epidemic trajectory confidence intervals using a more statistically rigorous method; extended range of the latent period in the sensitivity analysis to reflect reports of up to 6-day incubation periods in household clusters; removed the travel restriction analysis, as different modelling approaches (e.g. stochastic transmission, rather than deterministic transmission) are more appropriate to such analyses. " }, { "instance_id": "R12250xR12223", "comparison_id": "R12250", "paper_id": "R12223", "text": "Modelling the epidemic trend of the 2019 novel coronavirus outbreak in China We present a timely evaluation of the Chinese 2019-nCoV epidemic in its initial phase, where 2019-nCoV demonstrates comparable transmissibility but lower fatality rates than SARS and MERS. A quick diagnosis that leads to case isolation and integrated interventions will have a major impact on its future trend. Nevertheless, as China is facing its Spring Festival travel rush and the epidemic has spread beyond its borders, further investigation of its potential spatiotemporal transmission pattern and novel intervention strategies is warranted." }, { "instance_id": "R12250xR12235", "comparison_id": "R12250", "paper_id": "R12235", "text": "Estimating the effective reproduction number of the 2019-nCoV in China Abstract We estimate the effective reproduction number for 2019-nCoV based on the daily reported cases from China CDC.
The results indicate that 2019-nCoV has a higher effective reproduction number than SARS with a comparable fatality rate. Article Summary Line This modeling study indicates that 2019-nCoV has a higher effective reproduction number than SARS with a comparable fatality rate." }, { "instance_id": "R12250xR12226", "comparison_id": "R12250", "paper_id": "R12226", "text": "Time-varying transmission dynamics of Novel Coronavirus Pneumonia in China ABSTRACT Rationale Several studies have estimated the basic reproduction number of novel coronavirus pneumonia (NCP). However, the time-varying transmission dynamics of NCP during the outbreak remain unclear. Objectives We aimed to estimate the basic and time-varying transmission dynamics of NCP across China, and compared them with SARS. Methods Data on NCP cases by February 7, 2020 were collected from epidemiological investigations or official websites. Data on severe acute respiratory syndrome (SARS) cases in Guangdong Province, Beijing and Hong Kong during 2002-2003 were also obtained. We estimated the doubling time, basic reproduction number (R0) and time-varying reproduction number (Rt) of NCP and SARS. Measurements and main results As of February 7, 2020, 34,598 NCP cases were identified in China, and daily confirmed cases decreased after February 4. The doubling time of NCP nationwide was 2.4 days, which was shorter than that of SARS in Guangdong (14.3 days), Hong Kong (5.7 days) and Beijing (12.4 days). The R0 of NCP cases nationwide and in Wuhan were 4.5 and 4.4 respectively, which were higher than the R0 of SARS in Guangdong (R0=2.3), Hong Kong (R0=2.3), and Beijing (R0=2.6). The Rt for NCP continuously decreased, especially after January 16, nationwide and in Wuhan. The R0 for secondary NCP cases in Guangdong was 0.6, and the Rt values were less than 1 during the epidemic. Conclusions NCP may have a higher transmissibility than SARS, and the efforts to contain the outbreak are effective.
However, persistent efforts are needed to reduce the time-varying reproduction number below one. At a Glance Commentary Scientific Knowledge on the Subject Since December 29, 2019, pneumonia infection with 2019-nCoV, now named Novel Coronavirus Pneumonia (NCP), has occurred in Wuhan, Hubei Province, China. The disease has rapidly spread from Wuhan to other areas. As a novel virus, the time-varying transmission dynamics of NCP remain unclear, and it is also important to compare it with SARS. What This Study Adds to the Field We compared the transmission dynamics of NCP with SARS, and found that NCP has a higher transmissibility than SARS. The time-varying reproduction number indicates that the rigorous control measures taken by governments are effective across China, and persistent efforts are needed to reduce the instantaneous reproduction number below one." }, { "instance_id": "R137469xR137444", "comparison_id": "R137469", "paper_id": "R137444", "text": "Mechanisms of bacterial inactivation in the liquid phase induced by a remote RF cold atmospheric pressure plasma jet A radio-frequency atmospheric pressure argon plasma jet is used for the inactivation of bacteria (Pseudomonas aeruginosa) in solutions. The source is characterized by measurements of power dissipation, gas temperature and absolute UV irradiance, as well as mass spectrometry measurements of emitted ions. The plasma-induced liquid chemistry is studied by performing liquid ion chromatography and hydrogen peroxide concentration measurements on treated distilled water samples. Additionally, a quantitative estimation of the extensive liquid chemistry induced by the plasma is made by solution kinetics calculations. The role of the different active components of the plasma is evaluated based on either measurements, as mentioned above, or estimations based on published data of measurements of those components.
For the experimental conditions considered in this work, it is shown that the bactericidal effect can be ascribed solely to plasma-induced liquid chemistry, leading to the production of stable and transient chemical species. It is shown that HNO2, ONOO\u2212 and H2O2 are present in the liquid phase at concentrations similar to those reported in the literature to cause bacterial inactivation. The importance of plasma-induced chemistry at the gas\u2010liquid interface is illustrated and discussed in detail." }, { "instance_id": "R137469xR137395", "comparison_id": "R137469", "paper_id": "R137395", "text": "Direct current plasma jet at atmospheric pressure operating in nitrogen and air An atmospheric pressure direct current (DC) plasma jet is investigated in N2 and dry air in terms of plasma properties and generation of active species in the active zone and the afterglow. The influence of the working gases and the discharge current on plasma parameters and afterglow properties is studied. The electrical diagnostics show that the discharge can be sustained in two different operating modes, depending on the current range: a self-pulsing regime at low current and a glow regime at high current. The gas temperature and the N2 vibrational temperature in the active zone of the jet and in the afterglow are determined by means of emission spectroscopy, based on fitting spectra of the N2 second positive system (C3\u03a0-B3\u03a0) and the Boltzmann plot method, respectively. The spectra and temperature differences between the N2 and the air plasma jet are presented and analyzed. Space-resolved ozone and nitric oxide density measurements are carried out in the afterglow of the jet. The density of ozone, which is formed..."
}, { "instance_id": "R137469xR137410", "comparison_id": "R137469", "paper_id": "R137410", "text": "A brush-shaped air plasma jet operated in glow discharge mode at atmospheric pressure Using ambient air as working gas, a direct-current plasma jet is developed to generate a brush-shaped plasma plume with fairly large volume. Although a direct-current power supply is used, the discharge shows a pulsed characteristic. Based on the voltage-current curve and fast photography, the brush-shaped plume, like the gliding arc plasma, is in fact a temporal superposition of a moving discharge filament in an arched shape. During it moves away from the nozzle, the discharge evolves from a low-current arc into a normal glow in one discharge cycle. The emission profile is explained qualitatively based on the dynamics of the plasma brush." }, { "instance_id": "R137469xR137413", "comparison_id": "R137469", "paper_id": "R137413", "text": "Inactivation of Gram-positive biofilms by low-temperature plasma jet at atmospheric pressure This work is devoted to the evaluation of the efficiency of a new low-temperature plasma jet driven in ambient air by a dc-corona discharge to inactivate adherent cells and biofilms of Gram-positive bacteria. The selected microorganisms were lactic acid bacteria, a Weissella confusa strain which has the particularity to excrete a polysaccharide polymer (dextran) when sucrose is present. Both adherent cells and biofilms were treated with the low-temperature plasma jet for different exposure times. The antimicrobial efficiency of the plasma was tested against adherent cells and 48 h-old biofilms grown with or without sucrose. Bacterial survival was estimated using both colony-forming unit counts and fluorescence-based assays for bacterial cell viability. The experiments show the ability of the low-temperature plasma jet at atmospheric pressure to inactivate the bacteria. An increased resistance of bacteria embedded within biofilms is clearly observed. 
The resistance is also significantly higher with biofilm in the presence of sucrose, which indicates that dextran could play a protective role." }, { "instance_id": "R137469xR137404", "comparison_id": "R137469", "paper_id": "R137404", "text": "Characteristics of an atmospheric-pressure argon plasma jet excited by a dc voltage A dc-excited plasma jet is developed to generate a diffuse plasma plume in flowing argon. The discharge characteristics of the plasma jet are investigated by optical and electrical methods. The results show that the plasma plume is a pulsed discharge even when a dc voltage is applied. The discharge frequency varies with a change in the applied voltage, the gas flow rate and the gas gap width. It is found that the discharges at different positions of the plasma plume are initiated and quenched almost at the same time with a jitter of about 10 ns by the spatially resolved measurement. Optical emission spectroscopy is used to investigate the excited electron temperature of the plasma plume. The results show that the excited electron temperature decreases with increasing applied voltage, gas flow rate or gas gap width. These results are analyzed qualitatively." }, { "instance_id": "R137469xR137429", "comparison_id": "R137469", "paper_id": "R137429", "text": "Plasma Processes and Plasma Sources in Medicine The use of plasma for healthcare can be dated back as far as the middle of the 19th century. Only the development of room temperature atmospheric pressure plasma sources in the past decade, however, has opened the new and fast growing interdisciplinary research field of plasma medicine. Three main topics can be distinguished: plasma treated implants, plasma decontamination, and plasmas in medical therapy. Understanding of the plasma sources and the plasma processes involved is still incomplete. 
With the aim of gaining a more fundamental insight, we investigate plasmas in a) the functionalization of implants with antimicrobial as well as cell attachment enhancing surfaces, b) atmospheric pressure plasmas (APPs) for the inactivation of bacteria and the decontamination of bottles, food products and medical equipment, and c) APPs in medical therapy and their effects on cell viability as a means to finding a plasma \u201cdosage\u201d. The possibilities of application-focused design of plasma sources will be emphasized. Using the example of feed gas humidity and its significant influence, the importance of determining and controlling unobvious or hidden parameters is demonstrated (\u00a9 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)" }, { "instance_id": "R137469xR137438", "comparison_id": "R137469", "paper_id": "R137438", "text": "Power coupling and electrical characterization of a radio-frequency micro atmospheric pressure plasma jet We propose an efficient RF power coupling scheme for a micro atmospheric pressure plasma jet operating in helium. The discharge gap is used as a resonant element in a series LC circuit. In resonance, the voltage across the discharge gap is amplified and the ignition of the plasma is enabled with input RF power as low as 0.5 W. The high power coupling efficiency and simplicity of the circuit allow accurate electrical characterization of the discharge. Systematic measurements of the dissipated power as a function of the applied voltage are reported for the discharge operating in helium with molecular admixtures of N2 and O2.
The plasma jet is generated from the flow of ambient air with 8 slm through a microhollow cathode discharge assembly that is operated with a direct current of 30 mA. With these parameters, the temperature in the jet reaches 43 \u00b0C at 10 mm from the discharge. Agar plates that were inoculated with Staphylococcus aureus, Pseudomonas aeruginosa, Acinetobacter baumannii, and Candida kefyr were treated at this distance, moving the plates through the jet in a meander that covered a 2 cm by 2 cm area. Different exposure times were realized by changing the speed of the movement and adjusting the distance between consecutive passes. S. aureus was most responsive to the exposure with a reduction in the number of colony forming units of 5.5 log steps in 40 s. All other microorganisms show a more gradual inactivation with exposure times. For all bacteria, a clearing of the treated area is achieved in about 2.5-3.5 min, corresponding to log-reduction factors of 5.5-6.5. Complete inactivation of the yeast requires about 7 min. Both S. aureus and C. kefyr show considerable inactivation also outside the immediate treatment area, while P. aeruginosa and A. baumannii do not. We conclude that differences in the morphologies of the membrane structures are responsible for the diverging results, together with a targeted response to different agents provided with the plasma jet. For the gram negative bacteria, we hold short-lived agents, acting across a short range, responsible, while for the other microorganisms, longer lived species seem more important. Our measurements show that neither heat, ultraviolet radiation, nor the generation of ozone can be responsible for the observed results. The most prominent long lived reaction product found is nitric oxide, which, by itself or through induced chemical reactions, might affect cell viability." 
}, { "instance_id": "R137469xR137432", "comparison_id": "R137469", "paper_id": "R137432", "text": "Scar formation of laser skin lesions after cold atmospheric pressure plasma (CAP) treatment: A clinical long term observation Abstract CAP treatment is likely to be of benefit in wound healing. In a clinical study, 20 laser lesions in five individuals have been treated with argon plasma 10, 30 or three times for 10 s, with untreated as control. The scar formation was followed for 10 days, six and 12 months. In early stages of wound healing, plasma treatment seems to support the inflammation needed for tissue recovery. In later stages, plasma treatment shows better results in terms of avoiding post-traumatic skin disorders. Plasma treatment shows superior aesthetics during scar formation. No precancerous skin features occurred up to 12 months." }, { "instance_id": "R137469xR137377", "comparison_id": "R137469", "paper_id": "R137377", "text": "A dc non-thermal atmospheric-pressure plasma microjet A direct current (dc), non-thermal, atmospheric-pressure plasma microjet is generated with helium/oxygen gas mixture as working gas. The electrical property is characterized as a function of the oxygen concentration and show distinctive regions of operation. Side-on images of the jet were taken to analyze the mode of operation as well as the jet length. A self-pulsed mode is observed before the transition of the discharge to normal glow mode. Optical emission spectroscopy is employed from both end-on and side-on along the jet to analyze the reactive species generated in the plasma. Line emissions from atomic oxygen (at 777.4 nm) and helium (at 706.5 nm) were studied with respect to the oxygen volume percentage in the working gas, flow rate and discharge current. Optical emission intensities of Cu and OH are found to depend heavily on the oxygen concentration in the working gas. 
The ozone concentration measured in a semi-confined zone in front of the plasma jet is found to range from tens of ppm to ~120 ppm. The results presented here demonstrate potential pathways for the adjustment and tuning of various plasma parameters, such as reactive species selectivity and quantities, or even the manipulation of ultraviolet emission intensities, in an atmospheric-pressure non-thermal plasma source. The possibility of fine-tuning these plasma species allows for enhanced applications in health and medical related areas." }, { "instance_id": "R137469xR137407", "comparison_id": "R137469", "paper_id": "R137407", "text": "Discharge Dynamics and Modes of an Atmospheric Pressure Non-Equilibrium Air Plasma Jet A plasma jet operated with atmospheric pressure air is presented. Unlike the dynamics of plasma jets working with noble gases, the propagation of the jet that is operated with air is primarily determined by the gas flow. This jet can be generated by applying a continuous, i.e., DC high voltage. However, depending on the applied voltage and gas flow rate, a true DC operation can be distinguished from a self-pulsing mode. The gas temperature of the plasma plume when operated in the pulsed mode is lower than for the DC mode. Conversely, emission intensities of atomic oxygen, O, and nitrogen species, N2 and N2+, are much higher for the pulsed mode than observed for DC operation." }, { "instance_id": "R137469xR137401", "comparison_id": "R137469", "paper_id": "R137401", "text": "On the spatio-temporal dynamics of a self-pulsed nanosecond transient spark discharge: a spectroscopic and electrical analysis A self-pulsing discharge in flowing argon is investigated by means of electrical, optical and spectroscopic methods.
The dependence of the discharge self-pulsing frequency on external parameters (applied negative dc voltage, gap dimensions) is determined, and optical and spectroscopic methods are used to investigate the discharge development with high spatial and temporal resolution. High-resolution spectroscopic measurements at several wavelengths reveal the complex dynamics of the transient spark discharge: a pre-phase at the needle tip and capillary edge, propagation of positive and negative streamers, creation of a transient glow discharge structure and a long-lasting afterglow. Excited plasma species necessary for the treatment of an exposed sample continue to be present even 80 \u00b5s after the breakdown of the active plasma." }, { "instance_id": "R137469xR137389", "comparison_id": "R137469", "paper_id": "R137389", "text": "Arrays of microplasmas for the controlled production of tunable high fluxes of reactive oxygen species at atmospheric pressure The atmospheric-pressure generation of singlet delta oxygen (O2(a 1\u0394g)) by microplasmas was experimentally studied. The remarkable stability of microcathode sustained discharges (MCSDs) allowed the operation of dc glow discharges, free from the glow-to-arc transition, in He/O2/NO mixtures at atmospheric pressure. From optical diagnostics measurements we deduced the yield of O2(a 1\u0394g). By operating arrays of several MCSDs in series, O2(a 1\u0394g) densities higher than 1.0 \u00d7 10^17 cm^-3 were efficiently produced and transported over distances longer than 50 cm, corresponding to O2(a 1\u0394g) partial pressures and production yields greater than 5 mbar and 6%, respectively. At such high O2(a 1\u0394g) densities, the fluorescence of the so-called O2(a 1\u0394g) dimol was observed as a red glow at 634 nm up to 1 m downstream. Parallel operation of arrays of MCSDs was also implemented, generating O2(a 1\u0394g) fluxes as high as 100 mmol h^-1. In addition, ozone (O3) densities up to 10^16 cm^-3 were obtained.
Finally, the density ratio of O2(a 1\u0394g) to O3 was finely and easily tuned in the range 10^-3\u201310^+5 through the values of the discharge current and NO concentration. This opens up opportunities for a large spectrum of new applications, making this plasma source notably very useful for biomedicine." }, { "instance_id": "R137469xR137419", "comparison_id": "R137469", "paper_id": "R137419", "text": "The influence of the geometry and electrical characteristics on the formation of the atmospheric pressure plasma jet An extensive electrical study was performed on a coaxial geometry atmospheric pressure plasma jet source in helium, driven by 30 kHz sine voltage. Two modes of operation were observed, a highly reproducible low-power mode that features the emission of one plasma bullet per voltage period and an erratic high-power mode in which micro-discharges appear around the grounded electrode. The minimum of power transfer efficiency corresponds to the transition between the two modes. Effective capacitance was identified as a varying property influenced by the discharge and the dissipated power. The charge carried by plasma bullets was found to be a small fraction of charge produced in the source irrespective of input power and configuration of the grounded electrode. The biggest part of the produced charge stays localized in the plasma source and below the grounded electrode, in the range 1.2\u20133.3 nC for ground lengths of 3\u20138 mm." }, { "instance_id": "R137469xR137416", "comparison_id": "R137469", "paper_id": "R137416", "text": "Flux of OH and O radicals onto a surface by an atmospheric-pressure helium plasma jet measured by laser-induced fluorescence The atmospheric-pressure helium plasma jet is of emerging interest as a cutting-edge biomedical device for cancer treatment, wound healing and sterilization. Reactive oxygen species such as OH and O radicals are considered to be major factors in the application of biological plasma.
In this study, density distribution, temporal behaviour and flux of OH and O radicals on a surface are measured using laser-induced fluorescence. A helium plasma jet is generated by applying pulsed high voltage of 8 kV with 10 kHz using a quartz tube with an inner diameter of 4 mm. To evaluate the relation between the surface condition and active species production, three surfaces are used: dry, wet and rat skin. When the helium flow rate is 1.5 l min^-1, radial distribution of OH density on the rat skin surface shows a maximum density of 1.2 \u00d7 10^13 cm^-3 at the centre of the plasma-mediated area, while O atom density shows a maximum of 1.0 \u00d7 10^15 cm^-3 at 2.0 mm radius from the centre of the plasma-mediated area. Their densities in the effluent of the plasma jet are almost constant during the intervals of the discharge pulses because their lifetimes are longer than the pulse interval. Their density distribution depends on the helium flow rate and the surface humidity. With these results, OH and O production mechanisms in the plasma jet and their flux onto the surface are discussed." }, { "instance_id": "R137469xR137447", "comparison_id": "R137469", "paper_id": "R137447", "text": "Spectroscopic Investigation of a Microwave-Generated Atmospheric Pressure Plasma Torch The investigated new microwave plasma torch is based on an axially symmetric resonator. Microwaves of a frequency of 2.45 GHz are resonantly fed into this cavity resulting in a sufficiently high electric field to ignite plasma without any additional igniters as well as to maintain stable plasma operation. Optical emission spectroscopy was carried out to characterize a humid air plasma. OH-bands were used to determine the gas rotational temperature Trot while the electron temperature was estimated by a Boltzmann plot of oxygen lines. Maximum temperatures of Trot of about 3600 K and electron temperatures of 5800 K could be measured.
The electron density ne was estimated to be ne \u2248 3 \u00b7 10^20 m^-3 using Saha's equation. Parametric studies as a function of the gas flow and the supplied microwave power revealed that the maximum temperatures are independent of these parameters. However, the volume of the plasma increases with increasing microwave power and with a decrease of the gas flow. Considerations using collision frequencies, energy transfer times and power coupling provide an explanation of the observed phenomena: the optimal microwave heating is reached for electron-neutral collision frequencies \u03bden near the angular frequency of the wave \u03c9 (\u00a9 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)" }, { "instance_id": "R137469xR137450", "comparison_id": "R137469", "paper_id": "R137450", "text": "Modeling of microwave-induced plasma in argon at atmospheric pressure A two-dimensional model of microwave-induced plasma (field frequency 2.45 GHz) in argon at atmospheric pressure is presented. The model describes in a self-consistent manner the gas flow and heat transfer, the in-coupling of the microwave energy into the plasma, and the reaction kinetics relevant to high-pressure argon plasma, including the contribution of molecular ion species. The model provides the gas and electron temperature distributions, the electron, ion, and excited state number densities, and the power deposited into the plasma for a given gas flow rate and temperature at the inlet, and input power of the incoming TEM microwave. For flow rates and absorbed microwave powers typical for analytical applications (200-400 ml/min and 20 W), the plasma is far from thermodynamic equilibrium. The gas temperature reaches values above 2000 K in the plasma region, while the electron temperature is about 1 eV. The electron density reaches a maximum value of about 4 \u00d7 10^21 m^-3. The balance of the charged particles is essentially controlled by the kinetics of the molecular ions.
For temperatures above 1200 K, quasineutrality of the plasma is provided by the atomic ions, and below 1200 K the molecular ion density exceeds the atomic ion density and a contraction of the discharge is observed. Comparison with experimental data is presented which demonstrates good quantitative and qualitative agreement." }, { "instance_id": "R137469xR137453", "comparison_id": "R137469", "paper_id": "R137453", "text": "Integrated Microwave Atmospheric Plasma Source (IMAPlaS): thermal and spectroscopic properties and antimicrobial effect onB. atrophaeusspores The Integrated Microwave Atmospheric Plasma Source (IMAPlaS) operating with a microwave resonator at 2.45 GHz driven by a solid-state transistor oscillator generates a core plasma of high temperature (T > 1000 K), therefore producing reactive species such as NO very effectively. The effluent of the plasma source is much colder, which enables direct treatment of thermolabile materials or even living tissue. In this study the source was operated with argon, helium and nitrogen with gas flow rates between 0.3 and 1.0 slm. Depending on working gas and distance, axial gas temperatures between 30 and 250 \u00b0C were determined in front of the nozzle. Reactive species were identified by emission spectroscopy in the spectral range from vacuum ultraviolet to near infrared. The irradiance in the ultraviolet range was also measured. Using B. atrophaeus spores to test antimicrobial efficiency, we determined log10-reduction rates of up to a factor of 4." }, { "instance_id": "R138127xR138036", "comparison_id": "R138127", "paper_id": "R138036", "text": "Poly(d,l-lactide-co-glycolide)/montmorillonite nanoparticles for oral delivery of anticancer drugs This research developed a novel bioadhesive drug delivery system, poly(d,l-lactide-co-glycolide)/montmorillonite (PLGA/MMT) nanoparticles, for oral delivery of paclitaxel. Paclitaxel-loaded PLGA/MMT nanoparticles were prepared by the emulsion/solvent evaporation method. 
MMT was incorporated in the formulation as a matrix material component, which also plays the role of a co-emulsifier in the nanoparticle preparation process. Paclitaxel-loaded PLGA/MMT nanoparticles were found to be of spherical shape with a mean size of around 310 nm and polydispersity of less than 0.150. Adding MMT component to the matrix material appears to have little influence on the particles size and the drug encapsulation efficiency. The drug release pattern was found biphasic with an initial burst followed by a slow, sustained release, which was not remarkably affected by the MMT component. Cellular uptake of the fluorescent coumarin 6-loaded PLGA/MMT nanoparticles showed that MMT enhanced the cellular uptake efficiency of the pure PLGA nanoparticles by 57-177% for Caco-2 cells and 11-55% for HT-29 cells, which was dependent on the amount of MMT and the particle concentration in incubation. Such a novel formulation is expected to possess extended residence time in the gastrointestinal (GI) tract, which promotes oral delivery of paclitaxel." }, { "instance_id": "R138127xR137480", "comparison_id": "R138127", "paper_id": "R137480", "text": "PLGA Nanoparticles Stabilized with Cationic Surfactant: Safety Studies and Application in Oral Delivery of Paclitaxel to Treat Chemical-Induced Breast Cancer in Rat PurposeThis study was carried out to formulate poly(lactide-co-glycolide) (PLGA) nanoparticles using a quaternary ammonium salt didodecyl dimethylammonium bromide (DMAB) and checking their utility to deliver paclitaxel by oral route.MethodsParticles were prepared by emulsion solvent diffusion evaporation method. DMAB and particles stabilized with it were evaluated by MTT and LDH cytotoxicity assays. Paclitaxel was encapsulated in these nanoparticles and evaluated in a chemical carcinogenesis model in Sprague Dawley rats.ResultsMTT and LDH assays showed the surfactant to be safe to in vitro cell cultures at concentrations <33 \u03bcM. 
PLGA nanoparticles prepared using this stabilizer were also found to be non-toxic to cell lines for the duration of the study. When administered orally to rats bearing chemically induced breast cancer, the nanoparticles were as effective as or better than intravenous paclitaxel in Cremophor EL at a 50% lower dose. Conclusions This study proves the safety and utility of DMAB in stabilizing preformed polymers like PLGA, resulting in nanoparticles. This preliminary data provides a proof of concept for enabling oral chemotherapy by efficacy enhancement for paclitaxel." }, { "instance_id": "R138127xR138006", "comparison_id": "R138127", "paper_id": "R138006", "text": "A novel controlled release formulation for the anticancer drug paclitaxel (Taxol\u00ae): PLGA nanoparticles containing vitamin E TPGS Paclitaxel (Taxol) is one of the best antineoplastic drugs found in nature in the past decades. Like many other anticancer drugs, there are difficulties in its clinical administration due to its poor solubility. Therefore an adjuvant called Cremophor EL has to be employed, but this has been found to cause serious side-effects. However, nanoparticles of biodegradable polymers can provide an ideal solution to the adjuvant problem and realise a controlled and targeted delivery of the drug with better efficacy and fewer side-effects. The present research proposes a novel formulation for fabrication of nanoparticles of biodegradable polymers containing d-alpha-tocopheryl polyethylene glycol 1000 succinate (vitamin E TPGS or TPGS) to replace the current method of clinical administration and, with further modification, to provide an innovative solution for oral chemotherapy. In the modified solvent extraction/evaporation technique employed in this research, the emulsifier/stabiliser/additive and the matrix material can play a key role in determining the morphological, physicochemical and pharmaceutical properties of the produced nanoparticles.
We found that vitamin E TPGS could be a novel surfactant as well as a matrix material when blended with other biodegradable polymers. The nanoparticles composed of various formulations and manufactured under various conditions were characterised by laser light scattering (LLS) for size and size distribution, scanning electron microscopy (SEM) and atomic force microscopy (AFM) for morphological properties, X-ray photoelectron spectroscopy (XPS) for surface chemistry and differential scanning calorimetry (DSC) for thermogram properties. The drug encapsulation efficiency (EE) and the drug release kinetics under in vitro conditions were measured by high performance liquid chromatography (HPLC). It was concluded that vitamin E TPGS has great advantages for the manufacture of polymeric nanoparticles for controlled release of paclitaxel and other anti-cancer drugs. Nanoparticles of nanometer size with narrow distribution can be obtained. A drug encapsulation efficiency as high as 100% can be achieved and the release kinetics can be controlled." }, { "instance_id": "R138127xR138014", "comparison_id": "R138127", "paper_id": "R138014", "text": "Nanoparticles of lipid monolayer shell and biodegradable polymer core for controlled release of paclitaxel: Effects of surfactants on particles size, characteristics and in vitro performance This work developed a system of nanoparticles of lipid monolayer shell and biodegradable polymer core for controlled release of anticancer drugs with paclitaxel as a model drug, in which the emphasis was given to the effects of the surfactant type and the optimization of the emulsifier amount used in the single emulsion solvent evaporation/extraction process for the nanoparticle preparation on the particle size, characters and in vitro performance. 
The drug-loaded nanoparticles were characterized by laser light scattering (LLS) for size and size distribution, field-emission scanning electron microscopy (FESEM) for surface morphology, X-ray photoelectron spectroscopy (XPS) for surface chemistry, a Zetasizer for surface charge, and high performance liquid chromatography (HPLC) for drug encapsulation efficiency and in vitro drug release kinetics. MCF-7 breast cancer cells were employed to evaluate the cellular uptake and cytotoxicity. It was found that short-chain phospholipids such as 1,2-dilauroylphosphatidylcholine (DLPC) have great advantages over the traditional emulsifier poly(vinyl alcohol) (PVA), which is used most often in the literature, in the preparation of nanoparticles of biodegradable polymers such as poly(D,L-lactide-co-glycolide) (PLGA) for the desired particle size, character and in vitro cellular uptake and cytotoxicity. After incubation with MCF-7 cells at 0.250 mg/ml NP concentration, the coumarin-6 loaded PLGA NPs with a DLPC shell showed more effective cellular uptake than those with a PVA shell. The analysis of IC(50), i.e. the drug concentration at which 50% of the cells are killed, demonstrated that our DLPC-shell PLGA-core NP formulation of paclitaxel could be 5.88-, 5.72-, and 7.27-fold more effective than the commercial formulation Taxol after 24, 48, and 72 h of treatment, respectively." }, { "instance_id": "R138127xR138032", "comparison_id": "R138127", "paper_id": "R138032", "text": "The intracellular uptake ability of chitosan-coated Poly (D,L-lactide-co-glycolide) nanoparticles In this study, we prepared chitosan-coated Poly (D,L-lactide-co-glycolide) (PLGA) nanoparticles. Specifically, we utilized a double emulsion-solvent evaporation technique to formulate nanoparticles containing paclitaxel as a model macromolecule and 6-coumarin as a fluorescent marker. SEM images verified that all nanoparticles were spherical in shape with smooth surfaces.
Chitosan coating slightly increased the size of the PLGA/PVA nanoparticles, from 202.2 \u00b1 3.2 nm to 212.2 \u00b1 2.9 nm, but the encapsulation efficiency was not significantly different. In contrast, coating with chitosan slowed the in vitro drug release rate and significantly changed the zeta potential from negative (\u221230.1 \u00b1 0.6 mV) to positive (26 \u00b1 1.2 mV). During the initial burst phase, the drug release rate from chitosan-coated nanoparticles was slightly slower than that of the uncoated nanoparticles. Chitosan-coated nanoparticles were also taken up much more efficiently than uncoated nanoparticles. This study demonstrated the efficacy of chitosan-coated PLGA nanoparticles as an efficient delivery system." }, { "instance_id": "R138127xR137720", "comparison_id": "R138127", "paper_id": "R137720", "text": "Paclitaxel-loaded PEGylated PLGA-based nanoparticles: In vitro and in vivo evaluation The purpose of this study was to develop Cremophor EL-free nanoparticles loaded with Paclitaxel (PTX), intended to be intravenously administered, able to improve the therapeutic index of the drug and devoid of the adverse effects of Cremophor EL. PTX-loaded PEGylated PLGA-based nanoparticles were prepared by simple emulsion and nanoprecipitation. The incorporation efficiency of PTX was higher with the nanoprecipitation technique. The release behavior of PTX exhibited a biphasic pattern characterized by an initial burst release followed by a slower and continuous release. The in vitro anti-tumoral activity was assessed using the Human Cervix Carcinoma cells (HeLa) by the MTT test and was compared to the commercial formulation Taxol and to Cremophor EL. When exposed to 25 microg/ml of PTX, the cell viability was lower for PTX-loaded nanoparticles than for Taxol (IC(50) 5.5 vs 15.5 microg/ml). Flow cytometry studies showed that the cellular uptake of PTX-loaded nanoparticles was concentration and time dependent.
Exposure of HeLa cells to Taxol and PTX-loaded nanoparticles induced the same percentage of apoptotic cells. PTX-loaded nanoparticles showed a greater tumor growth inhibition effect in vivo on TLT tumor, compared with Taxol. Therefore, PTX-loaded nanoparticles may be considered as an effective anticancer drug delivery system for cancer chemotherapy." }, { "instance_id": "R138127xR138043", "comparison_id": "R138127", "paper_id": "R138043", "text": "Paclitaxel-loaded PLGA nanoparticles surface modified with transferrin and Pluronic\u00aeP85, an in vitro cell line and in vivo biodistribution studies on rat model The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic\u00aeP85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluated for cytotoxicity and intracellular uptake using the C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. In vivo biodistribution on male Sprague\u2013Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution." 
}, { "instance_id": "R139050xR138820", "comparison_id": "R139050", "paper_id": "R138820", "text": "iMEGES: integrated mental-disorder GEnome score by deep neural network for prioritizing the susceptibility genes for mental disorders in personal genomes Background: A range of rare and common genetic variants have been discovered to be potentially associated with mental diseases, but many more have not been uncovered. Powerful integrative methods are needed to systematically prioritize both variants and genes that confer susceptibility to mental diseases in personal genomes of individual patients and to facilitate the development of personalized treatment or therapeutic approaches. Methods: Leveraging a deep neural network on the TensorFlow framework, we developed a computational tool, integrated Mental-disorder GEnome Score (iMEGES), for analyzing whole genome/exome sequencing data on personal genomes. iMEGES takes as input genetic mutations and phenotypic information from a patient with mental disorders, and outputs the rank of whole genome susceptibility variants and the prioritized disease-specific genes for mental disorders by integrating contributions from coding and non-coding variants, structural variants (SVs), known brain expression quantitative trait loci (eQTLs), and epigenetic information from PsychENCODE. Results: iMEGES was evaluated on multiple datasets of mental disorders, and it achieved better performance than competing approaches when a large training dataset was available. Conclusion: iMEGES can be used in population studies to help prioritize novel genes or variants that might be associated with susceptibility to mental disorders, and also on individual patients to help identify genes or variants related to mental diseases." 
}, { "instance_id": "R139050xR139028", "comparison_id": "R139050", "paper_id": "R139028", "text": "Natural Language Processing of Social Media as Screening for Suicide Risk Suicide is among the 10 most common causes of death, as assessed by the World Health Organization. For every death by suicide, an estimated 138 people\u2019s lives are meaningfully affected, and almost any other statistic around suicide deaths is equally alarming. The pervasiveness of social media\u2014and the near-ubiquity of mobile devices used to access social media networks\u2014offers new types of data for understanding the behavior of those who (attempt to) take their own lives and suggests new possibilities for preventive intervention. We demonstrate the feasibility of using social media data to detect those at risk for suicide. Specifically, we use natural language processing and machine learning (specifically deep learning) techniques to detect quantifiable signals around suicide attempts, and describe designs for an automated system for estimating suicide risk, usable by those without specialized mental health training (eg, a primary care doctor). We also discuss the ethical use of such technology and examine privacy implications. Currently, this technology is only used for intervention for individuals who have \u201copted in\u201d for the analysis and intervention, but the technology enables scalable screening for suicide risk, potentially identifying many people who are at risk preventively and prior to any engagement with a health care system. This raises a significant cultural question about the trade-off between privacy and prevention\u2014we have potentially life-saving technology that is currently reaching only a fraction of the possible people at risk because of respect for their privacy. Is the current trade-off between privacy and prevention the right one?" 
}, { "instance_id": "R139050xR138756", "comparison_id": "R139050", "paper_id": "R138756", "text": "Learning Spatial\u2013Spectral\u2013Temporal EEG Features With Recurrent 3D Convolutional Neural Networks for Cross-Task Mental Workload Assessment Mental workload assessment is essential for maintaining human health and preventing accidents. Most research on this issue is limited to a single task. However, cross-task assessment is indispensable for extending a pre-trained model to new workload conditions. Because brain dynamics are complex across different tasks, it is difficult to propose efficient human-designed features based on prior knowledge. Therefore, this paper proposes a concatenated structure of deep recurrent and 3D convolutional neural networks (R3DCNNs) to learn EEG features across different tasks without prior knowledge. First, this paper adds frequency and time dimensions to EEG topographic maps based on a Morlet wavelet transformation. Then, R3DCNN is proposed to simultaneously learn EEG features from the spatial, spectral, and temporal dimensions. The proposed model is validated based on the EEG signals collected from 20 subjects. This paper employs a binary classification of low and high mental workload across spatial n-back and arithmetic tasks. The results show that the R3DCNN achieves an average accuracy of 88.9%, which is a significant increase compared with that of the state-of-the-art methods. In addition, the visualization of the convolutional layers demonstrates that the deep neural network can extract detailed features. These results indicate that R3DCNN is capable of identifying the mental workload levels for cross-task conditions." 
}, { "instance_id": "R139050xR138964", "comparison_id": "R139050", "paper_id": "R138964", "text": "The Facial Stress Recognition Based on Multi-histogram Features and Convolutional Neural Network Health disorders due to stress and depression should not be considered trivial, because they have a negative impact on health. Prolonged stress not only triggers mental fatigue but also affects physical health. Therefore, we must be able to identify stress early. In this paper, we propose a new method for stress recognition across three classes (neutral, low stress, high stress) from a frontal facial image. Each image is divided into three parts, i.e. the pair of eyes, the nose, and the mouth. Facial features are extracted from each image pixel using DoG, HOG, and DWT. The strength of the orthonormality features is considered by RICA, and GDA handles the nonlinear covariance. Furthermore, the histogram features of the image parts are fed into depth-based learning with a ConvNet to model the facial stress expression. The proposed method uses the FERET database for training and validation. k-fold cross-validation with k=5 is used. Based on the experimental results, the proposed method outperforms other works in accuracy." }, { "instance_id": "R139050xR139042", "comparison_id": "R139050", "paper_id": "R139042", "text": "The biomedical model of mental disorder: A critical analysis of its validity, utility, and effects on psychotherapy research The biomedical model posits that mental disorders are brain diseases and emphasizes pharmacological treatment to target presumed biological abnormalities. A biologically-focused approach to science, policy, and practice has dominated the American healthcare system for more than three decades. During this time, the use of psychiatric medications has sharply increased and mental disorders have become commonly regarded as brain diseases caused by chemical imbalances that are corrected with disease-specific drugs. 
However, despite widespread faith in the potential of neuroscience to revolutionize mental health practice, the biomedical model era has been characterized by a broad lack of clinical innovation and poor mental health outcomes. In addition, the biomedical paradigm has profoundly affected clinical psychology via the adoption of drug trial methodology in psychotherapy research. Although this approach has spurred the development of empirically supported psychological treatments for numerous mental disorders, it has neglected treatment process, inhibited treatment innovation and dissemination, and divided the field along scientist and practitioner lines. The neglected biopsychosocial model represents an appealing alternative to the biomedical approach, and an honest and public dialog about the validity and utility of the biomedical paradigm is urgently needed." }, { "instance_id": "R139050xR139038", "comparison_id": "R139050", "paper_id": "R139038", "text": "Hierarchical neural model with attention mechanisms for the\n classification of social media text related to mental health Mental health problems represent a major public health challenge. Automated analysis of text related to mental health is aimed to help medical decision-making, public health policies and to improve health care. Such analysis may involve text classification. Traditionally, automated classification has been performed mainly using machine learning methods involving costly feature engineering. Recently, the performance of those methods has been dramatically improved by neural methods. However, mainly Convolutional neural networks (CNNs) have been explored. In this paper, we apply a hierarchical Recurrent neural network (RNN) architecture with an attention mechanism on social media data related to mental health. We show that this architecture improves overall classification results as compared to previously reported results on the same data. 
Benefitting from the attention mechanism, it can also efficiently select text elements crucial for classification decisions, which can also be used for in-depth analysis." }, { "instance_id": "R139050xR138992", "comparison_id": "R139050", "paper_id": "R138992", "text": "User-level psychological stress detection from social media using deep neural network It is of significant importance to detect and manage stress before it turns into severe problems. However, existing stress detection methods usually rely on psychological scales or physiological devices, making the detection complicated and costly. In this paper, we explore to automatically detect individuals' psychological stress via social media. Employing real online micro-blog data, we first investigate the correlations between users' stress and their tweeting content, social engagement and behavior patterns. Then we define two types of stress-related attributes: 1) low-level content attributes from a single tweet, including text, images and social interactions; 2) user-scope statistical attributes through their weekly micro-blog postings, leveraging information of tweeting time, tweeting types and linguistic styles. To combine content attributes with statistical attributes, we further design a convolutional neural network (CNN) with cross autoencoders to generate user-scope content attributes from low-level content attributes. Finally, we propose a deep neural network (DNN) model to incorporate the two types of user-scope attributes to detect users' psychological stress. We test the trained model on four different datasets from major micro-blog platforms including Sina Weibo, Tencent Weibo and Twitter. Experimental results show that the proposed model is effective and efficient on detecting psychological stress from micro-blog data. We believe our model would be useful in developing stress detection tools for mental health agencies and individuals." 
}, { "instance_id": "R139050xR138859", "comparison_id": "R139050", "paper_id": "R138859", "text": "Multi task sequence learning for depression scale prediction from video Depression is a typical mood disorder that affects people mentally and even physically. People who suffer from depression often behave abnormally in their visual behavior and voice. In this paper, an audio-visual multimodal depression scale prediction system is proposed. First, features extracted from video and audio are fused at the feature level to represent the audio-visual behavior. Second, a long short-term memory recurrent neural network (LSTM-RNN) is utilized to encode the dynamic temporal information of the abnormal audio-visual behavior. Third, emotion information is exploited via multi-task learning to boost the performance further. The proposed approach is evaluated on the Audio-Visual Emotion Challenge (AVEC2014) dataset. Experimental results show that dimensional emotion recognition helps depression scale prediction." }, { "instance_id": "R139050xR138661", "comparison_id": "R139050", "paper_id": "R138661", "text": "Clinical data Neuroimage data Effective discrimination of attention deficit hyperactivity disorder (ADHD) using imaging and functional biomarkers would have a fundamental influence on public health. Usually, the discrimination is based on the standards of the American Psychiatric Association. In this paper, we modified the structure and parameters of a deep learning method according to the properties of ADHD data, to discriminate ADHD on the unique public ADHD-200 dataset. We classified the subjects as control, combined, inattentive or hyperactive based on their frequency features. The results improved greatly over the performance released by the competition. In addition, the imbalance in the datasets influenced the classification results of the deep learning model. 
As far as we know, this is the first time that deep learning has been used for the discrimination of ADHD with fMRI data." }, { "instance_id": "R139050xR138719", "comparison_id": "R139050", "paper_id": "R138719", "text": "Deep Neural Generative Model of Functional MRI Images for Psychiatric Disorder Diagnosis Accurate diagnosis of psychiatric disorders plays a critical role in improving the quality of life for patients and potentially supports the development of new treatments. Many studies have been conducted on machine learning techniques that seek brain imaging data for specific biomarkers of disorders. These studies have encountered the following dilemma: a direct classification overfits to a small number of high-dimensional samples, but unsupervised feature-extraction has the risk of extracting a signal of no interest. In addition, such studies often provided only diagnoses for patients without presenting the reasons for these diagnoses. This study proposed a deep neural generative model of resting-state functional magnetic resonance imaging (fMRI) data. The proposed model is conditioned by the assumption of the subject's state and estimates the posterior probability of the subject's state given the imaging data, using Bayes\u2019 rule. This study applied the proposed model to diagnose schizophrenia and bipolar disorders. Diagnostic accuracy was improved by a large margin over competitive approaches, namely classifications of functional connectivity, discriminative/generative models of regionwise signals, and those with unsupervised feature-extractors. The proposed model visualizes brain regions largely related to the disorders, thus motivating further biological investigation." 
}, { "instance_id": "R139050xR138931", "comparison_id": "R139050", "paper_id": "R138931", "text": "DCNN and DNN based multi-modal depression recognition In this paper, we propose an audio visual multimodal depression recognition framework composed of deep convolutional neural network (DCNN) and deep neural network (DNN) models. For each modality, corresponding feature descriptors are input into a DCNN to learn high-level global features with compact dynamic information, which are then fed into a DNN to predict the PHQ-8 score. For multi-modal depression recognition, the predicted PHQ-8 scores from each modality are integrated in a DNN for the final prediction. In addition, we propose the Histogram of Displacement Range as a novel global visual descriptor to quantify the range and speed of the facial landmarks' displacements. Experiments have been carried out on the Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) dataset for the Depression Sub-challenge of the Audio-Visual Emotion Challenge (AVEC 2016), results show that the proposed multi-modal depression recognition framework obtains very promising results on both the development set and test set, which outperforms the state-of-the-art results." }, { "instance_id": "R139050xR138876", "comparison_id": "R139050", "paper_id": "R138876", "text": "Mood disorder identification using deep bottleneck features of elicited speech In the diagnosis of mental health disorder, a large portion of the Bipolar Disorder (BD) patients is likely to be misdiagnosed as Unipolar Depression (UD) on initial presentation. As speech is the most natural way to express emotion, this work focuses on tracking emotion profile of elicited speech for short-term mood disorder identification. In this work, the Deep Scattering Spectrum (DSS) and Low Level Descriptors (LLDs) of the elicited speech signals are extracted as the speech features. 
The hierarchical spectral clustering (HSC) algorithm is employed to adapt the emotion database to the mood disorder database to alleviate the data bias problem. The denoising autoencoder is then used to extract the bottleneck features of DSS and LLDs for better representation. Based on the bottleneck features, a long short-term memory (LSTM) network is applied to generate the time-varying emotion profile sequence. Finally, given the emotion profile sequence, the HMM-based identification and verification model is used to determine the mood disorder. This work collected elicited emotional speech data from 15 BDs, 15 UDs and 15 healthy controls for system training and evaluation. Five-fold cross-validation was employed for evaluation. Experimental results show that the system using the bottleneck features achieved an identification accuracy of 73.33%, an improvement of 8.89% compared to that without bottleneck features. Furthermore, the system with the verification mechanism outperformed that without verification, improving by 4.44%." }, { "instance_id": "R139050xR138927", "comparison_id": "R139050", "paper_id": "R138927", "text": "DeepBreath: Deep learning of breathing patterns for automatic stress recognition using low-cost thermal imaging in unconstrained settings We propose DeepBreath, a deep learning model which automatically recognises people's psychological stress level (mental overload) from their breathing patterns. Using a low-cost thermal camera, we track a person's breathing patterns as temperature changes around his/her nostril. The paper's technical contribution is threefold. First, instead of creating handcrafted features to capture aspects of the breathing patterns, we transform the uni-dimensional breathing signals into two-dimensional respiration variability spectrogram (RVS) sequences. The spectrograms easily capture the complexity of the breathing dynamics. 
Second, a spatial pattern analysis based on a deep Convolutional Neural Network (CNN) is directly applied to the spectrogram sequences without the need of hand-crafting features. Finally, a data augmentation technique, inspired from solutions for over-fitting problems in deep learning, is applied to allow the CNN to learn with a small-scale dataset from short-term measurements (e.g., up to a few hours). The model is trained and tested with data collected from people exposed to two types of cognitive tasks (Stroop Colour Word Test, Mental Computation test) with sessions of different difficulty levels. Using normalised self-report as ground truth, the CNN reaches 84.59% accuracy in discriminating between two levels of stress and 56.52% in discriminating between three levels. In addition, the CNN outperformed powerful shallow learning methods based on a single layer neural network. Finally, the dataset of labelled thermal images will be open to the community." }, { "instance_id": "R139050xR138969", "comparison_id": "R139050", "paper_id": "R138969", "text": "Artificial Intelligent System for Automatic Depression Level Analysis Through Visual and Vocal Expressions A human being\u2019s cognitive system can be simulated by artificial intelligent systems. Machines and robots equipped with cognitive capability can automatically recognize a humans mental state through their gestures and facial expressions. In this paper, an artificial intelligent system is proposed to monitor depression. It can predict the scales of Beck depression inventory II (BDI-II) from vocal and visual expressions. First, different visual features are extracted from facial expression images. Deep learning method is utilized to extract key visual features from the facial expression frames. Second, spectral low-level descriptors and mel-frequency cepstral coefficients features are extracted from short audio segments to capture the vocal expressions. 
Third, feature dynamic history histogram (FDHH) is proposed to capture the temporal movement on the feature space. Finally, these FDHH and audio features are fused using regression techniques for the prediction of the BDI-II scales. The proposed method has been tested on the public Audio/Visual Emotion Challenges 2014 dataset as it is tuned to be more focused on the study of depression. The results outperform all the other existing methods on the same dataset." }, { "instance_id": "R139050xR138769", "comparison_id": "R139050", "paper_id": "R138769", "text": "Predicting healthcare trajectories from medical records: A deep learning approach Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy." 
}, { "instance_id": "R139050xR138825", "comparison_id": "R139050", "paper_id": "R138825", "text": "Comprehensive functional genomic resource and integrative model for the human brain INTRODUCTION Strong genetic associations have been found for a number of psychiatric disorders. However, understanding the underlying molecular mechanisms remains challenging. RATIONALE To address this challenge, the PsychENCODE Consortium has developed a comprehensive online resource and integrative models for the functional genomics of the human brain. RESULTS The base of the pyramidal resource is the datasets generated by PsychENCODE, including bulk transcriptome, chromatin, genotype, and Hi-C datasets and single-cell transcriptomic data from ~32,000 cells for major brain regions. We have merged these with data from Genotype-Tissue Expression (GTEx), ENCODE, Roadmap Epigenomics, and single-cell analyses. Via uniform processing, we created a harmonized resource, allowing us to survey functional genomics data on the brain over a sample size of 1866 individuals. From this uniformly processed dataset, we created derived data products. These include lists of brain-expressed genes, coexpression modules, and single-cell expression profiles for many brain cell types; ~79,000 brain-active enhancers with associated Hi-C loops and topologically associating domains; and ~2.5 million expression quantitative-trait loci (QTLs) comprising ~238,000 linkage-disequilibrium\u2013independent single-nucleotide polymorphisms and of other types of QTLs associated with splice isoforms, cell fractions, and chromatin activity. By using these, we found that >88% of the cross-population variation in brain gene expression can be accounted for by cell fraction changes. Furthermore, a number of disorders and aging are associated with changes in cell-type proportions. The derived data also enable comparison between the brain and other tissues. 
In particular, by using spectral analyses, we found that the brain has distinct expression and epigenetic patterns, including a greater extent of noncoding transcription than other tissues. The top level of the resource consists of integrative networks for regulation and machine-learning models for disease prediction. The networks include a full gene regulatory network (GRN) for the brain, linking transcription factors, enhancers, and target genes from merging of the QTLs, generalized element-activity correlations, and Hi-C data. By using this network, we link disease genes to genome-wide association study (GWAS) variants for psychiatric disorders. For schizophrenia, we linked 321 genes to the 142 reported GWAS loci. We then embedded the regulatory network into a deep-learning model to predict psychiatric phenotypes from genotype and expression. Our model gives a ~6-fold improvement in prediction over additive polygenic risk scores. Moreover, it achieves a ~3-fold improvement over additive models, even when the gene expression data are imputed, highlighting the value of having just a small amount of transcriptome data for disease prediction. Lastly, it highlights key genes and pathways associated with disorder prediction, including immunological, synaptic, and metabolic pathways, recapitulating de novo results from more targeted analyses. CONCLUSION Our resource and integrative analyses have uncovered genomic elements and networks in the brain, which in turn have provided insight into the molecular mechanisms underlying psychiatric disorders. Our deep-learning model improves disease risk prediction over traditional approaches and can be extended with additional data types (e.g., microRNA and neuroimaging). A comprehensive functional genomic resource for the adult human brain. The resource forms a three-layer pyramid. The bottom layer includes sequencing datasets for traits, such as schizophrenia. 
The middle layer represents derived datasets, including functional genomic elements and QTLs. The top layer contains integrated models, which link genotypes to phenotypes. DSPN, Deep Structured Phenotype Network; PC1 and PC2, principal components 1 and 2; ref, reference; alt, alternate; H3K27ac, histone H3 acetylation at lysine 27. Despite progress in defining genetic risk for psychiatric disorders, their molecular mechanisms remain elusive. Addressing this, the PsychENCODE Consortium has generated a comprehensive online resource for the adult brain across 1866 individuals. The PsychENCODE resource contains ~79,000 brain-active enhancers, sets of Hi-C linkages, and topologically associating domains; single-cell expression profiles for many cell types; expression quantitative-trait loci (QTLs); and further QTLs associated with chromatin, splicing, and cell-type proportions. Integration shows that varying cell-type proportions largely account for the cross-population variation in expression (with >88% reconstruction accuracy). It also allows building of a gene regulatory network, linking genome-wide association study variants to genes (e.g., 321 for schizophrenia). We embed this network into an interpretable deep-learning model, which improves disease prediction by ~6-fold versus polygenic risk scores and identifies key genes and pathways in psychiatric disorders." }, { "instance_id": "R139050xR138776", "comparison_id": "R139050", "paper_id": "R138776", "text": "Applying deep neural networks to unstructured text notes in electronic medical records for phenotyping youth depression Background We report a study of machine learning applied to the phenotyping of psychiatric diagnosis for research recruitment in youth depression, conducted with 861 labelled electronic medical records (EMRs) documents. A model was built that could accurately identify individuals who were suitable candidates for a study on youth depression. 
Objective Our objective was to build a model that identifies individuals who meet the inclusion criteria, as well as unsuitable patients who would require exclusion. Methods Our methods included applying a system that de-identified the EMR documents by removing personally identifying information, having two psychiatrists label a set of EMR documents (from which the 861 came), using a brute-force search, and training a deep neural network for this task. Findings According to a cross-validation evaluation, we describe a model that had a specificity of 97% and a sensitivity of 45%, and a second model with a specificity of 53% and a sensitivity of 89%. We combined these two models into a third one (sensitivity 93.5%; specificity 68%; positive predictive value (precision) 77%) to generate a list of the most suitable candidates in support of research recruitment. Conclusion Our efforts are meant to demonstrate the potential of this type of approach for patient recruitment purposes, but it should be noted that a larger sample size is required to build a truly reliable recommendation system. Clinical implications Future efforts will employ alternative neural network algorithms and other machine learning methods." }, { "instance_id": "R139050xR138702", "comparison_id": "R139050", "paper_id": "R138702", "text": "3D CNN Based Automatic Diagnosis of Attention Deficit Hyperactivity Disorder Using Functional and Structural MRI Attention deficit hyperactivity disorder (ADHD) is one of the most common mental-health disorders. Because ADHD is a neurodevelopmental disorder, neuroimaging technologies, such as magnetic resonance imaging (MRI), coupled with machine learning algorithms, are increasingly being explored as biomarkers in ADHD. Among various machine learning methods, deep learning has demonstrated excellent performance on many imaging tasks. 
With the availability of publicly available large neuroimaging data sets for training purposes, deep learning-based automatic diagnosis of psychiatric disorders can become feasible. In this paper, we develop a deep learning-based ADHD classification method via 3-D convolutional neural networks (CNNs) applied to MRI scans. Since deep neural networks may utilize millions of parameters, even the large number of MRI samples in pooled data sets is still relatively limited if one is to learn discriminative features from the raw data. Instead, here we propose to first extract meaningful 3-D low-level features from functional MRI (fMRI) and structural MRI (sMRI) data. Furthermore, inspired by radiologists\u2019 typical approach for examining brain images, we design a 3-D CNN model to investigate the local spatial patterns of MRI features. Finally, we discover that brain functional and structural information are complementary, and design a multi-modality CNN architecture to combine fMRI and sMRI features. Evaluations on the hold-out testing data of the ADHD-200 global competition show that the proposed multi-modality 3-D CNN approach achieves the state-of-the-art accuracy of 69.15% and outperforms reported classifiers in the literature, even with fewer training samples. We suggest that multi-modality classification will be a promising direction to find potential neuroimaging biomarkers of neurodevelopmental disorders." }, { "instance_id": "R139050xR138959", "comparison_id": "R139050", "paper_id": "R138959", "text": "Automated Depression Diagnosis Based on Deep Networks to Encode Facial Appearance and Dynamics As a severe psychiatric disorder, depression is a state of low mood and aversion to activity, which prevents a person from functioning normally in both work and daily life. The study of automated mental health assessment has been given increasing attention in recent years. In this paper, we study the problem of automatic diagnosis of depression. 
A new approach to predict the Beck Depression Inventory II (BDI-II) values from video data is proposed based on deep networks. The proposed framework is designed in a two-stream manner, aiming at capturing both the facial appearance and dynamics. Further, we employ joint tuning layers that can implicitly integrate the appearance and dynamic information. Experiments are conducted on two depression databases, AVEC2013 and AVEC2014. The experimental results show that our proposed approach significantly improves the depression prediction performance, compared to other visual-based approaches." }, { "instance_id": "R139050xR138879", "comparison_id": "R139050", "paper_id": "R138879", "text": "Exploring microscopic fluctuation of facial expression for mood disorder classification In the clinical diagnosis of mood disorders, depression is one of the most common psychiatric disorders. There are two major types of mood disorders: major depressive disorder (MDD) and bipolar disorder (BPD). A large portion of BPD cases are misdiagnosed as MDD in the diagnosis of mood disorders. Short-term detection, which could be used in early detection and intervention, is thus desirable. This study investigates microscopic facial expression changes in subjects with MDD, BPD, and a control group (CG), when elicited by emotional video clips. This study uses eight basic orientations of the motion vector (MV) to characterize the subtle changes in microscopic facial expression. Then, wavelet decomposition is applied to extract the entropy and energy of different frequency bands. Next, an autoencoder neural network is adopted to extract bottleneck features for dimensionality reduction. Finally, long short-term memory (LSTM) is employed for modeling the long-term variation among different mood disorder types. 
For evaluation of the proposed method, the elicited data from 36 subjects (12 for each of MDD, BPD, and CG) were considered in K-fold (K=12) cross-validation experiments, and the performance for distinguishing among MDD, BPD, and CG reached 67.7% accuracy." }, { "instance_id": "R139050xR139024", "comparison_id": "R139050", "paper_id": "R139024", "text": "X-A-BiLSTM: a Deep Learning Approach for Depression Detection in Imbalanced Data An increasing number of people suffering from mental health conditions resort to online resources (specialized websites, social media, etc.) to share their feelings. Early depression detection using social media data through deep learning models can help to change life trajectories and save lives. But the accuracy of these models was not satisfactory due to real-world imbalanced data distributions. To tackle this problem, we propose a deep learning model (X-A-BiLSTM) for depression detection in imbalanced social media data. The X-A-BiLSTM model consists of two essential components: the first is XGBoost, which is used to reduce data imbalance; and the second is an Attention-BiLSTM neural network, which enhances classification capacity. The Reddit Self-reported Depression Diagnosis (RSDD) dataset was chosen, which included approximately 9,000 users who claimed to have been diagnosed with depression (\u201cdiagnosed\u201d users) and approximately 107,000 matched control users. Results demonstrate that our approach significantly outperforms the previous state-of-the-art models on the RSDD dataset." }, { "instance_id": "R139050xR139004", "comparison_id": "R139050", "paper_id": "R139004", "text": "Characterisation of mental health conditions in social media using Informed Deep Learning Abstract The number of people affected by mental illness is on the increase, and with it the burden on health and social care use, as well as the loss of both productivity and quality-adjusted life-years. 
Natural language processing of electronic health records is increasingly used to study mental health conditions and risk behaviours on a large scale. However, narrative notes written by clinicians do not capture first-hand the patients\u2019 own experiences, and only record cross-sectional, professional impressions at the point of care. Social media platforms have become a source of \u2018in the moment\u2019 daily exchange, with topics including well-being and mental health. In this study, we analysed posts from the social media platform Reddit and developed classifiers to recognise and classify posts related to mental illness according to 11 disorder themes. Using a neural network and deep learning approach, we could automatically recognise mental illness-related posts in our balanced dataset with an accuracy of 91.08% and select the correct theme with a weighted average accuracy of 71.37%. We believe that these results are a first step in developing methods to characterise large amounts of user-generated content that could support content curation and targeted interventions." }, { "instance_id": "R139050xR138934", "comparison_id": "R139050", "paper_id": "R138934", "text": "An Affect Prediction Approach Through Depression Severity Parameter Incorporation in Neural Networks Humans use emotional expressions to communicate their internal affective states. These behavioral expressions are often multi-modal (e.g. facial expression, voice and gestures) and researchers have proposed several schemes to predict the latent affective states based on these expressions. The relationship between the latent affective states and their expression is hypothesized to be affected by several factors, depression disorder being one of them. Despite a wide interest in affect prediction, and several studies linking the effect of depression on affective expressions, only a limited number of affect prediction models account for depression severity. 
In this work, we present a novel scheme that incorporates depression severity as a parameter in Deep Neural Networks (DNNs). In order to predict affective dimensions for an individual at hand, our scheme alters the DNN activation function based on the subject\u2019s depression severity. We perform experiments on affect prediction in two different sessions of the Audio-Visual Depressive language Corpus, which involves patients with varying degrees of depression. Our results show improvements in arousal and valence prediction in both sessions using the proposed DNN modeling. We also present an analysis of the impact of such an alteration in DNNs during training and testing." }, { "instance_id": "R139050xR139019", "comparison_id": "R139050", "paper_id": "R139019", "text": "UArizona at the CLEF eRisk 2017 Pilot Task: Linear and Recurrent Models for Early Depression Detection The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines while using the same feature sets." }, { "instance_id": "R139050xR139014", "comparison_id": "R139050", "paper_id": "R139014", "text": "Detecting Stress Based on Social Interactions in Social Networks Psychological stress is threatening people\u2019s health. It is non-trivial to detect stress in a timely manner for proactive care. 
With the popularity of social media, people are used to sharing their daily activities and interacting with friends on social media platforms, making it feasible to leverage online social network data for stress detection. In this paper, we find that a user's stress state is closely related to that of his/her friends in social media, and we employ a large-scale dataset from real-world social platforms to systematically study the correlation of users\u2019 stress states and social interactions. We first define a set of stress-related textual, visual, and social attributes from various aspects, and then propose a novel hybrid model - a factor graph model combined with a Convolutional Neural Network - to leverage tweet content and social interaction information for stress detection. Experimental results show that the proposed model can improve the detection performance by 6-9 percent in F1-score. By further analyzing the social interaction data, we also discover several intriguing phenomena, i.e., the number of social structures of sparse connections (i.e., with no delta connections) of stressed users is around 14 percent higher than that of non-stressed users, indicating that the social structure of stressed users\u2019 friends tends to be less connected and less complicated than that of non-stressed users." }, { "instance_id": "R139050xR138951", "comparison_id": "R139050", "paper_id": "R138951", "text": "Human Behaviour-Based Automatic Depression Analysis Using Hand-Crafted Statistics and Deep Learned Spectral Features Depression is a serious mental disorder that affects millions of people all over the world. Traditional clinical diagnosis methods are subjective, complicated and need extensive participation of experts. Audio-visual automatic depression analysis systems predominantly base their predictions on very brief sequential segments, sometimes as little as one frame. 
Such data contains much redundant information, causes a high computational load, and negatively affects the detection accuracy. Final decision making at the sequence level is then based on the fusion of frame- or segment-level predictions. However, this approach loses longer-term behavioural correlations, as the behaviours themselves are abstracted away by the frame-level predictions. We propose instead to use automatically detected human behaviour primitives, such as gaze directions and facial action units (AUs), as low-dimensional multi-channel time-series data, which can then be used to create two sequence descriptors. The first calculates the sequence-level statistics of the behaviour primitives and the second casts the problem as a Convolutional Neural Network problem operating on a spectral representation of the multichannel behaviour signals. The results of depression detection (binary classification) and severity estimation (regression) experiments conducted on the AVEC 2016 DAIC-WOZ database show that both methods achieved significant improvement compared to the previous state of the art in terms of depression severity estimation." }, { "instance_id": "R139050xR138676", "comparison_id": "R139050", "paper_id": "R138676", "text": "Discrimination of ADHD Based on fMRI Data with Deep Belief Network Effective discrimination of attention deficit hyperactivity disorder (ADHD) using imaging and functional biomarkers would have a fundamental influence on public health. In this paper, we created a classification model using the ADHD-200 dataset, focusing on resting-state functional magnetic resonance imaging. We predicted ADHD status and subtype with a deep belief network (DBN). In the data preprocessing stage, in order to reduce the high dimension of fMRI brain data, a Brodmann mask, the Fast Fourier Transform algorithm (FFT), and max-pooling of frequencies are applied respectively. 
Experimental results indicate that our method has a good discrimination effect and outperforms the results of the ADHD-200 competition. Meanwhile, our results conform to biological research showing that there exist discrepancies in the prefrontal cortex and cingulate cortex. As far as we know, this is the first time that a deep learning method has been used for the discrimination of ADHD with fMRI data." }, { "instance_id": "R139050xR138807", "comparison_id": "R139050", "paper_id": "R138807", "text": "DeepBipolar: Identifying genomic mutations for bipolar disorder via deep learning Bipolar disorder, also known as manic depression, is a brain disorder that affects the brain structure of a patient. It results in extreme mood swings, severe states of depression, and overexcitement simultaneously. It is estimated that roughly 3% of the population of the United States (about 5.3 million adults) suffers from bipolar disorder. Recent research efforts like the Twin studies have demonstrated a high heritability factor for the disorder, making genomics a viable alternative for detecting and treating bipolar disorder, in addition to the conventional lengthy and costly postsymptom clinical diagnosis. Motivated by this study, leveraging several emerging deep learning algorithms, we design an end\u2010to\u2010end deep learning architecture (called DeepBipolar) to predict bipolar disorder based on limited genomic data. DeepBipolar adopts the Deep Convolutional Neural Network (DCNN) architecture that automatically extracts features from genotype information to predict the bipolar phenotype. We participated in the Critical Assessment of Genome Interpretation (CAGI) bipolar disorder challenge and DeepBipolar was considered the most successful by the independent assessor. In this work, we thoroughly evaluate the performance of DeepBipolar and analyze the type of signals we believe could have affected the classifier in distinguishing the case samples from the control set." 
}, { "instance_id": "R139050xR138796", "comparison_id": "R139050", "paper_id": "R138796", "text": "A Deep Learning Approach for Predicting Antidepressant Response in Major Depression Using Clinical and Genetic Biomarkers In the wake of recent advances in scientific research, personalized medicine using deep learning techniques represents a new paradigm. In this work, our goal was to establish deep learning models which distinguish responders from non-responders, and also to predict possible antidepressant treatment outcomes in major depressive disorder (MDD). To uncover relationships between the responsiveness of antidepressant treatment and biomarkers, we developed a deep learning prediction approach resulting from the analysis of genetic and clinical factors such as single nucleotide polymorphisms (SNPs), age, sex, baseline Hamilton Rating Scale for Depression score, depressive episodes, marital status, and suicide attempt status of MDD patients. The cohort consisted of 455 patients who were treated with selective serotonin reuptake inhibitors (treatment-response rate = 61.0%; remission rate = 33.0%). By using an SNP dataset originating from a genome-wide association study, we selected 10 SNPs (including ABCA13 rs4917029, BNIP3 rs9419139, CACNA1E rs704329, EXOC4 rs6978272, GRIN2B rs7954376, LHFPL3 rs4352778, NELL1 rs2139423, NUAK1 rs2956406, PREX1 rs4810894, and SLIT3 rs139863958) which were associated with antidepressant treatment response. Furthermore, we pinpointed 10 SNPs (including ARNTL rs11022778, CAMK1D rs2724812, GABRB3 rs12904459, GRM8 rs35864549, NAALADL2 rs9878985, NCALD rs483986, PLA2G4A rs12046378, PROK2 rs73103153, RBFOX1 rs17134927, and ZNF536 rs77554113) in relation to remission. Then, we employed multilayer feedforward neural networks (MFNNs) containing 1\u20133 hidden layers and compared MFNN models with logistic regression models. 
Our analysis revealed that the MFNN model with 2 hidden layers (area under the receiver operating characteristic curve (AUC) = 0.8228 \u00b1 0.0571; sensitivity = 0.7546 \u00b1 0.0619; specificity = 0.6922 \u00b1 0.0765) performed best among the predictive models in inferring the complex relationship between antidepressant treatment response and biomarkers. In addition, the MFNN model with 3 hidden layers (AUC = 0.8060 \u00b1 0.0722; sensitivity = 0.7732 \u00b1 0.0583; specificity = 0.6623 \u00b1 0.0853) performed best among the predictive models in predicting remission. Our study indicates that the deep MFNN framework may provide a suitable method to establish a tool for distinguishing treatment responders from non-responders prior to antidepressant therapy." }, { "instance_id": "R139050xR138940", "comparison_id": "R139050", "paper_id": "R138940", "text": "Automated depression analysis using convolutional neural networks from speech To help clinicians efficiently diagnose the severity of a person's depression, the affective computing community and the artificial intelligence field have shown a growing interest in designing automated systems. Speech features carry useful information for the diagnosis of depression. However, manual design and domain knowledge are still important for feature selection, which makes the process labor-intensive and subjective. In recent years, deep-learned features based on neural networks have shown superior performance to hand-crafted features in various areas. In this paper, to overcome the difficulties mentioned above, we propose a combination of hand-crafted and deep-learned features which can effectively measure the severity of depression from speech. In the proposed method, Deep Convolutional Neural Networks (DCNN) are first built to learn deep-learned features from spectrograms and raw speech waveforms. 
Then we manually extract state-of-the-art texture descriptors named median robust extended local binary patterns (MRELBP) from spectrograms. To capture the complementary information within the hand-crafted features and deep-learned features, we propose joint fine-tuning layers to combine the raw and spectrogram DCNNs to boost the depression recognition performance. Moreover, to address the problems with small samples, a data augmentation method is proposed. Experiments conducted on the AVEC2013 and AVEC2014 depression databases show that our approach is robust and effective for the diagnosis of depression when compared to state-of-the-art audio-based methods." }, { "instance_id": "R139050xR138687", "comparison_id": "R139050", "paper_id": "R138687", "text": "Diagnosis of attention deficit hyperactivity disorder using deep belief network based on greedy approach Attention deficit hyperactivity disorder creates conditions in which the child cannot sit calmly and still, control his/her behavior, and focus his/her attention on a particular issue. Five out of every hundred children are affected by the disease. Boys are at three times greater risk than girls for this complication. The disorder often begins before age seven, and parents may not realize their children's problem until they get older. Children with hyperactivity and attention deficit are at high risk of conduct disorder, antisocial personality, and drug abuse. Most children suffering from the disease will develop feelings of depression, anxiety, and a lack of self-confidence. Given the importance of diagnosing the disease, Deep Belief Networks (DBNs) were used as a deep learning model to predict the disease. In this system, in addition to fMRI image features, sophisticated features such as age and IQ, as well as functional characteristics, were used. 
The proposed method was evaluated on two standard data sets from the ADHD-200 Global Competition, including the NeuroImage and NYU data sets, and compared with state-of-the-art algorithms. The results showed the superiority of the proposed method over other systems. The prediction accuracy improved by +12.04 and +27.81 on the NeuroImage and NYU datasets, respectively, compared to the best method proposed in the ADHD-200 Global Competition." }, { "instance_id": "R139050xR138974", "comparison_id": "R139050", "paper_id": "R138974", "text": "Depression Severity Classification from Speech Emotion Major Depressive Disorder (MDD) is a common psychiatric illness. Automatically classifying depression severity using audio analysis can help clinical management decisions during Deep Brain Stimulation (DBS) treatment of MDD patients. Leveraging the link between short-term emotions and long-term depressed mood states, we build our predictive model on top of emotion-based features. Because acquiring emotion labels of MDD patients is a challenging task, we propose to use an auxiliary emotion dataset to train a Deep Neural Network (DNN) model. The DNN is then applied to audio recordings of MDD patients to find their low-dimensional representation to be used in the classification algorithm. Our preliminary results indicate that the proposed approach, in comparison to the alternatives, effectively classifies depressed and improved phases of DBS treatment with an AUC of 0.80." }, { "instance_id": "R139050xR138715", "comparison_id": "R139050", "paper_id": "R138715", "text": "Combination of rs-fMRI and sMRI Data to Discriminate Autism Spectrum Disorders in Young Children Using Deep Belief Network In recent years, the use of advanced magnetic resonance (MR) imaging methods such as functional magnetic resonance imaging (fMRI) and structural magnetic resonance imaging (sMRI) has seen a great increase in neuropsychiatric disorders. 
Deep learning is a branch of machine learning that is increasingly being used for applications of medical image analysis such as computer-aided diagnosis. For the classification and representation learning tasks, this study utilized one of the most powerful deep learning algorithms, the deep belief network (DBN), on the combination of data from the Autism Brain Imaging Data Exchange I and II (ABIDE I and ABIDE II) datasets. The DBN was employed to focus on the combination of resting-state fMRI (rs-fMRI), gray matter (GM), and white matter (WM) data. This was done based on the brain regions that were defined using automated anatomical labeling (AAL), in order to classify autism spectrum disorders (ASDs) from typical controls (TCs). Since the diagnosis of ASD is much more effective at an early age, only 185 individuals (116 ASD and 69 TC) ranging in age from 5 to 10 years were included in this analysis. In contrast to older methods, which consider only simple low-level features extracted from neuroimages, the proposed method exploits the latent or abstract high-level features inside rs-fMRI and sMRI data. Moreover, combining multiple data types and increasing the depth of the DBN can improve classification accuracy. In this study, the best combination comprised rs-fMRI, GM, and WM for a DBN of depth 3, with 65.56% accuracy (sensitivity = 84%, specificity = 32.96%, F1 score = 74.76%) obtained via 10-fold cross-validation. This result outperforms previously presented methods on the ABIDE I dataset." }, { "instance_id": "R139050xR138729", "comparison_id": "R139050", "paper_id": "R138729", "text": "fMRIPrep: a robust preprocessing pipeline for functional MRI Preprocessing of functional magnetic resonance imaging (fMRI) involves numerous steps to clean and standardize the data before statistical analysis. Generally, researchers create ad hoc preprocessing workflows for each dataset, building upon a large inventory of available tools. 
The complexity of these workflows has snowballed with rapid advances in acquisition and processing. We introduce fMRIPrep, an analysis-agnostic tool that addresses the challenge of robust and reproducible preprocessing for fMRI data. fMRIPrep automatically adapts a best-in-breed workflow to the idiosyncrasies of virtually any dataset, ensuring high-quality preprocessing without manual intervention. By introducing visual assessment checkpoints into an iterative integration framework for software testing, we show that fMRIPrep robustly produces high-quality results on a diverse fMRI data collection. Additionally, fMRIPrep introduces less uncontrolled spatial smoothness than observed with commonly used preprocessing tools. fMRIPrep equips neuroscientists with an easy-to-use and transparent preprocessing workflow, which can help ensure the validity of inference and the interpretability of results. fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results." }, { "instance_id": "R139050xR138690", "comparison_id": "R139050", "paper_id": "R138690", "text": "Deep learning based automatic diagnoses of attention deficit hyperactive disorder In this paper, we aim to develop a deep learning based automatic Attention Deficit Hyperactivity Disorder (ADHD) diagnosis algorithm using resting state functional magnetic resonance imaging (rs-fMRI) scans. However, relative to the millions of parameters in deep neural networks (DNNs), the number of fMRI samples is still limited for learning discriminative features from the raw data. In light of this, we first encode our prior knowledge on 3D features voxel-wise, including Regional Homogeneity (ReHo), fractional Amplitude of Low Frequency Fluctuations (fALFF) and Voxel-Mirrored Homotopic Connectivity (VMHC), and take these 3D images as the input to the DNN. 
Inspired by the way that radiologists examine brain images, we further investigate a novel 3D convolutional neural network (CNN) architecture to learn 3D local patterns which may boost the diagnosis accuracy. Investigation on the hold-out testing data of the ADHD-200 Global competition demonstrates that the proposed 3D CNN approach yields superior performance when compared to the reported classifiers in the literature, even with fewer training samples." }, { "instance_id": "R139050xR138742", "comparison_id": "R139050", "paper_id": "R138742", "text": "Automated EEG-based screening of depression using deep convolutional neural network In recent years, advanced neurocomputing and machine learning techniques have been used for Electroencephalogram (EEG)-based diagnosis of various neurological disorders. In this paper, a novel computer model is presented for EEG-based screening of depression using a deep neural network machine learning approach, known as a Convolutional Neural Network (CNN). The proposed technique does not require a semi-manually-selected set of features to be fed into a classifier for classification. It learns automatically and adaptively from the input EEG signals to differentiate EEGs obtained from depressive and normal subjects. The model was tested using EEGs obtained from 15 normal and 15 depressed patients. The algorithm attained accuracies of 93.5% and 96.0% using EEG signals from the left and right hemisphere, respectively. It was discovered in this research that the EEG signals from the right hemisphere are more distinctive in depression than those from the left hemisphere. This discovery is consistent with recent research revealing that depression is associated with a hyperactive right hemisphere. An exciting extension of this research would be the diagnosis of different stages and severity of depression and the development of a Depression Severity Index (DSI)." 
}, { "instance_id": "R139050xR138870", "comparison_id": "R139050", "paper_id": "R138870", "text": "DepAudioNet: An Efficient Deep Model for Audio based Depression Classification This paper presents a novel and effective audio-based method for depression classification. It focuses on two important issues, i.e., data representation and sample imbalance, which are not well addressed in the literature. For the former, in contrast to traditional shallow hand-crafted features, we propose a deep model, namely DepAudioNet, to encode the depression-related characteristics in the vocal channel, combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to deliver a more comprehensive audio representation. For the latter, we introduce a random sampling strategy in the model training phase to balance the positive and negative samples, which largely alleviates the bias caused by uneven sample distribution. Evaluations are carried out on the DAIC-WOZ dataset for the Depression Classification Sub-challenge (DCC) at the 2016 Audio-Visual Emotion Challenge (AVEC), and the experimental results achieved clearly demonstrate the effectiveness of the proposed approach." }, { "instance_id": "R139050xR138763", "comparison_id": "R139050", "paper_id": "R138763", "text": "EEG-based mild depression recognition using convolutional neural network Electroencephalography (EEG)\u2013based studies focus on depression recognition using data mining methods, while those on mild depression are still in their infancy, especially in effective monitoring and quantitative measure aspects. Aiming at mild depression recognition, this study proposed a computer-aided detection (CAD) system using a convolutional neural network (ConvNet). However, the ConvNet architecture was derived by trial and error, and a CAD system used in clinical practice should be built on the basis of a local database; we therefore applied transfer learning when constructing the ConvNet architecture.
We also focused on the role of different aspects of EEG, i.e., spectral, spatial, and temporal information, in the recognition of mild depression and found that the spectral information of EEG played a major role and the temporal information of EEG provided a statistically significant improvement to accuracy. The proposed system achieved an accuracy of 85.62% for the recognition of mild depression versus normal controls with 24-fold cross-validation (the training and test sets are divided based on the subjects). Thus, the system can be clinically used for the objective, accurate, and rapid diagnosis of mild depression. Graphical abstract: The EEG power of theta, alpha, and beta bands is calculated separately under trial-wise and frame-wise strategies and is organized into three input forms of deep neural networks: feature vector, images without electrode location (spatial information), and images with electrode location. The role of EEG\u2019s spectral and spatial information in mild depression recognition is investigated through ConvNet, and the role of EEG\u2019s temporal information is investigated using different architectures to aggregate temporal features from multiple frames. The ConvNet and models for aggregating temporal features are transferred from the state-of-the-art model in mental load classification." }, { "instance_id": "R139190xR139112", "comparison_id": "R139190", "paper_id": "R139112", "text": "Absolute ozone densities in a radio-frequency driven atmospheric pressure plasma using two-beam UV-LED absorption spectroscopy and numerical simulations The efficient generation of reactive oxygen species (ROS) in cold atmospheric pressure plasma jets (APPJs) is an increasingly important topic, e.g. for the treatment of temperature-sensitive biological samples in the field of plasma medicine.
A 13.56 MHz radio-frequency (rf) driven APPJ device operated with helium feed gas and small admixtures of oxygen (up to 1%), generating a homogeneous glow-mode plasma at low gas temperatures, was investigated. Absolute densities of ozone, one of the most prominent ROS, were measured across the 11 mm wide discharge channel by means of broadband absorption spectroscopy using the Hartley band centered at \u03bb = 255 nm. A two-beam setup with a reference beam in Mach-Zehnder configuration is employed for improved signal-to-noise ratio, allowing high-sensitivity measurements in the investigated single-pass weak-absorbance regime. The results are correlated to gas temperature measurements, deduced from the rotational temperature of the N2 (C\u00b3\u03a0u \u2192 B\u00b3\u03a0g, \u03c5 = 0 \u2192 2) optical emission from introduced air impurities. The observed opposing trends of both quantities as a function of rf power input and oxygen admixture are analysed and explained in terms of a zero-dimensional plasma-chemical kinetics simulation. It is found that the gas temperature as well as the densities of O and O2(b\u00b9\u03a3g\u207a) influence the absolute O3 densities when the rf power is varied." }, { "instance_id": "R139190xR139130", "comparison_id": "R139190", "paper_id": "R139130", "text": "Atmospheric plasma VUV photon emission Owing to its distinctive photon energy range, vacuum ultraviolet (VUV) emission plays a key role in diverse photo-induced natural and technological processes. Atmospheric-pressure plasma produced VUV is central to resolving long-held issues in the dynamics of natural (e.g., lightning) and laboratory (e.g., streamer) plasmas.
Challenging the seemingly unavoidable vacuum systems used to prevent VUV emission quenching by ambient gases, here we report the first observation of vacuum-free generation of stable sub-110 nm VUV emission from atmospheric-pressure plasmas jetted into open air and from an atmospheric air plasma. Emission from atomic helium at 58.4 nm is observed from a nonequilibrium atmospheric pressure plasma jet (N-APPJ) jetted directly into ambient air. In a similar experiment, we also report VUV emission from excited nitrogen species in an atmospheric pressure discharge in ambient air. The photon emissions detected expand the window of photo-induced processes beyond the ~10 eV commonly achievable by existing non-excimer VUV plasma sources, and enable direct photo-excitation and ionization of molecular species such as CO2 and many others. The thus-enabled direct photoionization of O2, O, and N species further justifies the role of direct photoionization in the dynamics of natural and laboratory atmospheric-pressure plasmas and informs the development of the relevant plasma photoionization models, which currently largely sidestep the sub-110 nm domain. These findings can contribute to completing the photoionization models of lightning, streamers, and other plasmas, and open new avenues to quantifying the yet elusive role of photoionization in plasma dynamics." }, { "instance_id": "R139190xR139100", "comparison_id": "R139190", "paper_id": "R139100", "text": "Impact of plasma jet vacuum ultraviolet radiation on reactive oxygen species generation in bio-relevant liquids Plasma medicine utilizes the combined interaction of plasma-produced reactive components. These are reactive atoms, molecules, ions, metastable species, and radiation.
Here, ultraviolet (UV, 100\u2013400 nm) and, in particular, vacuum ultraviolet (VUV, 10\u2013200 nm) radiation generated by an atmospheric pressure argon plasma jet were investigated with regard to plasma emission and absorption in a humidified atmosphere and in solutions relevant for plasma medicine. The energy absorption was obtained for simple solutions like distilled water (dH2O) or ultrapure water and sodium chloride (NaCl) solution as well as for more complex ones, for example, Roswell Park Memorial Institute (RPMI 1640) cell culture media. As a moderately stable reactive oxygen species, hydrogen peroxide (H2O2) was studied. Highly reactive oxygen radicals, namely, superoxide anion (O2\u2022\u2212) and hydroxyl radicals (\u2022OH), were investigated by the use of electron paramagnetic resonance spectroscopy. All species amounts were detected for three different treatmen..." }, { "instance_id": "R139190xR139115", "comparison_id": "R139190", "paper_id": "R139115", "text": "Chemical kinetics in an atmospheric pressure helium plasma containing humidity

Investigating the formation and kinetics of O and OH in a He\u2013H2O plasma jet using absorption spectroscopy and 0D modelling.
" }, { "instance_id": "R139190xR139071", "comparison_id": "R139190", "paper_id": "R139071", "text": "On the Vacuum Ultraviolet Radiation of a Miniaturized Non-thermal Atmospheric Pressure Plasma Jet The suitability of a miniaturized non-thermal APPJ operating with Ar at ambient atmosphere for applications related to surface treatment is demonstrated. The VUV emission is measured and the dependence of selected line intensities over the radius of the plasma jet is presented. The Ar discharge is characterized by an intense VUV radiation, attributed to N, H, and O atomic lines along with an Ar2* excimer continuum, which is drastically reduced after adding up to 5% N2 to the Ar working gas. Two absorption dips are found in the VUV spectrum. The surface energy enhancement of substrates at temperatures as low as 35 \u00b0C along with chemical reactivity originating from abundant NO and OH free radicals and UV/VUV radiation in the plasma give rise to numerous applications, e.g., in the medical and biological field." }, { "instance_id": "R139190xR139118", "comparison_id": "R139190", "paper_id": "R139118", "text": "Determination of NO densities in a surface dielectric barrier discharge using optical emission spectroscopy A new computationally assisted diagnostic to measure NO densities in atmospheric-pressure microplasmas by Optical Emission Spectroscopy (OES) is developed and validated against absorption spectroscopy in a volume Dielectric Barrier Discharge (DBD). The OES method is then applied to a twin surface DBD operated in N 2 to measure the NO density as a function of the O 2 admixture ( 0.1%\u2013 1%). 
The underlying rate equation model reveals that NO (A\u00b2\u03a3\u207a) is primarily excited by reactions of the ground state NO (X\u00b2\u03a0) with metastables N2 (A\u00b3\u03a3u\u207a)." }, { "instance_id": "R139190xR139097", "comparison_id": "R139190", "paper_id": "R139097", "text": "Absolute atomic oxygen and nitrogen densities in radio-frequency driven atmospheric pressure cold plasmas: Synchrotron vacuum ultra-violet high-resolution Fourier-transform absorption measurements Reactive atomic species play a key role in emerging cold atmospheric pressure plasma applications, in particular, in plasma medicine. Absolute densities of atomic oxygen and atomic nitrogen were measured in a radio-frequency driven non-equilibrium plasma operated at atmospheric pressure using vacuum ultra-violet (VUV) absorption spectroscopy. The experiment was conducted on the DESIRS synchrotron beamline using a unique VUV Fourier-transform spectrometer. Measurements were carried out in plasmas operated in helium with air-like N2/O2 (4:1) admixtures. A maximum in the O-atom concentration of (9.1 \u00b1 0.7) \u00d7 10\u00b2\u2070 m\u207b\u00b3 was found at admixtures of 0.35 vol. %, while the N-atom concentration exhibits a maximum of (5.7 \u00b1 0.4) \u00d7 10\u00b9\u2079 m\u207b\u00b3 at 0.1 vol. %."
}, { "instance_id": "R139190xR139062", "comparison_id": "R139190", "paper_id": "R139062", "text": "Generation of excimer emission in dielectric barrier discharges Dielectric barrier discharges (silent discharges) are used to excite a large number of excimers radiating in the VUV, UV or visible spectral range. The excited species include rare-gas dimers, halogen dimers as well as rare-gas halogen excimers and mercury halogen excimers. In many cases narrow-band UV radiation of typically 1\u201317 nm halfwidth and remarkable efficiency (1\u201310%) could be generated. Thus, dielectric barrier discharges provide a simple, versatile arrangement to study the basic reaction kinetics of excimer formation and also bear a substantial potential for large-scale industrial UV processes." }, { "instance_id": "R139190xR139077", "comparison_id": "R139190", "paper_id": "R139077", "text": "Spatially resolved diagnostics on a microscale atmospheric pressure plasma jet Despite enormous potential for technological applications, fundamentals of stable non-equilibrium micro-plasmas at ambient pressure are still only partly understood. Micro-plasma jets are one sub-group of these plasma sources. For an understanding it is particularly important to analyse transport phenomena of energy and particles within and between the core and effluent of the discharge. The complexity of the problem requires the combination and correlation of various highly sophisticated diagnostics yielding different information with an extremely high temporal and spatial resolution. A specially designed rf microscale atmospheric pressure plasma jet (\u03bc-APPJ) provides excellent access for optical diagnostics to the discharge volume and the effluent region. This allows detailed investigations of the discharge dynamics and energy transport mechanisms from the discharge to the effluent. Here we present examples for diagnostics applicable to different regions and combine the results. 
The diagnostics applied are optical emission spectroscopy (OES) in the visible and ultraviolet and two-photon absorption laser-induced fluorescence spectroscopy. By the latter, spatially resolved, absolutely calibrated density maps of atomic oxygen have been determined for the effluent. OES yields an insight into energy transport mechanisms from the core into the effluent. The first results of spatially and phase-resolved OES measurements of the discharge dynamics of the core are presented." }, { "instance_id": "R139190xR139068", "comparison_id": "R139190", "paper_id": "R139068", "text": "An atmospheric pressure plasma source An atmospheric pressure plasma source operated by radio frequency power has been developed. This source produces a unique discharge that is volumetric and homogeneous at atmospheric pressure with a gas temperature below 300 \u00b0C. It also produces a large quantity of oxygen atoms, \u223c5 \u00d7 10\u00b9\u2075 cm\u207b\u00b3, which has important value for materials applications. A theoretical model shows electron densities of 0.2\u20132 \u00d7 10\u00b9\u00b9 cm\u207b\u00b3 and characteristic electron energies of 2\u20134 eV for helium discharges at a power level of 3\u201330 W cm\u207b\u00b3." }, { "instance_id": "R139190xR139109", "comparison_id": "R139190", "paper_id": "R139109", "text": "Cold Atmospheric Pressure Plasma VUV Interactions With Surfaces: Effect of Local Gas Environment and Source Design This study uses photoresist materials in combination with several optical filters as a diagnostic to examine the relative importance of VUV-induced surface modifications for different cold atmospheric pressure plasma (CAPP) sources. The argon-fed kHz-driven ring-APPJ showed the largest ratio of VUV surface modification relative to the total modification introduced, whereas the MHz APPJ showed the largest overall surface modification.
The MHz APPJ shows increased total thickness reduction and a reduced VUV effect as oxygen is added to the feed gas, a condition that is often used for practical applications. We examine the influence of noble gas flow from the APPJ on the local environment. The local environment has a decisive impact on polymer modification from VUV emission, as O2 readily absorbs VUV photons." }, { "instance_id": "R139190xR139089", "comparison_id": "R139190", "paper_id": "R139089", "text": "The dynamics of radio-frequency driven atmospheric pressure plasma jets The complex dynamics of radio-frequency driven atmospheric pressure plasma jets is investigated using various optical diagnostic techniques and numerical simulations. Absolute number densities of ground state atomic oxygen radicals in the plasma effluent are measured by two-photon absorption laser induced fluorescence spectroscopy (TALIF). Spatial profiles are compared with (vacuum) ultra-violet radiation from excited states of atomic oxygen and molecular oxygen, respectively. The excitation and ionization dynamics in the plasma core are dominated by electron impact and observed by space and phase resolved optical emission spectroscopy (PROES). The electron dynamics is governed through the motion of the plasma boundary sheaths in front of the electrodes, as illustrated in numerical simulations using a hybrid code based on fluid equations and kinetic treatment of electrons." }, { "instance_id": "R139190xR139083", "comparison_id": "R139190", "paper_id": "R139083", "text": "Vacuum UV Radiation of a Plasma Jet Operated With Rare Gases at Atmospheric Pressure The vacuum ultraviolet (VUV) emissions from 115 to 200 nm from the effluent of an RF (1.2 MHz) capillary jet fed with pure argon and binary mixtures of argon and xenon or krypton (up to 20%) are analyzed. The feed gas mixture emanates into air at normal pressure. The Ar2 excimer second continuum, observed in the region of 120\u2013135 nm, prevails in the pure Ar discharge.
It decreases when small amounts (as low as 0.5%) of Xe or Kr are added. In that case, the resonant emission of Xe at 147 nm (or 124 nm for Kr, respectively) becomes dominant. The Xe2 second continuum at 172 nm appears for higher admixtures of Xe (10%). Furthermore, several N I emission lines, the O I resonance line, and the H I line appear due to ambient air. Two absorption bands (120.6 and 124.6 nm) are present in the spectra. Their origin could be unequivocally associated with O2 and O3. The radiance is determined end-on at varying axial distance in absolute units for various mixtures of Ar/Xe and Ar/Kr and compared to pure Ar. Integration over the entire VUV wavelength region provides the integrated spectral distribution. Maximum values of 2.2 mW\u00b7mm\u207b\u00b2\u00b7sr\u207b\u00b9 are attained in pure Ar at a distance of 4 mm from the outlet nozzle of the discharge. By adding minute admixtures of Kr or Xe, the intensity and spectral distribution are effectively changed." }, { "instance_id": "R139190xR139094", "comparison_id": "R139190", "paper_id": "R139094", "text": "Characterization of transient discharges under atmospheric-pressure conditions applying nitrogen photoemission and current measurements The plasma parameters, such as the electron distribution function and electron density, of three atmospheric-pressure transient discharges, namely filamentary and homogeneous dielectric barrier discharges in air and the spark discharge of an argon plasma coagulation (APC) system, are determined. A combination of numerical simulation as well as diagnostic methods including current measurement and optical emission spectroscopy (OES) based on nitrogen emissions is used. The applied methods supplement each other and resolve problems that arise when these methods are used individually. Nitrogen is used as a sensor gas and is admixed in low amounts to argon for characterizing the APC discharge.
Both direct and stepwise electron-impact excitation of nitrogen emissions are included in the plasma-chemical model applied for the characterization of these transient discharges using OES, where ambiguity arises in the determination of plasma parameters under specific discharge conditions. It is shown that the measured current solves this problem by providing additional information useful for the determination of discharge-specific plasma parameters." }, { "instance_id": "R139190xR139132", "comparison_id": "R139190", "paper_id": "R139132", "text": "Characterization of an RF-driven argon plasma at atmospheric pressure using broadband absorption and optical emission spectroscopy Atmospheric pressure plasmas in argon are of particular interest due to the production of highly excited and reactive species enabling numerous plasma-aided applications. In this contribution, we report on absolute optical emission and absorption spectroscopy of a radio frequency (RF) driven capacitively coupled argon glow discharge operated in a parallel-plate configuration. This enabled the study of all key parameters including electron density and temperature, gas temperature, and absolute densities of atoms in highly electronically excited states. Space and time-averaged electron density and temperature were determined from the measurement of the absolute intensity of the electron-atom bremsstrahlung in the visible range. Considering the non-Maxwellian electron energy distribution function, an electron temperature (Te) of 2.1 eV and an electron density (ne) of 1.1 \u00d7 10\u00b9\u2079 m\u207b\u00b3 were obtained. The time-averaged and spatially resolved absolute densities of atoms in the metastable (1s5 and 1s3) and resonant (1s4 and 1s2) states of argon in the pure Ar and Ar/He mixture were obtained by broadband absorption spectroscopy.
The 1s5 metastable atoms had the largest density near the sheath region, with a maximum value of 8 \u00d7 10\u00b9\u2077 m\u207b\u00b3, while all other 1s states had densities of at most 2 \u00d7 10\u00b9\u2077 m\u207b\u00b3. The dominant production and loss mechanisms of these atoms were discussed, in particular the role of radiation trapping. We conclude with a comparison of the plasma properties of the argon RF glow discharges with the more common He equivalent and highlight their differences." }, { "instance_id": "R139190xR139065", "comparison_id": "R139190", "paper_id": "R139065", "text": "Etching materials with an atmospheric-pressure plasma jet A plasma jet has been developed for etching materials at atmospheric pressure and between 100 and C. Gas mixtures containing helium, oxygen and carbon tetrafluoride were passed between an outer, grounded electrode and a centre electrode, which was driven by 13.56 MHz radio frequency power at 50 to 500 W. At a flow rate of , a stable, arc-free discharge was produced. This discharge extended out through a nozzle at the end of the electrodes, forming a plasma jet. Materials placed 0.5 cm downstream from the nozzle were etched at the following maximum rates: for Kapton ( and He only), for silicon dioxide, for tantalum and for tungsten. Optical emission spectroscopy was used to identify the electronically excited species inside the plasma and outside in the jet effluent." }, { "instance_id": "R139425xR78308", "comparison_id": "R139425", "paper_id": "R78308", "text": "R3F: RDF triple filtering method for efficient SPARQL query processing With the rapid growth in the amount of graph-structured Resource Description Framework (RDF) data, SPARQL query processing has received significant attention. The most important part of SPARQL query processing is its method of subgraph pattern matching. For this, most RDF stores use relation-based approaches, which can produce a vast number of redundant intermediate results during query evaluation.
In order to address this problem, we propose an RDF Triple Filtering (R3F) method that exploits the graph-structural information of RDF data. We design a path-based index called the RDF Path index (RP-index) to efficiently provide filter data for the triple filtering. We also propose a relational operator called the RDF Filter (RFLT) that can conduct the triple filtering with little overhead compared to the original query processing. Through comprehensive experiments on large-scale RDF datasets, we demonstrate that R3F can effectively and efficiently reduce the number of redundant intermediate results and improve the query performance." }, { "instance_id": "R139425xR109824", "comparison_id": "R139425", "paper_id": "R109824", "text": "Cost Based Query Ordering over OWL Ontologies The paper presents an approach for cost-based query planning for SPARQL queries issued over an OWL ontology using the OWL Direct Semantics entailment regime of SPARQL 1.1. The costs are based on information about the instances of classes and properties that are extracted from a model abstraction built by an OWL reasoner. A static and a dynamic algorithm are presented which use these costs to find optimal or near optimal execution orders for the atoms of a query. For the dynamic case, we improve the performance by exploiting an individual clustering approach that allows for computing the cost functions based on one individual sample from a cluster. Our experimental study shows that the static ordering usually outperforms the dynamic one when accurate statistics are available. This changes, however, when the statistics are less accurate, e.g., due to non-deterministic reasoning decisions." 
}, { "instance_id": "R139425xR109638", "comparison_id": "R139425", "paper_id": "R109638", "text": "Dynamic and fast processing of queries on large-scale RDF data As RDF data continue to gain popularity, we witness the fast growing trend of RDF datasets in both the number of RDF repositories and the size of RDF datasets. Many known RDF datasets contain billions of RDF triples (subject, predicate and object). One of the grand challenges for managing these huge RDF data is how to execute RDF queries efficiently. In this paper, we address the query processing problems against the billion-triple challenge. We first identify some causes for the problems of existing query optimization schemes, such as large intermediate results and initial query cost estimation errors. Then, we present our block-oriented dynamic query plan generation approach powered with pipelining execution. Our approach consists of two phases. In the first phase, a near-optimal execution plan for queries is chosen by identifying the processing blocks of queries. We group the join patterns sharing a join variable into building blocks of the query plan, since executing them first provides opportunities to reduce the size of intermediate results generated. In the second phase, we further optimize the initial pipelining for a given query plan. We employ optimization techniques, such as sideways information passing and semi-join, to further reduce the size of intermediate results, improve the query processing cost estimation and speed up the performance of query execution. Experimental results on several RDF datasets of over a billion triples demonstrate that our approach outperforms existing RDF query engines that rely on dynamic programming based static query processing strategies."
}, { "instance_id": "R139526xR139466", "comparison_id": "R139526", "paper_id": "R139466", "text": "An artificial immune algorithm for the flexible job-shop scheduling problem This article addresses the flexible job-shop scheduling problem (FJSP) to minimize makespan. The FJSP is strongly NP-hard and consists of two sub-problems. The first one is to assign each operation to a machine out of a set of capable machines, and the second one deals with sequencing the assigned operations on all machines. To solve this problem, an artificial immune algorithm (AIA) based on an integrated approach is proposed. This algorithm uses several strategies for generating the initial population and selecting the individuals for reproduction. Different mutation operators are also utilized for reproducing new individuals. To show the effectiveness of the proposed method, numerical experiments using benchmark problems are conducted. Consequently, the computational results validate the quality of the proposed approach." }, { "instance_id": "R139526xR139481", "comparison_id": "R139526", "paper_id": "R139481", "text": "Optimization of the Master Production Scheduling in a Textile Industry Using Genetic Algorithm In a competitive environment, an industry\u2019s success is directly related to the level of optimization of its processes and how production is planned and developed. In this area, the master production scheduling (MPS) is the key action for success. The object of study arises from the need to optimize the medium-term production planning system in a textile company through genetic algorithms. This research begins with the analysis of the constraints, mainly determined by the installed capacity and the number of workers. The aggregate production planning is carried out for the T-shirt families.
Given this complexity, bioinspired optimization techniques show their best performance, in contrast to industries that normally employ simple exact methods that provide an empirical MPS but can compromise efficiency and costs. The products are then disaggregated for each of the items in which the MPS is determined, based on the analysis of the demand forecast and the orders made by customers. From this, with the use of genetic algorithms, the MPS is optimized to carry out production planning, with an improvement of up to 96% in the service level provided." }, { "instance_id": "R139526xR139484", "comparison_id": "R139526", "paper_id": "R139484", "text": "Production scheduling in a knitted fabric dyeing and finishing process Developing detailed production schedules for dyeing and finishing operations is a very difficult task that has received relatively little attention in the literature. In this paper, a scheduling procedure is presented for a knitted fabric dyeing and finishing plant that is essentially a flexible job shop with sequence-dependent setups. An existing job shop scheduling algorithm is modified to take into account the complexities of the case plant. The resulting approach based on family scheduling is tested on problems generated with case plant characteristics." }, { "instance_id": "R139526xR139522", "comparison_id": "R139526", "paper_id": "R139522", "text": "Multisystem Optimization for an Integrated Production Scheduling with Resource Saving Problem in Textile Printing and Dyeing Resource saving has become an integral aspect of manufacturing in Industry 4.0. This paper proposes a multisystem optimization (MSO) algorithm, inspired by the implicit parallelism of heuristic methods, to solve an integrated production scheduling with resource saving problem in textile printing and dyeing. First, a real-world integrated production scheduling with resource saving is formulated as a multisystem optimization problem.
Then, the MSO algorithm is proposed to solve multisystem optimization problems that consist of several coupled subsystems, each of which may contain multiple objectives and multiple constraints. The proposed MSO algorithm is composed of within-subsystem evolution and cross-subsystem migration operators; the former optimizes each subsystem with evolutionary operators, while the latter shares information among the subsystems to accelerate the global optimization of the whole system. Performance is tested on a set of multisystem benchmark functions and compared with an improved NSGA-II and the multiobjective multifactorial evolutionary algorithm (MO-MFEA). Simulation results show that the MSO algorithm outperforms the compared algorithms on the benchmark functions studied in this paper. Finally, the MSO algorithm is successfully applied to the proposed integrated production scheduling with resource saving problem, and the results show that MSO is a promising algorithm for the studied problem." }, { "instance_id": "R139526xR139457", "comparison_id": "R139526", "paper_id": "R139457", "text": "A review of energy use and energy efficiency technologies for the textile industry The textile industry is a complicated manufacturing industry because it is a fragmented and heterogeneous sector dominated by small and medium enterprises (SMEs). There are various energy-efficiency opportunities that exist in every textile plant. However, even cost-effective options often are not implemented in textile plants, mostly because of limited information on how to implement energy-efficiency measures. Know-how on energy-efficiency technologies and practices should, therefore, be prepared and disseminated to textile plants. This paper provides information on the energy use and energy-efficiency technologies and measures applicable to the textile industry.
The paper includes case studies from textile plants around the world and includes energy savings and cost information when available. A total of 184 energy efficiency measures applicable to the textile industry are introduced in this paper. Also, the paper gives a brief overview of the textile industry around the world. An analysis of the type and the share of energy used in different textile processes is also included in the paper. Subsequently, energy-efficiency improvement opportunities available within some of the major textile sub-sectors are given with a brief explanation of each measure. This paper shows that a large number of energy efficiency measures exist for the textile industry and most of them have a low simple payback period." }, { "instance_id": "R139526xR139460", "comparison_id": "R139526", "paper_id": "R139460", "text": "Cleaner Production in the textile industry and its relationship to sustainable development goals Abstract Conceptually Cleaner Production seeks to integrate the continuous utilization of deterrent environmental approaches to processes, products and services aiming to rise efficiency and to minimize the risks to people and environment. Extant literature has shown that the implementation of Cleaner Production practices brings as a result economic and environmental gains. Nevertheless, very few studies link those savings to the Sustainable Development Goals, reason why this research aims to evaluate if the economic and environmental advantages coming from Cleaner Production adoption in the textile industry contributed to the Sustainable Development Goals. This was done through extensive review of the literature, complemented by the proposal of a theoretical framework confirmed through the development of two case studies. 
As a result, it was concluded that the adoption of Cleaner Production practices in Brazilian textile industries through technological innovation made it possible to highlight the economic and environmental gains, relating them to Sustainable Development Goals 9, 12 and 15." }, { "instance_id": "R139551xR139538", "comparison_id": "R139551", "paper_id": "R139538", "text": "High resolution DNA barcode library for European butterflies reveals continental patterns of mitochondrial genetic diversity Abstract The study of global biodiversity will greatly benefit from access to comprehensive DNA barcode libraries at continental scale, but such datasets are still very rare. Here, we assemble the first high-resolution reference library for European butterflies that provides 97% taxon coverage (459 species) and 22,306 COI sequences. We estimate that we captured 62% of the total haplotype diversity and show that most species possess a few very common haplotypes and many rare ones. Specimens in the dataset have an average 95.3% probability of being correctly identified. Mitochondrial diversity displayed elevated haplotype richness in southern European refugia, establishing the generality of this key biogeographic pattern for an entire taxonomic group. Fifteen percent of the species are involved in barcode sharing, but two thirds of these cases may reflect the need for further taxonomic research. This dataset provides a unique resource for conservation and for studying evolutionary processes, cryptic species, phylogeography, and ecology." }, { "instance_id": "R139551xR108960", "comparison_id": "R139551", "paper_id": "R108960", "text": "Use of species delimitation approaches to tackle the cryptic diversity of an assemblage of high Andean butterflies (Lepidoptera: Papilionoidea) Cryptic biological diversity has generated ambiguity in taxonomic and evolutionary studies. 
Single-locus methods and other approaches for species delimitation are useful for addressing this challenge, enabling the practical processing of large numbers of samples for identification and inventory purposes. This study analyzed an assemblage of high Andean butterflies using DNA barcoding and compared the identifications based on the current morphological taxonomy with three methods of species delimitation (automatic barcode gap discovery, generalized mixed Yule coalescent model, and Poisson tree processes). Sixteen potential cryptic species were recognized using these three methods, representing a net richness increase of 11.3% in the assemblage. A well-studied taxon of the genus Vanessa, which has a wide geographical distribution, appeared with the potential cryptic species that had a higher genetic differentiation at the local level than at the continental level. The analyses were useful for identifying the potential cryptic species in Pedaliodes and Forsterinaria complexes, which also show differentiation along altitudinal and latitudinal gradients. This genetic assessment of an entire assemblage of high Andean butterflies (Papilionoidea) provides baseline information for future research in a region characterized by high rates of endemism and population isolation." }, { "instance_id": "R139551xR108983", "comparison_id": "R139551", "paper_id": "R108983", "text": "Barcoding the butterflies of southern South America: Species delimitation efficacy, cryptic diversity and geographic patterns of divergence Because the tropical regions of America harbor the highest concentration of butterfly species, its fauna has attracted considerable attention. Much less is known about the butterflies of southern South America, particularly Argentina, where over 1,200 species occur. To advance understanding of this fauna, we assembled a DNA barcode reference library for 417 butterfly species of Argentina, focusing on the Atlantic Forest, a biodiversity hotspot. 
We tested the efficacy of this library for specimen identification, used it to assess the frequency of cryptic species, and examined geographic patterns of genetic variation, making this study the first large-scale genetic assessment of the butterflies of southern South America. The average sequence divergence to the nearest neighbor (i.e. minimum interspecific distance) was 6.91%, ten times larger than the mean distance to the furthest conspecific (0.69%), with a clear barcode gap present in all but four of the species represented by two or more specimens. As a consequence, the DNA barcode library was extremely effective in the discrimination of these species, allowing a correct identification in more than 95% of the cases. Singletons (i.e. species represented by a single sequence) were also distinguishable in the gene trees since they all had unique DNA barcodes, divergent from those of the closest non-conspecific. The clustering algorithms implemented recognized from 416 to 444 barcode clusters, suggesting that the actual diversity of butterflies in Argentina is 3%\u20139% higher than currently recognized. Furthermore, our survey added three new records of butterflies for the country (Eurema agave, Mithras hannelore, Melanis hillapana). In summary, this study not only supported the utility of DNA barcoding for the identification of the butterfly species of Argentina, but also highlighted several cases of both deep intraspecific and shallow interspecific divergence that should be studied in more detail." }, { "instance_id": "R139551xR136201", "comparison_id": "R139551", "paper_id": "R136201", "text": "DNA barcode analysis of butterfly species from Pakistan points towards regional endemism DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north\u2010central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. 
These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7\u201314.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour\u2010joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region." }, { "instance_id": "R139551xR109043", "comparison_id": "R139551", "paper_id": "R109043", "text": "A DNA barcode library for the butterflies of North America Although the butterflies of North America have received considerable taxonomic attention, overlooked species and instances of hybridization continue to be revealed. The present study assembles a DNA barcode reference library for this fauna to identify groups whose patterns of sequence variation suggest the need for further taxonomic study. Based on 14,626 records from 814 species, DNA barcodes were obtained for 96% of the fauna. 
The maximum intraspecific distance averaged one quarter of the minimum distance to the nearest neighbor, producing a barcode gap in 76% of the species. Most species (80%) were monophyletic; the others were para- or polyphyletic. Although 15% of currently recognized species shared barcodes, the incidence of such taxa was far higher in regions exposed to Pleistocene glaciations than in those that were ice-free. Nearly 10% of species displayed high intraspecific variation (>2.5%), suggesting the need for further investigation to assess potential cryptic diversity. Aside from aiding the identification of all life stages of North American butterflies, the reference library has provided new perspectives on the incidence of both cryptic and potentially over-split species, setting the stage for future studies that can further explore the evolutionary dynamics of this group." }, { "instance_id": "R139551xR139527", "comparison_id": "R139551", "paper_id": "R139527", "text": "DNA barcoding and species delimitation of butterflies (Lepidoptera) from Nigeria Accurate identification of species is a prerequisite for successful biodiversity management and further genetic studies. Species identification techniques often require both morphological diagnostics and molecular tools, such as DNA barcoding, for correct identification. In particular, the use of the subunit I of the mitochondrial cytochrome c oxidase (COI) gene for DNA barcoding has proven useful in species identification for insects. However, to date, no studies have been carried out on the DNA barcoding of Nigerian butterflies. We evaluated the utility of DNA barcoding applied for the first time to 735 butterfly specimens from southern Nigeria. In total, 699 DNA barcodes, resulting in a record of 116 species belonging to 57 genera, were generated. Our study sample comprised 807 DNA barcodes based on sequences generated from our current study and 108 others retrieved from BOLD. 
Different molecular analyses, including genetic distance-based evaluation (Neighbor-Joining, Maximum Likelihood and Bayesian trees) and species delimitation tests (TaxonDNA, Automated Barcode Gap Discovery, General Mixed Yule-Coalescent, and Bayesian Poisson Tree Processes) were performed to accurately identify and delineate species. The genetic distance-based analyses resulted in 163 well-separated clusters consisting of 147 described and 16 unidentified species. Our findings indicate that about 90.20% of the butterfly species were explicitly discriminated using DNA barcodes. Also, our field collections reported the first country records of ten butterfly species-Acraea serena, Amauris cf. dannfelti, Aterica galena extensa, Axione tjoane rubescens, Charaxes galleyanus, Papilio lormieri lormeri, Pentila alba, Precis actia, Precis tugela, and Tagiades flesus. Further, DNA barcodes revealed a high mitochondrial intraspecific divergence of more than 3% in Bicyclus vulgaris vulgaris and Colotis evagore. Furthermore, our result revealed an overall high haplotype (gene) diversity (0.9764), suggesting that DNA barcoding can provide information at a population level for Nigerian butterflies. The present study confirms the efficiency of DNA barcoding for identifying butterflies from Nigeria. To gain a better understanding of regional variation in DNA barcodes of this biogeographically complex area, future work should expand the DNA barcode reference library to include all butterfly species from Nigeria as well as surrounding countries. Also, further studies, involving relevant genetic and eco-morphological datasets, are required to understand processes governing mitochondrial intraspecific divergences reported in some species complexes." 
}, { "instance_id": "R139551xR136193", "comparison_id": "R139551", "paper_id": "R136193", "text": "Complete DNA barcode reference library for a country's butterfly fauna reveals high performance for temperate Europe DNA barcoding aims to accelerate species identification and discovery, but performance tests have shown marked differences in identification success. As a consequence, there remains a great need for comprehensive studies which objectively test the method in groups with a solid taxonomic framework. This study focuses on the 180 species of butterflies in Romania, accounting for about one third of the European butterfly fauna. This country includes five eco-regions, the highest of any in the European Union, and is a good representative for temperate areas. Morphology and DNA barcodes of more than 1300 specimens were carefully studied and compared. Our results indicate that 90 per cent of the species form barcode clusters allowing their reliable identification. The remaining cases involve nine closely related species pairs, some whose taxonomic status is controversial or that hybridize regularly. Interestingly, DNA barcoding was found to be the most effective identification tool, outperforming external morphology, and being slightly better than male genitalia. Romania is now the first country to have a comprehensive DNA barcode reference database for butterflies. Similar barcoding efforts based on comprehensive sampling of specific geographical regions can act as functional modules that will foster the early application of DNA barcoding while a global system is under development." }, { "instance_id": "R139567xR138294", "comparison_id": "R139567", "paper_id": "R138294", "text": "Towards a flexible ICT-architecture for multi-channel e-government service provisioning The planning and subsequent nationwide implementation of e-government service provisioning faces a number of challenges at the level of municipalities in the Netherlands. 
Initiatives are confronted with a highly fragmented ICT-architecture that has been vertically organized around departments and with hardly any common horizontal functionality. This situation is further reinforced by a de facto duopoly in the market for information systems used by municipalities. The provision of services over Web-based channels leads to a need for a more flexible, open ICT-architecture based on standardized elements. The goal of the research presented in this paper is to determine the feasibility of a component-based approach to meet the aforementioned challenge for a more flexible, open ICT architecture. The research consisted of two parts: (1) the identification of opportunities for generic components in the ICT-architecture of municipalities and (2) supporting the evaluation of these opportunities using simulation." }, { "instance_id": "R139567xR138300", "comparison_id": "R139567", "paper_id": "R138300", "text": "Bridging the Gap between Citizens and Local Administrations with Knowledge-Based Service Bundle Recommendations The Italian Public Administration Services (IPAS) is a registry of services provided to Italian citizens, like the Local Government Service List (UK) or the European Service List for local authorities from other nations. Unlike existing registries, IPAS presents the novelty of modelling public services from the viewpoint of the value they have for the consumers and the providers. A value-added-service (VAS) is linked to a life event that requires its fruition, addresses consumer categories to identify market opportunities for private providers, and is described by non-functional-properties such as price and time of fruition. 
Where Italian local authorities leave the citizen-users in a labyrinth of references to understand whether they can/have to apply for a service, the IPAS model captures the necessary background knowledge about the connection between administrative legislation and service specifications, life events, and application contexts to support the citizen-users to fulfill their needs. As a proof of concept, we developed an operational Web environment named ASSO, designed to assist the citizen-user to intuitively create bundles of mandatory-by-legislation and recommended services, to accomplish his bureaucratic fulfillments. Although ASSO is an ongoing project, domain experts gave preliminary positive feedback on the innovativeness and effectiveness of the proposed approach." }, { "instance_id": "R139567xR139300", "comparison_id": "R139567", "paper_id": "R139300", "text": "Personalized recommendations in e-participation: offline experiments for the 'Decide Madrid' platform In e-participation platforms, citizens suggest, discuss and vote online for initiatives aimed to address a wide range of issues and problems in a city, such as economic development, public safety, budgets, infrastructure, housing, environment, social rights, and health care. For a particular citizen, the number of proposals and debates may be overwhelming, and recommender systems could help filter and rank those that are more relevant. Focusing on a particular case, the 'Decide Madrid' platform, in this paper we empirically investigate which sources of user preferences and recommendation approaches could be more effective, in terms of several aspects, namely precision, coverage and diversity." }, { "instance_id": "R139567xR138297", "comparison_id": "R139567", "paper_id": "R138297", "text": "A Multi-Agent System for the management of E-Government Services This paper aims at studying the exploitation of intelligent agents for supporting citizens to access e-government services. 
To this purpose, it proposes a multi-agent system capable of suggesting to the users the most interesting services for them; specifically, these suggestions are computed by taking into account both their exigencies/preferences and the capabilities of the devices they are currently exploiting. The paper first describes the proposed system and, then, reports various experimental results. Finally, it presents a comparison between our system and other related ones already presented in the literature." }, { "instance_id": "R139567xR138284", "comparison_id": "R139567", "paper_id": "R138284", "text": "Studying the feasibility of a recommender in a citizen web portal based on user modeling and clustering algorithms This paper presents a methodology to estimate the future success of a collaborative recommender in a citizen web portal. This methodology consists of four stages, three of them are developed in this study. First of all, a user model, which takes into account some usual characteristics of web data, is developed to produce artificial data sets. These data sets are used to carry out a clustering algorithm comparison in the second stage of our approach. This comparison provides information about the suitability of each algorithm in different scenarios. The benchmarked clustering algorithms are the ones that are most commonly used in the literature: c-Means, Fuzzy c-Means, a set of hierarchical algorithms, Gaussian mixtures trained by the expectation-maximization algorithm, and Kohonen's self-organizing maps (SOM). The most accurate clustering is yielded by SOM. Afterwards, we turn to real data. The users of a citizen web portal (Infoville XXI, http://www.infoville.es) are clustered. The clustering achieved enables us to study the future success of a collaborative recommender by means of a prediction strategy. New users are recommended according to the cluster in which they have been classified. 
The suitability of the recommendation is evaluated by checking whether or not the recommended objects correspond to those actually selected by the user. The results show the relevance of the information provided by clustering algorithms in this web portal, and therefore, the relevance of developing a collaborative recommender for this web site." }, { "instance_id": "R139567xR139307", "comparison_id": "R139567", "paper_id": "R139307", "text": "A Recommender System with Uncertainty on the Example of Political Elections The article presents a system of election recommendation in which both candidate\u2019s and voter\u2019s preferences can be described in an imprecise way. The model of the system is based on IF-set theory which can express hesitation or lack of knowledge. Similarity measures of IF-sets and linguistic quantifiers are used in the decision-making process." }, { "instance_id": "R139567xR139310", "comparison_id": "R139567", "paper_id": "R139310", "text": "A Fuzzy Recommender System for eElections eDemocracy aims to increase participation of citizens in democratic processes through the use of information and communication technologies. In this paper, an architecture of recommender systems for eElections using fuzzy clustering methods is proposed. The objective is to assist voters in making decisions by providing information about candidates close to the voters preferences and tendencies. The use of recommender systems for eGovernment is a research topic used to reduce information overload, which could help to improve democratic processes." }, { "instance_id": "R139567xR138187", "comparison_id": "R139567", "paper_id": "R138187", "text": "A building permit system for smart cities: A cloud-based framework Abstract In this paper we propose a novel, cloud-based framework to support citizens and city officials in the building permit process. The proposed framework is efficient, user-friendly, and transparent with a quick turn-around time for homeowners. 
Compared to existing permit systems, the proposed smart city permit framework provides a pre-permitting decision workflow, and incorporates a data analytics and mining module that enables the continuous improvement of both the end user experience and the permitting and urban planning processes. This is enabled through a data mining-powered permit recommendation engine as well as a data analytics process that allow a gleaning of key insights for real estate development and city planning purposes, by analyzing how users interact with the system depending on their location, time, and type of request. The novelty of the proposed framework lies in the integration of a pre-permit processing front-end with permit processing and data analytics & mining modules, along with utilization of techniques for extracting knowledge from the data generated through the use of the system. The proposed framework is completely cloud-based, such that any city can deploy it with lower initial as well as maintenance costs. We also present a proof-of-concept use case, using real permit data from New York City." }, { "instance_id": "R139585xR76559", "comparison_id": "R139585", "paper_id": "R76559", "text": "Socioeconomic status and well-being during COVID-19: A resource-based examination. The authors assess levels and within-person changes in psychological well-being (i.e., depressive symptoms and life satisfaction) from before to during the COVID-19 pandemic for individuals in the United States, in general and by socioeconomic status (SES). The data is from 2 surveys of 1,143 adults from RAND Corporation's nationally representative American Life Panel, the first administered between April-June, 2019 and the second during the initial peak of the pandemic in the United States in April, 2020. Depressive symptoms during the pandemic were higher than population norms before the pandemic. Depressive symptoms increased from before to during COVID-19 and life satisfaction decreased. 
Individuals with higher education experienced a greater increase in depressive symptoms and a greater decrease in life satisfaction from before to during COVID-19 in comparison to those with lower education. Supplemental analysis illustrates that income had a curvilinear relationship with changes in well-being, such that individuals at the highest levels of income experienced a greater decrease in life satisfaction from before to during COVID-19 than individuals with lower levels of income. We draw on conservation of resources theory and the theory of fundamental social causes to examine four key mechanisms (perceived financial resources, perceived control, interpersonal resources, and COVID-19-related knowledge/news consumption) underlying the relationship between SES and well-being during COVID-19. These resources explained changes in well-being for the sample as a whole but did not provide insight into why individuals of higher education experienced a greater decline in well-being from before to during COVID-19. (PsycInfo Database Record (c) 2020 APA, all rights reserved)." }, { "instance_id": "R139585xR76554", "comparison_id": "R139585", "paper_id": "R76554", "text": "The COVID-19 pandemic and subjective well-being: longitudinal evidence on satisfaction with work and family ABSTRACT This paper provides a timely evaluation of whether the main COVID-19 lockdown policies \u2013 remote work, short-time work and closure of schools and childcare \u2013 have an immediate effect on the German population in terms of changes in satisfaction with work and family life. Relying on individual level panel data collected before and during the lockdown, we examine (1) how family satisfaction and work satisfaction of individuals have changed over the lockdown period, and (2) how lockdown-driven changes in the labour market situation (i.e. working remotely and being sent on short-time work) have affected satisfactions. 
We apply first-difference regressions for mothers, fathers, and persons without children. Our results show a general decrease in family satisfaction. We also find an overall decline in work satisfaction which is most pronounced for mothers and those without children who have to switch to short-time work. In contrast, fathers' well-being is less affected negatively and their family satisfaction even increased after changing to short-time work. We conclude that while the lockdown circumstances generally have a negative effect on the satisfaction with work and family of individuals in Germany, effects differ between childless persons, mothers, and fathers with the latter being least negatively affected." }, { "instance_id": "R139585xR75942", "comparison_id": "R139585", "paper_id": "R75942", "text": "Parental well-being in times of Covid-19 in Germany Abstract We examine the effects of Covid-19 and related restrictions on individuals with dependent children in Germany. We specifically focus on the role of day care center and school closures, which may be regarded as a \u201cdisruptive exogenous shock\u201d to family life. We make use of a novel representative survey of parental well-being collected in May and June 2020 in Germany, when schools and day care centers were closed but while other measures had been relaxed and new infections were low. In our descriptive analysis, we compare well-being during this period with a pre-crisis period for different groups. In a difference-in-differences design, we compare the change for individuals with children to the change for individuals without children, accounting for unrelated trends as well as potential survey mode and context effects. We find that the crisis lowered the relative well-being of individuals with children, especially for individuals with young children, for women, and for persons with lower secondary schooling qualifications. 
Our results suggest that public policy measures taken to contain Covid-19 can have large effects on family well-being, with implications for child development and parental labor market outcomes." }, { "instance_id": "R139585xR75949", "comparison_id": "R139585", "paper_id": "R75949", "text": "Employee psychological well-being during the COVID-19 pandemic in Germany: A longitudinal study of demands, resources, and exhaustion M any governments react to the current coronavirus/COVID-19 pandemic by restricting daily (work) life. On the basis of theories from occupational health, we propose that the duration of the pandemic, its demands (e.g., having to work from home, closing of childcare facilities, job insecurity, work-privacy conflicts, privacy-work conflicts) and personaland job-related resources (co-worker social support, job autonomy, partner support and corona self-efficacy) interact in their effect on employee exhaustion. We test the hypotheses with a three-wave sample of German employees during the pandemic from April to June 2020 (Nw1 = 2900, Nw12 = 1237, Nw123 = 789). Our findings show a curvilinear effect of pandemic duration on working women\u2019s exhaustion. The data also show that the introduction and the easing of lockdown measures affect exhaustion, and that women with children who work from home while childcare is unavailable are especially exhausted. Job autonomy and partner support mitigated some of these effects. In sum, women\u2019s psychological health was more strongly affected by the pandemic than men\u2019s. We discuss implications for occupational health theories and that interventions targeted at mitigating the psychological consequences of the COVID-19 pandemic should target women specifically." }, { "instance_id": "R139585xR76567", "comparison_id": "R139585", "paper_id": "R76567", "text": "Individual differences and changes in subjective wellbeing during the early stages of the COVID-19 pandemic. 
The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved)." 
}, { "instance_id": "R139585xR76542", "comparison_id": "R139585", "paper_id": "R76542", "text": "Up and About: Older Adults\u2019 Well-being During the COVID-19 Pandemic in a Swedish Longitudinal Study Abstract Objectives To investigate early effects of the COVID-19 pandemic related to (a) levels of worry, risk perception, and social distancing; (b) longitudinal effects on well-being; and (c) effects of worry, risk perception, and social distancing on well-being. Methods We analyzed annual changes in four aspects of well-being over 5 years (2015\u20132020): life satisfaction, financial satisfaction, self-rated health, and loneliness in a subsample (n = 1,071, aged 65\u201371) from a larger survey of Swedish older adults. The 2020 wave, collected March 26\u2013April 2, included measures of worry, risk perception, and social distancing in response to COVID-19. Results (a) In relation to COVID-19: 44.9% worried about health, 69.5% about societal consequences, 25.1% about financial consequences; 86.4% perceived a high societal risk, 42.3% a high risk of infection, and 71.2% reported high levels of social distancing. (b) Well-being remained stable (life satisfaction and loneliness) or even increased (self-rated health and financial satisfaction) in 2020 compared to previous years. (c) More worry about health and financial consequences was related to lower scores in all four well-being measures. Higher societal worry and more social distancing were related to higher well-being. Discussion In the early stage of the pandemic, Swedish older adults on average rated their well-being as high as, or even higher than, previous years. However, those who worried more reported lower well-being. Our findings speak to the resilience, but also heterogeneity, among older adults during the pandemic. Further research, on a broad range of health factors and long-term psychological consequences, is needed." 
}, { "instance_id": "R139642xR139632", "comparison_id": "R139642", "paper_id": "R139632", "text": "Fabrication of Efficient Low-Bandgap Perovskite Solar Cells by Combining Formamidinium Tin Iodide with Methylammonium Lead Iodide Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of \u223c1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86 (26.82) mA/cm2, and a fill factor of 70.6 (70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 \u00b1 0.33%, indicating good reproducibility." }, { "instance_id": "R139642xR139638", "comparison_id": "R139642", "paper_id": "R139638", "text": "Efficient perovskite solar cells by metal ion doping
Realizing the theoretical limiting power conversion efficiency (PCE) in perovskite solar cells requires a better understanding and control over the fundamental loss processes occurring in the bulk of the perovskite layer and at the internal semiconductor interfaces in devices.
" }, { "instance_id": "R139642xR139608", "comparison_id": "R139642", "paper_id": "R139608", "text": "Lead-Free Halide Perovskite Solar Cells with High Photocurrents Realized Through Vacancy Modulation Lead-free perovskite solar cells based on a CsSnI3 light absorber with a spectral response from 950 nm are demonstrated. The high photocurrents noted in the system are a consequence of SnF2 addition which reduces defect concentrations and hence the background charge carrier density." }, { "instance_id": "R139642xR139629", "comparison_id": "R139642", "paper_id": "R139629", "text": "Stable Low-Bandgap Pb-Sn Binary Perovskites for Tandem Solar Cells A low-bandgap (1.33 eV) Sn-based MA0.5FA0.5Pb0.75Sn0.25I3 perovskite is developed via combined compositional, process, and interfacial engineering. It can deliver a high power conversion efficiency (PCE) of 14.19%. Finally, a four-terminal all-perovskite tandem solar cell is demonstrated by combining this low-bandgap cell with a semitransparent MAPbI3 cell to achieve a high efficiency of 19.08%." }, { "instance_id": "R139642xR139602", "comparison_id": "R139642", "paper_id": "R139602", "text": "Organometal Halide Perovskites as Visible-Light Sensitizers for Photovoltaic Cells Two organolead halide perovskite nanocrystals, CH3NH3PbBr3 and CH3NH3PbI3, were found to efficiently sensitize TiO2 for visible-light conversion in photoelectrochemical cells. When self-assembled on mesoporous TiO2 films, the nanocrystalline perovskites exhibit strong band-gap absorptions as semiconductors. The CH3NH3PbI3-based photocell with spectral sensitivity of up to 800 nm yielded a solar energy conversion efficiency of 3.8%. The CH3NH3PbBr3-based cell showed a high photovoltage of 0.96 V with an external quantum conversion efficiency of 65%." 
}, { "instance_id": "R139642xR139614", "comparison_id": "R139642", "paper_id": "R139614", "text": "Highly Efficient and Stable Sn-Rich Perovskite Solar Cells by Introducing Bromine Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (\u226560%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4)." }, { "instance_id": "R139642xR139611", "comparison_id": "R139642", "paper_id": "R139611", "text": "Lead-free solid-state organic\u2013inorganic halide perovskite solar cells Perovskite solar cells containing tin rather than lead, which is usually employed, are reported. These cells have a power conversion efficiency of 5.7% and retain 80% of their performance over a period of 12 hours." 
}, { "instance_id": "R139642xR139618", "comparison_id": "R139642", "paper_id": "R139618", "text": "Efficiently Improving the Stability of Inverted Perovskite Solar Cells by Employing Polyethylenimine-Modified Carbon Nanotubes as Electrodes Inverted perovskite solar cells (PSCs) have become increasingly attractive, owing to their easy fabrication and suppressed hysteresis, while ion diffusion between the metallic electrode and the perovskite layer limits the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configured as FTO/NiOx/methylammonium lead tri-iodide (MAPbI3)/[6,6]-phenyl-C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of \u223c11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, an FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI-based inverted PSCs show superior durability in comparison to standard silver-based devices, retaining over 85% of the initial PCE after 500 h of aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis-suppressed PSCs." }, { "instance_id": "R139972xR139969", "comparison_id": "R139972", "paper_id": "R139969", "text": "A Reliable Liquid-Based CMOS MEMS Micro Thermal Convective Accelerometer With Enhanced Sensitivity and Limit of Detection In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the 0.35 μm CMOS MEMS technology. 
To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer, with accelerated life-testing results indicating a 9-year lifetime for the liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. Both the theoretical and experimental results demonstrated that a fluid with a larger Ra number can provide better performance for the MTCA. More significantly, the Ra-based model showed its advantage in making more accurate predictions than the simple linear model when selecting a suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of-magnitude increase in sensitivity (43.8 mV/g) and a one-order-of-magnitude decrease in the limit of detection (LOD) (61.9 μg) compared with the air-based MTCA. [2021-0092]" }, { "instance_id": "R139972xR139951", "comparison_id": "R139972", "paper_id": "R139951", "text": "Studies and optimization of the frequency response of a micromachined thermal accelerometer Abstract In the present work, the design of a micromachined thermal accelerometer based on the convection effect was studied. The accelerometer sensitivity and especially the frequency response have been experimentally and numerically studied with low cavity volume. Although this type of sensor has already been intensively examined, little information concerning frequency response modeling is currently available. In particular, no experimental results about the frequency response for low volumes and their variation with external temperature were reported in the literature. Under particular conditions, a bandwidth of 120 Hz at \u22123 dB has been measured. 
By using numerical resolution of the fluid dynamics equations with the computational fluid dynamics (CFD) software package Fluent V6.2 and a simple model based on thermo-conduction, good agreement with the experimental results has been demonstrated. Thus, the effects of these two parameters, the cavity volume and the external temperature variation, on the thermal accelerometer response have been theoretically, experimentally and numerically investigated." }, { "instance_id": "R139972xR139948", "comparison_id": "R139972", "paper_id": "R139948", "text": "A 2-DOF convective micro accelerometer with a low thermal stress sensing element This paper presents the development of a dual-axis convective microaccelerometer, whose working principle is based on the convective heat transfer and thermoresistive effect of lightly doped silicon. In contrast to the developed convective accelerometer, the sensor utilizes new structures of the sensing element which can reduce at least 90% of the thermally induced stress. By using a numerical method, the dimensions of the sensing chip and of the package are optimized. The sensitivity of the sensor is simulated; other characteristics such as frequency response, shock resistance and the noise problem were investigated. The sensor has been fabricated by a microelectromechanical systems (MEMS) process and characterized by experiments." }, { "instance_id": "R139972xR139945", "comparison_id": "R139972", "paper_id": "R139945", "text": "Experimental and finite-element study of convective accelerometer on CMOS This paper addresses the design of CMOS thermal accelerometers. A test-chip including the sensor and a signal conditioning circuit has been designed, fabricated and characterized. The fabricated prototype exhibits good performance: the measured resolution is better than 30 mg (equivalent to 1.7\u00b0 in inclination) with a sensitivity of 375 mV/g. The bandwidth is around 15 Hz. 
This test-chip is used to improve the modelling of the heat-transfer phenomenon for this family of devices. FEM is successfully used as a first modelling approach. Although thermal convective accelerometers have already been reported and even commercialised, little information regarding behavioural modelling, optimization issues, and their integration on CMOS technology is available today. In particular, no information about response-time modelling was reported in the literature. Thanks to experimental results and FEM simulations, we provide in this study information concerning the integration of convection heat transfer accelerometers on CMOS. In addition, the effects of the packaging (i.e. the top and bottom cavities) on both the static sensitivity and the dynamic response of the sensor have been investigated." }, { "instance_id": "R139972xR139960", "comparison_id": "R139972", "paper_id": "R139960", "text": "A new monolithic 3-axis thermal convective accelerometer: principle, design, fabrication and characterization Thermal convective accelerometers are based on heat transfer in a fluid-filled cavity. The working principle of these sensors is well known, and the first MEMS implementations were reported in the late 90\u2019s. Since that time, many single-axis or dual-axis sensors have been reported; consequently, complex assembly operations were needed to implement 3-axis acceleration measurement using two dies or more. The goal of this paper is then to demonstrate out-of-plane sensitivity of a 2-axis thermal convective accelerometer and to present extensively the first monolithic MEMS device allowing acceleration measurements in three orthogonal directions. First, finite element modeling is used to study the impact of geometrical parameters and heating power on sensor performances. Second, a prototype has been designed to prove feasibility of a fully integrated 3-axis sensor obtained by a self-aligned post-CMOS etching of a silicon die. 
To our knowledge, the presented device is the first 3-axis MEMS sensor manufactured in a standard CMOS technology ever reported in the literature." }, { "instance_id": "R139972xR139938", "comparison_id": "R139972", "paper_id": "R139938", "text": "Micromachined accelerometer with no proof mass This paper describes a revolutionary micromachined accelerometer which is simple, reliable, and inexpensive to make. The operating principle of this accelerometer is based on free-convection heat transfer of a tiny hot air bubble in an enclosed chamber. An experimental device has demonstrated a 0.6 milli-g sensitivity which can theoretically be extended to sub-micro-g level." }, { "instance_id": "R139972xR139963", "comparison_id": "R139972", "paper_id": "R139963", "text": "Theoretical Modeling, Numerical Simulations and Experimental Study of Micro Thermal Convective Accelerometers We present a one-dimensional (1D) theoretical model for the design analysis of a micro thermal convective accelerometer (MTCA). Systematic design analysis was conducted on the sensor performance covering the sensor output, sensitivity, and power consumption. The sensor output was further normalized as a function of normalized input acceleration in terms of the Rayleigh number Ra (the product of the Grashof number Gr and the Prandtl number Pr) for different fluids. A critical Rayleigh number (Rac = 3,000) is found, for the first time, to determine the boundary between the linear and nonlinear response regimes of the MTCA. Based on the proposed 1D model, key parameters, including the location of the detectors, sensor length, thin film thickness, cavity height, heater temperature, and fluid types, were optimized to improve sensor performance. Accordingly, a CMOS-compatible MTCA was designed and fabricated based on the theoretical analysis, which showed a high sensitivity of 1,289 mV/g. 
Therefore, this efficient 1D model, one million times faster than CFD simulation, can be a promising tool for the system-level CMOS MEMS design." }, { "instance_id": "R140131xR139993", "comparison_id": "R140131", "paper_id": "R139993", "text": "The Role of Smart City Characteristics in the Plans of Fifteen Cities ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately." }, { "instance_id": "R140131xR140015", "comparison_id": "R140131", "paper_id": "R140015", "text": "Management of immersive heritage tourism experiences: A conceptual model Abstract There is potential for immersive technology, such as augmented and virtual reality, to create memorable tourism experiences, specifically for heritage tourism. However, there is a lack of conceptual clarity surrounding the management of heritage for memorable tourism experiences. Subsequently, this research note proposes a four-stage conceptual model of heritage preservation for managing heritage into digital tourism experiences. 
The four stages include the presentation of historical facts; contested heritage; integration of historical facts and contested heritage; and/or an alternate scenario. This research note demonstrates that integrating history with cutting-edge technology in immersive environments has the potential to not only preserve and manage heritage but to enrich the visitor experience and subsequent engagement with history." }, { "instance_id": "R140131xR140030", "comparison_id": "R140131", "paper_id": "R140030", "text": "World Heritage meets Smart City in an Urban-Educational Hackathon in Rauma During recent years, the \u2018smart city\u2019 concept has emerged in literature (e.g., Kunttu, 2019; Markkula & Kune, 2018; \u00d6berg, Graham, & Hennelly, 2017; Visvizi & Lytras, 2018). Inherently, the smart city concept includes urban innovation; therefore, simply developing and applying technology is not enough for success. For cities to be 'smart,' they also have to be innovative, apply new ways of thinking among businesses, citizens, and academia, as well as integrate diverse actors, especially universities, in their innovation practices (Kunttu, 2019; Markkula & Kune, 2018)." }, { "instance_id": "R140131xR140112", "comparison_id": "R140131", "paper_id": "R140112", "text": "Smart cities of the future Abstract Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. 
We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. 
Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science of smart cities." }, { "instance_id": "R140347xR135750", "comparison_id": "R140347", "paper_id": "R135750", "text": "Characterization and comparison of poorly known moth communities through DNA barcoding in two Afrotropical environments in Gabon Biodiversity research in tropical ecosystems\u2014popularized as the most biodiverse habitats on Earth\u2014often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. 
With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity." }, { "instance_id": "R140347xR140263", "comparison_id": "R140347", "paper_id": "R140263", "text": "DNA Barcoding of an Assembly of Montane Andean Butterflies (Satyrinae): Geographical Scale and Identification Performance DNA barcoding is a technique used primarily for the documentation and identification of biological diversity based on mitochondrial DNA sequences. Butterflies have received particular attention in DNA barcoding studies, although varied performance may be obtained due to different scales of geographic sampling and speciation processes in various groups. The montane Andean Satyrinae constitutes a challenging study group for taxonomy. The group displays high richness, with more of 550 species, and remarkable morphological similarity among taxa, which renders their identification difficult. In the present study, we evaluated the effectiveness of DNA barcodes in the identification of montane Andean satyrines and the effect of increased geographical scale of sampling on identification performance. Mitochondrial sequences were obtained from 104 specimens of 39 species and 16 genera, collected in a forest remnant in the northwest Andes. 
DNA barcoding has proved to be a useful tool for the identification of the specimens, with a well-defined gap and producing clusters with unambiguous identifications for all the morphospecies in the study area. The expansion of the geographical scale with published data increased genetic distances within species and reduced those among species, but did not generally reduce the success of specimen identification. Only in Forsterinaria rustica (Butler, 1868), a taxon with high intraspecific variation, the barcode gap was lost and low support for monophyly was obtained. Likewise, expanded sampling resulted in a substantial increase in the intraspecific distance in Morpho sulkowskyi (Kollar, 1850); Panyapedaliodes drymaea (Hewitson, 1858); Lymanopoda obsoleta (Westwood, 1851); and Lymanopoda labda Hewitson, 1861; but for these species, the barcode gap was maintained. These divergent lineages are nonetheless worth a detailed study of external and genitalic morphology variation, as well as ecological features, in order to determine the potential existence of cryptic species. Even including these cases, DNA barcoding performance in specimen identification was 100% successful based on monophyly, an unexpected result in such a taxonomically complicated group." }, { "instance_id": "R140347xR139546", "comparison_id": "R140347", "paper_id": "R139546", "text": "A DNA barcode reference library for Swiss butterflies and forester moths as a tool for species identification, systematics and conservation Butterfly monitoring and Red List programs in Switzerland rely on a combination of observations and collection records to document changes in species distributions through time. While most butterflies can be identified using morphology, some taxa remain challenging, making it difficult to accurately map their distributions and develop appropriate conservation measures. 
In this paper, we explore the use of the DNA barcode (a fragment of the mitochondrial gene COI) as a tool for the identification of Swiss butterflies and forester moths (Rhopalocera and Zygaenidae). We present a national DNA barcode reference library including 868 sequences representing 217 out of 224 resident species, or 96.9% of Swiss fauna. DNA barcodes were diagnostic for nearly 90% of Swiss species. The remaining 10% represent cases of para- and polyphyly likely involving introgression or incomplete lineage sorting among closely related taxa. We demonstrate that integrative taxonomic methods incorporating a combination of morphological and genetic techniques result in a rate of species identification of over 96% in females and over 98% in males, higher than either morphology or DNA barcodes alone. We explore the use of the DNA barcode for exploring boundaries among taxa, understanding the geographical distribution of cryptic diversity and evaluating the status of purportedly endemic taxa. Finally, we discuss how DNA barcodes may be used to improve field practices and ultimately enhance conservation strategies." }, { "instance_id": "R140347xR139538", "comparison_id": "R140347", "paper_id": "R139538", "text": "High resolution DNA barcode library for European butterflies reveals continental patterns of mitochondrial genetic diversity Abstract The study of global biodiversity will greatly benefit from access to comprehensive DNA barcode libraries at continental scale, but such datasets are still very rare. Here, we assemble the first high-resolution reference library for European butterflies that provides 97% taxon coverage (459 species) and 22,306 COI sequences. We estimate that we captured 62% of the total haplotype diversity and show that most species possess a few very common haplotypes and many rare ones. Specimens in the dataset have an average 95.3% probability of being correctly identified. 
Mitochondrial diversity displayed elevated haplotype richness in southern European refugia, establishing the generality of this key biogeographic pattern for an entire taxonomic group. Fifteen percent of the species are involved in barcode sharing, but two thirds of these cases may reflect the need for further taxonomic research. This dataset provides a unique resource for conservation and for studying evolutionary processes, cryptic species, phylogeography, and ecology." }, { "instance_id": "R140347xR139508", "comparison_id": "R140347", "paper_id": "R139508", "text": "Close congruence between Barcode Index Numbers (bins) and species boundaries in the Erebidae (Lepidoptera: Noctuoidea) of the Iberian Peninsula Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) was assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. 
This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis." }, { "instance_id": "R140347xR136201", "comparison_id": "R140347", "paper_id": "R136201", "text": "DNA barcode analysis of butterfly species from Pakistan points towards regional endemism DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north\u2010central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7\u201314.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour\u2010joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. 
Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region." }, { "instance_id": "R140347xR140252", "comparison_id": "R140347", "paper_id": "R140252", "text": "Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service \u201cMonophylizer\u201d to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed in this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. 
Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is \u223c23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric\u2014conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors." }, { "instance_id": "R140347xR109043", "comparison_id": "R140347", "paper_id": "R109043", "text": "A DNA barcode library for the butterflies of North America Although the butterflies of North America have received considerable taxonomic attention, overlooked species and instances of hybridization continue to be revealed. The present study assembles a DNA barcode reference library for this fauna to identify groups whose patterns of sequence variation suggest the need for further taxonomic study. Based on 14,626 records from 814 species, DNA barcodes were obtained for 96% of the fauna. 
The maximum intraspecific distance averaged one-quarter of the minimum distance to the nearest neighbor, producing a barcode gap in 76% of the species. Most species (80%) were monophyletic; the others were para- or polyphyletic. Although 15% of currently recognized species shared barcodes, the incidence of such taxa was far higher in regions exposed to Pleistocene glaciations than in those that were ice-free. Nearly 10% of species displayed high intraspecific variation (>2.5%), suggesting the need for further investigation to assess potential cryptic diversity. Aside from aiding the identification of all life stages of North American butterflies, the reference library has provided new perspectives on the incidence of both cryptic and potentially over-split species, setting the stage for future studies that can further explore the evolutionary dynamics of this group." }, { "instance_id": "R140347xR139497", "comparison_id": "R140347", "paper_id": "R139497", "text": "Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae) Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively small interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimited by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. 
We discuss the existence of local barcode gaps in a genus-by-genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of COI sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intraspecific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter- and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. 
The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in megadiverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data." }, { "instance_id": "R140347xR138551", "comparison_id": "R140347", "paper_id": "R138551", "text": "Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences." }, { "instance_id": "R140347xR139527", "comparison_id": "R140347", "paper_id": "R139527", "text": "DNA barcoding and species delimitation of butterflies (Lepidoptera) from Nigeria Accurate identification of species is a prerequisite for successful biodiversity management and further genetic studies. 
Species identification techniques often require both morphological diagnostics and molecular tools, such as DNA barcoding, for correct identification. In particular, the use of the subunit I of the mitochondrial cytochrome c oxidase (COI) gene for DNA barcoding has proven useful in species identification for insects. However, to date, no studies have been carried out on the DNA barcoding of Nigerian butterflies. We evaluated the utility of DNA barcoding applied for the first time to 735 butterfly specimens from southern Nigeria. In total, 699 DNA barcodes were generated, resulting in a record of 116 species belonging to 57 genera. Our study sample comprised 807 DNA barcodes based on sequences generated from our current study and 108 others retrieved from BOLD. Different molecular analyses, including genetic distance-based evaluation (Neighbor-Joining, Maximum Likelihood and Bayesian trees) and species delimitation tests (TaxonDNA, Automated Barcode Gap Discovery, General Mixed Yule-Coalescent, and Bayesian Poisson Tree Processes) were performed to accurately identify and delineate species. The genetic distance-based analyses resulted in 163 well-separated clusters consisting of 147 described and 16 unidentified species. Our findings indicate that about 90.20% of the butterfly species were explicitly discriminated using DNA barcodes. Also, our field collections reported the first country records of ten butterfly species: Acraea serena, Amauris cf. dannfelti, Aterica galena extensa, Axione tjoane rubescens, Charaxes galleyanus, Papilio lormieri lormeri, Pentila alba, Precis actia, Precis tugela, and Tagiades flesus. Further, DNA barcodes revealed a high mitochondrial intraspecific divergence of more than 3% in Bicyclus vulgaris vulgaris and Colotis evagore. Furthermore, our results revealed an overall high haplotype (gene) diversity (0.9764), suggesting that DNA barcoding can provide information at a population level for Nigerian butterflies. 
The present study confirms the efficiency of DNA barcoding for identifying butterflies from Nigeria. To gain a better understanding of regional variation in DNA barcodes of this biogeographically complex area, future work should expand the DNA barcode reference library to include all butterfly species from Nigeria as well as surrounding countries. Also, further studies, involving relevant genetic and eco-morphological datasets, are required to understand processes governing mitochondrial intraspecific divergences reported in some species complexes." }, { "instance_id": "R140347xR140187", "comparison_id": "R140347", "paper_id": "R140187", "text": "DNA Barcoding the Geometrid Fauna of Bavaria (Lepidoptera): Successes, Surprises, and Questions Background The State of Bavaria is involved in a research program that will lead to the construction of a DNA barcode library for all animal species within its territorial boundaries. The present study provides a comprehensive DNA barcode library for the Geometridae, one of the most diverse of insect families. Methodology/Principal Findings This study reports DNA barcodes for 400 Bavarian geometrid species, 98 per cent of the known fauna, and approximately one per cent of all Bavarian animal species. Although 98.5% of these species possess diagnostic barcode sequences in Bavaria, records from neighbouring countries suggest that species-level resolution may be compromised in up to 3.5% of cases. All taxa which apparently share barcodes are discussed in detail. One case of modest divergence (1.4%) revealed a species overlooked by the current taxonomic system: Eupithecia goossensiata Mabille, 1869 stat.n. is raised from synonymy with Eupithecia absinthiata (Clerck, 1759) to species rank. Deep intraspecific sequence divergences (>2%) were detected in 20 traditionally recognized species. Conclusions/Significance The study emphasizes the effectiveness of DNA barcoding as a tool for monitoring biodiversity. 
Open access is provided to a data set that includes records for 1,395 geometrid specimens (331 species) from Bavaria, with 69 additional species from neighbouring regions. Taxa with deep intraspecific sequence divergences are undergoing more detailed analysis to ascertain if they represent cases of cryptic diversity." }, { "instance_id": "R140348xR140177", "comparison_id": "R140348", "paper_id": "R140177", "text": "Embedding logical queries on knowledge graphs Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict \"em what drugs are likely to target proteins involved with both diseases X and Y?\" -- a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries -- a flexible but tractable subset of first-order logic -- on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. 
We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum." }, { "instance_id": "R140348xR140135", "comparison_id": "R140348", "paper_id": "R140135", "text": "node2vec: Scalable Feature Learning for Networks Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks." 
}, { "instance_id": "R140348xR140245", "comparison_id": "R140348", "paper_id": "R140245", "text": "Onto2vec: joint vector-based representation of biological entities and their ontology-based annotations Motivation Biological knowledge is widely represented in the form of ontology\u2010based annotations: ontologies describe the phenomena assumed to exist within a domain, and the annotations associate a (kind of) biological entity with a set of phenomena within the domain. The structure and information contained in ontologies and their annotations make them valuable for developing machine learning, data analysis and knowledge extraction algorithms; notably, semantic similarity is widely used to identify relations between biological entities, and ontology\u2010based annotations are frequently used as features in machine learning applications. Results We propose the Onto2Vec method, an approach to learn feature vectors for biological entities based on their annotations to biomedical ontologies. Our method can be applied to a wide range of bioinformatics research problems such as similarity\u2010based prediction of interactions between proteins, classification of interaction types using supervised learning, or clustering. To evaluate Onto2Vec, we use the gene ontology (GO) and jointly produce dense vector representations of proteins, the GO classes to which they are annotated, and the axioms in GO that constrain these classes. First, we demonstrate that Onto2Vec\u2010generated feature vectors can significantly improve prediction of protein\u2010protein interactions in human and yeast. We then illustrate how Onto2Vec representations provide the means for constructing data\u2010driven, trainable semantic similarity measures that can be used to identify particular relations between proteins. Finally, we use an unsupervised clustering approach to identify protein families based on their Enzyme Commission numbers. 
Our results demonstrate that Onto2Vec can generate high-quality feature vectors from biological entities and ontologies. Onto2Vec has the potential to significantly outperform the state\u2010of\u2010the\u2010art in several predictive applications in which ontologies are involved. Availability and implementation https://github.com/bio-ontology-research-group/onto2vec" }, { "instance_id": "R140348xR140153", "comparison_id": "R140348", "paper_id": "R140153", "text": "Clinical Concept Embeddings Learned from Massive Sources of Medical Data Word embeddings have emerged as a popular approach to unsupervised learning of word relationships in machine learning and natural language processing. In this article, we benchmark two of the most popular algorithms, GloVe and word2vec, to assess their suitability for capturing medical relationships in large sources of biomedical data. Leaning on recent theoretical insights, we provide a unified view of these algorithms and demonstrate how different sources of data can be combined to construct the largest ever set of embeddings for 108,477 medical concepts using an insurance claims database of 60 million members, 20 million clinical notes, and 1.7 million full-text biomedical journal articles. We evaluate our approach, called cui2vec, on a set of clinically relevant benchmarks and in many instances demonstrate state-of-the-art performance relative to previous results. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings." }, { "instance_id": "R140348xR140164", "comparison_id": "R140348", "paper_id": "R140164", "text": "OPA2Vec: combining formal and informal content of biomedical ontologies to improve similarity-based prediction MOTIVATION Ontologies are widely used in biology for data annotation, integration and analysis. 
In addition to formally structured axioms, ontologies contain meta-data in the form of annotation axioms which provide valuable pieces of information that characterize ontology classes. Annotation axioms commonly used in ontologies include class labels, descriptions or synonyms. Despite being a rich source of semantic information, the ontology meta-data are generally unexploited by ontology-based analysis methods such as semantic similarity measures. RESULTS We propose a novel method, OPA2Vec, to generate vector representations of biological entities in ontologies by combining formal ontology axioms and annotation axioms from the ontology meta-data. We apply a Word2Vec model that has been pre-trained on a corpus of either abstracts or full-text articles to produce feature vectors from our collected data. We validate our method in two different ways: first, we use the obtained vector representations of proteins in a similarity measure to predict protein-protein interaction on two different datasets. Second, we evaluate our method on predicting gene-disease associations based on phenotype similarity by generating vector representations of genes and diseases using a phenotype ontology, and applying the obtained vectors to predict gene-disease associations using mouse model phenotypes. We demonstrate that OPA2Vec significantly outperforms existing methods for predicting gene-disease associations. Using evidence from mouse models, we apply OPA2Vec to identify candidate genes for several thousand rare and orphan diseases. OPA2Vec can be used to produce vector representations of any biomedical entity given any type of biomedical ontology. AVAILABILITY AND IMPLEMENTATION https://github.com/bio-ontology-research-group/opa2vec. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online." 
}, { "instance_id": "R140348xR140159", "comparison_id": "R140348", "paper_id": "R140159", "text": "LogicENN: A Neural Based Knowledge Graphs Embedding Model with Logical Rules Knowledge graph embedding models have gained significant attention in AI research. The aim of knowledge graph embedding is to embed the graphs into a vector space in which the structure of the graph is preserved. Recent works have shown that the inclusion of background knowledge, such as logical rules, can improve the performance of embeddings in downstream machine learning tasks. However, so far, most existing models do not allow the inclusion of rules. We address the challenge of including rules and present a new neural based embedding model (LogicENN). We prove that LogicENN can learn every ground truth of encoded rules in a knowledge graph. To the best of our knowledge, this has not been proved so far for the neural based family of embedding models. Moreover, we derive formulae for the inclusion of various rules, including (anti-)symmetric, inverse, irreflexive and transitive, implication, composition, equivalence, and negation. Our formulation allows avoiding grounding for implication and equivalence relations. Our experiments show that LogicENN outperforms the existing models in link prediction." }, { "instance_id": "R140348xR140183", "comparison_id": "R140348", "paper_id": "R140183", "text": "Bio-joie: Joint representation learning of biological knowledge bases Abstract The widespread of Coronavirus has led to a worldwide pandemic with a high mortality rate. Currently, the knowledge accumulated from different studies about this virus is very limited. Leveraging a wide-range of biological knowledge, such as gene on-tology and protein-protein interaction (PPI) networks from other closely related species presents a vital approach to infer the molecular impact of a new species. 
In this paper, we propose the transferred multi-relational embedding model Bio-JOIE to capture the knowledge of gene ontology and PPI networks, which demonstrates superb capability in modeling the SARS-CoV-2-human protein interactions. Bio-JOIE jointly trains two model components. The knowledge model encodes the relational facts from the protein and GO domains into separated embedding spaces, using a hierarchy-aware encoding technique employed for the GO terms. On top of that, the transfer model learns a non-linear transformation to transfer the knowledge of PPIs and gene ontology annotations across their embedding spaces. By leveraging only structured knowledge, Bio-JOIE significantly outperforms existing state-of-the-art methods in PPI type prediction on multiple species. Furthermore, we also demonstrate the potential of leveraging the learned representations on clustering proteins with enzymatic function into enzyme commission families. Finally, we show that Bio-JOIE can accurately identify PPIs between the SARS-CoV-2 proteins and human proteins, providing valuable insights for advancing research on this new disease." }, { "instance_id": "R140348xR140156", "comparison_id": "R140348", "paper_id": "R140156", "text": "OWL2Vec*: Embedding of OWL Ontologies Abstract Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named OWL2Vec*, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. 
Our empirical evaluation with three real-world datasets suggests that OWL2Vec* benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, OWL2Vec* often significantly outperforms the state-of-the-art methods in our experiments." }, { "instance_id": "R140348xR140168", "comparison_id": "R140348", "paper_id": "R140168", "text": "Capturing semantic and syntactic information for link prediction in knowledge graphs Link prediction has recently been a major focus of knowledge graphs (KGs). It aims at predicting missing links between entities to complement KGs. Most previous works only consider the triples, but the triples provide less information than the paths. Although some works consider the semantic information (i.e. similar entities get similar representations) of the paths using the Word2Vec models, they ignore the syntactic information (i.e. the order of entities and relations) of the paths. In this paper, we propose RW-LMLM, a novel approach for link prediction. RW-LMLM consists of a random walk algorithm for KG (RW) and a language model-based link prediction model (LMLM). The paths generated by RW are viewed as pseudo-sentences for LMLM training. RW-LMLM can capture the semantic and syntactic information in KGs by considering entities, relations, and order information of the paths. Experimental results show that our method outperforms several state-of-the-art models on benchmark datasets. Further analysis shows that our model is highly parameter-efficient." }, { "instance_id": "R140348xR140132", "comparison_id": "R140348", "paper_id": "R140132", "text": "DeepWalk: online learning of social representations We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. 
DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real-world applications such as network classification and anomaly detection." }, { "instance_id": "R140465xR109043", "comparison_id": "R140465", "paper_id": "R109043", "text": "A DNA barcode library for the butterflies of North America Although the butterflies of North America have received considerable taxonomic attention, overlooked species and instances of hybridization continue to be revealed. The present study assembles a DNA barcode reference library for this fauna to identify groups whose patterns of sequence variation suggest the need for further taxonomic study. Based on 14,626 records from 814 species, DNA barcodes were obtained for 96% of the fauna. The maximum intraspecific distance averaged one-quarter of the minimum distance to the nearest neighbor, producing a barcode gap in 76% of the species. Most species (80%) were monophyletic; the others were para- or polyphyletic. 
Although 15% of currently recognized species shared barcodes, the incidence of such taxa was far higher in regions exposed to Pleistocene glaciations than in those that were ice-free. Nearly 10% of species displayed high intraspecific variation (>2.5%), suggesting the need for further investigation to assess potential cryptic diversity. Aside from aiding the identification of all life stages of North American butterflies, the reference library has provided new perspectives on the incidence of both cryptic and potentially over-split species, setting the stage for future studies that can further explore the evolutionary dynamics of this group." }, { "instance_id": "R140465xR140252", "comparison_id": "R140465", "paper_id": "R140252", "text": "Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service \u201cMonophylizer\u201d to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed in this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. 
We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is \u223c23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric\u2014conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors." }, { "instance_id": "R140465xR137111", "comparison_id": "R140465", "paper_id": "R137111", "text": "DNA barcode reference library for Iberian butterflies enables a continental-scale preview of potential cryptic diversity How common are cryptic species - those overlooked because of their morphological similarity? 
Despite its wide-ranging implications for biology and conservation, the answer remains open to debate. Butterflies constitute the best-studied invertebrates, playing a similar role as birds do in providing models for vertebrate biology. An accurate assessment of cryptic diversity in this emblematic group requires meticulous case-by-case assessments, but a preview to highlight cases of particular interest will help to direct future studies. We present a survey of mitochondrial genetic diversity for the butterfly fauna of the Iberian Peninsula with unprecedented resolution (3502 DNA barcodes for all 228 species), creating a reliable system for DNA-based identification and for the detection of overlooked diversity. After compiling available data for European butterflies (5782 sequences, 299 species), we applied the Generalized Mixed Yule-Coalescent model to explore potential cryptic diversity at a continental scale. The results indicate that 27.7% of these species include from two to four evolutionary significant units (ESUs), suggesting that cryptic biodiversity may be higher than expected for one of the best-studied invertebrate groups and regions. The ESUs represent important units for conservation, models for studies of evolutionary and speciation processes, and sentinels for future research to unveil hidden diversity." }, { "instance_id": "R140465xR139497", "comparison_id": "R140465", "paper_id": "R139497", "text": "Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae) Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. 
In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimited by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus-by-genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of COI sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intraspecific divergence values were significantly lower in the lowland species. 
These results raise questions regarding the causes of the observed low inter- and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as the most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in megadiverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data." }, { "instance_id": "R140465xR108983", "comparison_id": "R140465", "paper_id": "R108983", "text": "Barcoding the butterflies of southern South America: Species delimitation efficacy, cryptic diversity and geographic patterns of divergence Because the tropical regions of America harbor the highest concentration of butterfly species, its fauna has attracted considerable attention. Much less is known about the butterflies of southern South America, particularly Argentina, where over 1,200 species occur. To advance understanding of this fauna, we assembled a DNA barcode reference library for 417 butterfly species of Argentina, focusing on the Atlantic Forest, a biodiversity hotspot. We tested the efficacy of this library for specimen identification, used it to assess the frequency of cryptic species, and examined geographic patterns of genetic variation, making this study the first large-scale genetic assessment of the butterflies of southern South America. The average sequence divergence to the nearest neighbor (i.e. minimum interspecific distance) was 6.91%, ten times larger than the mean distance to the furthest conspecific (0.69%), with a clear barcode gap present in all but four of the species represented by two or more specimens. 
As a consequence, the DNA barcode library was extremely effective in the discrimination of these species, allowing a correct identification in more than 95% of the cases. Singletons (i.e. species represented by a single sequence) were also distinguishable in the gene trees since they all had unique DNA barcodes, divergent from those of the closest non-conspecific. The clustering algorithms implemented recognized from 416 to 444 barcode clusters, suggesting that the actual diversity of butterflies in Argentina is 3%\u20139% higher than currently recognized. Furthermore, our survey added three new records of butterflies for the country (Eurema agave, Mithras hannelore, Melanis hillapana). In summary, this study not only supported the utility of DNA barcoding for the identification of the butterfly species of Argentina, but also highlighted several cases of both deep intraspecific and shallow interspecific divergence that should be studied in more detail." }, { "instance_id": "R140465xR136193", "comparison_id": "R140465", "paper_id": "R136193", "text": "Complete DNA barcode reference library for a country's butterfly fauna reveals high performance for temperate Europe DNA barcoding aims to accelerate species identification and discovery, but performance tests have shown marked differences in identification success. As a consequence, there remains a great need for comprehensive studies which objectively test the method in groups with a solid taxonomic framework. This study focuses on the 180 species of butterflies in Romania, accounting for about one third of the European butterfly fauna. This country includes five eco-regions, the highest of any in the European Union, and is a good representative for temperate areas. Morphology and DNA barcodes of more than 1300 specimens were carefully studied and compared. Our results indicate that 90 per cent of the species form barcode clusters allowing their reliable identification. 
The remaining cases involve nine closely related species pairs, some of which have controversial taxonomic status or hybridize regularly. Interestingly, DNA barcoding was found to be the most effective identification tool, outperforming external morphology and being slightly better than male genitalia. Romania is now the first country to have a comprehensive DNA barcode reference database for butterflies. Similar barcoding efforts based on comprehensive sampling of specific geographical regions can act as functional modules that will foster the early application of DNA barcoding while a global system is under development." }, { "instance_id": "R140543xR140514", "comparison_id": "R140543", "paper_id": "R140514", "text": "High-Performance Chemical Sensing Using Schottky-Contacted Chemical Vapor Deposition Grown Monolayer MoS2 Transistors Trace chemical detection is important for a wide range of practical applications. Recently emerged two-dimensional (2D) crystals offer unique advantages as potential sensing materials with high sensitivity, owing to their very high surface-to-bulk atom ratios and semiconducting properties. Here, we report the first use of Schottky-contacted chemical vapor deposition grown monolayer MoS2 as high-performance room temperature chemical sensors. The Schottky-contacted MoS2 transistors show current changes by 2-3 orders of magnitude upon exposure to very low concentrations of NO2 and NH3. Specifically, the MoS2 sensors show clear detection of NO2 and NH3 down to 20 ppb and 1 ppm, respectively. We attribute the observed high sensitivity to both the well-known charge transfer mechanism and, more importantly, the Schottky barrier modulation upon analyte molecule adsorption, the latter of which is made possible by the Schottky contacts in the transistors and has not been reported previously for MoS2 sensors. 
This study shows the potential of 2D semiconductors as high-performance sensors and also benefits the fundamental studies of interfacial phenomena and interactions between chemical species and monolayer 2D semiconductors." }, { "instance_id": "R140543xR140505", "comparison_id": "R140543", "paper_id": "R140505", "text": "Highly sensitive NO2 gas sensor based on ozone treated graphene Abstract In the present study, we report a simple and reproducible method to improve the sensing performance of a graphene gas sensor using ozone treatment and demonstrate it with nitrogen dioxide (NO2) gas. The ozone-treated graphene (OTG) sensor demonstrated remarkable enhancement of the sensing performances such as percentage response, detection limit and response time. The percentage response of the OTG sensor was twofold higher than that of a pristine graphene sensor when it was exposed to 200 ppm concentration of NO2 at room temperature. It is noteworthy that significant improvement was achieved in the response time by a factor of 8. Extremely low parts-per-billion (ppb) concentrations were clearly detectable, while the pristine graphene sensor could not detect NO2 molecules below 10 ppm concentration. The detection limit of the OTG sensor was estimated to be 1.3 ppb based on the signal to noise ratio, which is the cutting-edge resolution. The present ozone treatment may provide an effective way to improve the performance of the graphene-based sensor, given its simple process, practical usability and cost effectiveness." }, { "instance_id": "R140543xR139324", "comparison_id": "R140543", "paper_id": "R139324", "text": "Graphene Nanomesh As Highly Sensitive Chemiresistor Gas Sensor Graphene is a one atom thick carbon allotrope with all surface atoms that has attracted significant attention as a promising material as the conduction channel of a field-effect transistor and chemical field-effect transistor sensors. 
However, the zero bandgap of semimetal graphene still limits its application for these devices. In this work, an ethanol-based chemical vapor deposition (CVD)-grown p-type semiconducting large-area monolayer graphene film was patterned into a nanomesh by a combination of nanosphere lithography and reactive ion etching and evaluated as field-effect transistor and chemiresistor gas sensors. The resulting neck-width of the synthesized nanomesh was \u223c20 nm, corresponding to the gap between polystyrene (PS) spheres formed during the reactive ion etching (RIE) process. The neck-width and the periodicities of the graphene nanomesh (GNM) could be easily controlled depending on the duration/power of the RIE and the size of the PS nanospheres. The fabricated GNM transistor device exhibited promising electronic properties featuring a high drive current and an I(ON)/I(OFF) ratio of about 6, significantly higher than its film counterpart. Similarly, when applied as a chemiresistor gas sensor at room temperature, the graphene nanomesh sensor showed excellent sensitivity toward NO(2) and NH(3), significantly higher than their film counterparts. The ethanol-based graphene nanomesh sensors exhibited sensitivities of about 4.32%/ppm in NO(2) and 0.71%/ppm in NH(3) with limits of detection of 15 and 160 ppb, respectively. Our demonstrated studies on controlling the neck width of the nanomesh would lead to further improvement of graphene-based transistors and sensors." }, { "instance_id": "R140543xR140519", "comparison_id": "R140543", "paper_id": "R140519", "text": "Sensing Behavior of Atomically Thin-Layered MoS2 Transistors Most recent research on layered chalcogenides is understandably focused on single atomic layers. However, it is unclear if single-layer units are the most ideal structures for enhanced gas-solid interactions. 
To probe this issue further, we have prepared large-area MoS2 sheets ranging from single to multiple layers on 300 nm SiO2/Si substrates using the micromechanical exfoliation method. The thickness and layering of the sheets were identified by optical microscope, invoking recently reported specific optical color contrast, and further confirmed by AFM and Raman spectroscopy. The MoS2 transistors with different thicknesses were assessed for gas-sensing performances with exposure to NO2, NH3, and humidity in different conditions such as gate bias and light irradiation. The results show that, compared to the single-layer counterpart, transistors of few MoS2 layers exhibit excellent sensitivity, recovery, and ability to be manipulated by gate bias and green light. Further, our ab initio DFT calculations on single-layer and bilayer MoS2 show that the charge transfer is the reason for the decrease in resistance in the presence of applied field." }, { "instance_id": "R140543xR140509", "comparison_id": "R140543", "paper_id": "R140509", "text": "Sub-ppt gas detection with pristine graphene Graphene is widely regarded as one of the most promising materials for sensor applications. Here, we demonstrate that a pristine graphene can detect gas molecules at extremely low concentrations with detection limits as low as 158 parts-per-quadrillion (ppq) for a range of gas molecules at room temperature. The unprecedented sensitivity was achieved by applying our recently developed concept of continuous in situ cleaning of the sensing material with ultraviolet light. The simplicity of the concept, together with graphene\u2019s flexibility to be used on various platforms, is expected to intrigue more investigations to develop ever more sensitive sensors." 
}, { "instance_id": "R140543xR140445", "comparison_id": "R140543", "paper_id": "R140445", "text": "Ultrahigh sensitivity and layer-dependent sensing performance of phosphorene-based gas sensors Two-dimensional (2D) layered materials have attracted significant attention for device applications because of their unique structures and outstanding properties. Here, a field-effect transistor (FET) sensor device is fabricated based on 2D phosphorene nanosheets (PNSs). The PNS sensor exhibits an ultrahigh sensitivity to NO2 in dry air and the sensitivity is dependent on its thickness. A maximum response is observed for 4.8-nm-thick PNS, with a sensitivity up to 190% at 20 parts per billion (p.p.b.) at room temperature. First-principles calculations combined with statistical thermodynamics modelling predict that the adsorption density is \u223c10^15 cm^-2 for the 4.8-nm-thick PNS when exposed to 20 p.p.b. NO2 at 300 K. Our sensitivity modelling further suggests that the dependence of sensitivity on the PNS thickness is dictated by the band gap for thinner sheets (<10 nm) and by the effective thickness on gas adsorption for thicker sheets (>10 nm)." }, { "instance_id": "R140543xR140526", "comparison_id": "R140543", "paper_id": "R140526", "text": "Fabrication of a novel 2D-graphene/2D-NiO nanosheet-based hybrid nanostructure and its use in highly sensitive NO2 sensors Abstract A highly sensitive gas sensor based on novel hybrid structures composed of 2D graphene and 2D NiO nanosheets (NSs) is fabricated using a low-cost, low temperature and large area scalable solution-based process. The highly developed hierarchically porous structures of 2D NiO sheets are grown on reduced graphene oxide (RGO) surfaces. Sensors fabricated with hybrid structures showed a responsivity and sensitivity two orders of magnitude higher than that of a NiO NS alone toward NO2 even at 1 ppm level. 
This is attributed to the effective carrier transfer from NiO NS to graphene and to the well-developed 2D NiO structure. The sensing results are similar when reducing gases such as H2, NH3 and H2S are tested, but the responsivity toward NO2 was highest among all the gases tested." }, { "instance_id": "R140543xR139343", "comparison_id": "R140543", "paper_id": "R139343", "text": "Exfoliated black phosphorus gas sensing properties at room temperature Room temperature gas sensing properties of chemically exfoliated black phosphorus (BP) to oxidizing (NO2, CO2) and reducing (NH3, H2, CO) gases in a dry air carrier have been reported. To study the gas sensing properties of BP, chemically exfoliated BP flakes have been drop casted on Si3N4 substrates provided with Pt comb-type interdigitated electrodes in N2 atmosphere. Scanning electron microscopy and x-ray photoelectron spectroscopy characterizations show respectively the occurrence of a mixed structure, composed of BP coarse aggregates dispersed on BP exfoliated few layer flakes bridging the electrodes, and a clear 2p doublet belonging to BP, which excludes the occurrence of surface oxidation. Room temperature electrical tests in dry air show a p-type response of multilayer BP with measured detection limits of 20 ppb and 10 ppm to NO2 and NH3 respectively. No response to CO and CO2 has been detected, while a slight but steady sensitivity to H2 has been recorded. The reported results confirm, on an experimental basis, what was previously theoretically predicted, demonstrating the promising sensing properties of exfoliated BP." }, { "instance_id": "R140543xR140498", "comparison_id": "R140543", "paper_id": "R140498", "text": "Black Phosphorus Gas Sensors The utilization of black phosphorus and its monolayer (phosphorene) and few-layers in field-effect transistors has attracted a lot of attention to this elemental two-dimensional material. 
Various studies on optimization of black phosphorus field-effect transistors, PN junctions, photodetectors, and other applications have been demonstrated. Although chemical sensing based on black phosphorus devices was theoretically predicted, there is still no experimental verification of such an important study of this material. In this article, we report on chemical sensing of nitrogen dioxide (NO2) using field-effect transistors based on multilayer black phosphorus. Black phosphorus sensors exhibited increased conduction upon NO2 exposure and excellent sensitivity for detection of NO2 down to 5 ppb. Moreover, when the multilayer black phosphorus field-effect transistor was exposed to NO2 concentrations of 5, 10, 20, and 40 ppb, its relative conduction change followed the Langmuir isotherm for molecules adsorbed on a surface. Additionally, on the basis of an exponential conductance change, the rate constants for adsorption and desorption of NO2 on black phosphorus were extracted for different NO2 concentrations, and they were in the range of 130-840 s. These results shed light on important electronic and sensing characteristics of black phosphorus, which can be utilized in future studies and applications." }, { "instance_id": "R141156xR141147", "comparison_id": "R141156", "paper_id": "R141147", "text": "RF MEMS Shunt Capacitive Switches Using AlN Compared to Si3N4 Dielectric RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. 
Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 \u03bcm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than 12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<10^-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do." }, { "instance_id": "R141156xR141119", "comparison_id": "R141156", "paper_id": "R141119", "text": "High-isolation CPW MEMS shunt switches-part 1: modeling This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-\u03bcm-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that a dramatic increase in the down-state isolation (20+ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. 
In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz." }, { "instance_id": "R141156xR141130", "comparison_id": "R141156", "paper_id": "R141130", "text": "Effects of surface roughness on electromagnetic characteristics of capacitive switches This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm." }, { "instance_id": "R141156xR141133", "comparison_id": "R141156", "paper_id": "R141133", "text": "A High-Power Temperature-Stable Electrostatic RF MEMS Capacitive Switch Based on a Thermal Buckle-Beam Design This paper presents the design, fabrication and measurements of a novel vertical electrostatic RF MEMS switch which utilizes the lateral thermal buckle-beam actuator design in order to reduce the switch sensitivity to thermal stresses. The effect of biaxial and stress gradients are taken into consideration, and the buckle-beam designs show minimal sensitivity to these stresses. Several switches with 4,8, and 12 suspension beams are presented. 
All the switches demonstrate a low sensitivity to temperature, and the variation in the pull-in voltage is ~ -50 mV/\u00b0C from 25-125\u00b0C. The change in the up-state capacitance for the same temperature range is < \u00b13%. The switches also exhibit excellent RF and mechanical performances, and a capacitance ratio of ~ 20-23 (Cu = 85-115 fF, Cd = 1.7-2.6 pF) with Q > 150 at 10 GHz in the up-state position is reported. The mechanical resonant frequencies and quality factors are f0 = 60-160 kHz and Qm = 2.3-4.5, respectively. The measured switching and release times are ~ 2-5 \u03bcs and ~ 5-6.5 \u03bcs, respectively. Power handling measurements show good stability with ~ 4 W of incident power at 10 GHz." }, { "instance_id": "R141156xR141124", "comparison_id": "R141156", "paper_id": "R141124", "text": "Investigations of rf shunt airbridge switches among different environmental conditions Abstract This paper reports on extensive investigations on capacitive radio frequency (rf) microelectromechanical systems (MEMS) shunt airbridge switches especially among different environmental conditions. Single airbridge switches yield an insertion loss lower than 0.28 dB at 35 GHz and an isolation down to \u221238 dB at 35 GHz. The capacitance ratio Con/Coff is approximately 60 (best case). The measured temperature dependency of the pull-in voltage was \u22120.3 V/\u00b0C in the temperature range from \u221225 to 75\u00b0C. The measured switching time was 11 \u03bcs and the release time 7 \u03bcs. More than 5\u00d710^4 switching cycles could be performed in a non-stabilised environment. For optimised switches no self-closing is observed up to 1 W rf power at 30 GHz. The measurement results are in good agreement with the simulated values." 
}, { "instance_id": "R141156xR141150", "comparison_id": "R141156", "paper_id": "R141150", "text": "Investigation of Charge Injection and Relaxation in Multilayer Dielectric Stacks for Capacitive RF MEMS Switch Application This paper proposes a new approach to the problem of irreversible stiction of capacitive radio frequency (RF) microelectromechanical systems (MEMS) switches attributed to dielectric charging. We investigate how charge accumulates in multi- and single-layer dielectric structures for a capacitive RF MEMS switch using a metal-insulator-semiconductor (MIS) capacitor structure. Two multilayer dielectric structures are fabricated: SiO2+Si3N4 and SiO2+Si3N4+SiO2 stacked films. A Si3N4 single-layer dielectric structure is also fabricated for comparison. In the experiments, the space charges are first injected into the dielectric layers by stressing MIS devices with a dc bias; then the injected charge kinetics are monitored by capacitance-voltage measurement before and after charge injection. We found that the polarity of charge accumulated in the dielectric is strongly influenced by the dielectric structure. When the metal electrode is positively biased, a negative charge accumulates in the single- and triple-layer devices, while a positive charge accumulates in the double-layer devices. Furthermore, the experimental results also show that the lowest charge accumulation is achieved with the double-layer dielectric structure even though the fastest relaxation process takes place in the triple-layer dielectric structure." }, { "instance_id": "R141156xR141136", "comparison_id": "R141156", "paper_id": "R141136", "text": "A zipper RF MEMS tunable capacitor with interdigitated RF and actuation electrodes This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. 
It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70-90 fF, Cmax = 240-270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks." }, { "instance_id": "R141425xR141399", "comparison_id": "R141425", "paper_id": "R141399", "text": "A novel nanobody targeting Middle East respiratory syndrome coronavirus (MERS-CoV) receptor-binding domain has potent cross-neutralizing activity and protective efficacy against MERS-CoV Therapeutic development is critical for preventing and treating continual MERS-CoV infections in humans and camels. Because of their small size, nanobodies (Nbs) have advantages as antiviral therapeutics (e.g., high expression yield and robustness for storage and transportation) and also potential limitations (e.g., low antigen-binding affinity and fast renal clearance). Here, we have developed novel Nbs that specifically target the receptor-binding domain (RBD) of MERS-CoV spike protein. They bind to a conserved site on MERS-CoV RBD with high affinity, blocking RBD's binding to MERS-CoV receptor. Through engineering a C-terminal human Fc tag, the in vivo half-life of the Nbs is significantly extended. Moreover, the Nbs can potently cross-neutralize the infections of diverse MERS-CoV strains isolated from humans and camels. The Fc-tagged Nb also completely protects humanized mice from lethal MERS-CoV challenge. Taken together, our study has discovered novel Nbs that hold promise as potent, cost-effective, and broad-spectrum anti-MERS-CoV therapeutic agents." 
}, { "instance_id": "R141425xR141389", "comparison_id": "R141425", "paper_id": "R141389", "text": "Self-assembled star-shaped chiroplasmonic gold nanoparticles for an ultrasensitive chiro-immunosensor for viruses
Nanoengineered chiral gold nanoparticles and quantum dots for ultrasensitive chiroptical sensing of viruses in blood samples.
" }, { "instance_id": "R141425xR141413", "comparison_id": "R141425", "paper_id": "R141413", "text": "Novel coronavirus-like particles targeting cells lining the respiratory tract Virus like particles (VLPs) produced by the expression of viral structural proteins can serve as versatile nanovectors or potential vaccine candidates. In this study we describe for the first time the generation of HCoV-NL63 VLPs using baculovirus system. Major structural proteins of HCoV-NL63 have been expressed in tagged or native form, and their assembly to form VLPs was evaluated. Additionally, a novel procedure for chromatography purification of HCoV-NL63 VLPs was developed. Interestingly, we show that these nanoparticles may deliver cargo and selectively transduce cells expressing the ACE2 protein such as ciliated cells of the respiratory tract. Production of a specific delivery vector is a major challenge for research concerning targeting molecules. The obtained results show that HCoV-NL63 VLPs may be efficiently produced, purified, modified and serve as a delivery platform. This study constitutes an important basis for further development of a promising viral vector displaying narrow tissue tropism." }, { "instance_id": "R141425xR141395", "comparison_id": "R141425", "paper_id": "R141395", "text": "Enhanced Ability of Oligomeric Nanobodies Targeting MERS Coronavirus Receptor-Binding Domain Middle East respiratory syndrome (MERS) coronavirus (MERS-CoV), an infectious coronavirus first reported in 2012, has a mortality rate greater than 35%. Therapeutic antibodies are key tools for preventing and treating MERS-CoV infection, but to date no such agents have been approved for treatment of this virus. Nanobodies (Nbs) are camelid heavy chain variable domains with properties distinct from those of conventional antibodies and antibody fragments. 
We generated two oligomeric Nbs by linking two or three monomeric Nbs (Mono-Nbs) targeting the MERS-CoV receptor-binding domain (RBD), and compared their RBD-binding affinity, RBD\u2013receptor binding inhibition, stability, and neutralizing and cross-neutralizing activity against MERS-CoV. Relative to Mono-Nb, dimeric Nb (Di-Nb) and trimeric Nb (Tri-Nb) had significantly greater ability to bind MERS-CoV RBD proteins with or without mutations in the RBD, thereby potently blocking RBD\u2013MERS-CoV receptor binding. The engineered oligomeric Nbs were very stable under extreme conditions, including low or high pH, protease (pepsin), chaotropic denaturant (urea), and high temperature. Importantly, Di-Nb and Tri-Nb exerted significantly elevated broad-spectrum neutralizing activity against at least 19 human and camel MERS-CoV strains isolated in different countries and years. Overall, the engineered Nbs could be developed into effective therapeutic agents for prevention and treatment of MERS-CoV infection." }, { "instance_id": "R141425xR141423", "comparison_id": "R141425", "paper_id": "R141423", "text": "Microneedle array delivered recombinant coronavirus vaccines: Immunogenicity and rapid translational development Abstract Background Coronaviruses pose a serious threat to global health as evidenced by Severe Acute Respiratory Syndrome (SARS), Middle East Respiratory Syndrome (MERS), and COVID-19. SARS Coronavirus (SARS-CoV), MERS Coronavirus (MERS-CoV), and the novel coronavirus, previously dubbed 2019-nCoV, and now officially named SARS-CoV-2, are the causative agents of the SARS, MERS, and COVID-19 disease outbreaks, respectively. Safe vaccines that rapidly induce potent and long-lasting virus-specific immune responses against these infectious agents are urgently needed. The coronavirus spike (S) protein, a characteristic structural component of the viral envelope, is considered a key target for vaccines for the prevention of coronavirus infection. 
Methods We first generated codon optimized MERS-S1 subunit vaccines fused with a foldon trimerization domain to mimic the native viral structure. In variant constructs, we engineered immune stimulants (RS09 or flagellin, as TLR4 or TLR5 agonists, respectively) into this trimeric design. We comprehensively tested the pre-clinical immunogenicity of MERS-CoV vaccines in mice when delivered subcutaneously by traditional needle injection, or intracutaneously by dissolving microneedle arrays (MNAs) by evaluating virus specific IgG antibodies in the serum of vaccinated mice by ELISA and using virus neutralization assays. Driven by the urgent need for COVID-19 vaccines, we utilized this strategy to rapidly develop MNA SARS-CoV-2 subunit vaccines and tested their pre-clinical immunogenicity in vivo by exploiting our substantial experience with MNA MERS-CoV vaccines. Findings Here we describe the development of MNA delivered MERS-CoV vaccines and their pre-clinical immunogenicity. Specifically, MNA delivered MERS-S1 subunit vaccines elicited strong and long-lasting antigen-specific antibody responses. Building on our ongoing efforts to develop MERS-CoV vaccines, promising immunogenicity of MNA-delivered MERS-CoV vaccines, and our experience with MNA fabrication and delivery, including clinical trials, we rapidly designed and produced clinically-translatable MNA SARS-CoV-2 subunit vaccines within 4 weeks of the identification of the SARS-CoV-2 S1 sequence. Most importantly, these MNA delivered SARS-CoV-2 S1 subunit vaccines elicited potent antigen-specific antibody responses that were evident beginning 2 weeks after immunization. Interpretation MNA delivery of coronaviruses-S1 subunit vaccines is a promising immunization strategy against coronavirus infection. Progressive scientific and technological efforts enable quicker responses to emerging pandemics. 
Our ongoing efforts to develop MNA-MERS-S1 subunit vaccines enabled us to rapidly design and produce MNA SARS-CoV-2 subunit vaccines capable of inducing potent virus-specific antibody responses. Collectively, our results support the clinical development of MNA delivered recombinant protein subunit vaccines against SARS, MERS, COVID-19, and other emerging infectious diseases." }, { "instance_id": "R141425xR141401", "comparison_id": "R141425", "paper_id": "R141401", "text": "Application of camelid heavy-chain variable domains (VHHs) in prevention and treatment of bacterial and viral infections ABSTRACT Camelid heavy-chain variable domains (VHHs) are the smallest, intact, antigen-binding units to occur in nature. VHHs possess high degrees of solubility and robustness enabling generation of multivalent constructs with increased avidity \u2013 characteristics that mark their superiority to other antibody fragments and monoclonal antibodies. Capable of effectively binding to molecular targets inaccessible to classical immunotherapeutic agents and easily produced in microbial culture, VHHs are considered promising tools for pharmaceutical biotechnology. With the aim to demonstrate the perspective and potential of VHHs for the development of prophylactic and therapeutic drugs to target diseases caused by bacterial and viral infections, this review article will initially describe the structural features that underlie the unique properties of VHHs and explain the methods currently used for the selection and recombinant production of pathogen-specific VHHs, and then thoroughly summarize the experimental findings of five distinct studies that employed VHHs as inhibitors of host\u2013pathogen interactions or neutralizers of infectious agents. Past and recent studies suggest the potential of camelid heavy-chain variable domains as a novel modality of immunotherapeutic drugs and a promising alternative to monoclonal antibodies. 
VHHs demonstrate the ability to interfere with bacterial pathogenesis by preventing adhesion to host tissue and sequestering disease-causing bacterial toxins. To protect from viral infections, VHHs may be employed as inhibitors of viral entry by binding to viral coat proteins or blocking interactions with cell-surface receptors. The implementation of VHHs as immunotherapeutic agents for infectious diseases is of considerable potential and set to contribute to public health in the near future." }, { "instance_id": "R141425xR141417", "comparison_id": "R141425", "paper_id": "R141417", "text": "Multiplex Paper-Based Colorimetric DNA Sensor Using Pyrrolidinyl Peptide Nucleic Acid-Induced AgNPs Aggregation for Detecting MERS-CoV, MTB, and HPV Oligonucleotides The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. 
Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection." }, { "instance_id": "R141425xR141409", "comparison_id": "R141425", "paper_id": "R141409", "text": "Heterologous prime-boost vaccination with adenoviral vector and protein nanoparticles induces both Th1 and Th2 responses against Middle East respiratory syndrome coronavirus Abstract The Middle East respiratory syndrome coronavirus (MERS-CoV) is a highly pathogenic and zoonotic virus with a fatality rate in humans of over 35%. Although several vaccine candidates have been developed, there is still no clinically available vaccine for MERS-CoV. In this study, we developed two types of MERS-CoV vaccines: a recombinant adenovirus serotype 5 encoding the MERS-CoV spike gene (Ad5/MERS) and spike protein nanoparticles formulated with aluminum (alum) adjuvant. Next, we tested a heterologous prime\u2013boost vaccine strategy, which compared priming with Ad5/MERS and boosting with spike protein nanoparticles and vice versa, with homologous prime\u2013boost vaccination comprising priming and boosting with either spike protein nanoparticles or Ad5/MERS. 
Although both types of vaccine could induce specific immunoglobulin G against MERS-CoV, neutralizing antibodies against MERS-CoV were induced only by heterologous prime\u2013boost immunization and homologous immunization with spike protein nanoparticles. Interestingly, Th1 cell activation was induced by immunization schedules including Ad5/MERS, but not by those including only spike protein nanoparticles. Heterologous prime\u2013boost vaccination regimens including Ad5/MERS elicited simultaneous Th1 and Th2 responses, but homologous prime\u2013boost regimens did not. Thus, heterologous prime\u2013boost may induce longer-lasting immune responses against MERS-CoV because of an appropriate balance of Th1/Th2 responses. However, both heterologous prime\u2013boost and homologous spike protein nanoparticles vaccinations could provide protection from MERS-CoV challenge in mice. Our results demonstrate that heterologous immunization by priming with Ad5/MERS and boosting with spike protein nanoparticles could be an efficient prophylactic strategy against MERS-CoV infection." }, { "instance_id": "R141425xR141391", "comparison_id": "R141425", "paper_id": "R141391", "text": "Nanoparticulate vacuolar ATPase blocker exhibits potent host-targeted antiviral activity against feline coronavirus Feline infectious peritonitis (FIP), caused by a mutated feline coronavirus, is one of the most serious and fatal viral diseases in cats. The disease remains incurable, and there is no effective vaccine available. In light of the pathogenic mechanism of feline coronavirus that relies on endosomal acidification for cytoplasmic entry, a novel vacuolar ATPase blocker, diphyllin, and its nanoformulation are herein investigated for their antiviral activity against the type II feline infectious peritonitis virus (FIPV). 
Experimental results show that diphyllin dose-dependently inhibits endosomal acidification in fcwf-4 cells, alters the cellular susceptibility to FIPV, and inhibits the downstream virus replication. In addition, diphyllin delivered by polymeric nanoparticles consisting of poly(ethylene glycol)-block-poly(lactide-co-glycolide) (PEG-PLGA) further demonstrates an improved safety profile and enhanced inhibitory activity against FIPV. In an in vitro model of antibody-dependent enhancement of FIPV infection, diphyllin nanoparticles showed a prominent antiviral effect against the feline coronavirus. In addition, the diphyllin nanoparticles were well tolerated in mice following high-dose intravenous administration. This study highlights the therapeutic potential of diphyllin and its nanoformulation for the treatment of FIP." }, { "instance_id": "R141425xR141403", "comparison_id": "R141425", "paper_id": "R141403", "text": "Nanobodies\u00ae as inhaled biotherapeutics for lung diseases ABSTRACT Local pulmonary delivery of biotherapeutics may offer advantages for the treatment of lung diseases. Delivery of the therapeutic entity directly to the lung has the potential for a rapid onset of action, reduced systemic exposure and the need for a lower dose, as well as needleless administration. However, formulation of a protein for inhaled delivery is challenging and requires proteins with favorable biophysical properties suitable to withstand the forces associated with formulation, delivery, and inhalation devices. Nanobodies are the smallest functional fragments derived from a naturally occurring heavy chain\u2010only immunoglobulin. They are highly soluble, stable, and show biophysical characteristics that are particularly well suited for pulmonary delivery. This paper highlights a number of clinical and preclinical studies on antibodies delivered via the pulmonary route and describes the advantages of using Nanobodies for inhaled delivery to the lung. 
The latter is illustrated by the specific example of ALX\u20100171, a Nanobody in clinical development for the treatment of respiratory syncytial virus (RSV) infections." }, { "instance_id": "R141425xR141393", "comparison_id": "R141425", "paper_id": "R141393", "text": "Chaperna-mediated assembly of ferritin-based Middle East respiratory syndrome-coronavirus nanoparticles The folding of monomeric antigens and their subsequent assembly into higher ordered structures are crucial for robust and effective production of nanoparticle (NP) vaccines in a timely and reproducible manner. Despite significant advances in in silico design and structure-based assembly, most engineered NPs are refractory to soluble expression and fail to assemble as designed, presenting major challenges in the manufacturing process. The failure is due to a lack of understanding of the kinetic pathways and enabling technical platforms to ensure successful folding of the monomer antigens into regular assemblages. Capitalizing on a novel function of RNA as a molecular chaperone (chaperna: chaperone + RNA), we provide a robust protein-folding vehicle that may be implemented to NP assembly in bacterial hosts. The receptor-binding domain (RBD) of Middle East respiratory syndrome-coronavirus (MERS-CoV) was fused with the RNA-interaction domain (RID) and bacterioferritin, and expressed in Escherichia coli in a soluble form. Site-specific proteolytic removal of the RID prompted the assemblage of monomers into NPs, which was confirmed by electron microscopy and dynamic light scattering. The mutations that affected the RNA binding to RBD significantly increased the soluble aggregation into amorphous structures, reducing the overall yield of NPs of a defined size. This underscored the RNA-antigen interactions during NP assembly. The sera after mouse immunization effectively interfered with the binding of MERS-CoV RBD to the cellular receptor hDPP4. 
The results suggest that RNA-binding controls the overall kinetic network of the antigen folding pathway in favor of enhanced assemblage of NPs into highly regular and immunologically relevant conformations. The concentration of the ion Fe2+, salt, and fusion linker also contributed to the assembly in vitro, and the stability of the NPs. The kinetic \u201cpace-keeping\u201d role of chaperna in the super molecular assembly of antigen monomers holds promise for the development and delivery of NPs and virus-like particles as recombinant vaccines and for serological detection of viral infections." }, { "instance_id": "R141425xR141397", "comparison_id": "R141425", "paper_id": "R141397", "text": "Chimeric camel/human heavy-chain antibodies protect against MERS-CoV infection Dromedary camel heavy chain\u2013only antibodies may provide novel intervention strategies against MERS coronavirus. Middle East respiratory syndrome coronavirus (MERS-CoV) continues to cause outbreaks in humans as a result of spillover events from dromedaries. In contrast to humans, MERS-CoV\u2013exposed dromedaries develop only very mild infections and exceptionally potent virus-neutralizing antibody responses. These strong antibody responses may be caused by affinity maturation as a result of repeated exposure to the virus or by the fact that dromedaries\u2014apart from conventional antibodies\u2014have relatively unique, heavy chain\u2013only antibodies (HCAbs). These HCAbs are devoid of light chains and have long complementarity-determining regions with unique epitope binding properties, allowing them to recognize and bind with high affinity to epitopes not recognized by conventional antibodies. Through direct cloning and expression of the variable heavy chains (VHHs) of HCAbs from the bone marrow of MERS-CoV\u2013infected dromedaries, we identified several MERS-CoV\u2013specific VHHs or nanobodies. In vitro, these VHHs efficiently blocked virus entry at picomolar concentrations. 
The selected VHHs bind with exceptionally high affinity to the receptor binding domain of the viral spike protein. Furthermore, camel/human chimeric HCAbs\u2014composed of the camel VHH linked to a human Fc domain lacking the CH1 exon\u2014had an extended half-life in the serum and protected mice against a lethal MERS-CoV challenge. HCAbs represent a promising alternative strategy to develop novel interventions not only for MERS-CoV but also for other emerging pathogens." }, { "instance_id": "R141593xR141444", "comparison_id": "R141593", "paper_id": "R141444", "text": "Low-kfilms modification under EUV and VUV radiation Modification of ultra-low-k films by extreme ultraviolet (EUV) and vacuum ultraviolet (VUV) emission with 13.5, 58.4, 106, 147 and 193 nm wavelengths and fluences up to 6 \u00d7 1018 photons cm\u22122 is studied experimentally and theoretically to reveal the damage mechanism and the most \u2018damaging\u2019 spectral region. Organosilicate glass (OSG) and organic low-k films with k-values of 1.8\u20132.5 and porosity of 24\u201351% are used in these experiments. The Si\u2013CH3 bonds depletion is used as a criterion of VUV damage of OSG low-k films. It is shown that the low-k damage is described by two fundamental parameters: photoabsorption (PA) cross-section \u03c3PA and effective quantum yield \u03c6 of Si\u2013CH3 photodissociation. The obtained \u03c3PA and \u03c6 values demonstrate that the effect of wavelength is defined by light absorption spectra, which in OSG materials is similar to fused silica. This is the reason why VUV light in the range of \u223c58\u2013106 nm having the highest PA cross-sections causes strong Si\u2013CH3 depletion only in the top part of the films (\u223c50\u2013100 nm). The deepest damage is observed after exposure to 147 nm VUV light since this emission is located at the edge of Si\u2013O absorption, has the smallest PA cross-section and provides extensive Si\u2013CH3 depletion over the whole film thickness. 
The effective quantum yield slowly increases with the increasing porosity but starts to grow quickly when the porosity exceeds the critical threshold located close to a porosity of \u223c50%. The high degree of pore interconnectivity of these films allows easy movement of the detached methyl radicals. The obtained results have a fundamental character and can be used for prediction of ULK material damage under VUV light with different wavelengths." }, { "instance_id": "R141593xR141447", "comparison_id": "R141593", "paper_id": "R141447", "text": "Evaluation of Absolute Flux of Vacuum Ultraviolet Photons in an Electron Cyclotron Resonance Hydrogen Plasma: Comparison with Ion Flux We compared the absolute flux of positive ions with the flux of photons in a vacuum ultraviolet (VUV) wavelength range in an electron cyclotron resonance hydrogen plasma. The absolute flux of positive ions was measured using a Langmuir probe. The absolute flux of VUV photons was evaluated on the basis of the branching ratio between the Lyman and Balmer lines emitted from electronic states with the same principal quantum numbers. The absolute intensities of the Balmer lines were obtained by calibrating the sensitivity of the spectroscopic system using a tungsten standard lamp. It has been found that the flux of VUV photons is, at least, on the comparable order of magnitude with the positive ion flux, suggesting the importance of VUV photons in plasma-induced damage in fabrication processes of ultralarge-scale integrated circuits." }, { "instance_id": "R141593xR141457", "comparison_id": "R141593", "paper_id": "R141457", "text": "Absolute intensities of the vacuum ultraviolet spectra in oxide etch plasma processing discharges In this paper we report absolute intensities of vacuum ultraviolet and near ultraviolet emission lines (4.8 eV to 18 eV ) for aluminum etching discharges in an inductively coupled plasma reactor. 
We report line intensities as a function of wafer type, pressure, gas mixture and rf excitation level. In a standard aluminum etching mixture containing Cl2 and BCl3, almost all the light emitted at energies exceeding 8.8 eV was due to neutral atomic chlorine. Optical trapping of the VUV radiation in the discharge complicates calculations of VUV fluxes to the wafer. However, we see total photon fluxes to the wafer at energies above 8.8 eV on the order of 4 x 10^14 photons/cm2 sec with a non-reactive wafer and 0.7 x 10^14 photons/cm2 sec with a reactive wafer. The majority of the radiation observed was between 8.9 and 9.3 eV. At these energies, the photons have enough energy to create electron-hole pairs in SiO2, but may penetrate up to a micron into the SiO2 before being absorbed. Relevance of these measurements to vacuum-UV photon-induced damage of SiO2 during etching is discussed." }, { "instance_id": "R141593xR108948", "comparison_id": "R141593", "paper_id": "R108948", "text": "A microwave plasma source for VUV atmospheric photochemistry Microwave plasma discharges working at low pressure are nowadays a well-developed technique mainly used to provide radiation at different wavelengths. The aim of this work is to show that those discharges are an efficient windowless VUV photon source for planetary atmospheric photochemistry experiments. To do this, we use a surfatron-type discharge with a neon gas flow in the mbar pressure range coupled to a photochemical reactor. Working in the VUV range allows focusing on nitrogen-dominated atmospheres (\u03bb < 100 nm). The experimental setup makes sure that no energy sources other than the VUV photons (electrons, metastable atoms) interact with the reactive medium. Neon has two resonance lines at 73.6 and 74.3 nm which behave differently regarding the pressure or power conditions. 
In parallel, the VUV photon flux emitted at 73.6 nm has been experimentally estimated in different conditions of pressure and power and varies in a large range between 2 x 10^13 and 4 x 10^14 photons cm-2 s-1, which is comparable to a VUV synchrotron photon flux. Our first case study is Titan and its N2-CH4 atmosphere. With this VUV source, the production of HCN and C2N2, two major Titan compounds, is detected, ensuring the suitability of the source for atmospheric photochemistry experiments." }, { "instance_id": "R141593xR108942", "comparison_id": "R141593", "paper_id": "R108942", "text": "Comparison of surface vacuum ultraviolet emissions with resonance level number densities. I. Argon plasmas Vacuum ultraviolet (VUV) photons emitted from excited atomic states are ubiquitous in material processing plasmas. The highly energetic photons can induce surface damage by driving surface reactions, disordering surface regions, and affecting bonds in the bulk material. In argon plasmas, the VUV emissions are due to the decay of the 1s4 and 1s2 principal resonance levels with emission wavelengths of 104.8 and 106.7 nm, respectively. The authors have measured the number densities of atoms in the two resonance levels using both white light optical absorption spectroscopy and radiation-trapping induced changes in the 3p^5 4p \u2192 3p^5 4s branching fractions measured via visible/near-infrared optical emission spectroscopy in an argon inductively coupled plasma as a function of both pressure and power. An emission model that takes into account radiation trapping was used to calculate the VUV emission rate. The model results were compared to experimental measurements made with a National Institute of Standards and Techn..." 
}, { "instance_id": "R141593xR108954", "comparison_id": "R141593", "paper_id": "R108954", "text": "Ultraviolet/vacuum-ultraviolet emission from a high power magnetron sputtering plasma with an aluminum target We report the in situ measurement of the ultraviolet/vacuum-ultraviolet (UV/VUV) emission from a plasma produced by high power impulse magnetron sputtering with an aluminum target, using argon as the background gas. The UV/VUV detection system is based upon the quantification of the re-emitted fluorescence from a sodium salicylate layer that is placed in a housing inside the vacuum chamber, at 11 cm from the center of the cathode. The detector is equipped with filters that allow for differentiating various spectral regions, and with a front collimating tube that provides a spatial resolution \u2248 0.5 cm. Using various views of the plasma, the measured absolutely calibrated photon rates enable calculation of emissivities and irradiances based on a model of the ionization region. We present results that demonstrate that Al++ ions are responsible for most of the VUV irradiance. We also discuss the photoelectric emission due to irradiances on the target ~ 2\u00d710^18 s-1 cm-2 produced by high energy photons from resonance lines of Ar+." }, { "instance_id": "R141593xR141550", "comparison_id": "R141593", "paper_id": "R141550", "text": "The effect of VUV radiation from Ar/O2 plasmas on low-k SiOCH films The degradation of porous low-k materials, like SiOCH, under plasma processing continues to be a problem in the next generation of integrated-circuit fabrication. Due to the exposure of the film to many species during plasma treatment, such as photons, ions, radicals, etc, it is difficult to identify the mechanisms responsible for plasma-induced damage. Using a vacuum beam apparatus with a calibrated Xe vacuum ultraviolet (VUV) lamp, we show that 147 nm VUV photons and molecular O2 alone can damage these low-k materials. 
Using Fourier-transform infrared (FTIR) spectroscopy, we show that VUV/O2 exposure causes a loss of methylated species, resulting in a hydrophilic, SiOx-like layer that is susceptible to H2O absorption, leading to an increased dielectric constant. The effect of VUV radiation on chemical modification of porous SiOCH films in the vacuum beam apparatus and in Ar and O2 plasma exposure was found to be a significant contributor to dielectric damage. Measurements of dielectric constant change using a mercury probe are consistent with chemical modification inferred from FTIR analysis. Furthermore, the extent of chemical modification appears to be limited by the penetration depth of the VUV photons, which is dependent on wavelength of radiation. The creation of a SiOx-like layer near the surface of the material, which grows deeper as more methyl is extracted, introduces a dynamic change of VUV absorption throughout the material over time. As a result, the rate of methyl loss is continuously changing during the exposure. We present a model that attempts to capture this dynamic behaviour and compare the model predictions to experimental data through a fitting parameter that represents the effective photo-induced methyl removal. While this model accurately simulates the methyl loss through VUV exposure by the Xe lamp and Ar plasma, the methyl loss from VUV photons in O2 plasma are only accurately depicted at longer exposure times. We conclude that other species, such as oxygen radicals or ions, may play a major role in chemical modification at short times near the surface of the material, while VUV photons contribute to the majority of the damage in the bulk." 
}, { "instance_id": "R141593xR141542", "comparison_id": "R141593", "paper_id": "R141542", "text": "Comparison of vacuum ultra-violet emission of Ar/CF4 and Ar/CF3I capacitively coupled plasmas Spectra in the vacuum-ultraviolet range (VUV, 30 nm\u2013200 nm) as well as in the ultraviolet (UV) and visible ranges (UV+vis, 200 nm\u2013800 nm) were measured from Ar/CF3I and Ar/CF4 discharges. The discharges were generated in an industrial 300 mm capacitively coupled plasma source with 27 MHz radio-frequency power. It was seen that the measured spectra were strongly modified. This is mainly due to absorption, especially by CF3I, and Ar self-trapping along the line of sight, towards the detector and in the plasma itself. The estimated unabsorbed VUV spectra were revealed from the spectra of mixtures with low fluorocarbon gas content by means of normalization with unabsorbed I* emission, at 206 nm, and CF band (1B1(0,v',0)\u21921A1(0,0,0)) emission between 230 nm and 430 nm. Absolute fluences of UV CF emission were derived using hybrid 1-dimensional (1D) particle-in-cell (PIC) Monte-Carlo (MC) model calculations. Absolute calibration of the VUV emission was performed using these calculated values from the model, which has never been done previously for real etch conditions in an industrial chamber. It was seen that the argon resonant lines play a significant role in the VUV spectra. These lines are dominant in the case of etching recipes close to the standard ones. The restored unabsorbed spectra confirm that replacement of conventional CF4 etchant gas with CF3I in low-k etching recipes leads to an increase in the overall VUV emission intensity. However, emission from Ar exhibited the most intense peaks. Damage to low-k SiCOH glasses by the estimated VUV was calculated for blanket samples with a pristine k-value of 2.2. The calculations were then compared with Fourier transform infrared (FTIR) data for samples exposed to similar experimental conditions in the same reactor. 
It was shown that Ar emission plays the most significant role in VUV-induced damage." }, { "instance_id": "R141593xR141452", "comparison_id": "R141593", "paper_id": "R141452", "text": "HBr Plasma Treatment Versus VUV Light Treatment to Improve 193\u2009nm Photoresist Pattern Linewidth Roughness We have studied the impact of HBr plasma treatment and the role of the VUV light emitted by this plasma on the chemical modifications and resulting roughness of both blanket and patterned photoresists. The experimental results show that both treatments lead to similar resist bulk chemical modifications that result in a decrease of the resist glass transition temperature (Tg). This drop in Tg allows polymer chain rearrangement that favors surface roughness smoothening. The smoothening effect is mainly attributed to main chain scission induced by plasma VUV light. For increased VUV light exposure time, the crosslinking mechanism dominates over main chain scission and limits surface roughness smoothening. In the case of the HBr plasma treatment, the synergy between Bromine radicals and VUV light leads to the formation of dense graphitized layers on top and sidewalls surfaces of the resist pattern. The presence of a dense layer on the HBr cured resist sidewalls prevents from resist pattern reflowing but on the counter side leads to increased surface roughness and linewidth roughness compared to VUV light treatment." }, { "instance_id": "R141699xR141656", "comparison_id": "R141699", "paper_id": "R141656", "text": "Natural Biowaste-Cocoon-Derived Granular Activated Carbon-Coated ZnO Nanorods: A Simple Route To Synthesizing a Core\u2013Shell Structure and Its Highly Enhanced UV and Hydrogen Sensing Properties Granular activated carbon (GAC) materials were prepared via simple gas activation of silkworm cocoons and were coated on ZnO nanorods (ZNRs) by the facile hydrothermal method. 
The present combination of GAC and ZNRs shows a core-shell structure (where the GAC is coated on the surface of ZNRs), as revealed by systematic material analysis. The as-prepared samples were then fabricated as dual-functional sensors and, most fascinatingly, the as-fabricated core-shell structure exhibits better UV and H2 sensing properties than those of as-fabricated ZNRs and GAC. Thus, the present core-shell structure-based H2 sensor exhibits fast responses of 11% (10 ppm) and 23.2% (200 ppm) with ultrafast response and recovery. However, the UV sensor offers an ultrahigh photoresponsivity of 57.9 A W-1, which is superior to that of as-grown ZNRs (0.6 A W-1). Besides this, the switching photoresponse of GAC/ZNR core-shell structures exhibits a higher switching ratio (between dark and photocurrent) of 1585, with ultrafast response and recovery, than that of as-grown ZNRs (40). Because of the fast adsorption ability of GAC, it was observed that the fine distribution of GAC on ZNRs results in rapid electron transport between the conduction bands of GAC and ZNRs while sensing H2 and UV. Furthermore, the present core-shell structure-based UV and H2 sensors also retained excellent sensitivity, repeatability, and long-term stability. Thus, the salient feature of this combination is that it provides a dual-functional sensor with biowaste cocoon and ZnO, which is ecological and inexpensive." }, { "instance_id": "R141699xR141640", "comparison_id": "R141699", "paper_id": "R141640", "text": "Photoluminescence based H2 and O2 gas sensing by ZnO nanowires Gas sensing properties of ZnO nanowires prepared via thermal chemical vapor deposition method were investigated by analyzing change in their photoluminescence (PL) spectra. The as-synthesized nanowires show two different PL peaks positioned at 380 nm and 520 nm. The 380 nm emission is ascribed to near band edge emission, and the green peak (520 nm) appears due to the oxygen vacancy defects. 
The intensity of the green PL signal is enhanced upon hydrogen gas exposure, whereas it is quenched upon oxygen gas loading. The ZnO nanowires' sensing response values were observed as about 54% for H2 gas and 9% for O2 gas at room temperature for 50 sccm H2/O2 gas flow rate. The sensor response was also analyzed as a function of sample temperature ranging from 300 K to 400 K. A conclusion was derived from the observations that the H2/O2 gases affect the adsorbed oxygen species on the surface of ZnO nanowires. The adsorbed species result in band bending and hence change the depletion region, which causes variation i..." }, { "instance_id": "R141699xR141644", "comparison_id": "R141699", "paper_id": "R141644", "text": "Fabrication of a highly flexible low-cost H2 gas sensor using ZnO nanorods grown on an ultra-thin nylon substrate A \u201chighly flexible low-cost\u201d H2 gas sensor was fabricated via inclined and vertically well-aligned ZnO nanorods on a \u201ccheap, thin (15 \u00b5m), and highly flexible\u201d nylon substrate using the hydrothermal method. Morphological, crystallinity, and optical properties of the prepared ZnO nanorods were studied by field emission scanning electron microscopy, transmission electron microscopy, energy-dispersive X-ray analysis, X-ray diffraction, and photoluminescence measurements. Results revealed the formation of aligned hexagonal-like nanorods with high aspect ratio and density. The results confirmed the formation of w\u00fcrtzite ZnO phase with a preferred orientation along the (002) direction with high crystallinity, excellent quality, and few defects. The sensitivity and response time behaviors of the ZnO-based gas sensor to hydrogen gas at different operation temperatures and in various hydrogen concentrations were investigated. Under 500 ppm of H2 exposure at different temperatures from room temperature to 180 \u00b0C, the sensitivity increased from 109 to 264 %. 
When the exposed H2 gas increased from 750 to 2000 ppm at a fixed temperature of 75 \u00b0C, the sensitivity also sharply increased from 246 to 578 %. Moreover, both the response and recovery time of the device during both tests were enhanced. The hydrogen gas sensing mechanisms of ZnO nanorods in low and high operation temperatures were discussed." }, { "instance_id": "R141699xR141634", "comparison_id": "R141699", "paper_id": "R141634", "text": "Ethanol sensing with Au-modified ZnO microwires Abstract A room temperature ethanol sensor based on Au-modified zinc oxide microwires (Au/ZnO MWs) is demonstrated. Au nanoparticles (Au NPs) were immobilized via ion sputtering onto the surface of CVD-fabricated ZnO microwires (ZnO MWs) to serve as sensitizers. Au modification was characterized by scanning electron microscopy, energy-dispersive X-ray spectroscopy, X-ray diffraction, and photoluminescence. Gas-sensing tests indicated that the modified microwires exhibited an enhanced performance relative to the unmodified ZnO microwires over a wide gas concentration range, with very stable repeatability, fast recovery time and high sensing response and selectivity. A sensing mechanism is described in terms of localized surface plasmon resonance (LSPR) effect of Au NPs. The superior sensing characteristics indicate the device's potential applications as room-temperature gas sensor, and a straightforward and economical fabrication also makes it very attractive for more widespread use." }, { "instance_id": "R141699xR141611", "comparison_id": "R141699", "paper_id": "R141611", "text": "UV-activated room-temperature gas sensing mechanism of polycrystalline ZnO The effects of UV illumination on the electronic properties and gas sensing performance of ZnO are reported. It is found that UV light improves the sensitivity and the sensor response and recovery rate. 
By investigating the photoresponse behavior of ZnO, we observe that the electrons generated by UV light promote the adsorption of oxygen and form the photoinduced oxygen ions [O2\u2212(hv)]. These ions [O2\u2212(hv)] are responsible for the room-temperature gas sensing phenomena and promise enhanced sensor performance through further optimization." }, { "instance_id": "R141699xR141652", "comparison_id": "R141699", "paper_id": "R141652", "text": "Alcohol sensing performance of ZnO hexagonal nanotubes at low temperatures: A qualitative understanding Abstract ZnO hexagonal nanotube array was synthesized on fluorine doped tin-oxide (FTO) coated glass substrate (thickness: 1.1 mm, surface Resistivity: \u223c10 \u03a9/sq), by a two-step process consisting of electro-deposition and electrochemical etching. Aqueous solution of equi-molar zinc nitrate hexahydrate and hexamethylenetetramine was used as the electrolyte for synthesis of ZnO hexagonal nanorods by electro-deposition technique with \u22121.8 V potential at 80 \u00b0C for 40 min. Grown hexagonal nanorods were then electrochemically etched to form the nanotube array by using ethylenediamine at 75 \u00b0C for 2 hours with \u22120.06 V bias. After detailed structural characterizations, resistive mode alcohol sensing (10\u2013700 ppm) was carried out in the temperature range of 27 \u00b0C\u2013150 \u00b0C. 75 \u00b0C was found to be the optimum operating temperature for ethanol, methanol and 2-propanol detection. However, at even room temperature (27 \u00b0C), the sensor offered promising response characteristics towards alcohols. At this temperature, for 10 ppm of ethanol, methanol and 2-propanol, response magnitude was observed to be \u223c30%, \u223c18% and \u223c10%, respectively; while for 700 ppm the corresponding response magnitude was \u223c64%, \u223c51%, \u223c48%, respectively. 
Under the influence of humidity, the baseline resistance decreased with increased humidity while the corresponding change in response magnitude (compared to dry air) was found to be insignificant. A qualitative model has been demonstrated correlating the surface to volume ratio of the nanotubes with the response characteristics." }, { "instance_id": "R141699xR141618", "comparison_id": "R141699", "paper_id": "R141618", "text": "Enhancement of CuO and ZnO nanowires methanol sensing properties with diode-based structure Metal oxides nanostructures are important materials involving in the development of gas detection systems, but most of them only working at elevated temperature. A diode based structure of p-type copper oxide (CuO) and n-type zinc oxide (ZnO) nanowires (NWs) on silicon, which posses rectifying I\u2013V characteristic, was fabricated to overcome this drawback. Gas sensing characteristics of CuO NWs and ZnO NWs with and without diode structure have been examined by measuring the resistance change towards 0.5% methanol vapour at room temperature. The diode based structures showed significant improvement in sensing behaviours. The implementation of CuO NWs and ZnO NWs with diode based structures showed great enhancement in terms of sensitivity, reliability and recovery rate. The findings can contribute to the development of room temperature gas sensing system. The fabrication procedures and working principles of the diode structures are detailed in this paper. \u00a9 2014 Elsevier B.V. All rights reserved." }, { "instance_id": "R141699xR141647", "comparison_id": "R141699", "paper_id": "R141647", "text": "Room temperature ferromagnetism and gas sensing in ZnO nanostructures: Influence of intrinsic defects and Mn, Co, Cu doping Abstract Undoped and transition metal (Cu, Co and Mn) doped ZnO nanostructures were successfully prepared via a microwave-assisted hydrothermal method followed by annealing at 500 \u00b0C. 
Numerous characterization techniques such as X-ray powder diffraction (XRD), field emission scanning electron microscopy (FESEM), and high-resolution transmission electron microscopy (HRTEM) were employed to acquire the structural and morphological information of the prepared ZnO based products. Combination of defect structure analysis based on photoluminescence (PL) and electron paramagnetic resonance (EPR) indicated that co-existing oxygen vacancy (VO) and zinc interstitial (Zni) defects are responsible for the observed ferromagnetism in undoped and transition metal (TM) doped ZnO systems. PL analysis demonstrated that undoped ZnO has more donor defects (VO and Zni), which are beneficial for gas response enhancement. The undoped ZnO based sensor exhibited a higher sensor response to NH3 gas compared to its counterparts owing to its high content of donor defects, while transition metal doped sensors showed short response and recovery times compared to undoped ZnO." }, { "instance_id": "R141699xR141621", "comparison_id": "R141699", "paper_id": "R141621", "text": "Probing the highly efficient room temperature ammonia gas sensing properties of a luminescent ZnO nanowire array prepared via an AAO-assisted template route
A highly ordered luminescent ZnO nanowire array was synthesized, which exhibits excellent sensitivity and a fast response to NH3 gas.
" }, { "instance_id": "R141723xR140879", "comparison_id": "R141723", "paper_id": "R140879", "text": "Is Growth Obsolete? A long decade ago economic growth was the reigning fashion of political economy. It was simultaneously the hottest subject of economic theory and research, a slogan eagerly claimed by politicians of all stripes, and a serious objective of the policies of governments. The climate of opinion has changed dramatically. Disillusioned critics indict both economic science and economic policy for blind obeisance to aggregate material \"progress,\" and for neglect of its costly side effects. Growth, it is charged, distorts national priorities, worsens the distribution of income, and irreparably damages the environment. Paul Erlich speaks for a multitude when he says, \"We must acquire a life style which has as its goal maximum freedom and happiness for the individual, not a maximum Gross National Product.\" Growth was in an important sense a discovery of economics after the Second World War. Of course economic development has always been the grand theme of historically minded scholars of large mind and bold concept, notably Marx, Schumpeter, Kuznets. But the mainstream of economic analysis was not comfortable with phenomena of change and progress. The stationary state was the long-run equilibrium of classical and neoclassical theory, and comparison of alternative static equilibriums was the most powerful theoretical tool. Technological change and population increase were most readily accommodated as one-time exogenous shocks; comparative static analysis could be used to tell how they altered the equilibrium of the system. 
The obvious fact that these \"shocks\" were occurring continuously, never allowing the" }, { "instance_id": "R141723xR141039", "comparison_id": "R141723", "paper_id": "R141039", "text": "A Framework to measure the progress of societies Over the last three decades, a number of frameworks have been developed to promote and measure well-being, quality of life, human development and sustainable development. Some frameworks use a conceptual approach while others employ a consultative approach, and different initiatives to measure progress will require different frameworks. The aim of this paper is to present a proposed framework for measuring the progress of societies, and to compare it with other progress frameworks that are currently in use around the world. The framework does not aim to be definitive, but rather to suggest a common starting point that the authors believe is broad-based and flexible enough to be applied in many situations around the world. It is also the intention that the framework could be used to identify gaps in existing statistical standards and to guide work to fill these gaps." }, { "instance_id": "R141747xR140961", "comparison_id": "R141747", "paper_id": "R140961", "text": "Has Australia surpassed its optimal macroeconomic scale? Finding out with the aid of `benefit' and `cost' accounts and a sustainable net benefit index Abstract The sustainable economic welfare of a nation depends largely on the sustainable net benefits the macroeconomy confers to its citizens. For this reason, an optimal macroeconomic scale can be considered one where the physical scale of the macroeconomy and the qualitative nature of the stock of wealth it comprises maximises a nation's sustainable net benefits. The corollary of this definition is thus: the physical scale of the macroeconomy should grow only if, in the process, the sustainable net benefits of a nation increase. It should cease to grow once sustainable net benefits are maximised. 
Whilst it is one thing to promote an optimal macroeconomic scale, it is another entirely to gain an appreciation of the sustainable net benefits yielded by the macroeconomy. Gaining such an appreciation constitutes the central aim of this paper. With the assistance of two separate `benefit' and `cost' accounts to replace gross domestic product (GDP), a sustainable net benefit index is constructed for Australia for the period 1966\u20131967 to 1994\u20131995. The index, particularly at the per capita level, indicates that economic welfare in Australia is declining (i.e. the average Australian is getting `poorer') despite per capita real GDP increasing. The index therefore suggests that the scale of the Australian macroeconomy has probably well exceeded the optimum." }, { "instance_id": "R141747xR140879", "comparison_id": "R141747", "paper_id": "R140879", "text": "Is Growth Obsolete? A long decade ago economic growth was the reigning fashion of political economy. It was simultaneously the hottest subject of economic theory and research, a slogan eagerly claimed by politicians of all stripes, and a serious objective of the policies of governments. The climate of opinion has changed dramatically. Disillusioned critics indict both economic science and economic policy for blind obeisance to aggregate material \"progress,\" and for neglect of its costly side effects. Growth, it is charged, distorts national priorities, worsens the distribution of income, and irreparably damages the environment. Paul Erlich speaks for a multitude when he says, \"We must acquire a life style which has as its goal maximum freedom and happiness for the individual, not a maximum Gross National Product.\" Growth was in an important sense a discovery of economics after the Second World War. Of course economic development has always been the grand theme of historically minded scholars of large mind and bold concept, notably Marx, Schumpeter, Kuznets. 
But the mainstream of economic analysis was not comfortable with phenomena of change and progress. The stationary state was the long-run equilibrium of classical and neoclassical theory, and comparison of alternative static equilibriums was the most powerful theoretical tool. Technological change and population increase were most readily accommodated as one-time exogenous shocks; comparative static analysis could be used to tell how they altered the equilibrium of the system. The obvious fact that these \"shocks\" were occurring continuously, never allowing the" }, { "instance_id": "R141752xR141208", "comparison_id": "R141752", "paper_id": "R141208", "text": "Smart Cities and Sustainability Models In our age cities are complex systems and we can say systems of systems. Today locality is the result of using information and communication technologies in all departments of our life, but in future all cities must to use smart systems for improve quality of life and on the other hand for sustainable development. The smart systems make daily activities more easily, efficiently and represent a real support for sustainable city development. This paper analysis the sus-tainable development and identified the key elements of future smart cities." }, { "instance_id": "R141752xR141201", "comparison_id": "R141752", "paper_id": "R141201", "text": "Will the real smart city please stand up?: Intelligent, progressive or entrepreneurial? Debates about the future of urban development in many Western countries have been increasingly influenced by discussions of smart cities. Yet despite numerous examples of this \u2018urban labelling\u2019 phenomenon, we know surprisingly little about so\u2010called smart cities, particularly in terms of what the label ideologically reveals as well as hides. 
Due to its lack of definitional precision, not to mention an underlying self\u2010congratulatory tendency, the main thrust of this article is to provide a preliminary critical polemic against some of the more rhetorical aspects of smart cities. The primary focus is on the labelling process adopted by some designated smart cities, with a view to problematizing a range of elements that supposedly characterize this new urban form, as well as question some of the underlying assumptions/contradictions hidden within the concept. To aid this critique, the article explores to what extent labelled smart cities can be understood as a high\u2010tech variation of the \u2018entrepreneurial city\u2019, as well as speculates on some general principles which would make them more progressive and inclusive." }, { "instance_id": "R141752xR141227", "comparison_id": "R141752", "paper_id": "R141227", "text": "Modelling the smart city performance This paper aims to offer a profound analysis of the interrelations between smart city components connecting the cornerstones of the triple helix. The triple helix model has emerged as a reference framework for the analysis of knowledge-based innovation systems, and relates the multiple and reciprocal relationships between the three main agencies in the process of knowledge creation and capitalization: university, industry and government. This analysis of the triple helix will be augmented using the Analytic Network Process to model, cluster and begin measuring the performance of smart cities. The model obtained allows interactions and feedbacks within and between clusters, providing a process to derive ratio scales priorities from elements. This offers a more truthful and realistic representation for supporting policy-making. The application of this model is still to be developed, but a full list of indicators, available at urban level, has been identified and selected from literature review." 
}, { "instance_id": "R141752xR141265", "comparison_id": "R141752", "paper_id": "R141265", "text": "Distributed Framework for Electronic Democracy in Smart Cities Architectural modules based on dual citizen and government participation platforms provide an economically viable way to implement, standardize, and scale services and information exchange-functions essential to citizens' participation in a smart city democracy." }, { "instance_id": "R141752xR141218", "comparison_id": "R141752", "paper_id": "R141218", "text": "Understanding Smart Cities: An Integrative Framework Making a city \"smart\" is emerging as a strategy to mitigate the problems generated by the urban population growth and rapid urbanization. Yet little academic research has sparingly discussed the phenomenon. To close the gap in the literature about smart cities and in response to the increasing use of the concept, this paper proposes a framework to understand the concept of smart cities. Based on the exploration of a wide and extensive array of literature from various disciplinary areas we identify eight critical factors of smart city initiatives: management and organization, technology, governance, policy context, people and communities, economy, built infrastructure, and natural environment. These factors form the basis of an integrative framework that can be used to examine how local governments are envisioning smart city initiatives. The framework suggests directions and agendas for smart city research and outlines practical implications for government professionals." }, { "instance_id": "R141752xR141224", "comparison_id": "R141752", "paper_id": "R141224", "text": "Smart cities in perspective \u2013 a comparative European study by means of self-organizing maps Cities form the heart of a dynamic society. In an open space-economy cities have to mobilize all of their resources to remain attractive and competitive. 
Smart cities depend on creative and knowledge resources to maximize their innovation potential. This study offers a comparative analysis of nine European smart cities on the basis of an extensive database covering two time periods. After conducting a principal component analysis, a new approach, based on a self-organizing map analysis, is adopted to position the various cities under consideration according to their selected \u201csmartness\u201d performance indicators." }, { "instance_id": "R141752xR141250", "comparison_id": "R141752", "paper_id": "R141250", "text": "Aspirations and Realizations: The Smart City of Seattle Smart city initiatives have been launched on every continent. That notwithstanding the concept of \u201csmart city\u201d has remained ambiguous. We systematically interviewed officials of an acclaimed Smart City (Seattle) and explicitly asked the officials for their own definitions of \u201csmart city,\u201d which we then compared to the respective projects run by that City. While the definitions given by the practitioners were found different from those in the literature, the smart city projects lived up and matched the practitioner definitions to a high degree. We document the projects and their expected and realized benefits, which illustrate where a leading City government is headed in terms of smart government. However, \u201cSmart City\u201d initiatives in local government might be only a steppingstone in making the greater urban space a \u201csmart city,\u201d which appears to be a more challenging undertaking." }, { "instance_id": "R141752xR141253", "comparison_id": "R141752", "paper_id": "R141253", "text": "A Smart City Initiative: the Case of Barcelona Information and communication technology is changing the way in which cities organise policymaking and urban growth. 
Smart Cities base their strategy on the use of information and communication technologies in several fields such as economy, environment, mobility and governance to transform the city infrastructure and services. This paper draws on the city of Barcelona and intends to analyse its transformation from a traditional agglomeration to a twenty-first century metropolis. The case of Barcelona is of special interest due to its apparent desire, reflected by its current policies regarding urban planning, to be considered as a leading metropolis in Europe. Hence, an assessment of the Smart City initiative will cast light on the current status of Barcelona\u2019s urban policy and its future directions. This article analyses Barcelona\u2019s transformation in the areas of Smart City management; drivers, bottlenecks, conditions and assets. First, it presents the existing literature on Barcelona\u2019s Smart City initiative. Then, the case study analysis is presented with the Barcelona Smart City model. After describing this model, we further explore the main components of the Smart City strategy of Barcelona in terms of Smart districts, living labs, initiatives, e-Services, infrastructures and Open Data. This paper also reveals certain benefits and challenges related to this initiative and its future directions. The results of the case study analysis indicate that Barcelona has been effectively implementing the Smart City strategy with an aim to be a Smart City model for the world." }, { "instance_id": "R141752xR141256", "comparison_id": "R141752", "paper_id": "R141256", "text": "Toward Intelligent Thessaloniki: from an Agglomeration of Apps to Smart Districts The new planning paradigm of \u201cintelligent cities\u201d is replacing the principles of smart growth and new urbanism which have inspired urban planning over the past 20 years. 
The \u201cIntelligent Thessaloniki\u201d case study highlights how a city is adopting this new paradigm and how the deployment of broadband networks, smart urban spaces, web-based applications and e-services is helping every district of the city to address its particular objectives of competitiveness and sustainable development. The paper examines the current state of development of broadband infrastructure and e-services in the city of Thessaloniki, the strategy that has been adopted to stimulate the future development of the city with respect to smart environments and districts, and the gaps and bottlenecks influencing this transformation of the city. The conclusions stress that a new orientation of urban governance is needed to address the challenges of digital literacy, creativity in the making of smart environments and business models for the long-term sustainability of e-services enhancing urban intelligence." }, { "instance_id": "R141752xR141233", "comparison_id": "R141752", "paper_id": "R141233", "text": "Smart networked cities? This paper aims to critically assess the lack of a global inter-urban perspective in the smart city policy framework from a conceptual standpoint. We argue here that the smart city policy agenda should be informed by and address the structure of transnational urban networks as this can affect the efficiency of such local policies. The significance of this global network structure is essential as cities do not exist in a vacuum. On the contrary, urban development is heavily based on urban interdependencies found at a global scale. After critically analyzing smart city characteristics and the world city network literature, we identify the need for global urban interdependencies to be addressed in a smart city policy framework. While this paper approaches this issue from a theoretical standpoint, some policy examples are also provided." 
}, { "instance_id": "R141752xR141214", "comparison_id": "R141752", "paper_id": "R141214", "text": "Conceptualizing smart city with dimensions of technology, people, and institutions This conceptual paper discusses how we can consider a particular city as a smart one, drawing on recent practices to make cities smart. A set of the common multidimensional components underlying the smart city concept and the core factors for a successful smart city initiative is identified by exploring current working definitions of smart city and a diversity of various conceptual relatives similar to smart city. The paper offers strategic principles aligning to the three main dimensions (technology, people, and institutions) of smart city: integration of infrastructures and technology-mediated services, social learning for strengthening human infrastructure, and governance for institutional improvement and citizen engagement." }, { "instance_id": "R141752xR141221", "comparison_id": "R141752", "paper_id": "R141221", "text": "Towards a smart State? Inter-agency collaboration, information integration, and beyond Information technologies IT can now be considered one of the key components of government administrative reform. The potential is even greater when working across organizational boundaries. Unfortunately, inter-agency collaboration appears to face an even greater number of challenges than similar IT initiatives within a single organization. The challenges include data and technological incompatibility, the lack of institutional incentives to collaborate, and the politics and power struggles around a pervasive silo structure in most governments, among many others. This paper argues that there are clear trends towards greater inter-organizational collaboration, information sharing, and integration, which could lead, in the near future, to what might be called a smart State. 
The paper starts discussing the promises and challenges that have already been identified for government information sharing and integration initiatives. Then it describes two trends in terms of inter-organizational collaboration and information technologies in government settings. The paper ends by providing reflections about the technical and political feasibility, as well as the social desirability, of an integrated virtual State in which the executive, legislative, and judicial branches^1 are actively collaborating and sharing information through the use of advanced information technologies, sophisticated coordination mechanisms, shared physical infrastructure, and, potentially, new organizational and institutional arrangements." }, { "instance_id": "R141752xR141205", "comparison_id": "R141752", "paper_id": "R141205", "text": "Governance Infrastructures in 2020 A governance infrastructure is the collection of technologies and systems, people, policies, practices, and relationships that interact to support governing activities. Information technology, especially communication and computational technologies, continues to augment society\u2019s ability to organize, interact, and govern. As we think about the future of governance, this article challenges us to move beyond questions of how to best manage government institutions to how to design smart governance systems with the appropriate incentives and rules to harness and coordinate the enthusiasm and capabilities of those governed. This article anticipates how the interaction of technology and society can be leveraged to mindfully design an interaction-defined, participation-based governance infrastructure to return power to the people while increasing accountability. Supporting examples of such governance approaches already exist and are regularly emerging in distributed organizations, online communities, nonprofits, and governments." 
}, { "instance_id": "R141780xR141687", "comparison_id": "R141780", "paper_id": "R141687", "text": "Multifunctional N,S co-doped carbon quantum dots with pH- and thermo-dependent switchable fluorescent properties and highly selective detection of glutathione Abstract Smart and multifunctional materials that respond to environmental changes have attracted extensive attention in the last few years, because they are suitable for various applications, such as biosensing, biotechnology and drug delivery. Herein, multifunctional N,S co-doped carbon quantum dots (N,S-CQDs) with pH-dependent and color-switchable fluorescent properties were synthesized directly from L-cysteine and NH3\u00b7H2O by a one-step hydrothermal route at 100 \u00b0C. The N,S-CQDs are responsive to pH and exhibit color-switchable fluorescence performance between alkaline and acidic environments with good reversibility. A plausible mechanism is also proposed. The N,S-CQDs exhibit good ionic stability, good biocompatibility and temperature-sensitive fluorescent properties. In addition, the N,S-CQDs show highly selective detection of glutathione over other biothiols such as Cys and Hcy, which makes them suitable sensor reagents for glutathione detection. Given these excellent performances, the as-synthesized N,S-CQDs have great potential for pH sensing, temperature sensing and bioengineering applications in the near future." }, { "instance_id": "R141780xR141715", "comparison_id": "R141780", "paper_id": "R141715", "text": "One-step synthesis of multi-emission carbon nanodots for ratiometric temperature sensing Abstract Measuring temperature with greater precision at localized small length scales or in a nonperturbative manner is a necessity in widespread applications, such as integrated photonic devices, micro/nano electronics, biology, and medical diagnostics.
In this context, the use of nanoscale fluorescent temperature probes is regarded as the most promising method for temperature sensing because they are noninvasive, accurate, and enable remote micro/nanoscale imaging. Here, we propose a novel ratiometric fluorescent sensor for nanothermometry using carbon nanodots (C-dots). The C-dots were synthesized by a one-step method using femtosecond laser ablation and exhibit a unique multi-emission property due to emissions from abundant functional groups on their surface. The as-prepared C-dots demonstrate excellent ratiometric temperature sensing under single-wavelength excitation, achieving high temperature sensitivity with a ratiometric response of 1.48% per \u00b0C over a wide temperature range (5\u201385 \u00b0C) in aqueous buffer. The ratiometric sensor shows excellent reversibility and stability, holding great promise for the accurate measurement of temperature in many practical applications." }, { "instance_id": "R141780xR141661", "comparison_id": "R141780", "paper_id": "R141661", "text": "Fluorescent N-Doped Carbon Dots as in Vitro and in Vivo Nanothermometer The fluorescent N-doped carbon dots (N-CDs) obtained from C3N4 emit strong blue fluorescence, which is stable across different ionic strengths and over time. The fluorescence intensity of N-CDs decreases as the temperature increases and recovers to its initial value as the temperature decreases. The fluorescence intensity shows an accurately linear response to temperature, which may be attributed to the synergistic effect of abundant oxygen-containing functional groups and hydrogen bonds. Further experiments also demonstrate that N-CDs can serve as an effective in vitro and in vivo fluorescence-based nanothermometer." }, { "instance_id": "R141780xR141724", "comparison_id": "R141780", "paper_id": "R141724", "text": "Intracellular ratiometric temperature sensing using fluorescent carbon dots
A self-referencing dual fluorescing carbon dot-based nanothermometer can ratiometrically sense thermal events in HeLa cells with very high sensitivity.
" }, { "instance_id": "R141780xR141708", "comparison_id": "R141780", "paper_id": "R141708", "text": "N,S co-doped carbon dots as a stable bio-imaging probe for detection of intracellular temperature and tetracycline
N,S-CDs display an unambiguous bioimaging ability in the detection of intracellular temperature and tetracycline with satisfactory results.
" }, { "instance_id": "R141780xR141748", "comparison_id": "R141780", "paper_id": "R141748", "text": "Dual functional highly luminescence B, N Co-doped carbon nanodots as nanothermometer and Fe3+/Fe2+ sensor Abstract Dual-functional fluorescence nanosensors have many potential applications in biology and medicine. Monitoring temperature with higher precision at localized small length scales or in a nanocavity is a necessity in various applications. Likewise, the detection of biologically interesting metal ions using a low-cost and sensitive approach is of great importance in bioanalysis. In this paper, we describe the preparation of dual-function highly fluorescent B, N-co-doped carbon nanodots (CDs) that work as chemical and thermal sensors. The CDs emit blue fluorescence peaked at 450 nm and exhibit up to 70% photoluminescence quantum yield while showing excitation-independent fluorescence. We also show that water-soluble CDs display temperature-dependent fluorescence and can serve as highly sensitive and reliable nanothermometers with a thermo-sensitivity of 1.8% \u00b0C\u22121 and wide-range thermo-sensing between 0\u201390 \u00b0C with excellent recovery. Moreover, the fluorescence emission of CDs is selectively quenched after the addition of Fe2+ and Fe3+ ions while showing no quenching upon the addition of other common metal cations and anions. The fluorescence emission shows a good linear correlation with the concentration of Fe2+ and Fe3+ (R2 = 0.9908 for Fe2+ and R2 = 0.9892 for Fe3+) with a detection limit of 80.0 \u00b1 0.5 nM for Fe2+ and 110.0 \u00b1 0.5 nM for Fe3+. Considering the high quantum yield and selectivity, the CDs are exploited to design a nanoprobe for iron detection in a biological sample. The fluorimetric assay is used to successfully detect Fe2+ in iron capsules and total iron in serum samples."
}, { "instance_id": "R141780xR141735", "comparison_id": "R141780", "paper_id": "R141735", "text": "A facile synthesis of high-efficient N,S co-doped carbon dots for temperature sensing application Abstract Highly efficient nitrogen- and sulfur-doped carbon dots (CDs) were prepared from L-cysteine (Cys) and trisodium citrate dihydrate using a hydrothermal method. The Cys-CDs were completely water-soluble and remarkably stable under various pH and ionic strength conditions. Cys-CDs show an absorption maximum at 350 nm and a PL maximum at 450 nm with a high fluorescence quantum yield of 68%. It is found that Cys-CDs exhibit linear temperature-dependent emission intensity responses in the 283\u2013343 K range, along with strongly temperature-dependent monoexponential decay. The mechanism of temperature-dependent fluorescence is confirmed as temperature-enhanced population of non-radiative channels by comparing radiative and nonradiative recombination rates at different temperatures. All results indicate that Cys-CDs could be a promising material for fluorescent temperature-sensing applications." }, { "instance_id": "R141782xR141227", "comparison_id": "R141782", "paper_id": "R141227", "text": "Modelling the smart city performance This paper aims to offer a profound analysis of the interrelations between smart city components connecting the cornerstones of the triple helix. The triple helix model has emerged as a reference framework for the analysis of knowledge-based innovation systems, and relates the multiple and reciprocal relationships between the three main agencies in the process of knowledge creation and capitalization: university, industry and government. This analysis of the triple helix will be augmented using the Analytic Network Process to model, cluster and begin measuring the performance of smart cities. The model obtained allows interactions and feedbacks within and between clusters, providing a process to derive ratio-scale priorities from elements.
This offers a more truthful and realistic representation for supporting policy-making. The application of this model is still to be developed, but a full list of indicators, available at urban level, has been identified and selected from literature review." }, { "instance_id": "R141782xR141230", "comparison_id": "R141782", "paper_id": "R141230", "text": "Smart Ideas for Smart Cities: Investigating Crowdsourcing for Generating and Selecting Ideas for ICT Innovation in a City Context Within this article, the strengths and weaknesses of crowdsourcing for idea generation and idea selection in the context of smart city innovation are investigated. First, smart cities are defined next to similar but different concepts such as digital cities, intelligent cities or ubiquitous cities. It is argued that the smart city-concept is in fact a more user-centered evolution of the other city-concepts which seem to be more technological deterministic in nature. The principles of crowdsourcing are explained and the different manifestations are demonstrated. By means of a case study, the generation of ideas for innovative uses of ICT for city innovation by citizens through an online platform is studied, as well as the selection process. For this selection, a crowdsourcing solution is compared to a selection made by external experts. The comparison of both indicates that using the crowd as gatekeeper and selector of innovative ideas yields a long list with high user benefits. However, the generation of ideas in itself appeared not to deliver extremely innovative ideas. Crowdsourcing thus appears to be a useful and effective tool in the context of smart city innovation, but should be thoughtfully used and combined with other user involvement approaches and within broader frameworks such as Living Labs." 
}, { "instance_id": "R141782xR141256", "comparison_id": "R141782", "paper_id": "R141256", "text": "Toward Intelligent Thessaloniki: from an Agglomeration of Apps to Smart Districts The new planning paradigm of \u201cintelligent cities\u201d is replacing the principles of smart growth and new urbanism which have inspired urban planning over the past 20 years. The \u201cIntelligent Thessaloniki\u201d case study highlights how a city is adopting this new paradigm and how the deployment of broadband networks, smart urban spaces, web-based applications and e-services is helping every district of the city to address its particular objectives of competitiveness and sustainable development. The paper examines the current state of development of broadband infrastructure and e-services in the city of Thessaloniki, the strategy that has been adopted to stimulate the future development of the city with respect to smart environments and districts, and the gaps and bottlenecks influencing this transformation of the city. The conclusions stress that a new orientation of urban governance is needed to address the challenges of digital literacy, creativity in the making of smart environments and business models for the long-term sustainability of e-services enhancing urban intelligence." }, { "instance_id": "R141782xR141214", "comparison_id": "R141782", "paper_id": "R141214", "text": "Conceptualizing smart city with dimensions of technology, people, and institutions This conceptual paper discusses how we can consider a particular city as a smart one, drawing on recent practices to make cities smart. A set of the common multidimensional components underlying the smart city concept and the core factors for a successful smart city initiative is identified by exploring current working definitions of smart city and a diversity of various conceptual relatives similar to smart city. 
The paper offers strategic principles aligning to the three main dimensions (technology, people, and institutions) of smart city: integration of infrastructures and technology-mediated services, social learning for strengthening human infrastructure, and governance for institutional improvement and citizen engagement." }, { "instance_id": "R141782xR141236", "comparison_id": "R141782", "paper_id": "R141236", "text": "Mobile Business and the Smart City: Developing a Business Model Framework to Include Public Design Parameters for Mobile City Services This article proposes a new business model framework that allows the design and analysis of value networks for mobile services in a public context. It starts from a validated business model framework that relies on 12 design parameters to evaluate business models on, and expands it by eight parameters to include important aspects that come into play when a public entity (i.e. a city government) becomes (or wants to become) involved in the value network. This new framework is then applied to the case of the 311 service offered by the City of New York. Given the quickly changing power relations in the mobile telecommunications industry, this framework offers both an academic and practical tool, enabling the comparison and analysis of mobile city service business models." }, { "instance_id": "R141782xR141211", "comparison_id": "R141782", "paper_id": "R141211", "text": "Smart Cities in Europe Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. 
Against this background, the concept of the \u201csmart city\u201d has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the \u201csmart city.\u201d We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape." }, { "instance_id": "R141782xR141262", "comparison_id": "R141782", "paper_id": "R141262", "text": "Smart city policies: A spatial approach Abstract This paper reviews the factors which differentiate policies for the development of smart cities, in an effort to provide a clear view of the strategic choices that come forth when mapping out such a strategy. The paper commences with a review and categorization of four strategic choices with a spatial reference, on the basis of the recent smart city literature and experience. The advantages and disadvantages of each strategic choice are presented. In the second part of the paper, the previous choices are illustrated through smart city strategy cases from all over the world. 
The third part of the paper includes recommendations for the development of smart cities based on the combined conclusions of the previous parts. The paper closes with a discussion of the insights that were provided and recommendations for future research areas." }, { "instance_id": "R141782xR140112", "comparison_id": "R141782", "paper_id": "R140112", "text": "Smart cities of the future Abstract Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for urban populations. We begin by defining the state of the art, explaining the science of smart cities. 
We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science of smart cities." }, { "instance_id": "R141782xR141259", "comparison_id": "R141782", "paper_id": "R141259", "text": "Smart City Reference Model: Assisting Planners to Conceptualize the Building of Smart City Innovation Ecosystems The objective of this paper is to address the smart innovation ecosystem characteristics that elucidate the assembly of all smart city notions into green, interconnected, instrumented, open, integrated, intelligent, and innovating layers composing a planning framework called, Smart City Reference Model. Since cities come in different shapes and sizes, the model could be adopted and utilized in a range of smart policy paradigms that embrace the green, broadband, and urban economies. These paradigms address global sustainability challenges at a local context.
Smart city planners could use the reference model to define the conceptual layout of a smart city and describe the smart innovation characteristics in each one of the six layers. Cases of smart cities, such as Barcelona, Edinburgh, and Amsterdam are examined to evaluate their entirety in relation to the Smart City Reference Model." }, { "instance_id": "R141782xR141224", "comparison_id": "R141782", "paper_id": "R141224", "text": "Smart cities in perspective \u2013 a comparative European study by means of self-organizing maps Cities form the heart of a dynamic society. In an open space-economy cities have to mobilize all of their resources to remain attractive and competitive. Smart cities depend on creative and knowledge resources to maximize their innovation potential. This study offers a comparative analysis of nine European smart cities on the basis of an extensive database covering two time periods. After conducting a principal component analysis, a new approach, based on a self-organizing map analysis, is adopted to position the various cities under consideration according to their selected \u201csmartness\u201d performance indicators." }, { "instance_id": "R141782xR141208", "comparison_id": "R141782", "paper_id": "R141208", "text": "Smart Cities and Sustainability Models In our age cities are complex systems, indeed systems of systems. Today locality is the result of using information and communication technologies in all departments of our life, but in the future all cities must use smart systems to improve quality of life and to support sustainable development. Smart systems make daily activities easier and more efficient and represent real support for sustainable city development. This paper analyses sustainable development and identifies the key elements of future smart cities." }, { "instance_id": "R141782xR141233", "comparison_id": "R141782", "paper_id": "R141233", "text": "Smart networked cities?
This paper aims to critically assess the lack of a global inter-urban perspective in the smart city policy framework from a conceptual standpoint. We argue here that the smart city policy agenda should be informed by and address the structure of transnational urban networks as this can affect the efficiency of such local policies. The significance of this global network structure is essential as cities do not exist in a vacuum. On the contrary, urban development is heavily based on urban interdependencies found at a global scale. After critically analyzing smart city characteristics and the world city network literature, we identify the need for global urban interdependencies to be addressed in a smart city policy framework. While this paper approaches this issue from a theoretical standpoint, some policy examples are also provided." }, { "instance_id": "R141782xR141205", "comparison_id": "R141782", "paper_id": "R141205", "text": "Governance Infrastructures in 2020 A governance infrastructure is the collection of technologies and systems, people, policies, practices, and relationships that interact to support governing activities. Information technology, especially communication and computational technologies, continues to augment society\u2019s ability to organize, interact, and govern. As we think about the future of governance, this article challenges us to move beyond questions of how to best manage government institutions to how to design smart governance systems with the appropriate incentives and rules to harness and coordinate the enthusiasm and capabilities of those governed. This article anticipates how the interaction of technology and society can be leveraged to mindfully design an interaction-defined, participation-based governance infrastructure to return power to the people while increasing accountability. 
Supporting examples of such governance approaches already exist and are regularly emerging in distributed organizations, online communities, nonprofits, and governments." }, { "instance_id": "R141782xR141265", "comparison_id": "R141782", "paper_id": "R141265", "text": "Distributed Framework for Electronic Democracy in Smart Cities Architectural modules based on dual citizen and government participation platforms provide an economically viable way to implement, standardize, and scale services and information exchange-functions essential to citizens' participation in a smart city democracy." }, { "instance_id": "R141783xR141532", "comparison_id": "R141783", "paper_id": "R141532", "text": "VUV Spectral Irradiance Measurements in H\n 2\n /He/Ar Microwave Plasmas and Comparison with Solar Data Microwave plasmas with H2 and H2/rare gas mixtures are convenient sources of VUV radiation for laboratory simulations of astrophysical media. We recently undertook an extensive study to characterize microwave plasmas in an H2/He gas mixture in order to optimize a VUV solar simulator over the 115\u2013170 nm spectral range. In this paper, we extend our investigation to the effect of the addition of Ar into H2/He plasma on the VUV spectral irradiance. Our study combines various optical diagnostics such as a VUV spectrometer and optical emission spectroscopy. Quantitative measurements of the spectral irradiance and photons flux in different mixtures are accomplished using a combination of VUV spectrometry and chemical actinometry. Results show that the Ar addition into H2/He plasma largely affects the predominant emissions of the hydrogen Ly\u03b1 line (121.6 nm) and H2 (B\u03a3u\u2013X \u03a3g) band (150\u2013170 nm). 
While a microwave plasma with 1.4% H2/He is required to mimic the entire VUV solar spectrum in the 115\u2013170 nm range, the combination with 1.28% H2/35% Ar/He is the best alternative to obtain a quasi-monochromatic spectrum with emission dominated by the Ly\u03b1 line. The maximum of the spectral irradiance is significantly higher in the ternary mixtures compared to the binary mixture of 1.4% H2/He. Further Ar increase yielded lower spectral irradiance and absolute photon fluxes. Our measured spectral irradiances are compared to VUV solar data in the 115\u2013170 nm range, emphasizing the use of microwave plasmas in astrophysical studies and laboratory simulations of planetary atmospheres." }, { "instance_id": "R141783xR108954", "comparison_id": "R141783", "paper_id": "R108954", "text": "Ultraviolet/vacuum-ultraviolet emission from a high power magnetron sputtering plasma with an aluminum target We report the in situ measurement of the ultraviolet/vacuum-ultraviolet (UV/VUV) emission from a plasma produced by high power impulse magnetron sputtering with aluminum target, using argon as background gas. The UV/VUV detection system is based upon the quantification of the re-emitted fluorescence from a sodium salicylate layer that is placed in a housing inside the vacuum chamber, at 11 cm from the center of the cathode. The detector is equipped with filters that allow for differentiating various spectral regions, and with a front collimating tube that provides a spatial resolution \u2248 0.5 cm. Using various views of the plasma, the measured absolutely calibrated photon rates enable to calculate emissivities and irradiances based on a model of the ionization region. We present results that demonstrate that Al++ ions are responsible for most of the VUV irradiance. We also discuss the photoelectric emission due to irradiances on the target ~ 2\u00d71018 s-1 cm-2 produced by high energy photons from resonance lines of Ar+." 
}, { "instance_id": "R141783xR108942", "comparison_id": "R141783", "paper_id": "R108942", "text": "Comparison of surface vacuum ultraviolet emissions with resonance level number densities. I. Argon plasmas Vacuum ultraviolet (VUV) photons emitted from excited atomic states are ubiquitous in material processing plasmas. The highly energetic photons can induce surface damage by driving surface reactions, disordering surface regions, and affecting bonds in the bulk material. In argon plasmas, the VUV emissions are due to the decay of the 1s4 and 1s2 principal resonance levels with emission wavelengths of 104.8 and 106.7 nm, respectively. The authors have measured the number densities of atoms in the two resonance levels using both white light optical absorption spectroscopy and radiation-trapping induced changes in the 3p54p\u21923p54s branching fractions measured via visible/near-infrared optical emission spectroscopy in an argon inductively coupled plasma as a function of both pressure and power. An emission model that takes into account radiation trapping was used to calculate the VUV emission rate. The model results were compared to experimental measurements made with a National Institute of Standards and Techn..." }, { "instance_id": "R141783xR108938", "comparison_id": "R141783", "paper_id": "R108938", "text": "Prediction of UV spectra and UV-radiation damage in actual plasma etching processes using on-wafer monitoring technique UV radiation during plasma processing affects the surface of materials. Nevertheless, the interaction of UV photons with surface is not clearly understood because of the difficulty in monitoring photons during plasma processing. For this purpose, we have previously proposed an on-wafer monitoring technique for UV photons. For this study, using the combination of this on-wafer monitoring technique and a neural network, we established a relationship between the data obtained from the on-wafer monitoring technique and UV spectra. 
Also, we obtained absolute intensities of UV radiation by calibrating arbitrary units of UV intensity with a 126 nm excimer lamp. As a result, UV spectra and their absolute intensities could be predicted with the on-wafer monitoring. Furthermore, we developed a prediction system with the on-wafer monitoring technique to simulate UV-radiation damage in dielectric films during plasma etching. UV-induced damage in SiOC films was predicted in this study. Our prediction results of damage..." }, { "instance_id": "R141783xR141452", "comparison_id": "R141783", "paper_id": "R141452", "text": "HBr Plasma Treatment Versus VUV Light Treatment to Improve 193\u2009nm Photoresist Pattern Linewidth Roughness We have studied the impact of HBr plasma treatment and the role of the VUV light emitted by this plasma on the chemical modifications and resulting roughness of both blanket and patterned photoresists. The experimental results show that both treatments lead to similar resist bulk chemical modifications that result in a decrease of the resist glass transition temperature (Tg). This drop in Tg allows polymer chain rearrangement that favors surface roughness smoothening. The smoothening effect is mainly attributed to main chain scission induced by plasma VUV light. For increased VUV light exposure time, the crosslinking mechanism dominates over main chain scission and limits surface roughness smoothening. In the case of the HBr plasma treatment, the synergy between bromine radicals and VUV light leads to the formation of dense graphitized layers on the top and sidewall surfaces of the resist pattern. The presence of a dense layer on the HBr-cured resist sidewalls prevents the resist pattern from reflowing but, on the other hand, leads to increased surface roughness and linewidth roughness compared to VUV light treatment."
}, { "instance_id": "R141783xR141444", "comparison_id": "R141783", "paper_id": "R141444", "text": "Low-kfilms modification under EUV and VUV radiation Modification of ultra-low-k films by extreme ultraviolet (EUV) and vacuum ultraviolet (VUV) emission with 13.5, 58.4, 106, 147 and 193 nm wavelengths and fluences up to 6 \u00d7 1018 photons cm\u22122 is studied experimentally and theoretically to reveal the damage mechanism and the most \u2018damaging\u2019 spectral region. Organosilicate glass (OSG) and organic low-k films with k-values of 1.8\u20132.5 and porosity of 24\u201351% are used in these experiments. The Si\u2013CH3 bonds depletion is used as a criterion of VUV damage of OSG low-k films. It is shown that the low-k damage is described by two fundamental parameters: photoabsorption (PA) cross-section \u03c3PA and effective quantum yield \u03c6 of Si\u2013CH3 photodissociation. The obtained \u03c3PA and \u03c6 values demonstrate that the effect of wavelength is defined by light absorption spectra, which in OSG materials is similar to fused silica. This is the reason why VUV light in the range of \u223c58\u2013106 nm having the highest PA cross-sections causes strong Si\u2013CH3 depletion only in the top part of the films (\u223c50\u2013100 nm). The deepest damage is observed after exposure to 147 nm VUV light since this emission is located at the edge of Si\u2013O absorption, has the smallest PA cross-section and provides extensive Si\u2013CH3 depletion over the whole film thickness. The effective quantum yield slowly increases with the increasing porosity but starts to grow quickly when the porosity exceeds the critical threshold located close to a porosity of \u223c50%. The high degree of pore interconnectivity of these films allows easy movement of the detached methyl radicals. The obtained results have a fundamental character and can be used for prediction of ULK material damage under VUV light with different wavelengths." 
}, { "instance_id": "R141783xR108946", "comparison_id": "R141783", "paper_id": "R108946", "text": "Quantification of the VUV radiation in low pressure hydrogen and nitrogen plasmas Hydrogen and nitrogen containing discharges emit intense radiation in a broad wavelength region in the VUV. The measured radiant power of individual molecular transitions and atomic lines between 117 nm and 280 nm are compared to those obtained in the visible spectral range and moreover to the RF power supplied to the ICP discharge. In hydrogen plasmas driven at 540 W of RF power up to 110 W are radiated in the VUV, whereas less than 2 W is emitted in the VIS. In nitrogen plasmas the power level of about 25 W is emitted both in the VUV and in the VIS. In hydrogen\u2013nitrogen mixtures, the NH radiation increases the VUV amount. The analysis of molecular and atomic hydrogen emission supported by a collisional radiative model allowed determining plasma parameters and particle densities and thus particle fluxes. A comparison of the fluxes showed that the photon fluxes determined from the measured emission are similar to the ion fluxes, whereas the atomic hydrogen fluxes are by far dominant. Photon fluxes up to 5 \u00d7 1020 m\u22122 s\u22121 are obtained, demonstrating that the VUV radiation should not be neglected in surface modifications processes, whereas the radiant power converted to VUV photons is to be considered in power balances. Varying the admixture of nitrogen to hydrogen offers a possibility to tune photon fluxes in the respective wavelength intervals." }, { "instance_id": "R141783xR108936", "comparison_id": "R141783", "paper_id": "R108936", "text": "Absolute vacuum ultraviolet flux in inductively coupled plasmas and chemical modifications of 193 nm photoresist Vacuum ultraviolet (VUV) photons in plasma processing systems are known to alter surface chemistry and may damage gate dielectrics and photoresist. 
We characterize absolute VUV fluxes to surfaces exposed in an inductively coupled argon plasma, 1\u201350 mTorr, 25\u2013400 W, using a calibrated VUV spectrometer. We also demonstrate an alternative method to estimate VUV fluence in an inductively coupled plasma (ICP) reactor using a chemical dosimeter-type monitor. We illustrate the technique with argon ICP and xenon lamp exposure experiments, comparing direct VUV measurements with measured chemical changes in 193 nm photoresist-covered Si wafers following VUV exposure." }, { "instance_id": "R141783xR141550", "comparison_id": "R141783", "paper_id": "R141550", "text": "The effect of VUV radiation from Ar/O2 plasmas on low-k SiOCH films The degradation of porous low-k materials, like SiOCH, under plasma processing continues to be a problem in the next generation of integrated-circuit fabrication. Due to the exposure of the film to many species during plasma treatment, such as photons, ions, radicals, etc, it is difficult to identify the mechanisms responsible for plasma-induced damage. Using a vacuum beam apparatus with a calibrated Xe vacuum ultraviolet (VUV) lamp, we show that 147 nm VUV photons and molecular O2 alone can damage these low-k materials. Using Fourier-transform infrared (FTIR) spectroscopy, we show that VUV/O2 exposure causes a loss of methylated species, resulting in a hydrophilic, SiOx-like layer that is susceptible to H2O absorption, leading to an increased dielectric constant. The effect of VUV radiation on chemical modification of porous SiOCH films in the vacuum beam apparatus and in Ar and O2 plasma exposure was found to be a significant contributor to dielectric damage. Measurements of dielectric constant change using a mercury probe are consistent with chemical modification inferred from FTIR analysis. Furthermore, the extent of chemical modification appears to be limited by the penetration depth of the VUV photons, which is dependent on wavelength of radiation. 
The creation of a SiOx-like layer near the surface of the material, which grows deeper as more methyl is extracted, introduces a dynamic change of VUV absorption throughout the material over time. As a result, the rate of methyl loss is continuously changing during the exposure. We present a model that attempts to capture this dynamic behaviour and compare the model predictions to experimental data through a fitting parameter that represents the effective photo-induced methyl removal. While this model accurately simulates the methyl loss through VUV exposure by the Xe lamp and Ar plasma, the methyl loss from VUV photons in O2 plasma are only accurately depicted at longer exposure times. We conclude that other species, such as oxygen radicals or ions, may play a major role in chemical modification at short times near the surface of the material, while VUV photons contribute to the majority of the damage in the bulk." }, { "instance_id": "R141844xR140961", "comparison_id": "R141844", "paper_id": "R140961", "text": "Has Australia surpassed its optimal macroeconomic scale? Finding out with the aid of `benefit' and `cost' accounts and a sustainable net benefit index Abstract The sustainable economic welfare of a nation depends largely on the sustainable net benefits the macroeconomy confers to its citizens. For this reason, an optimal macroeconomic scale can be considered one where the physical scale of the macroeconomy and the qualitative nature of the stock of wealth it comprises maximises a nation's sustainable net benefits. The corollary of this definition is thus: the physical scale of the macroeconomy should grow only if, in the process, the sustainable net benefits of a nation increase. It should cease to grow once sustainable net benefits are maximised. Whilst it is one thing to promote an optimal macroeconomic scale, it is another entirely to gain an appreciation of the sustainable net benefits yielded by the macroeconomy. 
Gaining such an appreciation constitutes the central aim of this paper. With the assistance of two separate `benefit' and `cost' accounts to replace gross domestic product (GDP), a sustainable net benefit index is constructed for Australia for the period 1966\u20131967 to 1994\u20131995. The index, particularly at the per capita level, indicates that economic welfare in Australia is declining (i.e. the average Australian is getting `poorer') despite per capita real GDP increasing. The index therefore suggests that the scale of the Australian macroeconomy has probably well exceeded the optimum." }, { "instance_id": "R141844xR140879", "comparison_id": "R141844", "paper_id": "R140879", "text": "Is Growth Obsolete? A long decade ago economic growth was the reigning fashion of political economy. It was simultaneously the hottest subject of economic theory and research, a slogan eagerly claimed by politicians of all stripes, and a serious objective of the policies of governments. The climate of opinion has changed dramatically. Disillusioned critics indict both economic science and economic policy for blind obeisance to aggregate material \"progress,\" and for neglect of its costly side effects. Growth, it is charged, distorts national priorities, worsens the distribution of income, and irreparably damages the environment. Paul Erlich speaks for a multitude when he says, \"We must acquire a life style which has as its goal maximum freedom and happiness for the individual, not a maximum Gross National Product.\" Growth was in an important sense a discovery of economics after the Second World War. Of course economic development has always been the grand theme of historically minded scholars of large mind and bold concept, notably Marx, Schumpeter, Kuznets. But the mainstream of economic analysis was not comfortable with phenomena of change and progress. 
The stationary state was the long-run equilibrium of classical and neoclassical theory, and comparison of alternative static equilibriums was the most powerful theoretical tool. Technological change and population increase were most readily accommodated as one-time exogenous shocks; comparative static analysis could be used to tell how they altered the equilibrium of the system. The obvious fact that these \"shocks\" were occurring continuously, never allowing the" }, { "instance_id": "R142822xR136201", "comparison_id": "R142822", "paper_id": "R136201", "text": "DNA barcode analysis of butterfly species from Pakistan points towards regional endemism DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north\u2010central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7\u201314.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour\u2010joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. 
The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region." }, { "instance_id": "R142822xR140187", "comparison_id": "R142822", "paper_id": "R140187", "text": "DNA Barcoding the Geometrid Fauna of Bavaria (Lepidoptera): Successes, Surprises, and Questions Background The State of Bavaria is involved in a research program that will lead to the construction of a DNA barcode library for all animal species within its territorial boundaries. The present study provides a comprehensive DNA barcode library for the Geometridae, one of the most diverse of insect families. Methodology/Principal Findings This study reports DNA barcodes for 400 Bavarian geometrid species, 98 per cent of the known fauna, and approximately one per cent of all Bavarian animal species. Although 98.5% of these species possess diagnostic barcode sequences in Bavaria, records from neighbouring countries suggest that species-level resolution may be compromised in up to 3.5% of cases. All taxa which apparently share barcodes are discussed in detail. One case of modest divergence (1.4%) revealed a species overlooked by the current taxonomic system: Eupithecia goossensiata Mabille, 1869 stat.n. is raised from synonymy with Eupithecia absinthiata (Clerck, 1759) to species rank. Deep intraspecific sequence divergences (>2%) were detected in 20 traditionally recognized species. Conclusions/Significance The study emphasizes the effectiveness of DNA barcoding as a tool for monitoring biodiversity. 
Open access is provided to a data set that includes records for 1,395 geometrid specimens (331 species) from Bavaria, with 69 additional species from neighbouring regions. Taxa with deep intraspecific sequence divergences are undergoing more detailed analysis to ascertain if they represent cases of cryptic diversity." }, { "instance_id": "R142822xR108960", "comparison_id": "R142822", "paper_id": "R108960", "text": "Use of species delimitation approaches to tackle the cryptic diversity of an assemblage of high Andean butterflies (Lepidoptera: Papilionoidea) Cryptic biological diversity has generated ambiguity in taxonomic and evolutionary studies. Single-locus methods and other approaches for species delimitation are useful for addressing this challenge, enabling the practical processing of large numbers of samples for identification and inventory purposes. This study analyzed an assemblage of high Andean butterflies using DNA barcoding and compared the identifications based on the current morphological taxonomy with three methods of species delimitation (automatic barcode gap discovery, generalized mixed Yule coalescent model, and Poisson tree processes). Sixteen potential cryptic species were recognized using these three methods, representing a net richness increase of 11.3% in the assemblage. A well-studied taxon of the genus Vanessa, which has a wide geographical distribution, appeared with the potential cryptic species that had a higher genetic differentiation at the local level than at the continental level. The analyses were useful for identifying the potential cryptic species in Pedaliodes and Forsterinaria complexes, which also show differentiation along altitudinal and latitudinal gradients. This genetic assessment of an entire assemblage of high Andean butterflies (Papilionoidea) provides baseline information for future research in a region characterized by high rates of endemism and population isolation." 
}, { "instance_id": "R142822xR140197", "comparison_id": "R142822", "paper_id": "R140197", "text": "DNA barcodes distinguish species of tropical Lepidoptera Although central to much biological research, the identification of species is often difficult. The use of DNA barcodes, short DNA sequences from a standardized region of the genome, has recently been proposed as a tool to facilitate species identification and discovery. However, the effectiveness of DNA barcoding for identifying specimens in species-rich tropical biotas is unknown. Here we show that cytochrome c oxidase I DNA barcodes effectively discriminate among species in three Lepidoptera families from Area de Conservaci\u00f3n Guanacaste in northwestern Costa Rica. We found that 97.9% of the 521 species recognized by prior taxonomic work possess distinctive cytochrome c oxidase I barcodes and that the few instances of interspecific sequence overlap involve very similar species. We also found two or more barcode clusters within each of 13 supposedly single species. Covariation between these clusters and morphological and/or ecological traits indicates overlooked species complexes. If these results are general, DNA barcoding will significantly aid species identification and discovery in tropical settings." }, { "instance_id": "R142822xR142517", "comparison_id": "R142822", "paper_id": "R142517", "text": "A DNA barcode library for 5,200 German flies and midges (Insecta: Diptera) and its implications for metabarcoding\u2010based biomonitoring This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. 
The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species\u2010level assignment, so called \u201cdark taxa.\u201d Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the \u201ctaxonomic impediment\u201d; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species\u2010rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy." 
}, { "instance_id": "R142822xR109043", "comparison_id": "R142822", "paper_id": "R109043", "text": "A DNA barcode library for the butterflies of North America Although the butterflies of North America have received considerable taxonomic attention, overlooked species and instances of hybridization continue to be revealed. The present study assembles a DNA barcode reference library for this fauna to identify groups whose patterns of sequence variation suggest the need for further taxonomic study. Based on 14,626 records from 814 species, DNA barcodes were obtained for 96% of the fauna. The maximum intraspecific distance averaged 1/4 the minimum distance to the nearest neighbor, producing a barcode gap in 76% of the species. Most species (80%) were monophyletic, the others were para- or polyphyletic. Although 15% of currently recognized species shared barcodes, the incidence of such taxa was far higher in regions exposed to Pleistocene glaciations than in those that were ice-free. Nearly 10% of species displayed high intraspecific variation (>2.5%), suggesting the need for further investigation to assess potential cryptic diversity. Aside from aiding the identification of all life stages of North American butterflies, the reference library has provided new perspectives on the incidence of both cryptic and potentially over-split species, setting the stage for future studies that can further explore the evolutionary dynamics of this group." }, { "instance_id": "R142822xR140252", "comparison_id": "R142822", "paper_id": "R140252", "text": "Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. 
This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service \u201cMonophylizer\u201d to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is \u223c23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. 
We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric\u2014conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors." }, { "instance_id": "R142822xR138562", "comparison_id": "R142822", "paper_id": "R138562", "text": "Fast Census of Moth Diversity in the Neotropics: A Comparison of Field-Assigned Morphospecies and DNA Barcoding in Tiger Moths The morphological species delimitations (i.e. morphospecies) have long been the best way to avoid the taxonomic impediment and compare insect taxa biodiversity in highly diverse tropical and subtropical regions. The development of DNA barcoding, however, has shown great potential to replace (or at least complement) the morphospecies approach, with the advantage of relying on automated methods implemented in computer programs or even online rather than in often subjective morphological features. We sampled moths extensively for two years using light traps in a patch of the highly endangered Atlantic Forest of Brazil to produce a nearly complete census of arctiines (Noctuoidea: Erebidae), whose species richness was compared using different morphological and molecular approaches (DNA barcoding). A total of 1,075 barcode sequences of 286 morphospecies were analyzed. Based on the clustering method Barcode Index Number (BIN) we found a taxonomic bias of approximately 30% in our initial morphological assessment. 
However, a morphological reassessment revealed that the correspondence between morphospecies and molecular operational taxonomic units (MOTUs) can be up to 94% if differences in genitalia morphology are evaluated in individuals of different MOTUs originated from the same morphospecies (putative cases of cryptic species), and by recording if individuals of different genders in different morphospecies merge together in the same MOTU (putative cases of sexual dimorphism). The results of two other clustering methods (i.e. Automatic Barcode Gap Discovery and 2% threshold) were very similar to those of the BIN approach. Using empirical data we have shown that DNA barcoding performed substantially better than the morphospecies approach, based on superficial morphology, to delimit species of a highly diverse moth taxon, and thus should be used in species inventories." }, { "instance_id": "R142822xR138551", "comparison_id": "R142822", "paper_id": "R138551", "text": "Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. 
This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences." }, { "instance_id": "R142822xR142535", "comparison_id": "R142822", "paper_id": "R142535", "text": "DNA Barcodes for the Northern European Tachinid Flies (Diptera: Tachinidae) This data release provides COI barcodes for 366 species of parasitic flies (Diptera: Tachinidae), enabling the DNA based identification of the majority of northern European species and a large proportion of Palearctic genera, regardless of the developmental stage. The data will provide a tool for taxonomists and ecologists studying this ecologically important but challenging parasitoid family. A comparison of minimum distances between the nearest neighbors revealed the mean divergence of 5.52% that is approximately the same as observed earlier with comparable sampling in Lepidoptera, but clearly less than in Coleoptera. Full barcode-sharing was observed between 13 species pairs or triplets, equaling to 7.36% of all species. Delimitation based on Barcode Index Number (BIN) system was compared with traditional classification of species and interesting cases of possible species oversplits and cryptic diversity are discussed. Overall, DNA barcodes are effective in separating tachinid species and provide novel insight into the taxonomy of several genera." }, { "instance_id": "R142822xR139546", "comparison_id": "R142822", "paper_id": "R139546", "text": "A DNA barcode reference library for Swiss butterflies and forester moths as a tool for species identification, systematics and conservation Butterfly monitoring and Red List programs in Switzerland rely on a combination of observations and collection records to document changes in species distributions through time. 
While most butterflies can be identified using morphology, some taxa remain challenging, making it difficult to accurately map their distributions and develop appropriate conservation measures. In this paper, we explore the use of the DNA barcode (a fragment of the mitochondrial gene COI) as a tool for the identification of Swiss butterflies and forester moths (Rhopalocera and Zygaenidae). We present a national DNA barcode reference library including 868 sequences representing 217 out of 224 resident species, or 96.9% of Swiss fauna. DNA barcodes were diagnostic for nearly 90% of Swiss species. The remaining 10% represent cases of para- and polyphyly likely involving introgression or incomplete lineage sorting among closely related taxa. We demonstrate that integrative taxonomic methods incorporating a combination of morphological and genetic techniques result in a rate of species identification of over 96% in females and over 98% in males, higher than either morphology or DNA barcodes alone. We explore the use of the DNA barcode for exploring boundaries among taxa, understanding the geographical distribution of cryptic diversity and evaluating the status of purportedly endemic taxa. Finally, we discuss how DNA barcodes may be used to improve field practices and ultimately enhance conservation strategies." }, { "instance_id": "R142850xR142744", "comparison_id": "R142850", "paper_id": "R142744", "text": "Hybrid Nanocrystals: Achieving Concurrent Therapeutic and Bioimaging Functionalities toward Solid Tumors Bioimaging and therapeutic agents accumulated in ectopic tumors following intravenous administration of hybrid nanocrystals to tumor-bearing mice. Solid, nanosized paclitaxel crystals physically incorporated fluorescent molecules throughout the crystal lattice and retained fluorescent properties in the solid state. Hybrid nanocrystals were significantly localized in solid tumors and remained in the tumor for several days. 
An anticancer effect is expected of these hybrid nanocrystals." }, { "instance_id": "R142850xR142815", "comparison_id": "R142850", "paper_id": "R142815", "text": "In vitro and in vivo antitumor activity of oridonin nanosuspension The aim of the present study was to evaluate the antitumor activity of an oridonin (ORI) nanosuspension relative to ORI solution both in vitro and in vivo. ORI nanosuspension with a particle size of 897.2+/-14.2 nm was prepared by the high pressure homogenization method (HPH). MTT assay showed that ORI nanosuspension could significantly enhance the in vitro cytotoxicity against K562 cells compared to the ORI solution, the IC(50) value at 36 h was reduced from 12.85 micromol/L for ORI solution to 8.11 micromol/L for ORI nanosuspension. Flow cytometric analysis demonstrated that the ORI nanosuspension also induced a higher apoptotic rate in K562 cells compared to ORI solution. In vivo studies in a mouse model of sarcoma-180 solid tumors demonstrated significantly greater inhibition of tumor growth following treatment with ORI nanosuspension than ORI solution at the same dosage. The mice injected with ORI nanosuspension showed a higher reduction in tumor volume and tumor weight at the dose of 20mg/kg compared to the ORI solution (P<0.01), with the tumor inhibition rate increased from 42.49% for ORI solution to 60.23% for the ORI nanosuspension. Taken together, these results suggest that the delivery of ORI in nanosuspension is a promising approach for the treatment of the tumor." }, { "instance_id": "R142850xR142804", "comparison_id": "R142850", "paper_id": "R142804", "text": "Paclitaxel nanosuspension coated with P-gp inhibitory surfactants: II. Ability to reverse the drug-resistance of H460 human lung cancer cells PURPOSE The present studies evaluated the ability of paclitaxel (PTX) nanosuspension coated with TPGS to reverse drug-resistance of P-glycoprotein (P-gp)-overexpressing H460 human lung cancer cells. 
METHOD P-gp expression level of H460 cells was detected by western blot method. MTT assay was used to investigate in vitro cytotoxicity of PTX formulations and the resistance index (RI) of H460/RT cells. At last the antitumor efficacy of PTX nanosuspension was evaluated in resistant H460 cells xenograft Balb/c mice. RESULTS The P-gp expression level of H460/RT cells was four times more than that of sensitive H460 cells. TPGS could reduce the P-gp expression by 25.41% at a concentration of 100 \u03bcg/ml after 24h exposure. Both PTX solution and nanosuspension exhibited obvious cytotoxicity against sensitive H460 cells. When H460/RT cells were treated, PTX nanosuspension showed significantly higher cytotoxicity compared with PTX solution, with much lower IC50 value and RI at each time point. After intravenous administration PTX nanosuspension exhibited about 5-fold increase in the inhibition rate of tumor growth compared with the mixed solution of PTX and TPGS. CONCLUSIONS PTX nanosuspension coated with TPGS could effectively reverse drug resistance of H460/RT cells. The usage of TPGS as stabilizers on the surface of nanocrystals of insoluble anticancer drugs may be an effective approach to overcome the multi-drug resistances (MDR)." }, { "instance_id": "R142850xR142837", "comparison_id": "R142850", "paper_id": "R142837", "text": "New Method for Delivering a Hydrophobic Drug for Photodynamic Therapy Using Pure Nanocrystal Form of the Drug A carrier-free method for delivery of a hydrophobic drug in its pure form, using nanocrystals (nanosized crystals), is proposed. To demonstrate this technique, nanocrystals of a hydrophobic photosensitizing anticancer drug, 2-devinyl-2-(1-hexyloxyethyl)pyropheophorbide (HPPH), have been synthesized using the reprecipitation method. The resulting drug nanocrystals were monodispersed and stable in aqueous dispersion, without the necessity of an additional stabilizer (surfactant). 
As shown by confocal microscopy, these pure drug nanocrystals were taken up by the cancer cells with high avidity. Though the fluorescence and photodynamic activity of the drug were substantially quenched in the form of nanocrystals in aqueous suspension, both these characteristics were recovered under in vitro and in vivo conditions. This recovery of drug activity and fluorescence is possibly due to the interaction of nanocrystals with serum albumin, resulting in conversion of the drug nanocrystals into the molecular form. This was confirmed by demonstrating similar recovery in presence of fetal bovine serum (FBS) or bovine serum albumin (BSA). Under similar treatment conditions, the HPPH in nanocrystal form or in 1% Tween-80/water formulation showed comparable in vitro and in vivo efficacy." }, { "instance_id": "R142850xR142790", "comparison_id": "R142850", "paper_id": "R142790", "text": "In Vivo Investigation of Hybrid Paclitaxel Nanocrystals with Dual Fluorescent Probes for Cancer Theranostics ABSTRACT Purpose: To develop novel hybrid paclitaxel (PTX) nanocrystals, in which bioactivatable (MMPSense\u00ae 750 FAST) and near infrared (Flamma Fluor\u00ae FPR-648) fluorophores are physically incorporated, and to evaluate their anticancer efficacy and diagnostic properties in breast cancer xenograft murine model. Methods: The pure and hybrid paclitaxel nanocrystals were prepared by an anti-solvent method, and their physical properties were characterized. The tumor volume change and body weight change were evaluated to assess the treatment efficacy and toxicity. Bioimaging of treated mice was obtained non-invasively in vivo. Results: The released MMPSense molecules from the hybrid nanocrystals were activated by matrix metalloproteinases (MMPs) in vivo, similarly to the free MMPSense, demonstrating its ability to monitor cancer progression. Concurrently, the entrapped FPR-648 was imaged at a different wavelength. 
Furthermore, when administered at 20 mg/kg, the nanocrystal formulations exerted comparable efficacy as Taxol\u00ae, but with decreased toxicity. Conclusions: Hybrid nanocrystals that physically integrated two fluorophores were successfully prepared from solution. Hybrid nanocrystals were shown not only exerting antitumor activity, but also demonstrating the potential of multi-modular bioimaging for diagnostics." }, { "instance_id": "R142850xR142718", "comparison_id": "R142850", "paper_id": "R142718", "text": "Cellular Uptake Mechanism of Paclitaxel Nanocrystals Determined by Confocal Imaging and Kinetic Measurement Nanocrystal formulation has become a viable solution for delivering poorly soluble drugs including chemotherapeutic agents. The purpose of this study was to examine cellular uptake of paclitaxel nanocrystals by confocal imaging and concentration measurement. It was found that drug nanocrystals could be internalized by KB cells at much higher concentrations than a conventional, solubilized formulation. The imaging and quantitative results suggest that nanocrystals could be directly taken up by cells as solid particles, likely via endocytosis. Moreover, it was found that polymer treatment to drug nanocrystals, such as surface coating and lattice entrapment, significantly influenced the cellular uptake. While drug molecules are in the most stable physical state, nanocrystals of a poorly soluble drug are capable of achieving concentrated intracellular presence enabling needed therapeutic effects." }, { "instance_id": "R142850xR142845", "comparison_id": "R142850", "paper_id": "R142845", "text": "RGD-modified PEGylated paclitaxel nanocrystals with enhanced stability and tumor-targeting capability Nanocrystals has been constructed for insoluble drugs as a novel type of nanoscale drug delivery systems with high drug loading. How to prepare nanocrystals with good stability and tumor targeting capability is still challenging. 
This study was to modify paclitaxel nanocrystals with polyethylene glycol (PEG) for stabilization and RGD peptide for tumor targeting. Inspired by the structure of mussel's foot protein, polydopamine (PDA) was introduced to the drug delivery system for the modification of nanocrystals. Briefly, PDA was coated on the surface of nanocrystals to form a reaction platform for further PEGylation and RGD peptide conjugation. PEGylated nanocrystals with RGD peptide modification (NC@PDA-PEG-RGD) were prepared with near-spheroid shape, drug loading 45.12 \u00b1 1.81% and a hydrodynamic diameter 419.9 \u00b1 80.9 nm. The size of NC@PDA-PEG-RGD remained basically unchanged for at least 72 h in the presence of plasma while the size of unmodified nanocrystals (NC) increased and exceeded 1000 nm in 12 h. Cellular uptake and cellular growth inhibition experiments using the lung cancer cell line A549 demonstrated the superiority of NC@PDA-PEG-RGD over NC or PEGylated nanocrystals without RGD modification (NC@PDA-PEG). In A549 model tumor-bearing mice, NC@PDA-PEG-RGD showed significantly higher intratumor accumulation and slower tumor growth than NC@PDA-PEG or free paclitaxel. In summary, our study suggested the superiority of RGD-modified PEGylated paclitaxel nanocrystals as a lung cancer-targeted delivery system and the potential of PDA coating technique for targeting functionalization of nanocrystals." }, { "instance_id": "R144121xR143873", "comparison_id": "R144121", "paper_id": "R143873", "text": "Data Mining on Folksonomies Social resource sharing systems are central elements of the Web 2.0 and use all the same kind of lightweight knowledge representation, called folksonomy. As these systems are easy to use, they attract huge masses of users. Data Mining provides methods to analyze data and to learn models which can be used to support users. 
The application and adaptation of known data mining algorithms to folksonomies with the goal to support the users of such systems and to extract valuable information with a special focus on the Semantic Web is the main target of this paper." }, { "instance_id": "R144121xR143919", "comparison_id": "R144121", "paper_id": "R143919", "text": "Enabling Folksonomies for Knowledge Extraction: A Semantic Grounding Approach Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered as valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems mainly related to the appearance of synonymous and ambiguous tags arise, specifically in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures where tag meanings are identified, and relations between them are asserted. For such purpose, they use DBpedia as a general knowledge base from which they leverage its multilingual capabilities." }, { "instance_id": "R144121xR143962", "comparison_id": "R144121", "paper_id": "R143962", "text": "Computational and Crowdsourcing Methods for Extracting Ontological Structure from Folksonomy This paper investigates the unification of folksonomies and ontologies in such a way that the resulting structures can better support exploration and search on the World Wide Web. First, an integrated computational method is employed to extract the ontological structures from folksonomies. It exploits the power of low support association rule mining supplemented by an upper ontology such as WordNet. Promising results have been obtained from experiments using tag datasets from Flickr and Citeulike. Next, a crowdsourcing method is introduced to channel online users' search efforts to help evolve the extracted ontology." 
}, { "instance_id": "R144121xR143857", "comparison_id": "R144121", "paper_id": "R143857", "text": "Mining Association Rules in Folksonomies Social bookmark tools are rapidly emerging on the Web. In such systems users are setting up lightweight conceptual structures called folksonomies. These systems provide currently relatively few structure. We discuss in this paper, how association rule mining can be adopted to analyze and structure folksonomies, and how the results can be used for ontology learning and supporting emergent semantics. We demonstrate our approach on a large scale dataset stemming from an online system." }, { "instance_id": "R144512xR144478", "comparison_id": "R144512", "paper_id": "R144478", "text": "Co-delivery of doxorubicin and siRNA for glioma therapy by a brain targeting system: angiopep-2-modified poly(lactic-co-glycolic acid) nanoparticles Abstract It is very challenging to treat brain cancer because of the blood\u2013brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in more drugs accumulation in the brain. The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue." 
}, { "instance_id": "R144512xR144378", "comparison_id": "R144512", "paper_id": "R144378", "text": "In vivo biodistribution of venlafaxine-PLGA nanoparticles for brain delivery: plain vs. functionalized nanoparticles ABSTRACT Background: Actually, no drugs provide therapeutic benefit to approximately one-third of depressed patients. Depression is predicted to become the first global disease by 2030. So, new therapeutic interventions are imperative. Research design and methods: Venlafaxine-loaded poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs) were surface functionalized with two ligands against transferrin receptor to enhance access to brain. An in vitro blood\u2013brain barrier model using hCMEC/D3 cell line was developed to evaluate permeability. In vivo biodistribution studies were performed using C57/bl6 mice. Particles were administered intranasal and main organs were analyzed. Results: Particles were obtained as a lyophilized powder easily to re-suspend. Internalization and permeability studies showed the following cell association sequence: TfRp-NPs>Tf-NPs>plain NPs. Permeability studies also showed that encapsulated VLF was not affected by P-gP pump efflux increasing its concentration in the basolateral side after 24 h. In vivo studies showed that 25% of plain NPs reach the brain after 30 min of one intranasal administration while less than 5% of functionalized NPs get the target. Conclusions: Plain NPs showed the highest ability to reach the brain vs. functionalized NPs after 30 min by intranasal administration. We suggest plain NPs probably travel via direct nose-to-brian route whereas functionalized NPs reach the brain by receptor-mediated endocytosis." 
}, { "instance_id": "R144512xR144436", "comparison_id": "R144512", "paper_id": "R144436", "text": "Novel Curcumin loaded nanoparticles engineered for Blood-Brain Barrier crossing and able to disrupt Abeta aggregates The formation of extracellular aggregates built up by deposits of \u03b2-amyloid (A\u03b2) is a hallmark of Alzheimer's disease (AD). Curcumin has been reported to display anti-amyloidogenic activity, not only by inhibiting the formation of new A\u03b2 aggregates, but also by disaggregating existing ones. However, the uptake of Curcumin into the brain is severely restricted by its low ability to cross the blood-brain barrier (BBB). Therefore, novel strategies for a targeted delivery of Curcumin into the brain are highly desired. Here, we encapsulated Curcumin as active ingredient in PLGA (polylactide-co-glycolic-acid) nanoparticles (NPs), modified with g7 ligand for BBB crossing. We performed in depth analyses of possible toxicity of these NPs, uptake, and, foremost, their ability to influence A\u03b2 pathology in vitro using primary hippocampal cell cultures. Our results show no apparent toxicity of the formulated NPs, but a significant decrease of A\u03b2 aggregates in response to Curcumin loaded NPs. We thus conclude that brain delivery of Curcumin using BBB crossing NPs is a promising future approach in the treatment of AD." }, { "instance_id": "R144512xR144491", "comparison_id": "R144512", "paper_id": "R144491", "text": "Curcumin Loaded-PLGA Nanoparticles Conjugated with Tet-1 Peptide for Potential Use in Alzheimer's Disease Alzheimer's disease is a growing concern in the modern world. As the currently available medications are not very promising, there is an increased need for the fabrication of newer drugs. Curcumin is a plant derived compound which has potential activities beneficial for the treatment of Alzheimer's disease. Anti-amyloid activity and anti-oxidant activity of curcumin is highly beneficial for the treatment of Alzheimer's disease. 
The insolubility of curcumin in water restricts its use to a great extend, which can be overcome by the synthesis of curcumin nanoparticles. In our work, we have successfully synthesized water-soluble PLGA coated- curcumin nanoparticles and characterized it using different techniques. As drug targeting to diseases of cerebral origin are difficult due to the stringency of blood-brain barrier, we have coupled the nanoparticle with Tet-1 peptide, which has the affinity to neurons and possess retrograde transportation properties. Our results suggest that curcumin encapsulated-PLGA nanoparticles are able to destroy amyloid aggregates, exhibit anti-oxidative property and are non-cytotoxic. The encapsulation of the curcumin in PLGA does not destroy its inherent properties and so, the PLGA-curcumin nanoparticles can be used as a drug with multiple functions in treating Alzheimer's disease proving it to be a potential therapeutic tool against this dreaded disease." }, { "instance_id": "R145685xR145180", "comparison_id": "R145685", "paper_id": "R145180", "text": "Stark Broadening of Neutral Helium Lines in a Plasma The frequency distributions of spectral lines of nonhydrogenic atoms broadened by local fields of both electrons and ions in a plasma are calculated in the classical path approximation. The electron collisions are treated by an impact theory which takes into account deviations from adiabaticity. For the ion effects, the adiabatic approximation can be used to describe the time-dependent wave functions. The various approximations employed were examined for self-consistency, and an accuracy of about 20% in the resulting line profiles is expected. Good agreement with Wulff's experimental helium line profiles was obtained while there are large deviations from the adiabatic theory, especially for the line shifts. Asymptotic distributions for the line wings are given for astrophysical applications. 
Here the ion effects can be as important as the electron effects and lead to large asymmetries, but near the line core electrons usually dominate. Numerical results are tabulated for 24 neutral helium lines with principal quantum numbers up to five." }, { "instance_id": "R145685xR145171", "comparison_id": "R145685", "paper_id": "R145171", "text": "General Impact Theory of Pressure Broadening The work of two previous papers is extended and a theory of pressure broadening is developed which treats the perturbers quantum mechanically and allows for inelastic collisions, degeneracy, and overlapping lines. The impact approximation is used. It consists in assuming that it takes, on the average, many collisions to produce an appreciable disturbance in the wave function of the atom, and it results in an isolated line having a Lorentz shape. Validity criteria are given. When the approximation is valid, it is allowable to replace the exact, fluctuating interaction of the perturbers with the atom by a constant effective interaction. The effective interaction is expressed in terms of the one-perturber quantum mechanical transition amplitudes on and near the energy shell and its close relationship to the scattering matrix is stressed. The calculation of the line shape in terms of the effective interaction is the same as when the perturbers move on classical paths. Results are written explicitly for isolated lines. If the interaction of the perturbers with the final state can be neglected, the shift and width are proportional to the real and imaginary part of the forward elastic scattering amplitude, respectively. By the optical theorem, the width can also be written in terms of the total cross section. When the interaction in the final state cannot be neglected, the shift and width are still given in terms of the elastic scattering amplitudes, in a slightly more complicated fashion. 
Finally, rules are given for taking into account rotational degeneracy of the radiating states. (auth)" }, { "instance_id": "R145685xR145213", "comparison_id": "R145685", "paper_id": "R145213", "text": "Electron impact broadening of spectral lines in Be-like ions: quantum calculations We present in this paper quantum mechanical calculations for the electron impact Stark linewidths of the 2s3s\u20132s3p transitions for the four beryllium-like ions from N IV to Ne VII. Calculations are made in the frame of the impact approximation and intermediate coupling, taking into account fine-structure effects. A comparison between our calculations, experimental and other theoretical results, shows a good agreement. This is the first time that such a good agreement is found between quantum and experimental linewidths of highly charged ions." }, { "instance_id": "R145685xR145194", "comparison_id": "R145685", "paper_id": "R145194", "text": "Stark widths of doubly- and triply-ionized atom lines Abstract In this paper, we report modifications of well known semiempirical and semiclassical approximation formulas for Stark line-width calculations. Comparisons with experiments for doubly ionized atoms yield, as an average ratio of measured to calculated widths 1.06 \u00b1 0.31 for a modified semiempirical formula and 0.96\u00b10.24 for a modified semiclassical formula. For triply ionized atoms these ratios are 0.91\u00b10.42 and 1.08\u00b10.41, respectively. Comparison with other theoretical calculations have also been made." }, { "instance_id": "R145685xR145197", "comparison_id": "R145685", "paper_id": "R145197", "text": "Calculated stark widths of oxygen ion lines Calculations have been performed on the electron impact broadening of isolated lines from singly-ionized and doubly-ionized oxygen emitted from a plasma of electron density 10^17 cm^-3 and temperature about 2 eV. 
These have been compared with results of measurements performed by Platisa, Popovic, and Konjevic on a plasma produced by a low pressure pulsed arc. Good overall agreement has been obtained for both ionization stages, which we interpret as strong support for a recently derived expression for the effective Gaunt factor in line broadening calculations. This in turn indicates the important role that the curvature of the perturber trajectory plays in the broadening process, and that by proper allowance for this effect, classical path calculations of the isolated ion line widths can be extended to spectra of the multiply-charged ions. Some ambiguity still remains, however, as to the proper method of extrapolation of the effective Gaunt factors below threshold energies in the classical path calculation of the elastic contribution to the broadening. The present comparison appears to indicate that for the higher ionization stages, extrapolation of \u1e21 as a constant equal to its threshold value, is satisfactory." }, { "instance_id": "R145685xR145188", "comparison_id": "R145685", "paper_id": "R145188", "text": "Stark-profile calculations for Lyman-series lines of one-electron ions in dense plasmas The frequency distributions of the first six Lyman lines of hydrogen-like carbon, oxygen, neon, magnesium, aluminum, and silicon ions broadened by the local fields of both ions and electrons are calculated for dense plasmas. The electron collisions are treated by an impact theory allowing (approximately) for level splittings caused by the ion fields, finite duration of the collisions, and screening of the electron fields. Ion effects are calculated in the quasistatic, linear Stark-effect approximation, using distribution functions of Hooper and Tighe which include correlation and shielding effects. Theoretical uncertainties from the various approximations are estimated, and the scaling of the profiles with density, temperature and nuclear charge is discussed. 
A correction for the effects caused by low frequency field fluctuations is suggested." }, { "instance_id": "R145685xR145200", "comparison_id": "R145685", "paper_id": "R145200", "text": "Line shapes of lithium-like ions emitted from plasmas The calculation of the spectral line broadening of lithium-like ions is presented. The motivation for these calculations is to extend present theoretical calculations to more complex atomic structures and provide further diagnostic possibilities. The profiles of Li I, Ti XX and Br XXXIII are shown as a representative sampling of the possible effects which can occur. The calculations are performed for all level 2 to level 3 and 4 transitions, with dipole-forbidden and overlapping components fully taken into account." }, { "instance_id": "R145685xR145222", "comparison_id": "R145685", "paper_id": "R145222", "text": "On the Stark Broadening of Be II Spectral Lines Calculated Stark broadening parameters of singly ionized beryllium spectral lines have been reported. Three spectral series have been studied within semiclassical perturbation theory. The plasma conditions cover temperatures from 2500 to 50,000 K and perturber densities 1011 cm\u22123 and 1013 cm\u22123. The influence of the temperature and the role of the perturbers (electrons, protons and He+ ions) on the Stark width and shift have been discussed. Results could be useful for plasma diagnostics in astrophysics, laboratory, and industrial plasmas." }, { "instance_id": "R145685xR145219", "comparison_id": "R145685", "paper_id": "R145219", "text": "Stark broadening and atomic data for Ar XVI Stark broadening and atomic data calculations have been developed for the recent years, especially atomic and line broadening data for highly ionized ions of argon. We present in this paper atomic data (such as energy levels, line strengths, oscillator strengths and radiative decay rates) for Ar XVI ion and quantum Stark broadening calculations for 10 Ar XVI lines. 
Radiative atomic data for this ion have been calculated using the University College London (UCL) codes (SUPERSTRUCTURE, DISTORTED WAVE, JAJOM) and have been compared with other results. Using our quantum mechanical method, our Stark broadening calculations for Ar XVI lines are performed at electron density Ne = 10^20 cm\u22123 and for electron temperature varying from 7.5\u00d710 to 7.5\u00d710 K. No Stark broadening results in the literature to compare with. So, our results come to fill this lack of data." }, { "instance_id": "R145685xR145177", "comparison_id": "R145685", "paper_id": "R145177", "text": "Stark Broadening of Hydrogen Lines in a Plasma The frequency distributions of hydrogen lines broadened by the local fields of both ions and electrons in a plasma are calculated in the classical path approximation. The electron collisions are treated by an impact theory which takes into account the Stark splitting caused by the quasistatic ion fields. The ion field-strength distribution function used includes the effect of electron shielding and ion-ion correlations. The various approximations that were employed are examined for self-consistency and an accuracy of about 10% in the resulting line profiles is expected. Good agreement with experimental H\u03b2 profiles is obtained while there are deviations of factors of two with the usual Holtsmark theory. Asymptotic distributions for the line wings are given for astrophysical applications. Also here the electron effects are generally as important as the ion effects for all values of the electron density and in some cases the electron broadening is larger than the ion broadening. (auth)" }, { "instance_id": "R145685xR145203", "comparison_id": "R145685", "paper_id": "R145203", "text": "Stark broadening of neutral helium lines Abstract A semiclassical approach has been used to evaluate Stark broadening of atomic lines and also electron- and proton-impact line widths and shifts of 30 neutral sodium lines. 
The results are used to investigate Stark broadening-parameter regularities within the spectral series." }, { "instance_id": "R145950xR142767", "comparison_id": "R145950", "paper_id": "R142767", "text": "Overcoming the Heterogeneity in the Internet of Things for Smart Cities In the past few years, the viability of the Internet of Things (IoT) technology has been demonstrated, leading to increased possibilities for novel human-centric services in the smart cities. This development has resulted in numerous approaches being proposed for harnessing IoT for smart city applications. Having received a significant attention by the research community and industry, IoT adaptation has gained momentum. IoT-enabled applications are being rapidly developed in a number of domains such as energy management, waste management, traffic control, mobility, healthcare, ambient assisted living, etc. On the other hand, this high-speed development and adaptation has resulted in the emergence of heterogeneous IoT architectures, standards, middlewares, and applications. This heterogeneity is hindrance in the realization of a much anticipated IoT global eco-system. Hence, the heterogeneity (from hardware level to application level) is a critical issue that needs high-priority and must be resolved as early as possible. In this article, we present and discuss the modelling of heterogeneous IoT data streams in order to overcome the challenge of heterogeneity. The data model is used within the VITAL project which is an open source IoT system of systems. The main objective of the VITAL platform is to enable rapid development of cross-platform and cross-context IoT based applications for smart cities." 
}, { "instance_id": "R145950xR142756", "comparison_id": "R145950", "paper_id": "R142756", "text": "Smart City Ontologies: Improving the effectiveness of smart city applications This paper addresses the problem of low impact of smart city applications observed in the fields of energy and transport, which constitute high-priority domains for the development of smart cities. However, these are not the only fields where the impact of smart cities has been limited. The paper provides an explanation for the low impact of various individual applications of smart cities and discusses ways of improving their effectiveness. We argue that the impact of applications depends primarily on their ontology, and secondarily on smart technology and programming features. Consequently, we start by creating an overall ontology for the smart city, defining the building blocks of this ontology with respect to the most cited definitions of smart cities, and structuring this ontology with the Prot\u00e9g\u00e9 5.0 editor, defining entities, class hierarchy, object properties, and data type properties. We then analyze how the ontologies of a sample of smart city applications fit into the overall Smart City Ontology, the consistency between digital spaces, knowledge processes, city domains targeted by the applications, and the types of innovation that determine their impact. In conclusion, we underline the relationships between innovation and ontology, and discuss how we can improve the effectiveness of smart city applications, combining expert and user-driven ontology design with the integration and orchestration of applications over platforms and larger city entities such as neighborhoods, districts, clusters, and sectors of city activities." 
}, { "instance_id": "R145950xR142747", "comparison_id": "R145950", "paper_id": "R142747", "text": "Interoperability for Smart Appliances in the IoT World Household appliances are set to become highly intelligent, smart and networked devices in the near future. Systematically deployed on the Internet of Things (IoT), they would be able to form complete energy consuming, producing, and managing ecosystems. Smart systems are technically very heterogeneous, and standardized interfaces on a sensor and device level are therefore needed. However, standardization in IoT has largely focused at the technical communication level, leading to a large number of different solutions based on various standards and protocols, with limited attention to the common semantics contained in the message data structures exchanged at the technical level. The Smart Appliance REFerence ontology (SAREF) is a shared model of consensus developed in close interaction with the industry and with the support of the European Commission. It is published as a technical specification by ETSI and provides an important contribution to achieve semantic interoperability for smart appliances. This paper builds on the success achieved in standardizing SAREF and presents SAREF4EE, an extension of SAREF created in collaboration with the EEBus and Energy@Home industry associations to interconnect their (different) data models. By using SAREF4EE, smart appliances from different manufacturers that support the EEBus or Energy@Home standards can easily communicate with each other using any energy management system at home or in the cloud." }, { "instance_id": "R145950xR138642", "comparison_id": "R145950", "paper_id": "R138642", "text": "Km4City ontology building vs data harvesting and cleaning for smart-city services Presently, a very large number of public and private data sets are available from local governments. 
In most cases, they are not semantically interoperable and a huge human effort would be needed to create integrated ontologies and knowledge base for smart city. Smart City ontology is not yet standardized, and a lot of research work is needed to identify models that can easily support the data reconciliation, the management of the complexity, to allow the data reasoning. In this paper, a system for data ingestion and reconciliation of smart cities related aspects as road graph, services available on the roads, traffic sensors etc., is proposed. The system allows managing a big data volume of data coming from a variety of sources considering both static and dynamic data. These data are mapped to a smart-city ontology, called KM4City (Knowledge Model for City), and stored into an RDF-Store where they are available for applications via SPARQL queries to provide new services to the users via specific applications of public administration and enterprises. The paper presents the process adopted to produce the ontology and the big data architecture for the knowledge base feeding on the basis of open and private data, and the mechanisms adopted for the data verification, reconciliation and validation. Some examples about the possible usage of the coherent big data knowledge base produced are also offered and are accessible from the RDF-Store and related services. The article also presented the work performed about reconciliation algorithms and their comparative assessment and selection." }, { "instance_id": "R145950xR142702", "comparison_id": "R145950", "paper_id": "R142702", "text": "The semantics of populations: A city indicator perspective Abstract This paper addresses the question of how to represent the semantics of populations. This question is unusual in the sense that statistics is directly concerned with the definition of populations but is essentially silent on the representation of population definitions from a data modeling perspective. 
The motivation for this work is the development of ontologies for the representation of city indicator definitions. A city indicator measures the performance of a city in areas such as education, transportation and the environment. The definitions of city indicators rely on definitions for populations of people, built form, events, activities, and sensor measurements. This paper provides a model for representing membership extent, temporal extent, spatial extent, and measurement of populations. It demonstrates the approach by representing the definitions of city indicators as defined by ISO 37120, the interpretation of these definitions by cities, and their comparison to ascertain whether a city\u2019s interpretation is consistent with the standard." }, { "instance_id": "R145950xR142729", "comparison_id": "R145950", "paper_id": "R142729", "text": "CityPulse: Large Scale Data Analytics Framework for Smart Cities Our world and our lives are changing in many ways. Communication, networking, and computing technologies are among the most influential enablers that shape our lives today. Digital data and connected worlds of physical objects, people, and devices are rapidly changing the way we work, travel, socialize, and interact with our surroundings, and they have a profound impact on different domains, such as healthcare, environmental monitoring, urban systems, and control and management applications, among several other areas. Cities currently face an increasing demand for providing services that can have an impact on people's everyday lives. The CityPulse framework supports smart city service creation by means of a distributed system for semantic discovery, data analytics, and interpretation of large-scale (near-)real-time Internet of Things data and social media data streams. The goal is to break away from silo applications and enable cross-domain data integration.
The CityPulse framework integrates multimodal, mixed-quality, uncertain and incomplete data to create reliable, dependable information and continuously adapts data processing techniques to meet the quality of information requirements from end users. Different from existing solutions that mainly offer unified views of the data, the CityPulse framework is also equipped with powerful data analytics modules that perform intelligent data aggregation, event detection, quality assessment, contextual filtering, and decision support. This paper presents the framework, describes its components, and demonstrates how they interact to support easy development of custom-made applications for citizens. The benefits and the effectiveness of the framework are demonstrated in a use-case scenario implementation presented in this paper." }, { "instance_id": "R145950xR142749", "comparison_id": "R145950", "paper_id": "R142749", "text": "From RESTful to SPARQL: A Case Study on Generating Semantic Sensor Data Recent years have seen a vast increase in the amount of environmental sensor data that is being published on the web. Semantic enrichment of sensor data addresses the problems of (re-)use, integration, and discovery. A critical issue is how to generate semantic sensor data from existing data sources. In this paper, we present our approach to semantically augment an existing sensor data infrastructure, in which data is published via a RESTful API as inter-linked JSON documents. In particular, we describe and discuss the use of ontologies and the design and development of seraw, a system that transforms a set of JSON documents into an RDF graph augmented with links to other resources in the Linked Open Data cloud. This transformation is based on user-provided mappings and supported by a library of purpose-built functions. We discuss lessons learned during development and outline remaining open problems."
}, { "instance_id": "R145950xR142709", "comparison_id": "R145950", "paper_id": "R142709", "text": "Unified IoT ontology to enable interoperability and federation of testbeds After a thorough analysis of existing Internet of Things (IoT) related ontologies, in this paper we propose a solution that aims to achieve semantic interoperability among heterogeneous testbeds. Our model is framed within the EU H2020's FIESTA-IoT project, that aims to seamlessly support the federation of testbeds through the usage of semantic-based technologies. Our proposed model (ontology) takes inspiration from the well-known Noy et al. methodology for reusing and interconnecting existing ontologies. To build the ontology, we leverage a number of core concepts from various mainstream ontologies and taxonomies, such as Semantic Sensor Network (SSN), M3-lite (a lite version of M3 and also an outcome of this study), WGS84, IoT-lite, Time, and DUL. In addition, we also introduce a set of tools that aims to help external testbeds adapt their respective datasets to the developed ontology." }, { "instance_id": "R145950xR142759", "comparison_id": "R145950", "paper_id": "R142759", "text": "SOSA: A lightweight ontology for sensors, observations, samples, and actuators Abstract The Sensor, Observation, Sample, and Actuator (SOSA) ontology provides a formal but lightweight general-purpose specification for modellingthe interaction between the entities involved in the acts of observation, actuation, and sampling. SOSA is the result of rethinking the W3C-XG Semantic Sensor Network (SSN) ontology based on changes in scope and target audience, technical developments, and lessons learned over the past years. SOSA also acts as a replacement of SSN\u2019s Stimulus Sensor Observation (SSO) core. It has been developed by the first joint working group of the Open Geospatial Consortium (OGC) and the World Wide Web Consortium (W3C) on Spatial Data on the Web. 
In this work, we motivate the need for SOSA, provide an overview of the main classes and properties, and briefly discuss its integration with the new release of the SSN ontology as well as various other alignments to specifications such as OGC\u2019s Observations and Measurements (O&M), Dolce-Ultralite (DUL), and other prominent ontologies. We will also touch upon common modelling problems and application areas related to publishing and searching observation, sampling, and actuation data on the Web. The SOSA ontology and standard can be accessed at https://www.w3.org/TR/vocab-ssn/." }, { "instance_id": "R145951xR142759", "comparison_id": "R145951", "paper_id": "R142759", "text": "SOSA: A lightweight ontology for sensors, observations, samples, and actuators Abstract The Sensor, Observation, Sample, and Actuator (SOSA) ontology provides a formal but lightweight general-purpose specification for modelling the interaction between the entities involved in the acts of observation, actuation, and sampling. SOSA is the result of rethinking the W3C-XG Semantic Sensor Network (SSN) ontology based on changes in scope and target audience, technical developments, and lessons learned over the past years. SOSA also acts as a replacement of SSN\u2019s Stimulus Sensor Observation (SSO) core. It has been developed by the first joint working group of the Open Geospatial Consortium (OGC) and the World Wide Web Consortium (W3C) on Spatial Data on the Web. In this work, we motivate the need for SOSA, provide an overview of the main classes and properties, and briefly discuss its integration with the new release of the SSN ontology as well as various other alignments to specifications such as OGC\u2019s Observations and Measurements (O&M), Dolce-Ultralite (DUL), and other prominent ontologies. We will also touch upon common modelling problems and application areas related to publishing and searching observation, sampling, and actuation data on the Web.
The SOSA ontology and standard can be accessed at https://www.w3.org/TR/vocab-ssn/." }, { "instance_id": "R145951xR142764", "comparison_id": "R145951", "paper_id": "R142764", "text": "The modular SSN ontology: A joint W3C and OGC standard specifying the semantics of sensors, observations, sampling, and actuation The joint W3C (World Wide Web Consortium) and OGC (Open Geospatial Consortium) Spatial Data on the Web (SDW) Working Group developed a set of ontologies to describe sensors, actuators, samplers as well as their observations, actuation, and sampling activities. The ontologies have been published both as a W3C recommendation and as an OGC implementation standard. The set includes a lightweight core module called SOSA (Sensor, Observation, Sampler, and Actuator) available at: http://www.w3.org/ns/sosa/, and a more expressive extension module called SSN (Semantic Sensor Network) available at: http://www.w3.org/ns/ssn/. Together they describe systems of sensors and actuators, observations, the procedures used, the subjects and their properties being observed or acted upon, samples and the process of sampling, and so forth. The set of ontologies adopts a modular architecture with SOSA as a self-contained core that is extended by SSN and other modules to add expressivity and breadth. The SOSA/SSN ontologies are able to support a wide range of applications and use cases, including satellite imagery, large-scale scientific monitoring, industrial and household infrastructures, social sensing, citizen science, observation-driven ontology engineering, and the Internet of Things. In this paper, we give an overview of the ontologies and discuss the rationale behind key design decisions, reporting on the differences between the new SSN ontology presented here and its predecessor [9] developed by the W3C Semantic Sensor Network Incubator group (the SSN-XG). We present usage examples and describe alignment modules that foster interoperability with other ontologies."
}, { "instance_id": "R145951xR142767", "comparison_id": "R145951", "paper_id": "R142767", "text": "Overcoming the Heterogeneity in the Internet of Things for Smart Cities In the past few years, the viability of the Internet of Things (IoT) technology has been demonstrated, leading to increased possibilities for novel human-centric services in the smart cities. This development has resulted in numerous approaches being proposed for harnessing IoT for smart city applications. Having received a significant attention by the research community and industry, IoT adaptation has gained momentum. IoT-enabled applications are being rapidly developed in a number of domains such as energy management, waste management, traffic control, mobility, healthcare, ambient assisted living, etc. On the other hand, this high-speed development and adaptation has resulted in the emergence of heterogeneous IoT architectures, standards, middlewares, and applications. This heterogeneity is hindrance in the realization of a much anticipated IoT global eco-system. Hence, the heterogeneity (from hardware level to application level) is a critical issue that needs high-priority and must be resolved as early as possible. In this article, we present and discuss the modelling of heterogeneous IoT data streams in order to overcome the challenge of heterogeneity. The data model is used within the VITAL project which is an open source IoT system of systems. The main objective of the VITAL platform is to enable rapid development of cross-platform and cross-context IoT based applications for smart cities." }, { "instance_id": "R145951xR142749", "comparison_id": "R145951", "paper_id": "R142749", "text": "From RESTful to SPARQL: A Case Study on Generating Semantic Sensor Data The recent years have seen a vast increase in the amount of environmental sensor data that is being published on the web. Semantic enrichment of sensor data addresses the problems of (re-)use, integration, and discovery. 
A critical issue is how to generate semantic sensor data from existing data sources. In this paper, we present our approach to semantically augment an existing sensor data infrastructure, in which data is published via a RESTful API as inter-linked JSON documents. In particular, we describe and discuss the use of ontologies and the design and development of seraw, a system that transforms a set of JSON documents into an RDF graph augmented with links to other resources in the Linked Open Data cloud. This transformation is based on user-provided mappings and supported by a library of purpose-built functions. We discuss lessons learned during development and outline remaining open problems." }, { "instance_id": "R145951xR142742", "comparison_id": "R145951", "paper_id": "R142742", "text": "IoT-O, a Core-Domain IoT Ontology to Represent Connected Devices Networks Smart objects are now present in our everyday lives, and the Internet of Things is expanding both in number of devices and in volume of produced data. These devices are deployed in dynamic ecosystems, with spatial mobility constraints, intermittent network availability depending on many parameters, e.g. battery level or duty cycle, etc. To capture knowledge describing such evolving systems, open, shared and dynamic knowledge representations are required. These representations should also have the ability to adapt over time to the changing state of the world. That is why we propose IoT-O, a core-domain modular IoT ontology proposing a vocabulary to describe connected devices and their relation with their environment. First, existing IoT ontologies are described and compared to requirements an IoT ontology should be compliant with. Then, after a detailed description of its modules, IoT-O is instantiated in a home automation use case to illustrate how it supports the description of evolving systems."
}, { "instance_id": "R145951xR142747", "comparison_id": "R145951", "paper_id": "R142747", "text": "Interoperability for Smart Appliances in the IoT World Household appliances are set to become highly intelligent, smart and networked devices in the near future. Systematically deployed on the Internet of Things (IoT), they would be able to form complete energy consuming, producing, and managing ecosystems. Smart systems are technically very heterogeneous, and standardized interfaces on a sensor and device level are therefore needed. However, standardization in IoT has largely focused at the technical communication level, leading to a large number of different solutions based on various standards and protocols, with limited attention to the common semantics contained in the message data structures exchanged at the technical level. The Smart Appliance REFerence ontology (SAREF) is a shared model of consensus developed in close interaction with the industry and with the support of the European Commission. It is published as a technical specification by ETSI and provides an important contribution to achieve semantic interoperability for smart appliances. This paper builds on the success achieved in standardizing SAREF and presents SAREF4EE, an extension of SAREF created in collaboration with the EEBus and Energy@Home industry associations to interconnect their (different) data models. By using SAREF4EE, smart appliances from different manufacturers that support the EEBus or Energy@Home standards can easily communicate with each other using any energy management system at home or in the cloud." }, { "instance_id": "R145951xR142756", "comparison_id": "R145951", "paper_id": "R142756", "text": "Smart City Ontologies: Improving the effectiveness of smart city applications This paper addresses the problem of low impact of smart city applications observed in the fields of energy and transport, which constitute high-priority domains for the development of smart cities. 
However, these are not the only fields where the impact of smart cities has been limited. The paper provides an explanation for the low impact of various individual applications of smart cities and discusses ways of improving their effectiveness. We argue that the impact of applications depends primarily on their ontology, and secondarily on smart technology and programming features. Consequently, we start by creating an overall ontology for the smart city, defining the building blocks of this ontology with respect to the most cited definitions of smart cities, and structuring this ontology with the Prot\u00e9g\u00e9 5.0 editor, defining entities, class hierarchy, object properties, and data type properties. We then analyze how the ontologies of a sample of smart city applications fit into the overall Smart City Ontology, the consistency between digital spaces, knowledge processes, city domains targeted by the applications, and the types of innovation that determine their impact. In conclusion, we underline the relationships between innovation and ontology, and discuss how we can improve the effectiveness of smart city applications, combining expert and user-driven ontology design with the integration and orchestration of applications over platforms and larger city entities such as neighborhoods, districts, clusters, and sectors of city activities." }, { "instance_id": "R145951xR142721", "comparison_id": "R145951", "paper_id": "R142721", "text": "The SEAS Knowledge Model This deliverable concentrates on the results of task 2.2 of work package 2. It describes the SEAS Knowledge Model as a basis for semantic interoperability in the SEAS ecosystem.
The SEAS Knowledge Model consists of an innovative Web ontology that is designed to: (i) meet the current best practices in terms of quality, metadata, and publication, (ii) reuse or align to existing standards, and (iii) cover the required expressivity for the SEAS use cases, while being extensible to other use cases and domains (gas, water, air, waste management). This document is a snapshot of the situation at the end of the SEAS project. Up-to-date information can be found at the following websites: https://w3id.org/seas/ for the SEAS Knowledge Model, and contributing to it; https://w3id.org/pep/ for the Process Executor Platform ontology." }, { "instance_id": "R145951xR142709", "comparison_id": "R145951", "paper_id": "R142709", "text": "Unified IoT ontology to enable interoperability and federation of testbeds After a thorough analysis of existing Internet of Things (IoT) related ontologies, in this paper we propose a solution that aims to achieve semantic interoperability among heterogeneous testbeds. Our model is framed within the EU H2020's FIESTA-IoT project, that aims to seamlessly support the federation of testbeds through the usage of semantic-based technologies. Our proposed model (ontology) takes inspiration from the well-known Noy et al. methodology for reusing and interconnecting existing ontologies. To build the ontology, we leverage a number of core concepts from various mainstream ontologies and taxonomies, such as Semantic Sensor Network (SSN), M3-lite (a lite version of M3 and also an outcome of this study), WGS84, IoT-lite, Time, and DUL. In addition, we also introduce a set of tools that aims to help external testbeds adapt their respective datasets to the developed ontology."
}, { "instance_id": "R146458xR146150", "comparison_id": "R146458", "paper_id": "R146150", "text": "A Roadmap on Improved Performance-centric Cloud Storage Estimation Approach for Database System Deployment in Cloud Environment Cloud computing has taken the limelight with respect to the present industry scenario due to its multi-tenant and pay-as-you-use models, where users need not bother about buying resources like hardware, software, infrastructure, etc. on an permanently basis. As much as the technological benefits, cloud computing also has its downside. By looking at its financial benefits, customers who cannot afford initial investments, choose cloud by compromising on its concerns, like security, performance, estimation, availability, etc. At the same time due to its risks, customers - relatively majority in number, avoid migration towards cloud. Considering this fact, performance and estimation are being the major critical factors for any application deployment in cloud environment; this paper brings the roadmap for an improved performance-centric cloud storage estimation approach, which is based on balanced PCTFree allocation technique for database systems deployment in cloud environment. Objective of this approach is to highlight the set of key activities that have to be jointly done by the database technical team and business users of the software system in order to perform an accurate analysis to arrive at estimation for sizing of the database. For the evaluation of this approach, an experiment has been performed through varied-size PCTFree allocations on an experimental setup with 100000 data records. The result of this experiment shows the impact of PCTFree configuration on database performance. Basis this fact, we propose an improved performance-centric cloud storage estimation approach in cloud. Further, this paper applies our improved performance-centric storage estimation approach on decision support system (DSS) as a case study." 
}, { "instance_id": "R146458xR146134", "comparison_id": "R146458", "paper_id": "R146134", "text": "Agile digital transformation of System-of-Systems architecture models using Zachman framework Abstract Emergent behavior is behavior of a system that does not depend on its individual parts, but on their relationships to one another. Such behavior exists in biological systems, physical systems as well as in the human performance. It is an inherited nature of a System-of-Systems (SoS). A suitable framework is needed to guide the development of SoS architecture, which includes emergent behavior. Enterprise architecture (EA) is a discipline driving change within organizations. Aligning and integrating business and IT thereby belongs to strategic management. The management of EA change is a challenging task for enterprise architects, due to complex dependencies amongst EA models, when evolving towards different alternatives. In this paper, various architecture frameworks are explored for an application on SoS architecture: the Department of Defense Architecture Framework (DoDAF) and Ministry of Defense Architecture Framework (MODAF) are declared inappropriate. The Open Group Architecture Framework (TOGAF), the Federal Enterprise Architecture Framework (FEAF) and the Zachman Framework on the other hand are suitable. The use of Zachman Framework to guide the architecture development is described in step-by-step details in this paper. The agent-based simulation is recommended to develop the SoS architectural models following the Zachman Framework guidance. Ultimately, SysML and UML should be integrated with the agent-based model. An example with the collaborative engineering services for the global automotive supply chain is hereby described." 
}, { "instance_id": "R146458xR146157", "comparison_id": "R146458", "paper_id": "R146157", "text": "The digital transformation and smart data analytics: An overview of enabling developments and application areas The digital transformation enables new business models and enhanced business processes by utilizing available data for analytics, prediction, and decision support. We give an overview of the enabling developments for the digital transformation, the areas of application, and concrete use case examples. We summarize our findings in a framework for the digital transformation and discuss the potential for new and adapted business models." }, { "instance_id": "R146458xR146051", "comparison_id": "R146458", "paper_id": "R146051", "text": "Innovation Management in the Context of Smart Cities Digital Transformation The paper introduces important aspects of doctoral research concerning innovation management in the context of business management challenges posed by digital transformation. The research was conducted as part of the Research Centre of Business Administration in The Bucharest University of Economic Studies, Romania. The study aims to identify and display key components of innovation management \u2013 with a primary focus on topics spurred by the recent wave of digital evolution. Against this background, the issue of smart city solutions makes for an interesting case \u2013 firstly, because it affects a large number of people and businesses around the globe and secondly, the complexity of the topic forces companies to pursue different innovation management approaches to successfully manage its associated challenges as well as opportunities. The paper consists of an overview on the existing literature and a concise outline of our research. Both researches from professional associations as well as recognized publishers were considered. Furthermore, market data were gathered and processed. 
More than 50 publications were analyzed to better understand trends in digital transformation and its impact on innovation management. Our research revealed that, in the light of the fundamental challenges posed by digitization, companies are required to take a structured approach towards their innovation management options. In the context of smart city solutions, the adoption of the \u201c4I Solutions Model\u201d enables businesses to choose the strategic option suitable to their individual case. Concisely, this framework includes four different approaches ranging from initiating groundwork innovation internally to establishing partnerships with selected external parties." }, { "instance_id": "R146458xR146122", "comparison_id": "R146458", "paper_id": "R146122", "text": "Evolution of Enterprise Architecture for Digital Transformation The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. For years, a lot of new business opportunities have appeared using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact on architecting digital services and products following both a value-oriented and a service perspective.
The change from closed-world modeling to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and highly distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering newly defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture." }, { "instance_id": "R146458xR146070", "comparison_id": "R146458", "paper_id": "R146070", "text": "Smart city initiatives in the context of digital transformation: scope, services and technologies Digital transformation is an emerging trend in changing the way work is done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, are usually seen as a matter for local governments, as it is their responsibility to ensure a better quality of life for their citizens. Some cities have already taken advantage of the possibilities offered by the concept of smart cities, creating new values for all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing in their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives."
}, { "instance_id": "R146458xR146022", "comparison_id": "R146458", "paper_id": "R146022", "text": "A Framework for a Smart City Design: Digital Transformation in the Helsinki Smart City Recently, there has been substantial interest in the concept of a smart city, as it has been a viable solution to the dilemmas created by the urbanization of cities. Digital technologies\u2014such as Internet-of-Things, artificial intelligence, big data, and geospatial technologies\u2014are closely associated with the concept of a smart city. By means of modern digital technologies, cities aim to optimize their performance and services. Further, cities actively endorse modern digital technologies to foster digitalization and the emergence of data-based innovations and a knowledge economy. In this paper, a framework for a smart city design is presented. The framework considers a smart city from the perspective of four dimensions\u2014strategy, technology, governance, and stakeholders. The framework is complemented with sub-dimensions, and the purpose of this framework is to strengthen the governance and sustainability of smart city initiatives. Further, the proposed framework is applied to the Helsinki smart city, the capital of Finland. The objective is to analyse the Helsinki smart city through dimensions presented in the framework and learn how the city of Helsinki governs and implements its smart city initiatives." }, { "instance_id": "R146458xR146181", "comparison_id": "R146458", "paper_id": "R146181", "text": "Enterprise Architecture in the Age of Digital Transformation Recent advances in digital technologies are enabling enterprises to undergo transformations for streamlining business processes, offering new products and services, expanding in new areas, and even changing their business models. Current enterprise architecture frameworks are used for analysis, design, and strategy execution, helping an enterprise transition from an as-is state to a to-be state. 
However, emerging trends suggest the need for richer models to support on-going adaptations and periodic transformations. The scope of enterprise architecture modeling needs to be expanded to include the multiple levels of dynamics that exist within any enterprise, the sense-and-respond pathways that drive change at operational and strategic levels, and the tension between centralized control and local autonomy." }, { "instance_id": "R146458xR146032", "comparison_id": "R146458", "paper_id": "R146032", "text": "Digital transformation of existing cities The article focuses on the range of problems arising in the implementation of innovative technologies in the structure of existing cities. The concept of intellectualization of historic cities is offered, as illustrated by Samara, which was chosen in 2018 for the realization of a large Russian project, \u201cSmart City. Successful Region\u201d. One of the problems was to study the experience of designing information hubs in order to determine their priority functional directions. The following typology of information hubs was made: scientific and research ones, scientific and technical ones, innovative and cultural ones, cultural and informational ones, scientific and informational ones, technological ones, centres for data processing, scientific centres with experimental and production laboratories. As a result of the conducted research, a suggestion on the smart city\u2019s infrastructure is developed, and the final levels of innovative technology implementation in the structure of historic territories are determined. A model suggestion on the formation of a scientific and project centre with experimental and production laboratories, branded \u201cPark-plant\u201d, is developed. Smart (as well as real) city technologies, which are supposed to be placed on the territory of \u201cPark-plant\u201d, are systematized.
The organizational structure for the promotion of model projects is offered according to the concept of a \u201ctriad of development agents\u201d, in which the flagship university \u2013 urban community \u2013 park-plant interact within the project programme. The effects of developing the territory of the historic city centre under renovation are enumerated." }, { "instance_id": "R146458xR146112", "comparison_id": "R146458", "paper_id": "R146112", "text": "Industry 4.0 Complemented with EA Approach: A Proposal for Digital Transformation Success Manufacturing industry based on steam, known as Industry 1.0, is evolving into Industry 4.0, a digital ecosystem consisting of an interconnected automated system with real-time data. This paper investigates and proposes how the digital ecosystem, complemented with Enterprise Architecture practice, will ensure the success of digital transformation."
CONCLUSION The web-based surveillance system has been well established within the existing health infrastructure, providing real-time figures on the TB burden. However, it requires continued improvement of the quality of information and of case reporting activities." }, { "instance_id": "R146851xR145085", "comparison_id": "R146851", "paper_id": "R145085", "text": "Developing open source, self-contained disease surveillance software applications for use in resource-limited settings Abstract Background Emerging public health threats often originate in resource-limited countries. In recognition of this fact, the World Health Organization issued revised International Health Regulations in 2005, which call for significantly increased reporting and response capabilities for all signatory nations. Electronic biosurveillance systems can improve the timeliness of public health data collection, aid in the early detection of and response to disease outbreaks, and enhance situational awareness. Methods As components of its Suite for Automated Global bioSurveillance (SAGES) program, The Johns Hopkins University Applied Physics Laboratory developed two open-source, electronic biosurveillance systems for use in resource-limited settings. OpenESSENCE provides web-based data entry, analysis, and reporting. ESSENCE Desktop Edition provides similar capabilities for settings without internet access. Both systems may be configured to collect data using locally available cell phone technologies. Results ESSENCE Desktop Edition has been deployed for two years in the Republic of the Philippines. Local health clinics have rapidly adopted the new technology to provide daily reporting, thus eliminating the two-to-three week data lag of the previous paper-based system. Conclusions OpenESSENCE and ESSENCE Desktop Edition are two open-source software products with the capability of significantly improving disease surveillance in a wide range of resource-limited settings. 
These products, and other emerging surveillance technologies, can assist resource-limited countries in complying with the revised International Health Regulations." }, { "instance_id": "R146851xR146600", "comparison_id": "R146851", "paper_id": "R146600", "text": "Coronavirus disease 2019 (COVID-19) surveillance system: Development of COVID-19 minimum data set and interoperable reporting framework INTRODUCTION: The 2019 coronavirus disease (COVID-19) is a major global health concern. Joint efforts for effective surveillance of COVID-19 require immediate transmission of reliable data. In this regard, a standardized and interoperable reporting framework that works in a consistent and timely manner is essential. Thus, this research aimed to determine data requirements towards interoperability. MATERIALS AND METHODS: In this cross-sectional and descriptive study, a combination of literature study and expert consensus approach was used to design a COVID-19 Minimum Data Set (MDS). An MDS checklist was extracted and validated. The definitive data elements of the MDS were determined by applying the Delphi technique. Then, the existing messaging and data standard templates (Health Level Seven-Clinical Document Architecture [HL7-CDA] and SNOMED-CT) were used to design the surveillance interoperable framework. RESULTS: The proposed MDS was divided into administrative and clinical sections with three and eight data classes and 29 and 40 data fields, respectively. Then, for each data field, structured data values along with SNOMED-CT codes were defined and structured according to the HL7-CDA standard. DISCUSSION AND CONCLUSION: The absence of an effective and integrated system for COVID-19 surveillance can delay critical public health measures, leading to increased disease prevalence and mortality. The heterogeneity of reporting templates and lack of uniform data sets hamper the optimal information exchange among multiple systems. 
Thus, developing a unified and interoperable reporting framework enables a more effective and prompt reaction to the COVID-19 outbreak." }, { "instance_id": "R146851xR145399", "comparison_id": "R146851", "paper_id": "R145399", "text": "Epidemic surveillance in a low resource setting: lessons from an evaluation of the Solomon Islands syndromic surveillance system, 2017 Background: Solomon Islands is one of the least developed countries in the world. Recognising that timely detection of outbreaks is needed to enable early and effective response to disease outbreaks, the Solomon Islands government introduced a simple syndromic surveillance system in 2011. We conducted the first evaluation of the system and the first exploration of a national experience within the broader multi-country Pacific Syndromic Surveillance System to determine if it is meeting its objectives and to identify opportunities for improvement. Methods: We used a multi-method approach involving retrospective data collection and statistical analysis, modelling, qualitative research and observational methods. Results: We found that the system was well accepted, highly relied upon and designed to account for contextual limitations. We found the syndromic algorithm used to identify outbreaks was moderately sensitive, detecting 11.8% (IQR: 6.3\u201325.0%), 21.3% (IQR: 10.3\u201336.8%), 27.5% (IQR: 12.8\u201352.3%) and 40.5% (IQR: 13.5\u201365.7%) of outbreaks that caused small, moderate, large and very large increases in case presentations to health facilities, respectively. The false alert rate was 10.8% (IQR: 4.8\u201324.5%). Rural coverage of the system was poor. 
Limited workforce, surveillance resourcing and other \u2018upstream\u2019 health system factors constrained performance. Conclusions: The system has made a significant contribution to public health security in Solomon Islands, but remains insufficiently sensitive to detect small- to moderate-sized outbreaks and hence should not be relied upon as a stand-alone surveillance strategy. Rather, the system should sit within a complementary suite of early warning surveillance activities including event-based, in-patient- and laboratory-based surveillance methods. Future investments need to find a balance between actions to address the technical and systems issues that constrain performance while maintaining simplicity and hence sustainability." }, { "instance_id": "R146851xR145318", "comparison_id": "R146851", "paper_id": "R145318", "text": "Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE): Overview, Components, and Public Health Applications Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. 
Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. 
Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement\u2013driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design." }, { "instance_id": "R146851xR146321", "comparison_id": "R146851", "paper_id": "R146321", "text": "Introduction of software tools for epidemiological surveillance in infection control in Colombia Introduction: Healthcare-Associated Infections (HAI) are a challenge for patient safety in the hospitals. Infection control committees (ICC) should follow CDC definitions when monitoring HAI. The handmade method of epidemiological surveillance (ES) may affect the sensitivity and specificity of the monitoring system, while electronic surveillance can improve the performance, quality and traceability of recorded information. Objective: To assess the implementation of a strategy for electronic surveillance of HAI, Bacterial Resistance and Antimicrobial Consumption by the ICC of 23 high-complexity clinics and hospitals in Colombia, during the period 2012-2013. Methods: An observational study evaluating the introduction of electronic tools in the ICC was performed; we evaluated the structure and operation of the ICC, the degree of incorporation of the software HAI Solutions and the adherence to record the required information. Results: Thirty-eight percent of hospitals (8/23) had active surveillance strategies with standard criteria of the CDC, and 87% of institutions adhered to the module of identification of cases using the HAI Solutions software. In contrast, compliance with the diligence of the risk factors for device-associated HAIs was 33%. 
Conclusions: The introduction of ES could achieve greater adherence to a model of active surveillance, standardized and prospective, helping to improve the validity and quality of the recorded information." }, { "instance_id": "R146851xR146223", "comparison_id": "R146851", "paper_id": "R146223", "text": "Implementation and evaluation of an automated surveillance system to detect hospital outbreak HighlightsReal\u2010time surveillance system for clusters is useful for infection control programs.Using free WHONET\u2010SaTScan software allows for automation of surveillance.Surveillance system detected clusters of organisms otherwise unbeknownst.System was flexible, timely, acceptable, useful, and sensitive according to the Centers for Disease Control and Prevention's guidelines. Background: The timely identification of a cluster is a critical requirement for infection prevention and control (IPC) departments because these events may represent transmission of pathogens within the health care setting. Given the issues with manual review of hospital infections, a surveillance system to detect clusters in health care settings must use automated data capture, validated statistical methods, and include all significant pathogens, antimicrobial susceptibility patterns, patient care locations, and health care teams. Methods: We describe the use of SaTScan statistical software to identify clusters, WHONET software to manage microbiology laboratory data, and electronic health record data to create a comprehensive outbreak detection system in our hospital. We also evaluated the system using the Centers for Disease Control and Prevention's guidelines. Results: During an 8\u2010month surveillance time period, 168 clusters were detected, 45 of which met criteria for investigation, and 6 were considered transmission events. 
The system was felt to be flexible, timely, accepted by the department and hospital, useful, and sensitive, but it required significant resources and has a low positive predictive value. Conclusions: WHONET\u2010SaTScan is a useful addition to a robust IPC program. Although the resources required were significant, this prospective, real\u2010time cluster detection surveillance system represents an improvement over historical methods. We detected several episodes of transmission which would have eluded us previously, and allowed us to focus infection prevention efforts and improve patient safety." }, { "instance_id": "R146851xR146131", "comparison_id": "R146851", "paper_id": "R146131", "text": "Hospital adoption of automated surveillance technology and the implementation of infection prevention and control programs BACKGROUND This research analyzes the relationship between hospital use of automated surveillance technology (AST) for identification and control of hospital-acquired infections (HAI) and implementation of evidence-based infection control practices. Our hypothesis is that hospitals that use AST have made more progress implementing infection control practices than hospitals that rely on manual surveillance. METHODS A survey of all acute general care hospitals in California was conducted from October 2008 through January 2009. A structured computer-assisted telephone interview was conducted with the quality director of each hospital. The final sample includes 241 general acute care hospitals (response rate, 83%). RESULTS Approximately one third (32.4%) of California's hospitals use AST for monitoring HAI. Adoption of AST is statistically significant and positively associated with the depth of implementation of evidence-based practices for methicillin-resistant Staphylococcus aureus and ventilator-associated pneumonia and adoption of contact precautions and surgical care infection practices. 
Use of AST is also statistically significantly associated with the breadth of hospital implementation of evidence-based practices across all 5 targeted HAI. CONCLUSION Our findings suggest that hospitals using AST can achieve greater depth and breadth in implementing evidence-based infection control practices." }, { "instance_id": "R146851xR145380", "comparison_id": "R146851", "paper_id": "R145380", "text": "Electronic surveillance systems in infection prevention: Organizational support, program characteristics, and user satisfaction BACKGROUND The use of electronic surveillance systems (ESSs) is gradually increasing in infection prevention and control programs. Little is known about the characteristics of hospitals that have an ESS, user satisfaction with ESSs, and organizational support for implementation of ESSs. METHODS A total of 350 acute care hospitals in California were invited to participate in a Web-based survey; 207 hospitals (59%) agreed to participate. The survey included a description of infection prevention and control department staff, where and how they spent their time, a measure of organizational support for infection prevention and control, and reported experience with ESSs. RESULTS Only 23% (44/192) of responding infection prevention and control departments had an ESS. No statistically significant difference was seen in how and where infection preventionists (IPs) who used an ESS and those who did not spend their time. The 2 significant predictors of whether an ESS was present were score on the Organizational Support Scale (odds ratio [OR], 1.10; 95% confidence interval [CI], 1.02-1.18) and hospital bed size (OR, 1.004; 95% CI, 1.00-1.007). Organizational support also was positively correlated with IP satisfaction with the ESS, as measured on the Computer Usability Scale (P = .02). 
CONCLUSION Despite evidence that such systems may improve efficiency of data collection and potentially improve patient outcomes, ESSs remain relatively uncommon in infection prevention and control programs. Based on our findings, organizational support appears to be a major predictor of the presence, use, and satisfaction with ESSs in infection prevention and control programs." }, { "instance_id": "R146851xR146256", "comparison_id": "R146851", "paper_id": "R146256", "text": "Improving national surveillance of Lyme neuroborreliosis in Denmark through electronic reporting of specific antibody index testing from 2010 to 2012 Our aim was to evaluate the results of automated surveillance of Lyme neuroborreliosis (LNB) in Denmark using the national microbiology database (MiBa), and to describe the epidemiology of laboratory-confirmed LNB at a national level. MiBa-based surveillance includes electronic transfer of laboratory results, in contrast to the statutory surveillance based on manually processed notifications. Antibody index (AI) testing is the recommend laboratory test to support the diagnosis of LNB in Denmark. In the period from 2010 to 2012, 217 clinical cases of LNB were notified to the statutory surveillance system, while 533 cases were reported AI positive by the MiBa system. Thirty-five unconfirmed cases (29 AI-negative and 6 not tested) were notified, but not captured by MiBa. Using MiBa, the number of reported cases was increased almost 2.5 times. Furthermore, the reporting was timelier (median lag time: 6 vs 58 days). Average annual incidence of AI-confirmed LNB in Denmark was 3.2/100,000 population and incidences stratified by municipality ranged from none to above 10/100,000. This is the first study reporting nationwide incidence of LNB using objective laboratory criteria. Laboratory-based surveillance with electronic data-transfer was more accurate, complete and timely compared to the surveillance based on manually processed notifications. 
We propose using AI test results for LNB surveillance instead of clinical reporting." }, { "instance_id": "R146851xR146576", "comparison_id": "R146851", "paper_id": "R146576", "text": "Comparative evaluation of three surveillance systems for infectious equine diseases in France and implications for future synergies SUMMARY It is necessary to assess surveillance systems for infectious animal diseases to ensure they meet their objectives and provide high-quality health information. Each system is generally dedicated to one disease and often comprises various components. In many animal industries, several surveillance systems are implemented separately even if they are based on similar components. This lack of synergy may prevent optimal surveillance. The purpose of this study was to assess several surveillance systems within the same industry using the semi-quantitative OASIS method and to compare the results of the assessments in order to propose improvements, including future synergies. We have focused on the surveillance of three major equine diseases in France. We have identified the mutual and specific strengths and weaknesses of each surveillance system. Furthermore, the comparative assessment has highlighted many possible synergies that could improve the effectiveness and efficiency of surveillance as a whole, including the implementation of new joint tools or the pooling of existing teams, tools or skills. Our approach is an original application of the OASIS method, which requires minimal financial resources and is not very time-consuming. Such a comparative evaluation could conceivably be applied to other surveillance systems, other industries and other countries. This approach would be especially relevant to enhance the efficiency of surveillance activities when resources are limited." 
}, { "instance_id": "R146851xR145301", "comparison_id": "R146851", "paper_id": "R145301", "text": "Electronic Surveillance System for Monitoring Surgical Antimicrobial Prophylaxis Objectives. Antimicrobial surgical prophylaxis comprises one third of all antibiotic use in pediatric hospitals and 80% of all antibiotic use in surgery. Previous studies reported that antimicrobial surgical prophylaxis is often inconsistent with recommended guidelines. An electronic surveillance system was developed to measure antimicrobial utilization and to identify opportunities to improve and monitor the administration of antibiotics for surgical prophylaxis. Methods. A retrospective cohort study was conducted on patients with selected inpatient surgical procedures performed from May 1999 to April 2000 at 4 US children\u2019s hospitals. International Classification of Diseases, Ninth Revision surgical procedure codes were divided into clean or unclean categories, and an electronic surveillance system was designed using antibiotic and microbiologic culture utilization data to measure appropriate antimicrobial use associated with the surgical procedure. A medical chart review was conducted to validate the electronic system. Results. Ninety percent of cases were classified properly by the electronic surveillance system as confirmed by medical chart review. Surgical antibiotic prophylaxis was not in accordance with the American Academy of Pediatrics (AAP) guidelines for almost half of all procedures. Prolonged antimicrobial administration in clean surgical procedures was the most frequent deviation from guidelines. Statistical differences between the index hospital and the comparison hospitals reflect both over- and underutilization of surgical prophylaxis with significant opportunity to improve prophylaxis for all hospitals. Conclusions. Antimicrobial surgical prophylaxis at the children\u2019s hospitals studied is not always consistent with published AAP guidelines. 
This electronic surveillance system provides a rapid, reproducible, and validated tool to measure easily the efforts to improve adherence to AAP surgical prophylaxis guidelines." }, { "instance_id": "R146851xR146490", "comparison_id": "R146851", "paper_id": "R146490", "text": "Rapid implementation of mobile technology for real-time epidemiology of COVID-19 Mobile symptom tracking The rapidity with which severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spreads through a population is defying attempts at tracking it, and quantitative polymerase chain reaction testing so far has been too slow for real-time epidemiology. Taking advantage of existing longitudinal health care and research patient cohorts, Drew et al. pushed software updates to participants to encourage reporting of potential coronavirus disease 2019 (COVID-19) symptoms. The authors recruited about 2 million users (including health care workers) to the COVID Symptom Study (previously known as the COVID Symptom Tracker) from across the United Kingdom and the United States. The prevalence of combinations of symptoms (three or more), including fatigue and cough, followed by diarrhea, fever, and/or anosmia, was predictive of a positive test verification for SARS-CoV-2. As exemplified by data from Wales, United Kingdom, mathematical modeling predicted geographical hotspots of incidence 5 to 7 days in advance of official public health reports. Science, this issue p. 1362 A mobile app, the COVID Symptom Study, offers data on risk factors, early symptoms, clinical outcomes, and geographical hotspots. The rapid pace of the coronavirus disease 2019 (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) presents challenges to the robust collection of population-scale data to address this global health crisis. 
We established the COronavirus Pandemic Epidemiology (COPE) Consortium to unite scientists with expertise in big data research and epidemiology to develop the COVID Symptom Study, previously known as the COVID Symptom Tracker, mobile application. This application\u2014which offers data on risk factors, predictive symptoms, clinical outcomes, and geographical hotspots\u2014was launched in the United Kingdom on 24 March 2020 and the United States on 29 March 2020 and has garnered more than 2.8 million users as of 2 May 2020. Our initiative offers a proof of concept for the repurposing of existing approaches to enable rapidly scalable epidemiologic data collection and analysis, which is critical for a data-driven response to this public health challenge." }, { "instance_id": "R146851xR146244", "comparison_id": "R146851", "paper_id": "R146244", "text": "Improvements in Timeliness Resulting from Implementation of Electronic Laboratory Reporting and an Electronic Disease Surveillance System Objectives. Electronic laboratory reporting (ELR) reduces the time between communicable disease diagnosis and case reporting to local health departments (LHDs). However, it also imposes burdens on public health agencies, such as increases in the number of unique and duplicate case reports. We assessed how ELR affects the timeliness and accuracy of case report processing within public health agencies. Methods. Using data from May\u2013August 2010 and January\u2013March 2012, we assessed timeliness by calculating the time between receiving a case at the LHD and reporting the case to the state (first stage of reporting) and between submitting the report to the state and submitting it to the Centers for Disease Control and Prevention (second stage of reporting). We assessed accuracy by calculating the proportion of cases returned to the LHD for changes or additional information. We compared timeliness and accuracy for ELR and non-ELR cases. Results. 
ELR was associated with decreases in case processing time (median = 40 days for ELR cases vs. 52 days for non-ELR cases in 2010; median = 20 days for ELR cases vs. 25 days for non-ELR cases in 2012; both p<0.001). ELR also allowed time to reduce the backlog of unreported cases. Finally, ELR was associated with higher case reporting accuracy (in 2010, 2% of ELR case reports vs. 8% of non-ELR case reports were returned; in 2012, 2% of ELR case reports vs. 6% of non-ELR case reports were returned; both p<0.001). Conclusion. The overall impact of increased ELR is more efficient case processing at both local and state levels." }, { "instance_id": "R147040xR145502", "comparison_id": "R147040", "paper_id": "R145502", "text": "Barcoding of biting midges in the genus Culicoides: a tool for species determination Biting midges of the genus Culicoides (Diptera: Ceratopogonidae) are insect vectors of economically important veterinary diseases such as African horse sickness virus and bluetongue virus. However, the identification of Culicoides based on morphological features is difficult. The sequencing of mitochondrial cytochrome oxidase subunit I (COI), referred to as DNA barcoding, has been proposed as a tool for rapid identification to species. Hence, a study was undertaken to establish DNA barcodes for all morphologically determined Culicoides species in Swedish collections. In total, 237 specimens of Culicoides representing 37 morphologically distinct species were used. The barcoding generated 37 supported clusters, 31 of which were in agreement with the morphological determination. However, two pairs of closely related species could not be separated using the DNA barcode approach. Moreover, Culicoides obsoletus Meigen and Culicoides newsteadi Austen showed relatively deep intraspecific divergence (more than 10 times the average), which led to the creation of two cryptic species within each of C. obsoletus and C. newsteadi. 
The use of COI barcodes as a tool for the species identification of biting midges can differentiate 95% of species studied. Identification of some closely related species should employ a less conserved region, such as a ribosomal internal transcribed spacer." }, { "instance_id": "R147040xR145497", "comparison_id": "R147040", "paper_id": "R145497", "text": "Half of the European fruit fly species barcoded (Diptera, Tephritidae); a feasibility test for molecular identification Abstract A feasibility test of molecular identification of European fruit flies (Diptera: Tephritidae) based on COI barcode sequences has been executed. A dataset containing 555 sequences of 135 ingroup species from three subfamilies and 42 genera and one single outgroup species has been analysed. 73.3% of all included species could be identified based on their COI barcode gene, based on similarity and distances. The low success rate is caused by singletons as well as some problematic groups: several species groups within the genus Terellia and especially the genus Urophora. With slightly more than 100 sequences \u2013 almost 20% of the total \u2013 this genus alone constitutes the larger part of the failure for molecular identification for this dataset. Deleting the singletons and Urophora results in a success-rate of 87.1% of all queries and 93.23% of the not discarded queries as correctly identified. Urophora is of special interest due to its economic importance as beneficial species for weed control, therefore it is desirable to have alternative markers for molecular identification. We demonstrate that the success of DNA barcoding for identification purposes strongly depends on the contents of the database used to BLAST against. Especially the necessity of including multiple specimens per species of geographically distinct populations and different ecologies for the understanding of the intra- versus interspecific variation is demonstrated. 
Furthermore, thresholds and the distinction between true and false positives and negatives should not only be used to increase the reliability of molecular identification but also to point out problematic groups, which should then be flagged in the reference database, suggesting alternative methods for identification." }, { "instance_id": "R147040xR142471", "comparison_id": "R147040", "paper_id": "R142471", "text": "DNA barcoding of Northern Nearctic Muscidae (Diptera) reveals high correspondence between morphological and molecular species limits Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate \u201cspecies\u201d diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). 
In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon." }, { "instance_id": "R147040xR142517", "comparison_id": "R147040", "paper_id": "R142517", "text": "A DNA barcode library for 5,200 German flies and midges (Insecta: Diptera) and its implications for metabarcoding\u2010based biomonitoring This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species\u2010level assignment, so called \u201cdark taxa.\u201d Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. 
Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the \u201ctaxonomic impediment\u201d; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species\u2010rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy." }, { "instance_id": "R147040xR145509", "comparison_id": "R147040", "paper_id": "R145509", "text": "Identifying Canadian mosquito species through DNA barcodes Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617\u2010bp fragment from the 5\u2032 end of the CO1 region. 
Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2\u201317.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0\u20133.9%)." }, { "instance_id": "R147040xR146942", "comparison_id": "R147040", "paper_id": "R146942", "text": "Discrimination of Cricotopus species (Diptera: Chironomidae) by DNA barcoding Abstract Chironomids (Diptera) typically comprise the most abundant group of macroinvertebrates collected in water quality surveys. Species in the genus Cricotopus display a wide range of tolerance for manmade pollutants, making them excellent bioindicators. Unfortunately, the usefulness of Cricotopus is overshadowed by the difficulty of accurately identifying larvae using current morphological keys. Molecular approaches are now being used for identification and taxonomic resolution in many animal taxa. In this study, a sequence-based approach for the mitochondrial gene, cytochrome oxidase I ( COI ), was developed to facilitate identification of Cricotopus species collected from Baltimore area streams. Using unique COI sequence variations, we developed profiles for seven described Cricotopus sp., four described Orthocladius sp., one described Paratrichocladius sp. and one putative species of Cricotopus . In addition to providing an accurate method for identification of Cricotopus , this method will make a useful contribution to the development of keys for Nearctic Cricotopus ." }, { "instance_id": "R147040xR145506", "comparison_id": "R147040", "paper_id": "R145506", "text": "Identification of Nearctic black flies using DNA barcodes (Diptera: Simuliidae) DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. 
In this preliminary study, we tested the efficacy of a 615\u2010bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83\u201315.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0\u20133.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58\u20136.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies." }, { "instance_id": "R147040xR145495", "comparison_id": "R147040", "paper_id": "R145495", "text": "DNA Barcoding for the Identification of Sand Fly Species (Diptera, Psychodidae, Phlebotominae) in Colombia Sand flies include a group of insects that are of medical importance and that vary in geographic distribution, ecology, and pathogen transmission. Approximately 163 species of sand flies have been reported in Colombia. Surveillance of the presence of sand fly species and the actualization of species distribution are important for predicting risks for and monitoring the expansion of diseases which sand flies can transmit. Currently, the identification of phlebotomine sand flies is based on morphological characters. 
However, morphological identification requires considerable skill and taxonomic expertise. In addition, significant morphological similarity between some species, especially among females, may cause difficulties during the identification process. DNA-based approaches have become increasingly useful and promising tools for estimating sand fly diversity and for ensuring the rapid and accurate identification of species. A partial sequence of the mitochondrial cytochrome oxidase gene subunit I (COI) is currently being used to differentiate species in different animal taxa, including insects, and it is referred to as a barcoding sequence. The present study explored the utility of the DNA barcode approach for the identification of phlebotomine sand flies in Colombia. We sequenced 700 bp of the COI gene from 36 species collected from different geographic localities. The COI barcode sequence divergence within a single species was <2% in most cases, whereas this divergence ranged from 9% to 26.6% among different species. These results indicated that the barcoding gene correctly discriminated among the previously morphologically identified species with an efficacy of nearly 100%. Analyses of the generated sequences indicated that the observed species groupings were consistent with the morphological identifications. In conclusion, the barcoding gene was useful for species discrimination in sand flies from Colombia." }, { "instance_id": "R147040xR146646", "comparison_id": "R147040", "paper_id": "R146646", "text": "Comprehensive evaluation of DNA barcoding for the molecular species identification of forensically important Australian Sarcophagidae (Diptera) Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. 
The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 1.12% and 2.81\u201311.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology." }, { "instance_id": "R147040xR146639", "comparison_id": "R147040", "paper_id": "R146639", "text": "DNA barcodes for species delimitation in Chironomidae (Diptera): a case study on the genus Labrundinia Abstract In this study, we analysed the applicability of DNA barcodes for delimitation of 79 specimens of 13 species of nonbiting midges in the subfamily Tanypodinae (Diptera: Chironomidae) from S\u00e3o Paulo State, Brazil. Our results support DNA barcoding as an excellent tool for species identification and for solving taxonomic conflicts in the genus Labrundinia. 
Molecular analysis of cytochrome c oxidase subunit I (COI) gene sequences yielded taxon identification trees, supporting 13 cohesive species clusters, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stage. Additionally, another cluster previously described by means of morphology was linked to molecular markers. We found a distinct barcode gap, and in some species substantial interspecific pairwise divergences (up to 19.3%) were observed, which permitted identification of all analysed species. The results also indicated that barcodes can be used to associate life stages of chironomids since COI was easily amplified and sequenced from different life stages with universal barcode primers." }, { "instance_id": "R147040xR145491", "comparison_id": "R147040", "paper_id": "R145491", "text": "DNA barcoding of tropical black flies (Diptera: Simuliidae) of Thailand The ecological and medical importance of black flies drives the need for rapid and reliable identification of these minute, structurally uniform insects. We assessed the efficiency of DNA barcoding for species identification of tropical black flies. A total of 351 cytochrome c oxidase subunit 1 sequences were obtained from 41 species in six subgenera of the genus Simulium in Thailand. Despite high intraspecific genetic divergence (mean = 2.00%, maximum = 9.27%), DNA barcodes provided 96% correct identification. Barcodes also differentiated cytoforms of selected species complexes, albeit with varying levels of success. Perfect differentiation was achieved for two cytoforms of Simulium feuerborni, and 91% correct identification was obtained for the Simulium angulistylum complex. Low success (33%), however, was obtained for the Simulium siamense complex. The differential efficiency of DNA barcodes to discriminate cytoforms was attributed to different levels of genetic structure and demographic histories of the taxa. 
DNA barcode trees were largely congruent with phylogenies based on previous molecular, chromosomal and morphological analyses, but revealed inconsistencies that will require further evaluation." }, { "instance_id": "R147040xR145296", "comparison_id": "R147040", "paper_id": "R145296", "text": "Molecular identification of mosquitoes (Diptera: Culicidae) in southeastern Australia Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species \u2500 Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes \u2500 were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p\u2010distance was 0.8% and 7.6%, respectively. In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. 
The DNA barcodes produced by this study have been uploaded to the \u2018Mosquitoes of Australia\u2013Victoria\u2019 project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs." }, { "instance_id": "R147040xR142535", "comparison_id": "R147040", "paper_id": "R142535", "text": "DNA Barcodes for the Northern European Tachinid Flies (Diptera: Tachinidae) This data release provides COI barcodes for 366 species of parasitic flies (Diptera: Tachinidae), enabling the DNA based identification of the majority of northern European species and a large proportion of Palearctic genera, regardless of the developmental stage. The data will provide a tool for taxonomists and ecologists studying this ecologically important but challenging parasitoid family. A comparison of minimum distances between the nearest neighbors revealed the mean divergence of 5.52% that is approximately the same as observed earlier with comparable sampling in Lepidoptera, but clearly less than in Coleoptera. Full barcode-sharing was observed between 13 species pairs or triplets, equaling to 7.36% of all species. Delimitation based on Barcode Index Number (BIN) system was compared with traditional classification of species and interesting cases of possible species oversplits and cryptic diversity are discussed. Overall, DNA barcodes are effective in separating tachinid species and provide novel insight into the taxonomy of several genera." }, { "instance_id": "R147040xR145482", "comparison_id": "R147040", "paper_id": "R145482", "text": "DNA barcoding for identification of sand fly species (Diptera: Psychodidae) from leishmaniasis-endemic areas of Peru Phlebotomine sand flies are the only proven vectors of leishmaniases, a group of human and animal diseases. 
Accurate sand fly species identification is essential for understanding the epidemiology of leishmaniasis and for vector control in endemic areas. Classical identification of sand fly species based on morphological characteristics often remains difficult and requires taxonomic expertise. Here, we generated DNA barcodes of the cytochrome c oxidase subunit 1 (COI) gene using 159 adult specimens morphologically identified to be 19 species of sand flies, belonging to 6 subgenera/species groups circulating in Peru, including the vector species. Neighbor-joining (NJ) analysis based on Kimura 2-Parameter genetic distances formed non-overlapping clusters for all species. The levels of intraspecific genetic divergence ranged from 0 to 5.96%, whereas interspecific genetic divergence among different species ranged from 8.39 to 19.08%. The generated COI barcodes could discriminate between all the sand fly taxa. Besides its success in separating known species, we found that DNA barcoding is useful in revealing population differentiation and cryptic diversity, and thus promises to be a valuable tool for epidemiological studies of leishmaniasis." }, { "instance_id": "R147040xR146643", "comparison_id": "R147040", "paper_id": "R146643", "text": "Revision of Nearctic Dasysyrphus Enderlein (Diptera: Syrphidae) Dasysyrphus Enderlein (Diptera: Syrphidae) has posed taxonomic challenges to researchers in the past, primarily due to a lack of interspecific diagnostic characters. In the present study, DNA data (mitochondrial cytochrome c oxidase sub-unit I\u2014COI) were combined with morphology to help delimit species. This led to two species being resurrected from synonymy (D. laticaudus and D. pacificus) and the discovery of one new species (D. occidualis sp. nov.). An additional new species was described based on morphology alone (D. richardi sp. nov.), as the specimens were too old to obtain COI. 
Part of the taxonomic challenge presented by this group arises from missing type specimens. Neotypes are designated here for D. pauxillus and D. pinastri to bring stability to these names. An illustrated key to 13 Nearctic species is presented, along with descriptions, maps and supplementary data. A phylogeny based on COI is also presented and discussed." }, { "instance_id": "R148381xR148267", "comparison_id": "R148381", "paper_id": "R148267", "text": "Enhanced delivery of etoposide across the blood\u2013brain barrier to restrain brain tumor growth using melanotransferrin antibody- and tamoxifen-conjugated solid lipid nanoparticles Abstract Melanotransferrin antibody (MA) and tamoxifen (TX) were conjugated on etoposide (ETP)-entrapped solid lipid nanoparticles (ETP-SLNs) to target the blood\u2013brain barrier (BBB) and glioblastom multiforme (GBM). MA- and TX-conjugated ETP-SLNs (MA\u2013TX\u2013ETP\u2013SLNs) were used to infiltrate the BBB comprising a monolayer of human astrocyte-regulated human brain-microvascular endothelial cells (HBMECs) and to restrain the proliferation of malignant U87MG cells. TX-grafted ETP-SLNs (TX\u2013ETP\u2013SLNs) significantly enhanced the BBB permeability coefficient for ETP and raised the fluorescent intensity of calcein-AM when compared with ETP-SLNs. In addition, surface MA could increase the BBB permeability coefficient for ETP about twofold. The viability of HBMECs was higher than 86%, suggesting a high biocompatibility of MA\u2013TX\u2013ETP-SLNs. Moreover, the efficiency in antiproliferation against U87MG cells was in the order of MA\u2013TX\u2013ETP-SLNs > TX\u2013ETP-SLNs > ETP-SLNs > SLNs. The capability of MA\u2013TX\u2013ETP-SLNs to target HBMECs and U87MG cells during internalization was verified by immunochemical staining of expressed melanotransferrin. MA\u2013TX\u2013ETP-SLNs can be a potent pharmacotherapy to deliver ETP across the BBB to GBM." 
}, { "instance_id": "R148381xR148313", "comparison_id": "R148381", "paper_id": "R148313", "text": "Improved oral bioavailability and brain transport of Saquinavir upon administration in novel nanoemulsion formulations The aim of this investigation was to develop novel oil-in-water (o/w) nanoemulsions containing Saquinavir (SQV), an anti-HIV protease inhibitor, for enhanced oral bioavailability and brain disposition. SQV was dissolved in different types of edible oils rich in essential polyunsaturated fatty acids (PUFA) to constitute the internal oil phase of the nanoemulsions. The external phase consisted of surfactants Lipoid-80 and deoxycholic acid dissolved in water. The nanoemulsions with an average oil droplet size of 100-200 nm, containing tritiated [(3)H]-SQV, were administered orally and intravenously to male Balb/c mice. The SQV bioavailability as well as distribution in different organ systems was examined. SQV concentrations in the systemic circulation administered in flax-seed oil nanoemulsions were threefold higher as compared to the control aqueous suspension. The oral bioavailability and distribution to the brain, a potential sanctuary site for HIV, were significantly enhanced with SQV delivered in nanoemulsion formulations. In comparing SQV in flax-seed oil nanoemulsion with aqueous suspension, the maximum concentration (C(max)) and the area-under-the-curve (AUC) values were found to be five- and threefold higher in the brain, respectively, suggesting enhanced rate and extent of SQV absorption following oral administration of nanoemulsions. The results of this study show that oil-in-water nanoemulsions made with PUFA-rich oils may be very promising for HIV/AIDS therapy, in particular, for reducing the viral load in important anatomical reservoir sites." 
}, { "instance_id": "R148381xR148289", "comparison_id": "R148381", "paper_id": "R148289", "text": "Vincristine and temozolomide combined chemotherapy for the treatment of glioma: a comparison of solid lipid nanoparticles and nanostructured lipid carriers for dual drugs delivery Abstract Context: Glioma is a common malignant brain tumor originating in the central nervous system. Efficient delivery of therapeutic agents to the cells and tissues is a difficult challenge. Co-delivery of anticancer drugs into the cancer cells or tissues by multifunctional nanocarriers may provide a new paradigm in cancer treatment. Objective: In this study, solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) were constructed for co-delivery of vincristine (VCR) and temozolomide (TMZ) to develop the synergetic therapeutic action of the two drugs. The antitumor effects of these two systems were compared to provide a better choice for gliomatosis cerebri treatment. Methods: VCR- and TMZ-loaded SLNs (VT-SLNs) and NLCs (VT-NLCs) were formulated. Their particle size, zeta potential, drug encapsulation efficiency (EE) and drug loading capacity were evaluated. The single TMZ-loaded SLNs and NLCs were also prepared as contrast. Anti-tumor efficacies of the two kinds of carriers were evaluated on U87 malignant glioma cells and mice bearing malignant glioma model. Results: Significantly better glioma inhibition was observed on NLCs formulations than SLNs, and dual drugs displayed the highest antitumor efficacy in vivo and in vitro than all the other formulations used. Conclusion: VT-NLCs can deliver VCR and TMZ into U87MG cells more efficiently, and inhibition efficacy is higher than VT-SLNs. This dual drugs-loaded NLCs could be an outstanding drug delivery system to achieve excellent therapeutic efficiency for the treatment of malignant gliomatosis cerebri." 
}, { "instance_id": "R148381xR148280", "comparison_id": "R148381", "paper_id": "R148280", "text": "Lactoferrin bioconjugated solid lipid nanoparticles: a new drug delivery system for potential brain targeting Abstract Background: Delivery of drugs to brain is a subtle task in the therapy of many severe neurological disorders. Solid lipid nanoparticles (SLN) easily diffuse the blood\u2013brain barrier (BBB) due to their lipophilic nature. Furthermore, ligand conjugation on SLN surface enhances the targeting efficiency. Lactoferin (Lf) conjugated SLN system is first time attempted for effective brain targeting in this study. Purpose: Preparation of Lf-modified docetaxel (DTX)-loaded SLN for proficient delivery of DTX to brain. Methods: DTX-loaded SLN were prepared using emulsification and solvent evaporation method and conjugation of Lf on SLN surface (C-SLN) was attained through carbodiimide chemistry. These lipidic nanoparticles were evaluated by DLS, AFM, FTIR, XRD techniques and in vitro release studies. Colloidal stability study was performed in biologically simulated environment (normal saline and serum). These lipidic nanoparticles were further evaluated for its targeting mechanism for uptake in brain tumour cells and brain via receptor saturation studies and distribution studies in brain, respectively. Results: Particle size of lipidic nanoparticles was found to be optimum. Surface morphology (zeta potential, AFM) and surface chemistry (FTIR) confirmed conjugation of Lf on SLN surface. Cytotoxicity studies revealed augmented apoptotic activity of C-SLN than SLN and DTX. Enhanced cytotoxicity was demonstrated by receptor saturation and uptake studies. Brain concentration of DTX was elevated significantly with C-SLN than marketed formulation. 
Conclusions: It is evident from the cytotoxicity and uptake studies that SLN has greater potential to deliver drug to the brain than the marketed formulation, and conjugating Lf on the SLN surface (C-SLN) further increased the targeting potential for brain tumour. Moreover, brain distribution studies corroborated the use of C-SLN as a viable vehicle to target drug to the brain. Hence, C-SLN was demonstrated to be a promising DTX delivery system to the brain, as it possessed remarkable biocompatibility and stability and better efficacy than other reported delivery systems." }, { "instance_id": "R148381xR147240", "comparison_id": "R148381", "paper_id": "R147240", "text": "Liposome-based glioma targeted drug delivery enabled by stable peptide ligands The treatment of glioma is one of the most challenging tasks in the clinic. As an intracranial tumor, glioma exhibits many characteristics distinct from other tumors. In particular, various barriers including enzymatic barriers in the blood and brain capillary endothelial cells, blood-brain barrier (BBB) and blood-brain tumor barrier (BBTB) rigorously prevent drug and drug delivery systems from reaching the tumor site. To tackle this dilemma, we developed a liposomal formulation to circumvent multiple barriers by modifying the liposome surface with proteolytically stable peptides, (D)CDX and c(RGDyK). (D)CDX is a D-peptide ligand of nicotine acetylcholine receptors (nAChRs) on the BBB, and c(RGDyK) is a ligand of integrin highly expressed on the BBTB and glioma cells. Lysosomal compartments of brain capillary endothelial cells are implicated in the transcytosis of those liposomes. However, both peptide ligands displayed exceptional stability in lysosomal homogenate, ensuring that intact ligands could exert subsequent exocytosis from brain capillary endothelial cells and glioma targeting. 
In the cellular uptake studies, dually labeled liposomes could target both brain capillary endothelial cells and tumor cells, effectively traversing the BBB and BBTB monolayers, overcoming enzymatic barrier and targeting three-dimensional tumor spheroids. Its targeting ability to intracranial glioma was further verified in vivo by ex vivo imaging and histological studies. As a result, doxorubicin liposomes modified with both (D)CDX and c(RGDyK) presented better anti-glioma effect with prolonged median survival of nude mice bearing glioma than did unmodified liposomes and liposomes modified with individual peptide ligand. In conclusion, the liposome suggested in the present study could effectively overcome multi-barriers and accomplish glioma targeted drug delivery, validating its potential value in improving the therapeutic efficacy of doxorubicin for glioma." }, { "instance_id": "R148381xR148275", "comparison_id": "R148381", "paper_id": "R148275", "text": "Galantamine-loaded solid\u2013lipid nanoparticles for enhanced brain delivery: preparation, characterization, in vitro and in vivo evaluations Abstract Galantamine hydrobromide, a promising acetylcholinesterase inhibitor is reported to be associated with cholinergic side effects. Its poor brain penetration results in lower bioavailability to the target site. With an aim to overcome these limitations, solid\u2013lipid nanoparticulate formulation of galantamine hydrobromide was developed employing biodegradable and biocompatible components. The selected galantamine hydrobromide-loaded solid\u2013lipid nanoparticles offered nanocolloidal with size lower than 100 nm and maximum drug entrapment 83.42 \u00b1 0.63%. In vitro drug release from these spherical drug-loaded nanoparticles was observed to be greater than 90% for a period of 24 h in controlled manner. In vivo evaluations demonstrated significant memory restoration capability in cognitive deficit rats in comparison with naive drug. 
The developed carriers offered approximately twice the bioavailability of the plain drug. Hence, the galantamine hydrobromide-loaded solid\u2013lipid nanoparticles can be a promising vehicle for safe and effective delivery, especially in diseases like Alzheimer\u2019s." }, { "instance_id": "R148574xR148398", "comparison_id": "R148574", "paper_id": "R148398", "text": "Topical delivery of 5-aminolevulinic acid-encapsulated ethosomes in a hyperproliferative skin animal model using the CLSM technique to evaluate the penetration behavior Psoriasis, an inflammatory skin disease, exhibits recurring itching, soreness, and cracked and bleeding skin. Currently, the topical delivery of 5-aminolevulinic acid-photodynamic therapy (ALA-PDT) is an optional treatment for psoriasis which provides long-term therapeutic effects, is non-toxic and enjoys better compliance with patients. However, the precursor of ALA is hydrophilic, and thus its ability to penetrate the skin is limited. Also, little research has provided a platform to investigate the penetration behavior in disordered skin. We employed a highly potent ethosomal carrier (phosphatidylethanolamine; PE) to investigate the penetration behavior of ALA and the recovery of skin in a hyperproliferative murine model. We found that the application of ethosomes produced a significant (5-26-fold) increase in cumulative amounts in normal and hyperproliferative murine skin samples when compared to an ALA aqueous solution; and the ALA aqueous solution appeared less precise in terms of the penetration mode in hyperproliferative murine skin. After the ethosomes had been applied, the protoporphyrin IX (PpIX) intensity increased about 3.64-fold compared with that of the ALA aqueous solution, and the penetration depth reached 30-80 \u03bcm. 
The results demonstrated that the ethosomal carrier significantly improved the delivery of ALA and the formation of PpIX in both normal and hyperproliferative murine skin samples, and the expression level of tumor necrosis factor (TNF)-alpha was reduced after the ALA-ethosomes were applied to treat hyperproliferative murine skin. Furthermore, the results of present study encourage more investigations on the mechanism of the interaction with ethosomes and hyperproliferative murine skin." }, { "instance_id": "R148574xR148407", "comparison_id": "R148574", "paper_id": "R148407", "text": "Synergistic penetration enhancement effect of ethanol and phospholipids on the topical delivery of cyclosporin A In the present study, ethanol was used with a commercially available lipid mixture, NAT 8539, to improve the topical delivery of cyclosporin A (CyA). The vesicles formed from this solution ranged from 56.6 to 100.6 nm in diameter, depending on the amount of ethanol added in the formulation. In-vitro skin penetration studies were carried out with Franz diffusion cell using human abdominal skin. There was a decrease in average size of vesicles, as the amount of ethanol in formulation increased from 0% to 3.3% and a further addition of ethanol resulted in an increase in average diameter of vesicles. CyA vesicles containing 10% and 20% ethanol showed statistically enhanced deposition of CyA into the stratum corneum (SC), as compared to vesicles prepared without ethanol. CyA vesicles prepared with NAT 8539/ethanol (10/3.3) showed a 2.1-fold, CyA vesicles with NAT 8539/ethanol (10/10) showed a 4.4-fold, and CyA vesicles with NAT 8539/ethanol (10/20) showed a 2.2-fold higher deposition of CyA into SC, as compared to vesicles made of NAT 8539 without ethanol [NAT 8539/ethanol (10/0)]. The efficiency of the formulations was sequenced in the order of: NAT 8539/ethanol (10/10)>NAT 8539/ethanol (10/20)>NAT 8539/ethanol (10/3.3)>ethanol>NAT 8539/ethanol (10/0). 
These results can be considered a step forward for the topical delivery of problematic molecules like CyA using liposomes as a tool for the treatment of inflammatory skin diseases like psoriasis, atopic dermatitis, and diseases of the hair follicle like alopecia areata, etc." }, { "instance_id": "R148574xR148531", "comparison_id": "R148574", "paper_id": "R148531", "text": "Development of a new topical system: Drug-in-cyclodextrin-in-deformable liposome A new delivery system for cutaneous administration combining the advantages of cyclodextrin inclusion complexes and those of deformable liposomes was developed, leading to a new concept: drug-in-cyclodextrin-in-deformable liposomes. Deformable liposomes made of soybean phosphatidylcholine (PC) or dimyristoylphosphatidylcholine (DMPC) and sodium deoxycholate as edge activator were compared to classical non-deformable liposomes. Liposomes were prepared by the film evaporation method. Betamethasone, chosen as the model drug, was encapsulated in the aqueous cavity of liposomes by the use of cyclodextrins. Cyclodextrins allow an increase in the aqueous solubility of betamethasone and thus, the encapsulation efficiency in liposome vesicles. Liposome size, deformability and encapsulation efficiency were calculated. The best results were obtained with deformable liposomes made of PC in comparison with DMPC. The stability of PC vesicles was evaluated by measuring the leakage of encapsulated calcein on the one hand and the leakage of encapsulated betamethasone on the other hand. In vitro diffusion studies were carried out on Franz type diffusion cells through polycarbonate membranes. In comparison with non-deformable liposomes, these new vesicles showed improved encapsulation efficiency, good stability and higher in vitro diffusion percentages of encapsulated drug. They are therefore promising for future use in ex vivo and in vivo experiments." 
}, { "instance_id": "R148574xR148525", "comparison_id": "R148574", "paper_id": "R148525", "text": "Tacrolimus-loaded ethosomes: Physicochemical characterization and in vivo evaluation The purpose of this work was to prepare and characterize a novel ethosomal carrier for tacrolimus, an immunosuppressant treating atopic dermatitis (AD), and to investigate its inhibitory action on allergic reactions in mice, with the aim of improving the pharmacological effect of tacrolimus, given that the commercial tacrolimus ointment (Protopic\u00ae), with its poor penetration capability, exhibits a weak impact on AD compared with common glucocorticoids. Results indicated that the ethosomes showed lower vesicle size and higher encapsulation efficiency (EE) as compared with traditional liposomes with cholesterol. In addition, the quantity of tacrolimus remaining in the epidermis at the end of the 24-h experiment was statistically significantly greater from the ethosomal delivery system than from the commercial ointment (Protopic\u00ae) (p<0.01), suggesting the greater penetration ability of ethosomes to the deep strata of the skin. Interestingly, tacrolimus-loaded ethosomes with ethanol, in contrast to those with propylene glycol, showed relatively higher penetration activity despite insignificant differences in EE and polydispersity index. Topical application of ethosomal tacrolimus displayed the lowest ear swelling in a BALB/c mouse model induced by repeated topical application of 2,4-dinitrofluorobenzene compared to traditional liposomes and commercial ointment, and effectively impeded accumulation of mast cells in the ear of the mice, suggesting efficient suppression of the allergic reactions. In conclusion, the ethosomal tacrolimus delivery systems may be a promising candidate for topical delivery of tacrolimus in the treatment of AD." 
}, { "instance_id": "R149847xR145482", "comparison_id": "R149847", "paper_id": "R145482", "text": "DNA barcoding for identification of sand fly species (Diptera: Psychodidae) from leishmaniasis-endemic areas of Peru Phlebotomine sand flies are the only proven vectors of leishmaniases, a group of human and animal diseases. Accurate knowledge of sand fly species identification is essential in understanding the epidemiology of leishmaniasis and vector control in endemic areas. Classical identification of sand fly species based on morphological characteristics often remains difficult and requires taxonomic expertise. Here, we generated DNA barcodes of the cytochrome c oxidase subunit 1 (COI) gene using 159 adult specimens morphologically identified to be 19 species of sand flies, belonging to 6 subgenera/species groups circulating in Peru, including the vector species. Neighbor-joining (NJ) analysis based on Kimura 2-Parameter genetic distances formed non-overlapping clusters for all species. The levels of intraspecific genetic divergence ranged from 0 to 5.96%, whereas interspecific genetic divergence among different species ranged from 8.39 to 19.08%. The generated COI barcodes could discriminate between all the sand fly taxa. Besides its success in separating known species, we found that DNA barcoding is useful in revealing population differentiation and cryptic diversity, and thus promises to be a valuable tool for epidemiological studies of leishmaniasis." }, { "instance_id": "R149847xR146639", "comparison_id": "R149847", "paper_id": "R146639", "text": "DNA barcodes for species delimitation in Chironomidae (Diptera): a case study on the genus Labrundinia Abstract In this study, we analysed the applicability of DNA barcodes for delimitation of 79 specimens of 13 species of nonbiting midges in the subfamily Tanypodinae (Diptera: Chironomidae) from S\u00e3o Paulo State, Brazil. 
Our results support DNA barcoding as an excellent tool for species identification and for solving taxonomic conflicts in genus Labrundinia. Molecular analysis of cytochrome c oxidase subunit I (COI) gene sequences yielded taxon identification trees, supporting 13 cohesive species clusters, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stage. Additionally, another cluster previously described by means of morphology was linked to molecular markers. We found a distinct barcode gap, and in some species substantial interspecific pairwise divergences (up to 19.3%) were observed, which permitted identification of all analysed species. The results also indicated that barcodes can be used to associate life stages of chironomids since COI was easily amplified and sequenced from different life stages with universal barcode primers." }, { "instance_id": "R149847xR140197", "comparison_id": "R149847", "paper_id": "R140197", "text": "DNA barcodes distinguish species of tropical Lepidoptera Although central to much biological research, the identification of species is often difficult. The use of DNA barcodes, short DNA sequences from a standardized region of the genome, has recently been proposed as a tool to facilitate species identification and discovery. However, the effectiveness of DNA barcoding for identifying specimens in species-rich tropical biotas is unknown. Here we show that cytochrome c oxidase I DNA barcodes effectively discriminate among species in three Lepidoptera families from Area de Conservaci\u00f3n Guanacaste in northwestern Costa Rica. We found that 97.9% of the 521 species recognized by prior taxonomic work possess distinctive cytochrome c oxidase I barcodes and that the few instances of interspecific sequence overlap involve very similar species. We also found two or more barcode clusters within each of 13 supposedly single species. 
Covariation between these clusters and morphological and/or ecological traits indicates overlooked species complexes. If these results are general, DNA barcoding will significantly aid species identification and discovery in tropical settings." }, { "instance_id": "R149847xR140263", "comparison_id": "R149847", "paper_id": "R140263", "text": "DNA Barcoding of an Assembly of Montane Andean Butterflies (Satyrinae): Geographical Scale and Identification Performance DNA barcoding is a technique used primarily for the documentation and identification of biological diversity based on mitochondrial DNA sequences. Butterflies have received particular attention in DNA barcoding studies, although varied performance may be obtained due to different scales of geographic sampling and speciation processes in various groups. The montane Andean Satyrinae constitutes a challenging study group for taxonomy. The group displays high richness, with more of 550 species, and remarkable morphological similarity among taxa, which renders their identification difficult. In the present study, we evaluated the effectiveness of DNA barcodes in the identification of montane Andean satyrines and the effect of increased geographical scale of sampling on identification performance. Mitochondrial sequences were obtained from 104 specimens of 39 species and 16 genera, collected in a forest remnant in the northwest Andes. DNA barcoding has proved to be a useful tool for the identification of the specimens, with a well-defined gap and producing clusters with unambiguous identifications for all the morphospecies in the study area. The expansion of the geographical scale with published data increased genetic distances within species and reduced those among species, but did not generally reduce the success of specimen identification. Only in Forsterinaria rustica (Butler, 1868), a taxon with high intraspecific variation, the barcode gap was lost and low support for monophyly was obtained. 
Likewise, expanded sampling resulted in a substantial increase in the intraspecific distance in Morpho sulkowskyi (Kollar, 1850); Panyapedaliodes drymaea (Hewitson, 1858); Lymanopoda obsoleta (Westwood, 1851); and Lymanopoda labda Hewitson, 1861; but for these species, the barcode gap was maintained. These divergent lineages are nonetheless worth a detailed study of external and genitalic morphology variation, as well as ecological features, in order to determine the potential existence of cryptic species. Even including these cases, DNA barcoding performance in specimen identification was 100% successful based on monophyly, an unexpected result in such a taxonomically complicated group." }, { "instance_id": "R149847xR138562", "comparison_id": "R149847", "paper_id": "R138562", "text": "Fast Census of Moth Diversity in the Neotropics: A Comparison of Field-Assigned Morphospecies and DNA Barcoding in Tiger Moths The morphological species delimitations (i.e. morphospecies) have long been the best way to avoid the taxonomic impediment and compare insect taxa biodiversity in highly diverse tropical and subtropical regions. The development of DNA barcoding, however, has shown great potential to replace (or at least complement) the morphospecies approach, with the advantage of relying on automated methods implemented in computer programs or even online rather than in often subjective morphological features. We sampled moths extensively for two years using light traps in a patch of the highly endangered Atlantic Forest of Brazil to produce a nearly complete census of arctiines (Noctuoidea: Erebidae), whose species richness was compared using different morphological and molecular approaches (DNA barcoding). A total of 1,075 barcode sequences of 286 morphospecies were analyzed. Based on the clustering method Barcode Index Number (BIN) we found a taxonomic bias of approximately 30% in our initial morphological assessment. 
However, a morphological reassessment revealed that the correspondence between morphospecies and molecular operational taxonomic units (MOTUs) can be up to 94% if differences in genitalia morphology are evaluated in individuals of different MOTUs originated from the same morphospecies (putative cases of cryptic species), and by recording if individuals of different genders in different morphospecies merge together in the same MOTU (putative cases of sexual dimorphism). The results of two other clustering methods (i.e. Automatic Barcode Gap Discovery and 2% threshold) were very similar to those of the BIN approach. Using empirical data we have shown that DNA barcoding performed substantially better than the morphospecies approach, based on superficial morphology, to delimit species of a highly diverse moth taxon, and thus should be used in species inventories." }, { "instance_id": "R149847xR145468", "comparison_id": "R149847", "paper_id": "R145468", "text": "DNA barcoding of Neotropical black flies (Diptera: Simuliidae): Species identification and discovery of cryptic diversity in Mesoamerica Although correct taxonomy is paramount for disease control programs and epidemiological studies, morphology-based taxonomy of black flies is extremely difficult. In the present study, the utility of a partial sequence of the COI gene, the DNA barcoding region, for the identification of species of black flies from Mesoamerica was assessed. A total of 32 morphospecies were analyzed, one belonging to the genus Gigantodax and 31 species to the genus Simulium and six of its subgenera (Aspathia, Eusimulium, Notolepria, Psaroniocompsa, Psilopelmia, Trichodagmia). The Neighbour Joining tree (NJ) derived from the DNA barcodes grouped most specimens according to species or species groups recognized by morphotaxonomic studies. 
Intraspecific sequence divergences within morphologically distinct species ranged from 0.07% to 1.65%, while higher divergences (2.05%-6.13%) in species complexes suggested the presence of cryptic diversity. The existence of well-defined groups within S. callidum (Dyar & Shannon), S. quadrivittatum Loew, and S. samboni Jennings revealed the likely inclusion of cryptic species within these taxa. In addition, the suspected presence of sibling species within S. paynei Vargas and S. tarsatum Macquart was supported. DNA barcodes also showed that specimens of species that are difficult to delimit morphologically such as S. callidum, S. pseudocallidum D\u00edaz N\u00e1jera, S. travisi Vargas, Vargas & Ram\u00edrez-P\u00e9rez, relatives of the species complexes such as S. metallicum Bellardi s.l. (e.g., S. horacioi Okazawa & Onishi, S. jobbinsi Vargas, Mart\u00ednez Palacios, D\u00edaz N\u00e1jera, and S. puigi Vargas, Mart\u00ednez Palacios & D\u00edaz N\u00e1jera), and S. virgatum Coquillett complex (e.g., S. paynei and S. tarsatum) grouped together in the NJ analysis, suggesting they represent valid species. DNA barcoding combined with a sound morphotaxonomic framework provided an effective approach for the identification of medically important black flies species in Mesoamerica and for the discovery of hidden diversity within this group." }, { "instance_id": "R149847xR108960", "comparison_id": "R149847", "paper_id": "R108960", "text": "Use of species delimitation approaches to tackle the cryptic diversity of an assemblage of high Andean butterflies (Lepidoptera: Papilionoidea) Cryptic biological diversity has generated ambiguity in taxonomic and evolutionary studies. Single-locus methods and other approaches for species delimitation are useful for addressing this challenge, enabling the practical processing of large numbers of samples for identification and inventory purposes. 
This study analyzed an assemblage of high Andean butterflies using DNA barcoding and compared the identifications based on the current morphological taxonomy with three methods of species delimitation (automatic barcode gap discovery, generalized mixed Yule coalescent model, and Poisson tree processes). Sixteen potential cryptic species were recognized using these three methods, representing a net richness increase of 11.3% in the assemblage. A well-studied taxon of the genus Vanessa, which has a wide geographical distribution, appeared with the potential cryptic species that had a higher genetic differentiation at the local level than at the continental level. The analyses were useful for identifying the potential cryptic species in Pedaliodes and Forsterinaria complexes, which also show differentiation along altitudinal and latitudinal gradients. This genetic assessment of an entire assemblage of high Andean butterflies (Papilionoidea) provides baseline information for future research in a region characterized by high rates of endemism and population isolation." }, { "instance_id": "R149849xR142517", "comparison_id": "R149849", "paper_id": "R142517", "text": "A DNA barcode library for 5,200 German flies and midges (Insecta: Diptera) and its implications for metabarcoding\u2010based biomonitoring This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species\u2010level assignment, so called \u201cdark taxa.\u201d Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. 
By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the \u201ctaxonomic impediment\u201d; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species\u2010rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy." }, { "instance_id": "R149849xR109043", "comparison_id": "R149849", "paper_id": "R109043", "text": "A DNA barcode library for the butterflies of North America Although the butterflies of North America have received considerable taxonomic attention, overlooked species and instances of hybridization continue to be revealed. The present study assembles a DNA barcode reference library for this fauna to identify groups whose patterns of sequence variation suggest the need for further taxonomic study. Based on 14,626 records from 814 species, DNA barcodes were obtained for 96% of the fauna. 
The maximum intraspecific distance averaged 1/4 the minimum distance to the nearest neighbor, producing a barcode gap in 76% of the species. Most species (80%) were monophyletic, the others were para- or polyphyletic. Although 15% of currently recognized species shared barcodes, the incidence of such taxa was far higher in regions exposed to Pleistocene glaciations than in those that were ice-free. Nearly 10% of species displayed high intraspecific variation (>2.5%), suggesting the need for further investigation to assess potential cryptic diversity. Aside from aiding the identification of all life stages of North American butterflies, the reference library has provided new perspectives on the incidence of both cryptic and potentially over-split species, setting the stage for future studies that can further explore the evolutionary dynamics of this group." }, { "instance_id": "R149849xR138551", "comparison_id": "R149849", "paper_id": "R138551", "text": "Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. 
This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences." }, { "instance_id": "R149849xR142471", "comparison_id": "R149849", "paper_id": "R142471", "text": "DNA barcoding of Northern Nearctic Muscidae (Diptera) reveals high correspondence between morphological and molecular species limits Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate \u201cspecies\u201d diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). 
In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon." }, { "instance_id": "R149849xR139538", "comparison_id": "R149849", "paper_id": "R139538", "text": "High resolution DNA barcode library for European butterflies reveals continental patterns of mitochondrial genetic diversity Abstract The study of global biodiversity will greatly benefit from access to comprehensive DNA barcode libraries at continental scale, but such datasets are still very rare. Here, we assemble the first high-resolution reference library for European butterflies that provides 97% taxon coverage (459 species) and 22,306 COI sequences. We estimate that we captured 62% of the total haplotype diversity and show that most species possess a few very common haplotypes and many rare ones. Specimens in the dataset have an average 95.3% probability of being correctly identified. 
Mitochondrial diversity displayed elevated haplotype richness in southern European refugia, establishing the generality of this key biogeographic pattern for an entire taxonomic group. Fifteen percent of the species are involved in barcode sharing, but two thirds of these cases may reflect the need for further taxonomic research. This dataset provides a unique resource for conservation and for studying evolutionary processes, cryptic species, phylogeography, and ecology." }, { "instance_id": "R149849xR140197", "comparison_id": "R149849", "paper_id": "R140197", "text": "DNA barcodes distinguish species of tropical Lepidoptera Although central to much biological research, the identification of species is often difficult. The use of DNA barcodes, short DNA sequences from a standardized region of the genome, has recently been proposed as a tool to facilitate species identification and discovery. However, the effectiveness of DNA barcoding for identifying specimens in species-rich tropical biotas is unknown. Here we show that cytochrome c oxidase I DNA barcodes effectively discriminate among species in three Lepidoptera families from Area de Conservaci\u00f3n Guanacaste in northwestern Costa Rica. We found that 97.9% of the 521 species recognized by prior taxonomic work possess distinctive cytochrome c oxidase I barcodes and that the few instances of interspecific sequence overlap involve very similar species. We also found two or more barcode clusters within each of 13 supposedly single species. Covariation between these clusters and morphological and/or ecological traits indicates overlooked species complexes. If these results are general, DNA barcoding will significantly aid species identification and discovery in tropical settings." 
}, { "instance_id": "R149849xR145304", "comparison_id": "R149849", "paper_id": "R145304", "text": "Analyzing Mosquito (Diptera: Culicidae) Diversity in Pakistan by DNA Barcoding Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010\u20132013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0\u20132.4%, while congeneric species showed from 2.3\u201317.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. 
Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations." }, { "instance_id": "R150058xR147085", "comparison_id": "R150058", "paper_id": "R147085", "text": "Pattern-based Acquisition of Scientific Entities from Scholarly Article Titles We describe a rule-based approach for the automatic acquisition of salient scientific entities from Computational Linguistics (CL) scholarly article titles. Two observations motivated the approach: (i) noting salient aspects of an article\u2019s contribution in its title; and (ii) pattern regularities capturing the salient terms that could be expressed in a set of rules. Only those lexico-syntactic patterns were selected that were easily recognizable, occurred frequently, and positionally indicated a scientific entity type. The rules were developed on a collection of 50,237 CL titles covering all articles in the ACL Anthology. In total, 19,799 research problems, 18,111 solutions, 20,033 resources, 1,059 languages, 6,878 tools, and 21,687 methods were extracted at an average precision of 75%." }, { "instance_id": "R150058xR146872", "comparison_id": "R150058", "paper_id": "R146872", "text": "Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. 
Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain." }, { "instance_id": "R150058xR69291", "comparison_id": "R150058", "paper_id": "R69291", "text": "The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978\u20132006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task." }, { "instance_id": "R150058xR146081", "comparison_id": "R150058", "paper_id": "R146081", "text": "Analyzing the Dynamics of Research by Extracting Key Aspects of Scientific Papers We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. 
We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article\u2019s" }, { "instance_id": "R150058xR69282", "comparison_id": "R150058", "paper_id": "R69282", "text": "SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities." }, { "instance_id": "R150058xR145757", "comparison_id": "R150058", "paper_id": "R145757", "text": "SemEval-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios." 
}, { "instance_id": "R150058xR69288", "comparison_id": "R150058", "paper_id": "R69288", "text": "Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature." }, { "instance_id": "R150058xR146716", "comparison_id": "R150058", "paper_id": "R146716", "text": "AI-KG: An Automatically Generated Knowledge Graph of Artificial Intelligence Scientific knowledge has been traditionally disseminated and preserved through research articles published in journals, conference proceedings, and online archives. However, this article-centric paradigm has been often criticized for not allowing to automatically process, categorize, and reason on this knowledge. An alternative vision is to generate a semantically rich and interlinked description of the content of research publications. In this paper, we present the Artificial Intelligence Knowledge Graph (AI-KG), a large-scale automatically generated knowledge graph that describes 820K research entities. AI-KG includes about 14M RDF triples and 1.2M reified statements extracted from 333K research publications in the field of AI, and describes 5 types of entities (tasks, methods, metrics, materials, others) linked by 27 relations. 
AI-KG has been designed to support a variety of intelligent services for analyzing and making sense of research dynamics, supporting researchers in their daily job, and helping to inform decision-making in funding bodies and research policymakers. AI-KG has been generated by applying an automatic pipeline that extracts entities and relationships using three tools: DyGIE++, Stanford CoreNLP, and the CSO Classifier. It then integrates and filters the resulting triples using a combination of deep learning and semantic technologies in order to produce a high-quality knowledge graph. This pipeline was evaluated on a manually crafted gold standard, yielding competitive results. AI-KG is available under CC BY 4.0 and can be downloaded as a dump or queried via a SPARQL endpoint." }, { "instance_id": "R150058xR149709", "comparison_id": "R150058", "paper_id": "R149709", "text": "Automated Mining of Leaderboards for Empirical AI Research With the rapid growth of research publications, empowering scientists to keep oversight over the scientific progress is of paramount importance. In this regard, the Leaderboards facet of information organization provides an overview on the state-of-the-art by aggregating empirical results from various studies addressing the same research challenge. Crowdsourcing efforts like PapersWithCode among others are devoted to the construction of Leaderboards predominantly for various subdomains in Artificial Intelligence. Leaderboards provide machine-readable scholarly knowledge that has proven to be directly useful for scientists to keep track of research progress. The construction of Leaderboards could be greatly expedited with automated text mining. This study presents a comprehensive approach for generating Leaderboards for knowledge-graph-based scholarly information organization. Specifically, we investigate the problem of automated Leaderboard construction using state-of-the-art transformer models, viz. Bert, SciBert, and XLNet. 
Our analysis reveals an optimal approach that significantly outperforms existing baselines for the task with evaluation scores above 90% in F1. This, in turn, offers new state-of-the-art results for Leaderboard extraction. As a result, a vast share of empirical AI research can be organized in the next-generation digital libraries as knowledge graphs." }, { "instance_id": "R150058xR146853", "comparison_id": "R150058", "paper_id": "R146853", "text": "SciREX: A Challenge Dataset for Document-Level Information Extraction Extracting information from full documents is an important problem in many domains, but most previous work focuses on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX ." 
}, { "instance_id": "R150570xR148073", "comparison_id": "R150570", "paper_id": "R148073", "text": "BioInfer: a corpus for information extraction in the biomedical domain BackgroundLately, there has been a great interest in the application of information extraction methods to the biomedical domain, in particular, to the extraction of relationships of genes, proteins, and RNA from scientific publications. The development and evaluation of such methods requires annotated domain corpora.ResultsWe present BioInfer (Bio Information Extraction Resource), a new public resource providing an annotated corpus of biomedical English. We describe an annotation scheme capturing named entities and their relationships along with a dependency analysis of sentence syntax. We further present ontologies defining the types of entities and relationships annotated in the corpus. Currently, the corpus contains 1100 sentences from abstracts of biomedical research articles annotated for relationships, named entities, as well as syntactic dependencies. Supporting software is provided with the corpus. The corpus is unique in the domain in combining these annotation types for a single set of sentences, and in the level of detail of the relationship annotation.ConclusionWe introduce a corpus targeted at protein, gene, and RNA relationships which serves as a resource for the development of information extraction systems and their components such as parsers and domain analyzers. The corpus will be maintained and further developed with a current version being available at http://www.it.utu.fi/BioInfer." 
}, { "instance_id": "R150570xR147995", "comparison_id": "R150570", "paper_id": "R147995", "text": "Concept annotation in the CRAFT corpus BackgroundManually annotated corpora are critical for the training and evaluation of automated methods to identify concepts in biomedical text.ResultsThis paper presents the concept annotations of the Colorado Richly Annotated Full-Text (CRAFT) Corpus, a collection of 97 full-length, open-access biomedical journal articles that have been annotated both semantically and syntactically to serve as a research resource for the biomedical natural-language-processing (NLP) community. CRAFT identifies all mentions of nearly all concepts from nine prominent biomedical ontologies and terminologies: the Cell Type Ontology, the Chemical Entities of Biological Interest ontology, the NCBI Taxonomy, the Protein Ontology, the Sequence Ontology, the entries of the Entrez Gene database, and the three subontologies of the Gene Ontology. The first public release includes the annotations for 67 of the 97 articles, reserving two sets of 15 articles for future text-mining competitions (after which these too will be released). Concept annotations were created based on a single set of guidelines, which has enabled us to achieve consistently high interannotator agreement.ConclusionsAs the initial 67-article release contains more than 560,000 tokens (and the full set more than 790,000 tokens), our corpus is among the largest gold-standard annotated biomedical corpora. Unlike most others, the journal articles that comprise the corpus are drawn from diverse biomedical disciplines and are marked up in their entirety. Additionally, with a concept-annotation count of nearly 100,000 in the 67-article subset (and more than 140,000 in the full collection), the scale of conceptual markup is also among the largest of comparable corpora. 
The concept annotations of the CRAFT Corpus have the potential to significantly advance biomedical text mining by providing a high-quality gold standard for NLP systems. The corpus, annotation guidelines, and other associated resources are freely available at http://bionlp-corpora.sourceforge.net/CRAFT/index.shtml." }, { "instance_id": "R150570xR148131", "comparison_id": "R150570", "paper_id": "R148131", "text": "Construction of an annotated corpus to support biomedical information extraction Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. 
Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes." }, { "instance_id": "R150570xR150508", "comparison_id": "R150570", "paper_id": "R150508", "text": "Introduction to the Bio-entity Recognition Task at JNLPBA We describe here the JNLPBA shared task of bio-entity recognition using an extended version of the GENIA version 3 named entity corpus of MEDLINE abstracts. We provide background information on the task and present a general discussion of the approaches taken by participating systems." }, { "instance_id": "R150570xR147545", "comparison_id": "R150570", "paper_id": "R147545", "text": "GENIA corpus--a semantically annotated corpus for bio-textmining MOTIVATION Natural language processing (NLP) methods are regarded as being useful to raise the potential of text mining from biological literature. The lack of an extensively annotated corpus of this literature, however, causes a major bottleneck for applying NLP techniques. GENIA corpus is being developed to provide reference materials to let NLP techniques work for bio-textmining. RESULTS GENIA corpus version 3.0 consisting of 2000 MEDLINE abstracts has been released with more than 400,000 words and almost 100,000 annotations for biological terms." 
}, { "instance_id": "R150570xR148549", "comparison_id": "R150570", "paper_id": "R148549", "text": "Medmentions: a large biomedical corpus annotated with UMLS concepts This paper presents the formal release of {\\em MedMentions}, a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. In addition to the full corpus, a sub-corpus of MedMentions is also presented, comprising annotations for a subset of UMLS 2017 targeted towards document retrieval. To encourage research in Biomedical Named Entity Recognition and Linking, data splits for training and testing are included in the release, and a baseline model and its metrics for entity linking are also described." }, { "instance_id": "R150570xR150517", "comparison_id": "R150570", "paper_id": "R150517", "text": "Semi-automatic semantic annotation of PubMed queries: A study on quality, efficiency, satisfaction Information processing algorithms require significant amounts of annotated data for training and testing. The availability of such data is often hindered by the complexity and high cost of production. In this paper, we investigate the benefits of a state-of-the-art tool to help with the semantic annotation of a large set of biomedical queries. Seven annotators were recruited to annotate a set of 10,000 PubMed\u00ae queries with 16 biomedical and bibliographic categories. About half of the queries were annotated from scratch, while the other half were automatically pre-annotated and manually corrected. The impact of the automatic pre-annotations was assessed on several aspects of the task: time, number of actions, annotator satisfaction, inter-annotator agreement, quality and number of the resulting annotations. 
The analysis of annotation results showed that the number of required hand annotations is 28.9% less when using pre-annotated results from automatic tools. As a result, the overall annotation time was substantially lower when pre-annotations were used, while inter-annotator agreement was significantly higher. In addition, there was no statistically significant difference in the semantic distribution or number of annotations produced when pre-annotations were used. The annotated query corpus is freely available to the research community. This study shows that automatic pre-annotations are found helpful by most annotators. Our experience suggests using an automatic tool to assist large-scale manual annotation projects. This helps speed-up the annotation time and improve annotation consistency while maintaining high quality of the final annotations." }, { "instance_id": "R150570xR148050", "comparison_id": "R150570", "paper_id": "R148050", "text": "Tagging gene and protein names in biomedical text MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. 
This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors." }, { "instance_id": "R150570xR69300", "comparison_id": "R150570", "paper_id": "R69300", "text": "NCBI disease corpus: A resource for disease name recognition and concept normalization Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora. This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH\u00ae) or Online Mendelian Inheritance in Man (OMIM\u00ae). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. 
In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency. The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks. The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/." }, { "instance_id": "R150570xR148039", "comparison_id": "R150570", "paper_id": "R148039", "text": "GENETAG: a tagged corpus for gene/protein named entity recognition Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE \u00ae sentences for gene/protein NER. 
15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the "gold standard". However, character-based indices would have been more robust than word-based indices. GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version of GENETAG-05 will be released later this year." }, { "instance_id": "R150570xR148032", "comparison_id": "R150570", "paper_id": "R148032", "text": "MedTag: A Collection of Biomedical Annotations We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. 
The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz." }, { "instance_id": "R25093xR25089", "comparison_id": "R25093", "paper_id": "R25089", "text": "Predicting Personality Traits using Multimodal Information Measuring personality traits has a long story in psychology where analysis has been done by asking sets of questions. These question sets (inventories) have been designed by investigating lexical terms that we use in our daily communications or by analyzing biological phenomena. Whether consciously or unconsciously we express our thoughts and behaviors when communicating with others, either verbally, non-verbally or using visual expressions. Recently, research in behavioral signal processing has focused on automatically measuring personality traits using different behavioral cues that appear in our daily communication. In this study, we present an approach to automatically recognize personality traits using a video-blog (vlog) corpus, consisting of transcription and extracted audio-visual features. We analyzed linguistic, psycholinguistic and emotional features in addition to the audio-visual features provided with the dataset. We also studied whether we can better predict a trait by identifying other traits. Using our best models we obtained very promising results compared to the official baseline." }, { "instance_id": "R25093xR25075", "comparison_id": "R25093", "paper_id": "R25075", "text": "Predicting Personality with Social Behavior In this paper, we examine to which degree behavioral measures can be used to predict personality. 
Personality is one factor that dictates people's propensity to trust and their relationships with others. In previous work, we have shown that personality can be predicted relatively accurately by analyzing social media profiles. We demonstrated this using public data from facebook profiles and text from Twitter streams. As social situations are crucial in the formation of one's personality, one's social behavior could be a strong indicator of her personality. Given most users of social media sites typically have a large number of friends and followers, considering only these aspects may not provide an accurate picture of personality. To overcome this problem, we develop a set of measures based on one's behavior towards her friends and followers. We introduce a number of measures that are based on the intensity and number of social interactions one has with friends along a number of dimensions such as reciprocity and priority. We analyze these features along with a set of features based on the textual analysis of the messages sent by the users. We show that behavioral features are very useful in determining personality and perform as well as textual features." }, { "instance_id": "R25093xR25081", "comparison_id": "R25093", "paper_id": "R25081", "text": "Private traits and attributes are predictable from digital records of human behavior We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. 
The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait \u201cOpenness,\u201d prediction accuracy is close to the test\u2013retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy." }, { "instance_id": "R25093xR25077", "comparison_id": "R25093", "paper_id": "R25077", "text": "Personality and patterns of Facebook usage We show how users' activity on Facebook relates to their personality, as measured by the standard Five Factor Model. Our dataset consists of the personality profiles and Facebook profile data of 180,000 users. We examine correlations between users' personality and the properties of their Facebook profiles such as the size and density of their friendship network, number uploaded photos, number of events attended, number of group memberships, and number of times user has been tagged in photos. Our results show significant relationships between personality traits and various features of Facebook profiles. We then show how multivariate regression allows prediction of the personality traits of an individual user given their Facebook profile. The best accuracy of such predictions is achieved for Extraversion and Neuroticism, the lowest accuracy is obtained for Agreeableness, with Openness and Conscientiousness lying in the middle." 
}, { "instance_id": "R25093xR25079", "comparison_id": "R25093", "paper_id": "R25079", "text": "Machine prediction of personality from Facebook profiles An increasing number of Americans use social networking sites such as Facebook, but few fully appreciate the amount of information they share with the world as a result. Although studies exist on the sharing of specific types of information (photos, posts, etc.), one area that has been less explored is how Facebook profiles can share personality information in a broad, machine-readable fashion. In this study, we apply data-mining and machine learning techniques to predict users' personality traits (specifically, the traits of the Big Five personality model) using only demographic and text-based attributes extracted from their profiles. We then use these predictions to rank individuals in terms of the five traits, predicting which users will appear in the top or bottom 5% or 10% of these traits. Our results show that when using certain models, we can find the top 10% most Open individuals with nearly 75% accuracy, and across all traits and directions, we can predict the top 10% with at least 34.5% accuracy (exceeding 21.8%, which is the best accuracy when using just the best-performing profile attribute). These results have privacy implications in terms of allowing advertisers and other groups to focus on a specific subset of individuals based on their personality traits." }, { "instance_id": "R25093xR25087", "comparison_id": "R25093", "paper_id": "R25087", "text": "Mining facebook data for predictive personality modeling Beyond being facilitators of human interactions, social networks have become an interesting target of research, providing rich information for studying and modeling user\u2019s behavior. Identification of personality-related indicators encrypted in Facebook profiles and activities are of special concern in our current research efforts. 
This paper explores the feasibility of modeling user personality based on a proposed set of features extracted from the Facebook data. The encouraging results of our study, exploring the suitability and performance of several classification techniques, will also be presented." }, { "instance_id": "R25093xR25083", "comparison_id": "R25093", "paper_id": "R25083", "text": "Evaluating Content-Independent Features for Personality Recognition This paper describes our submission for the WCPR14 shared task on computational personality recognition. We have investigated whether the features proposed by Soler and Wanner (2014) for gender prediction might also be useful in personality recognition. We have compared these features with simple approaches using token unigrams, character trigrams and liwc features. Although the newly investigated features seem to work quite well on certain personality traits, they do not outperform the simple approaches." }, { "instance_id": "R25093xR25066", "comparison_id": "R25093", "paper_id": "R25066", "text": "Predicting Personality from Twitter Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions, it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. 
We will describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains." }, { "instance_id": "R25115xR25111", "comparison_id": "R25115", "paper_id": "R25111", "text": "How much participation is enough? This paper considers the relationship between depth of participation (i.e., the effort and resources invested in participation) versus (tangible) outcomes. The discussion is based on experiences from six participatory research projects of different sizes and durations all taking place within a two year period and all aiming to develop new digital technologies to address an identified social need. The paper asks the fundamental question: how much participation is enough? That is, it challenges the notion that more participation is necessarily better, and, by using the experience of these six projects, it asks whether a more light touch or 'lean' participatory process can still achieve good outcomes, but at reduced cost. The paper concludes that participatory design researchers could consider 'agile' principles from the software development field as one way to streamline participatory processes." }, { "instance_id": "R25115xR25101", "comparison_id": "R25115", "paper_id": "R25101", "text": "User Advocacy in Participatory Design: Designers' Experiences with a New Communication Channel We report on participatory design activities within the PoliTeam project, a large project which introduces groupware into the German government. Working with a representative small group of users in different worksites, an existing system was adapted to user and organizational needs, with the plan to improve and expand the system to a large scale. We integrated new approaches of user advocacy and osmosis with an evolutionary cycling process. 
User advocates and osmosis were techniques used to explore the users' needs during actual system use. These techniques were incorporated into the system development. In this paper, we present experiences with this approach and reflect on its impact on the design process from the designers' point of view." }, { "instance_id": "R25115xR25103", "comparison_id": "R25115", "paper_id": "R25103", "text": "Sustained Participatory Design: Extending the Iterative Approach With its 10th biennial anniversary conference in 2008, Participatory Design (PD) was leaving its teens and must now be considered ready to join the adult world and to think big: PD should engage in large-scale information-systems development and opt for a sustained PD approach applied throughout design and organizational implementation. To pursue this aim we extend the iterative PD approach by (1) emphasizing PD experiments that transcend traditional prototyping and evaluate systems during real work; (2) incorporating improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extending initial design and development into a sustained, stepwise implementation that constitutes an overall technology-driven organizational change. Sustained PD is exemplified through a PD experiment in the Danish healthcare sector. We reflect on our experiences from this experiment and discuss four challenges PD must address in dealing with large-scale systems development." }, { "instance_id": "R25115xR25097", "comparison_id": "R25115", "paper_id": "R25097", "text": "A retrospective look at PD projects While modern methods for information system development generally accept that users should be involved in some way [15], the form of the involvement differs considerably. 
Mostly, users are viewed as relatively passive sources of information, and the involvement is regarded as \"functional,\" in the sense that it should yield better system requirements and increased acceptance by users." }, { "instance_id": "R25115xR25107", "comparison_id": "R25115", "paper_id": "R25107", "text": "Participants' view on personal gains and PD process While it is commonly claimed that users of participatory design projects reap benefits from their participation, little research exists that shows if this truly occurs in the real world. In this paper, we introduce the method and results of assessing the participants' perception of their personal benefits and the degree of participation in a large project in the healthcare field. Our research shows that a well-executed participatory design project can produce most of the benefits hypothesized in the literature but also highlights the challenges of assessing individual benefits and the PD process." }, { "instance_id": "R25160xR25135", "comparison_id": "R25160", "paper_id": "R25135", "text": "Integrating Off-Board Cameras and Vehicle On-Board Localization for Pedestrian Safety Situational awareness for industrial vehicles is crucial to ensure safety of personnel and equipment. While human drivers and onboard sensors are able to detect obstacles and pedestrians within line-of-sight, in complex environments, initially occluded or obscured dynamic objects can unpredictably enter the path of a vehicle. We propose a system that integrates a vision-based offboard pedestrian tracking subsystem with an onboard localization and navigation subsystem. This combination enables warnings to be communicated and effectively extends the vehicle controller's field of view to include areas that would otherwise be blind spots. 
A simple flashing light interface in the vehicle cabin provides a clear and intuitive interface to alert drivers of potential collisions. Alternatively, the system can also be applied to vehicles that have autonomous navigation capabilities, in which case, instead of alert lights, the vehicle is halted or redirected. We implemented and tested the proposed solution on an automated industrial vehicle under autonomous operation and on a human-driven vehicle in a full-scale production facility, over a period of four months." }, { "instance_id": "R25160xR25133", "comparison_id": "R25160", "paper_id": "R25133", "text": "Complementary Audio-Visual Collision Warnings The growing number of driver assistance systems increases the demand for warnings that are intuitively comprehensible. Particularly in hazardous situations, such as a threatening collision, a driver must understand the warning immediately. For this reason, collision warnings should convey as much information as needed to interpret the situation properly and to prepare preventive actions. The present study investigated whether informing about the object and the location of an imminent crash by a multimodal warning (visual and auditory) leads to shorter reaction times and fewer collisions compared to warning signals which only inform about the object of the crash (auditory icons) or give no additional information (simple tone). Results reveal that multimodal warnings have the potential to produce a significant advantage over unimodal signals as long as their components complement each other in a way that realistically fits the situation at hand." }, { "instance_id": "R25160xR25158", "comparison_id": "R25160", "paper_id": "R25158", "text": "TactiCar While deciding if it is possible to overtake a slower car, drivers have to take several factors into account. Accident statistics show that many drivers make mistakes in this situation. 
We want to assist drivers during lane change decisions without raising their mental workload. Since vision is already highly loaded to assess the situation, we address vibration in this work and investigated if it is possible to continuously inform drivers about a closing car using a vibrotactile belt. We developed two different tactile patterns and tested them in a driving simulator. Both patterns achieved promising results regarding usability. In the future, we plan to refine the patterns and evaluate the impact on workload and safety." }, { "instance_id": "R25160xR25151", "comparison_id": "R25160", "paper_id": "R25151", "text": "Light my way In demanding driving situations, the front-seat passenger can become a supporter of the driver by, e.g., monitoring the scene or providing hints about upcoming hazards or turning points. A fast and efficient communication of such spatial information can help the driver to react properly, with more foresight. As shown in previous research, this spatial referencing can be facilitated by providing the driver a visualization of the front-seat passenger's gaze. In this paper, we focus on the question how the gaze should be visualized for the driver, taking into account the feasibility of implementation in a real car. We present the results from a driving simulator study, where we compared an LED visualization (glowing LEDs on an LED stripe mounted at the bottom of the windshield, indicating the horizontal position of the gaze) with a visualization of the gaze as a dot in the simulated environment. Our results show that LED visualization comes with benefits with regard to driver distraction but also bears disadvantages with regard to accuracy and control for the front-seat passenger." 
}, { "instance_id": "R25160xR25118", "comparison_id": "R25160", "paper_id": "R25118", "text": "VisionSense: an advanced lateral collision warning system VisionSense is an advanced driver assistance system which combines a lateral collision warning system with vehicle-to-vehicle communication. This paper shows the results of user needs assessment and traffic safety modelling of VisionSense. User needs were determined by means of a Web-based survey. The results show, that VisionSense is most appreciated when it uses a light signal to warn the driver in a possibly hazardous situation on a highway. The willingness to pay is estimated at 300 Euros. Another conclusion based on the survey is that frequent car users want less assistance than less frequent drivers. Besides the user needs the impact on traffic safety is modelled. The results are indicative and more research has to be done. Traffic safety effects of VisionSense on a highway were modelled by means of a microscopic car following and lane change algorithm. Twelve different traffic scenarios were modelled with and without VisionSense. With VisionSense no traffic conflicts occur due to lane changing and less lane changes are performed. VisionSense is a system that can improve traffic safety in the future." }, { "instance_id": "R25160xR25154", "comparison_id": "R25160", "paper_id": "R25154", "text": "A color scenario of Eco & Healthy Driving for the RGB LED based interface display of a climate control device The study demonstrates a process of synergizing both exploratory and confirmatory research approaches to design the color for a luminescent surface facilitated by RGB LEDs. Focusing on the relationship between color and in-door climate of automobiles, the study consists of three parts: In Part I, a workshop of ten designers was executed in which ideas were exploited to find in-car scenarios. 
The scenarios were evaluated based on the criteria of interesting, informative, and inspiring aspects to conclusively derive the scenario labeled \u201cEco & Healthy Driving\u201d. In Part II, a user test was carried out to investigate the relationship between the attributes of luminescent color (hue, brightness, and purity) and an indoor climate condition. In the user test (n = 36), subjects were instructed to match a luminescent color to a given in-car climate condition. The user test results revealed that the hue category of a luminescent surface is related to temperature while the brightness of a luminescent color is correlated with blow level. Lastly, in Part III, by employing the results of the user test, a guideline for implementing the new design scenario, \u201cEco & Healthy Driving\u201d, was projected for further development and application." }, { "instance_id": "R25160xR25142", "comparison_id": "R25160", "paper_id": "R25142", "text": "A large-scale LED array to support anticipatory driving We present a novel assistance system which supports anticipatory driving by means of fostering early deceleration. Upcoming technologies like Car2X communication provide information about a time interval which is currently uncovered. This information shall be used in the proposed system to inform drivers about future situations which require reduced speed. Such situations include traffic jams, construction sites or speed limits. The HMI is an optical output system based on line arrays of RGB-LEDs. Our contribution presents construction details as well as user evaluations. The results show an earlier deceleration of 3.9 \u2013 11.5 s and a shorter deceleration distance of 2 \u2013 166 m." 
}, { "instance_id": "R25160xR25129", "comparison_id": "R25160", "paper_id": "R25129", "text": "A New Driving Assistant for Automobiles This paper introduces an inexpensive car security system which addresses the needs for broader area coverage around the vehicle and stronger indication signals to drivers. The new driving assistant features simple ultrasonic-based sensors, implemented at the two front corners and the two blind spots of the vehicle. In order to report the close-by objects to the driver, the system employs a multitude of feedback devices, including tactile vibrators attached to the steering wheel, audible signals, and an LED display mounted on the dash board. The sensor system and the feedback devices are controlled in real-time by microcontrollers over a wireless communication network The final prototype system was installed and tested on a ride-on toy car." }, { "instance_id": "R25160xR25137", "comparison_id": "R25160", "paper_id": "R25137", "text": "Driver assistance via optical information with spatial reference The occurrence of accidents caused by deficiencies in risk recognition by the driver can be prevented by presenting relevant information in real time to the driver. In this paper it is proposed to draw the driver's attention towards relevant traffic objects, which might be a safety hazard, by a LED strip which is affixed 360\u00b0 around the interior of the car's cabin. With this approach a higher number of use cases can be covered than with existing HMIs. The effectiveness of this system is evaluated in a driving simulator study with 13 subjects in four critical traffic situations. The gaze attention times are ascertained with eye tracking technology; mental effort and acceptance are determined by questionnaires and the comprehensibility by semi-structured interviews. There are indications of shortened gaze attention times using the LED strip compared to the baseline without driver support. 
The subjects understand the information submitted mostly intuitively. The acceptance ratings overall are in a positive range, but differ between scenarios." }, { "instance_id": "R25160xR25122", "comparison_id": "R25160", "paper_id": "R25122", "text": "Assessment of safety levels and an innovative design for the Lane Change Assistant In this paper we propose a novel design for the Lane Change Assistant (LCA). For drivers on the highway, LCA advises them on whether it is safe to change lanes under the current traffic conditions. We focus on how the LCA can provide a reliable advice in practice by considering the issues of changing circumstances and measurement uncertainties. Under some generic assumptions we develop a micro-simulation model for the lane change safety assessment. The model is in line with the car following models and lane change algorithms available in literature. It retains a probabilistic character to accurately represent realistic situations. Based on a sensitivity study we are able to develop a robust design for the LCA. In this design the system accounts for the practical uncertainties by including appropriate extra safety distances. The driver interface consists of a spectrum of five LED lights, each operating on a distinct color (varying from red to green) and guaranteeing a certain safety degree. Our results allow car developers to easily acquire reliable designs for the LCA." }, { "instance_id": "R25160xR25146", "comparison_id": "R25160", "paper_id": "R25146", "text": "ChaseLight In order to support drivers to maintain a predefined driving speed, we introduce ChaseLight, an in-car system that uses a programmable LED stripe mounted along the A-pillar of a car. The chase light (i.e., stripes of adjacent LEDs that are turned on and off frequently to give the illusion of lights moving along the stripe) provides ambient feedback to the driver about speed. 
We present a simulator based user study that uses three different types of feedback: (1) chase light with constant speed, (2) with proportional speed (i.e., chase light speed correlates with vehicle speed), and (3) with adaptive speed (i.e., chase light speed adapts to a target speed of the vehicle). Our results show that the adaptive condition is suited best to help a driver to control driving speed. The proportional speed condition resulted in a significantly slower mean speed than the baseline condition (no chase light)." }, { "instance_id": "R25160xR25144", "comparison_id": "R25160", "paper_id": "R25144", "text": "GPS enabled speed control embedded system speed limiting device with display and engine control interface In the past decade, there have been close to 350,000 fatal crashes in the United States [1]. With various improvements in traffic and vehicle safety, the number of such crashes is decreasing every year. One of the ways to reduce vehicle crashes is to prevent excessive speeding in the roads and highways. The paper aims to outline the design of an embedded system that will automatically control the speed of a motor vehicle based on its location determined by a GPS device. The embedded system will make use of an AVR ATMega128 microcontroller connected to an EM-406A GPS receiver. The large amount of location input data justifies the use of an ATMega128 microcontroller which has 128KB of programmable flash memory as well as 4KB SRAM, and a 4KB EEPROM Memory [2]. The output of the ATMega128 will be a DOGMI63W-A LCD module which will display information of the current and the set-point speed of the vehicle at the current position. A discrete indicator LED will flash at a pre-determined frequency when the speed of the vehicle has exceeded the recommended speed limit. Finally, the system will have outputs that will communicate with the Engine Control Unit (ECU) of the vehicle. 
For the limited scope of this project, the ECU is simulated as an external device with two inputs that will acknowledge pulse-trains of particular frequencies to limit the speed of a vehicle. The speed control system will be programmed using mixed language C and Assembly with the latter in use for some pre-written subroutines to drive the LCD module. The GPS module will transmit National Marine Electronics Association (NMEA) data strings to the microcontroller (MCU) using Serial Peripheral Interface (SPI). The MCU will use the location coordinates (latitude and longitude) and the speed from the NMEA RMC output string. The current speed is then compared against the recommended speed for the vehicle's location. The memory locations in the ATMega128 can be used to store set-point speed values against a particular set of location co-ordinates. Apart from its implementation in human operated vehicles, the project can be used to control speed of autonomous cars and to implement the idea of a variable speed limit on roads introduced by the Department of Transportation [3]." }, { "instance_id": "R25160xR25120", "comparison_id": "R25160", "paper_id": "R25120", "text": "HMI Principles for Lateral Safe Applications LATERAL SAFE is a subproject of the PREVENT Integrated Project, co-funded by the European Commission under the 6th Framework Programme. LATERAL SAFE introduces a cluster of safety applications of the future vehicles, in order to prevent lateral/rear related accidents and assist the driver in adverse or low visibility conditions and blind spot areas. LATERAL SAFE applications include a lateral and rear monitoring system (LRM), a lane change assistant (LCA) and a lateral collision warning (LCW). 
An effective Human Machine Interface (HMI) is being developed, addressing each application, on the basis of the results that emerged from mock-up tests realised in three sites (one in Greece and two in Sweden), aiming to determine which is the best HMI solution to be provided in each case. In the current paper, the final HMI principles, adopted and demonstrated for each application, are presented." }, { "instance_id": "R25201xR25182", "comparison_id": "R25201", "paper_id": "R25182", "text": "Off-line Signature Verification Based on Chain Code Histogram and Support Vector Machine In this paper, we present an approach based on chain code histogram features enhanced through a Laplacian of Gaussian filter for off-line signature verification. In the proposed approach, the four-directional chain code histogram of each grid on the contour of the signature image is extracted. The Laplacian of Gaussian filter is used to enhance the extracted features of each signature sample. Thus, the extracted and enhanced features of all signature samples of the off-line signature dataset constitute the knowledge base. Subsequently, the Support Vector Machine (SVM) classifier is used as the verification tool. The SVM is trained with the randomly selected training samples' features, including genuine and random forgeries, and tested with the remaining untrained genuine samples along with the skilled forgery sample features to classify the tested/questioned sample as genuine or forged. Similar to the real-time scenario, in the proposed approach we have not considered the skilled forgery samples to train the classifier. Extensive experiments have been conducted to exhibit the performance of the proposed approach on the publicly available datasets, namely CEDAR, GPDS-100 and MUKOS, a regional language dataset. 
The state-of-the-art off-line signature verification methods are considered for comparative study to justify the feasibility of the proposed approach for off-line signature verification and to reveal its accuracy over the existing approaches." }, { "instance_id": "R25201xR25191", "comparison_id": "R25201", "paper_id": "R25191", "text": "GMM For Offline Signature Forgery Detection As the signature continues to play a crucial part in personal identification for a number of applications including financial transactions, an efficient signature authentication system becomes more and more important. Research in the field of signature authentication has been actively pursued for many years and its extent is still being explored. Signature verification is the process carried out to determine whether a given signature is genuine or forged, and it can be divided into two types: online and offline. In this paper we present an offline signature verification system and extract some new local and geometric features such as the QuadSurface feature, area ratio and distance ratio. For this we have taken genuine signatures from 5 different persons and extracted the features from all of the samples after proper preprocessing steps. The training phase uses the Gaussian Mixture Model (GMM) technique to obtain a reference model for each signature sample of a particular user. By computing the Euclidean distance between the reference signature and all the training sets of signatures, an acceptance range is defined. If the Euclidean distance of a query signature is within the acceptance range then it is detected as an authentic signature; otherwise, it is detected as a forged signature." }, { "instance_id": "R25201xR25180", "comparison_id": "R25201", "paper_id": "R25180", "text": "Off-line English and Chinese Signature Identification Using Foreground and Background Features In the field of information security, the usage of biometrics is growing for user authentication. 
Automatic signature recognition and verification is one of several biometric techniques used to verify the identity of individuals. In this paper, a foreground and background based technique is proposed for the identification of scripts from bi-lingual (English/Roman and Chinese) off-line signatures. This system will identify whether a claimed signature belongs to the group of English signatures or Chinese signatures. The identification of signatures based on their script is a major contribution for multi-script signature verification. Two background information extraction techniques are used to produce the background components of the signature images. A gradient-based method was used to extract the features of the foreground as well as background components. The Zernike Moment feature was also employed on signature samples. A Support Vector Machine (SVM) is used as the classifier for signature identification in the proposed system. A database of 1120 (640 English+480 Chinese) signature samples was used for training and 560 (320 English+240 Chinese) signature samples were used for testing the proposed system. An encouraging identification accuracy of 97.70% was obtained using the gradient feature from the experiment." }, { "instance_id": "R25201xR25178", "comparison_id": "R25201", "paper_id": "R25178", "text": "Tsang Ing Re, Off-line signature verification: an approach based on combining distances and one-class classifiers This paper presents an off-line signature verification system composed of a combination of several different classifiers. Identity authentication is a very important characteristic, especially in systems that require a high degree of security, such as bank transactions. In our experiments, a one-class classifier was used to create a signature verification system; consequently, only genuine signatures were necessary for the training phase. We proposed five distance measurements as features for the classification system. 
The distances extracted from the signature database were: furthest, nearest, template, central and ncentral. Also, a normalization procedure was applied to make the distances scale invariant. These distances were combined using four operations: product, mean, maximum and minimum. The calculated distances were used as a feature vector to represent the signatures. Finally, the distance measurements and their combinations were used as input vectors for different classifiers. The proposed signature verification method obtained very good rates." }, { "instance_id": "R25201xR25163", "comparison_id": "R25201", "paper_id": "R25163", "text": "Applying Dissimilarity Representation to Off-line Signature Verification In this paper, a two-stage off-line signature verification system based on dissimilarity representation is proposed. In the first stage, a set of discrete left-to-right HMMs trained with different numbers of states and codebook sizes is used to measure similarity values that populate new feature vectors. Then, these vectors are input to the second stage, which provides the final classification. Experiments were performed using two different classification techniques -- AdaBoost, and Random Subspaces with SVMs -- and a real-world signature verification database. Results indicate that the performance is significantly better with the proposed system over other reference signature verification systems from the literature." }, { "instance_id": "R25201xR25175", "comparison_id": "R25201", "paper_id": "R25175", "text": "Offline Signature Verification Using Classifier Combination of HOG and LBP Features We present an offline signature verification system based on a signature's local histogram features. The signature is divided into zones using both the Cartesian and polar coordinate systems and two different histogram features are calculated for each zone: histogram of oriented gradients (HOG) and histogram of local binary patterns (LBP)." 
}, { "instance_id": "R25201xR25166", "comparison_id": "R25201", "paper_id": "R25166", "text": "Off-line signature verification based on grey level information using texture features A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and local binary pattern are analysed and used as features. This method begins with a proposed background removal. A histogram is also processed to reduce the influence of different writing ink pens used by signers. Genuine samples and random forgeries have been used to train an SVM model and random and skilled forgeries have been used for testing it. Results are reasonable according to the state-of-the-art and approaches that use the same two databases: MCYT-75 and GPDS-100 Corpuses. The combination of the proposed features and those proposed by other authors, based on geometric information, also promises improvements in performance." }, { "instance_id": "R25201xR25193", "comparison_id": "R25201", "paper_id": "R25193", "text": "Discriminative DCT: An Efficient and Accurate Approach for Off-line Signature Verification In this paper, we propose to combine the transform-based approach with a dimensionality reduction technique for off-line signature verification. The proposed approach has four major phases: Preprocessing, Feature extraction, Feature reduction and Classification. In the feature extraction phase, the Discrete Cosine Transform (DCT) is employed on the signature image to obtain the upper-left corner block of size m x n as a representative feature vector. These features are subjected to Linear Discriminant Analysis (LDA) for further reduction, representing the signature with an optimal set of features. The features thus obtained from all the samples in the dataset form the knowledge base. 
The Support Vector Machine (SVM), a bilinear classifier, is used for classification, and the performance is measured through the FAR/FRR metric. Experiments have been conducted on standard signature datasets, namely CEDAR and GPDS-160, and MUKOS, a regional language (Kannada) dataset. A comparative study with well-known approaches is also provided to exhibit the performance of the proposed approach." }, { "instance_id": "R25223xR25203", "comparison_id": "R25223", "paper_id": "R25203", "text": "Replication Algorithms in a Remote Caching Architecture Studies the cache performance in a remote caching architecture. The authors develop a set of distributed object replication policies that are designed to implement different optimization goals. Each site is responsible for local cache decisions, and modifies cache contents in response to decisions made by other sites. The authors use the optimal and greedy policies as upper and lower bounds, respectively, for performance in this environment. Critical system parameters are identified, and their effect on system performance studied. Performance of the distributed algorithms is found to be close to optimal, while that of the greedy algorithms is far from optimal." }, { "instance_id": "R25223xR25211", "comparison_id": "R25223", "paper_id": "R25211", "text": "Distributed Selfish Caching Although cooperation generally increases the amount of resources available to a community of nodes, thus improving individual and collective performance, it also allows for the appearance of potential mistreatment problems through the exposition of one node's resources to others. We study such concerns by considering a group of independent, rational, self-aware nodes that cooperate using online caching algorithms, where the exposed resource is the storage at each node. 
Motivated by content networking applications - including Web caching, content delivery networks (CDNs), and peer-to-peer (P2P) - this paper extends our previous work on the offline version of the problem, which was conducted under a game-theoretic framework and limited to object replication. We identify and investigate two causes of mistreatment: 1) cache state interactions (due to the cooperative servicing of requests) and 2) the adoption of a common scheme for cache management policies. Using analytic models, numerical solutions of these models, and simulation experiments, we show that online cooperation schemes using caching are fairly robust to mistreatment caused by state interactions. For mistreatment to appear in a substantial manner, the interaction through the exchange of miss streams has to be very intense, making it feasible for the mistreated nodes to detect and react to exploitation. This robustness ceases to exist when nodes fetch and store objects in response to remote requests, that is, when they operate as level-2 caches (or proxies) for other nodes. Regarding mistreatment due to a common scheme, we show that this can easily take place when the \"outlier\" characteristics of some of the nodes get overlooked. This finding underscores the importance of allowing cooperative caching nodes the flexibility of choosing from a diverse set of schemes to fit the peculiarities of individual nodes. To that end, we outline an emulation-based framework for the development of mistreatment-resilient distributed selfish caching schemes." }, { "instance_id": "R25223xR25221", "comparison_id": "R25223", "paper_id": "R25221", "text": "Dynamic Replica Placement for Scalable Content Delivery In this paper, we propose the dissemination tree, a dynamic content distribution system built on top of a peer-to-peer location service. We present a replica placement protocol that builds the tree while meeting QoS and server capacity constraints. 
The number of replicas as well as the delay and bandwidth consumption for update propagation are significantly reduced. Simulation results show that the dissemination tree has close to the optimal number of replicas, good load distribution, small delay and bandwidth penalties for update multicast compared with the ideal case: static replica placement on IP multicast." }, { "instance_id": "R25223xR25215", "comparison_id": "R25223", "paper_id": "R25215", "text": "A Distributed Algorithm for Web content Replication Web caching and replication techniques increase accessibility of Web contents and reduce Internet bandwidth requirements. In this paper, we are considering the replica placement problem in a distributed replication group. The replication group consists of servers dedicating certain amount of memory for replicating objects. The replica placement problem is to place the replica at the servers within the replication group such that the access time over all objects and servers is minimized. We design a distributed 2-approximation algorithm that solves this optimization problem. We show that the communication and computational complexity of the algorithm is polynomial in the number of servers and objects. We perform simulation experiments to investigate the performance of our algorithm." }, { "instance_id": "R25223xR25209", "comparison_id": "R25223", "paper_id": "R25209", "text": "Distributed Selfish Replication A commonly employed abstraction for studying the object placement problem for the purpose of Internet content distribution is that of a distributed replication group. In this work, the initial model of the distributed replication group of Leff et al. [CHECK END OF SENTENCE] is extended to the case that individual nodes act selfishly, i.e., cater to the optimization of their individual local utilities. 
Our main contribution is the derivation of equilibrium object placement strategies that 1) can guarantee improved local utilities for all nodes concurrently as compared to the corresponding local utilities under greedy local object placement, 2) do not suffer from potential mistreatment problems, inherent to centralized strategies that aim at optimizing the social utility, and 3) do not require the existence of complete information at all nodes. We develop a baseline computationally efficient algorithm for obtaining the aforementioned equilibrium strategies and then extend it to improve its performance with respect to fairness. Both algorithms are realizable, in practice, through a distributed protocol that requires only a limited exchange of information." }, { "instance_id": "R25223xR25205", "comparison_id": "R25223", "paper_id": "R25205", "text": "A QOS-Aware Intelligent Replica Management Architecture for Content Distribution in Peer-to-Peer Overlay Networks The large scale content distribution systems were improved broadly using the replication techniques. The demanded contents can be brought closer to the clients by multiplying the source of information geographically, which in turn reduce both the access latency and the network traffic. The system scalability can be improved by distributing the load across multiple servers which is proposed by replication. If a copy of the requested object (e.g., a web page or an image) is located in its closer proximity then the clients would feel low access latency. Depending on the position of the replicas, the effectiveness of replication tends to a large extent. A QoS based overlay network architecture involving an intelligent replica placement algorithm is proposed in this paper. Its main goal is to improve the network utilization and fault tolerance of the P2P system. In addition to the replica placement, it also has a caching technique, to reduce the search latency. 
Through simulation results, we are able to show that our proposed architecture attains lower latency and better throughput with reduced bandwidth usage." }, { "instance_id": "R25255xR25247", "comparison_id": "R25255", "paper_id": "R25247", "text": "SLSA: A Sentiment Lexicon for Standard Arabic Sentiment analysis has been a major area of interest, for which the existence of high-quality resources is crucial. In Arabic, there is a reasonable number of sentiment lexicons but with major deficiencies. The paper presents a large-scale Standard Arabic Sentiment Lexicon (SLSA) that is publicly available for free and avoids the deficiencies in the current resources. SLSA has the highest up-to-date reported coverage. The construction of SLSA is based on linking the lexicon of AraMorph with SentiWordNet along with a few heuristics and a powerful back-off. SLSA shows a relative improvement of 37.8% over a state-of-the-art lexicon when tested for accuracy. It also outperforms it by an absolute 3.5% in F1-score when tested for sentiment analysis." }, { "instance_id": "R25255xR25233", "comparison_id": "R25255", "paper_id": "R25233", "text": "Arabic SentiWordNet in relation to SentiWordNet Sentiment analysis and opinion mining are the tasks of identifying positive or negative opinions and emotions from pieces of text. The SentiWordNet (SWN) plays an important role in extracting opinions from texts. It is a publicly available sentiment measuring tool used in sentiment classification and opinion mining. We first discuss the development of the English SWN for versions 1.0 and 3.0. This is to provide the basis for developing an equivalent SWN for the Arabic language through a mapping to the latest version of the English SWN 3.0. We also discuss the construction of an annotated sentiment corpus for Arabic and its relationship to the Arabic SWN." 
}, { "instance_id": "R25255xR25249", "comparison_id": "R25255", "paper_id": "R25249", "text": "NileULex: A phrase and word level sentiment lexicon for egyptian and modern standard Arabic This paper presents NileULex, an Arabic sentiment lexicon containing close to six thousand Arabic words and compound phrases. Forty-five percent of the terms and expressions in the lexicon are Egyptian or colloquial while fifty-five percent are Modern Standard Arabic. While the collection of many of the terms included in the lexicon was done automatically, the actual addition of any term was done manually. One of the important criteria for adding terms to the lexicon was that they be as unambiguous as possible. The result is a lexicon with a much higher quality than any translated variant or automatically constructed one. To demonstrate that a lexicon such as this can directly impact the task of sentiment analysis, a very basic machine learning based sentiment analyser that uses unigrams, bigrams, and lexicon based features was applied on two different Twitter datasets. The obtained results were compared to a baseline system that only uses unigrams and bigrams. The same lexicon based features were also generated using a publicly available translation of a popular sentiment lexicon. The experiments show that usage of the developed lexicon improves the results over both the baseline and the publicly available lexicon." }, { "instance_id": "R25255xR25241", "comparison_id": "R25255", "paper_id": "R25241", "text": "SANA: A large scale multi-genre, multi-dialect lexicon for Arabic subjectivity and sentiment analysis The computational treatment of subjectivity and sentiment in natural language is usually significantly improved by applying features exploiting lexical resources where entries are tagged with semantic orientation (e.g., positive, negative values). 
In spite of the fair amount of work on Arabic sentiment analysis over the past few years (e.g., (Abbasi et al., 2008; Abdul-Mageed et al., 2014; Abdul-Mageed et al., 2012; Abdul-Mageed and Diab, 2012a; Abdul-Mageed and Diab, 2012b; Abdul-Mageed et al., 2011a; Abdul-Mageed and Diab, 2011)), the language remains under-resourced with respect to these polarity repositories compared to the English language. In this paper, we report efforts to build and present SANA, a large-scale, multi-genre, multi-dialect, multi-lingual lexicon for the subjectivity and sentiment analysis of the Arabic language and dialects." }, { "instance_id": "R25255xR25243", "comparison_id": "R25255", "paper_id": "R25243", "text": "Building Large Arabic Multi-domain Resources for Sentiment Analysis While there has been recent progress in the area of Arabic Sentiment Analysis, most of the resources in this area are either of limited size, domain specific or not publicly available. In this paper, we address this problem by generating large multi-domain datasets for Sentiment Analysis in Arabic. The datasets were scraped from different reviewing websites and consist of a total of 33K annotated reviews for movies, hotels, restaurants and products. Moreover, we build multi-domain lexicons from the generated datasets. Different experiments have been carried out to validate the usefulness of the datasets and the generated lexicons for the task of sentiment classification. From the experimental results, we highlight some useful insights addressing: the best performing classifiers and feature representation methods, the effect of introducing lexicon based features, and factors affecting the accuracy of sentiment classification in general. All the datasets, experiment code and results have been made publicly available for scientific purposes." 
}, { "instance_id": "R25255xR25251", "comparison_id": "R25255", "paper_id": "R25251", "text": "Arabic senti-lexicon: Constructing publicly available language resources for Arabic sentiment analysis Sentiment analysis is held to be one of the highly dynamic recent research fields in Natural Language Processing, facilitated by the quickly growing volume of Web opinion data. Most of the approaches in this field are focused on English due to the lack of sentiment resources in other languages such as the Arabic language and its large variety of dialects. In most sentiment analysis applications, good sentiment resources play a critical role. Based on that, in this article, several publicly available sentiment analysis resources for Arabic are introduced. This article introduces the Arabic senti-lexicon, a list of 3880 positive and negative synsets annotated with their part of speech, polarity scores, dialect synsets and inflected forms. This article also presents a Multi-domain Arabic Sentiment Corpus (MASC) with a size of 8860 positive and negative reviews from different domains. In this article, an in-depth study has been conducted on five types of feature sets for exploiting effective features and investigating their effect on the performance of Arabic sentiment analysis. The aim is to assess the quality of the developed language resources and to integrate different feature sets and classification algorithms to synthesise a more accurate sentiment analysis method. The Arabic senti-lexicon is used for generating feature vectors. Five well-known machine learning algorithms - na\u00efve Bayes, k-nearest neighbours, support vector machines (SVMs), logistic linear regression and neural networks - are employed as base-classifiers for each of the feature sets. A wide range of comparative experiments on standard Arabic data sets was conducted, discussion is presented and conclusions are drawn. 
The experimental results show that the Arabic senti-lexicon is a very useful resource for Arabic sentiment analysis. Moreover, results show that classifiers which are trained on feature vectors derived from the corpus using the Arabic sentiment lexicon are more accurate than classifiers trained using the raw corpus." }, { "instance_id": "R25255xR25245", "comparison_id": "R25255", "paper_id": "R25245", "text": "Automatic expandable large-scale sentiment lexicon of modern standard Arabic and colloquial In subjectivity and sentiment analysis (SSA), two main requirements are necessary to improve sentiment analysis effectively in any language and genre: first, a high-coverage sentiment lexicon - where entries are tagged with semantic orientation (positive, negative and neutral) - and second, tagged corpora to train the sentiment classifier. Much research has been conducted in this area during the last decade, but the need to build these resources is still ongoing, especially for morphologically-rich languages (MRLs) such as Arabic. In this paper, we present an automatically expandable, wide-coverage polarity lexicon of Arabic sentiment words, a lexical resource explicitly devised for supporting Arabic sentiment classification and opinion mining applications. The lexicon is built using a seed of gold-standard Arabic sentiment words which are manually collected and annotated with semantic orientation (positive or negative), and automatically expanded with sentiment orientation detection of the new sentiment words by exploiting some lexical information such as part-of-speech (POS) tags and using synset aggregation techniques from free online Arabic lexicons and thesauruses. We report efforts to expand our manually-built polarity lexicon using different types of data. Finally, we used various tagged data to evaluate the coverage and quality of our polarity lexicon and, moreover, to evaluate the lexicon expansion and its effects on sentiment analysis accuracy. 
Our data focus on Modern Standard Arabic (MSA) and Egyptian dialectal Arabic tweets and Arabic microblogs (hotel reservations, product reviews, and TV program comments)." }, { "instance_id": "R25255xR25229", "comparison_id": "R25255", "paper_id": "R25229", "text": "Arabic sentiment analysis: Lexicon-based and corpus-based The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been well studied for the English language, and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. Since there is a limited number of publicly available Arabic datasets and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to the corpus-based approach." }, { "instance_id": "R25358xR25334", "comparison_id": "R25358", "paper_id": "R25334", "text": "URBE: Web Service Retrieval Based on Similarity Evaluation In this work, we present UDDI registry by example (Urbe), a novel approach for Web service retrieval based on the evaluation of similarity between Web service interfaces. Our approach assumes that the Web service interfaces are defined with the Web Service Description Language (WSDL) and the algorithm combines the analysis of their structures and the analysis of the terms used inside them. 
The higher the similarity, the smaller the differences among their interfaces. As a consequence, Urbe is useful when we need to find a Web service suitable to replace an existing one that fails. Especially in autonomic systems, this situation is very common, since we need to ensure the self-management, the self-configuration, the self-optimization, the self-healing, and the self-protection of the application that is based on the failed Web service. A semantic-oriented variant of the approach is also proposed, where we take advantage of annotations semantically enriching WSDL specifications. Semantic Annotation for WSDL (SAWSDL) is adopted as a language to annotate a WSDL description. The Urbe approach has been implemented in a prototype that extends a Universal Description, Discovery and Integration (UDDI) compliant Web service registry." }, { "instance_id": "R25358xR25348", "comparison_id": "R25358", "paper_id": "R25348", "text": "Study on Web Service Matching and Composition Based on Ontology Because Web service functionality is continually being strengthened and the number of Web services is increasing rapidly, it is urgent to solve the problem of automatic service matching and composition to meet service requests. A service matching algorithm for a domain-specific ontology of bionic manufacturing is proposed in this paper. In this algorithm, the semantic similarity between two concepts is calculated according to the distance between the two nodes corresponding to the two words of the service request and the service description. A revised index for keyword matching is considered to emphasize the key parameters in the service description, and the matching accuracy is thereby greatly improved. The total matching degree of a service is equal to the average of all semantic similarities between the service request and the service description. Whether the service matching is successful can be accurately judged by the total matching degree. 
According to the dynamic hierarchical structure of the domain-specific ontology model of bionic manufacturing, a service auto-link composition algorithm based on the recursive principle is proposed in this paper. Services are automatically and intelligently matched and reasoned over by using the heuristic knowledge of the knowledge base and the semantic information of the ontology base of bionic manufacturing. The well-structured user objective IOPEs (Input, Output, Precondition, and Effect) are linked to obtain the abstract or concrete service-flow framework. Automatic service composition is implemented from single services up to higher-level hierarchical services. Lastly, the system structure for semantic Web service matching and composition is designed based on the two described algorithms." }, { "instance_id": "R25358xR25285", "comparison_id": "R25358", "paper_id": "R25285", "text": "YASA-M: A Semantic Web Service Matchmaker In this paper, we present new algorithms for matching Web services described in YASA4WSDL (YASA for short). We have already defined YASA, a semantic description of services that overcomes some issues in WSDL or SAWSDL. In this paper, we continue our contribution and show how YASA Web services are matched based on the specificities of YASA descriptions. Our matching algorithm consists of three variants based on three different semantic matching degree aggregations. This algorithm was implemented in YASA-M, a new Web service matchmaker. YASA-M is evaluated and compared to well-known approaches for service matching. Experiments show that YASA-M provides better results, in terms of precision, response time, and scalability, than a well-known matchmaker." }, { "instance_id": "R25358xR25354", "comparison_id": "R25358", "paper_id": "R25354", "text": "A New Approach for Semantic Web Matching In this work we propose a new approach to semantic Web matching to improve the performance of Web service replacement. 
Because in automatic systems we should ensure self-healing, self-configuration, self-optimization and self-management, all services should always be available, and if one of them crashes, it should be replaced with the most similar one. Candidate services are advertised in Universal Description, Discovery and Integration (UDDI), all in the Web Ontology Language (OWL). With the help of a bipartite graph, we perform the matching between the crashed service and a candidate one. Then we choose the best service, which has the maximum rate of matching. In fact, we compare two services' functionalities and capabilities to see how much they match. We found that the best way to match two Web services is to compare their functionalities." }, { "instance_id": "R25358xR25331", "comparison_id": "R25358", "paper_id": "R25331", "text": "Research on Fuzzy Matching Model for Semantic Web Services Semantic descriptions of Web services are necessary in order to enable their automatic discovery, composition and execution across heterogeneous users and domains on the basis of ontology. The matching approach is considered one of the crucial factors in ensuring dynamic discovery and composition of Web services. Current matching methods such as UDDI or Larks are inadequate given their inability to abstract and classify Web services. This paper therefore proposes a novel matching model which exploits fuzzy logic in order to abstract and classify the underlying data of Web services by fuzzy terms and rules. The aim is to make the match between service advertisements and service requests more effective, to allow vague terms in the search query, and to provide better-suited services for requesters." }, { "instance_id": "R25358xR25326", "comparison_id": "R25358", "paper_id": "R25326", "text": "Incremental Service Composition Based on Partial Matching of Visual Contracts Services provide access to software components that can be discovered dynamically via the Internet. 
The increasing number of services a requester may be able to use demands support for finding and selecting services. In particular, it is unrealistic to expect that a single service will satisfy complex requirements, so services will have to be combined to match clients\u2019 requests. In this paper, we propose a visual, incremental approach for the composition of services, in which we describe the requirements of a requester as a goal which is matched against multiple provider offers. After every match with an offer we decompose the goal into satisfied and remainder parts. We iterate the decomposition until the goal is satisfied or we run out of offers, leading to a resolution-like matching strategy. Finally, the individual offers can be composed into a single combined offer and shown to the requester for feedback. Our approach is based on visual specifications of pre- and postconditions by graph transformation systems with loose semantics, where a symbolic approach based on constraints is used to represent attributes and their computation in graphs." }, { "instance_id": "R25358xR25269", "comparison_id": "R25358", "paper_id": "R25269", "text": "On Extending Semantic Matchmaking to Include Preconditions and Effects Central to the notion of dynamic binding and loose coupling that underlie service-oriented architectures is dynamic service discovery. At the heart of most service discovery mechanisms is a matchmaking algorithm that matches a semantic query to a set of compatible web service advertisements. These advertisements also describe service semantics as a set of OWL-S terms. Most current matchmaking algorithms are based on semantic matching of input and output terms alone. However, a complete description of the service profile also includes preconditions and effects, and in order to find a true match the matchmaker needs to match on these aspects of the advertisement as well. 
In this paper, we make the case for augmenting existing matchmaking algorithms with preconditions and effects in the context of Web Services. Further, we propose an algorithm for condition matching that is layered on top of input-output term matching and that overcomes the limitations of existing work. Although the problem of condition matching is NP-Complete, we can overcome this limitation by using a set of heuristics that gives us results in polynomial time. We also analyze the complexity of the algorithm by comparing it with a brute-force matching approach. We show that our algorithm yields results more efficiently than brute-force matching but with the same accuracy." }, { "instance_id": "R25358xR25276", "comparison_id": "R25358", "paper_id": "R25276", "text": "Semantics-based composition-oriented discovery of Web services Service discovery and service aggregation are two crucial issues in the emerging area of service-oriented computing (SOC). We propose a new technique for the discovery of (Web) services that accounts for the need of composing several services to satisfy a client query. The proposed algorithm makes use of OWL-S ontologies, and explicitly returns the sequence of atomic process invocations that the client must perform in order to achieve the desired result. When no full match is possible, the algorithm features flexible matching by returning partial matches and by suggesting additional inputs that would produce a full match." }, { "instance_id": "R25358xR25328", "comparison_id": "R25358", "paper_id": "R25328", "text": "Effective and Flexible NFP-Based Ranking of Web Services Service discovery is a key activity to actually identify the Web services (WSs) to be invoked and composed. Since it is likely that more than one service fulfills a set of user requirements, some ranking mechanisms based on non-functional properties (NFPs) are needed to support automatic or semi-automatic selection. 
This paper introduces an approach to NFP-based ranking of WSs providing support for semantic mediation, consideration of expressive NFP descriptions both on provider and client side, and novel matching functions for handling either quantitative or qualitative NFPs. The approach has been implemented in a ranker that integrates reasoning techniques with algorithmic ones in order to overcome current and intrinsic limitations of semantic Web technologies and to provide algorithmic techniques with more flexibility. Moreover, to the best of our knowledge, this paper presents the first experimental results related to NFP-based ranking of WSs considering a significant number of expressive NFP descriptions, showing the effectiveness of the approach." }, { "instance_id": "R25358xR25290", "comparison_id": "R25358", "paper_id": "R25290", "text": "An abstract model of service discovery and binding We propose a formal operational semantics for service discovery and binding. This semantics is based on a graph-based representation of the configuration of global computers typed by business activities. Business activities execute distributed workflows that can trigger, at run time, the discovery, ranking and selection of services to which they bind, thus reconfiguring the workflows that they execute. Discovery, ranking and selection are based on compliance with required business and interaction protocols and optimisation of quality-of-service constraints. Binding and reconfiguration are captured as algebraic operations on configuration graphs. We also discuss the methodological implications that this model framework has on software engineering using a typical travel-booking scenario. 
To the best of our knowledge, our approach is the first to provide a clear separation between service computation and discovery/instantiation/binding, and to offer a formal framework that is independent of the SOA middleware components that act as service registries or brokers, and the protocols through which bindings and invocations are performed." }, { "instance_id": "R25358xR25260", "comparison_id": "R25358", "paper_id": "R25260", "text": "Adaptive fuzzy-valued service selection Service composition concerns both integration of heterogeneous distributed applications and dynamic selection of services. QoS-aware selection enables a service requester with certain QoS requirements to classify services according to their QoS guarantees. In this paper we present a method that allows for a fuzzy-valued description of QoS parameters. Fuzzy sets are suited to specify both the QoS preferences raised by a service requester such as 'response time must be as lower as possible and cannot be more that 1000ms' and approximate estimates a provider can make on the QoS capabilities of its services like 'availability is roughly between 95% and 99%'. We propose a matchmaking procedure based on a fuzzy-valued similarity measure that, given the specifications of QoS parameters of the requester and the providers, selects the most appropriate service among several functionally-equivalent ones. We also devise a method for dynamical update of service offers by means of runtime monitoring of the actual QoS performance." }, { "instance_id": "R25358xR25307", "comparison_id": "R25358", "paper_id": "R25307", "text": "Efficient QoS-Aware Service Composition with a Probabilistic Service Selection Policy Service-Oriented Architecture enables the composition of loosely coupled services provided with varying Quality of Service (QoS) levels. Given a composition, finding the set of services that optimizes some QoS attributes under given QoS constraints has been shown to be NP-hard. 
Until now, the problem has been considered only for a single execution, choosing a single service for each workflow element. This contrasts with reality, where services are often executed hundreds or thousands of times. Therefore, we modify the problem to consider repeated executions of services in the long term. We also allow choosing multiple services for the same workflow element according to a probabilistic selection policy. We model this modified problem with Linear Programming, allowing us to solve it optimally in polynomial time. We discuss and evaluate the different applications of our approach, show in which cases it yields the biggest utility gains, and compare it to the original problem." }, { "instance_id": "R25358xR25264", "comparison_id": "R25358", "paper_id": "R25264", "text": "Semantic matchmaker with precondition and effect matching using SWRL Service oriented architectures provide more effective and dynamic applications. Using semantic Web Services in service oriented architectures improves interoperability and scalability. A very important aspect of using semantic Web Services is the matchmaking process. Semantic matchmaking is used during discovery and composition of semantic Web Services to find valuable service candidates. Among these candidates, the best ones are chosen to build up the composition, or for substitution in the case of an execution failure. Our proposed matchmaker architecture performs semantic matching of Web Services on the basis of input and output descriptions of semantic Web Services as well as precondition and effect matching. We present a novel approach for assigning matchmaking scores to condition expressions in OWL-S documents written in SWRL during matchmaking." 
}, { "instance_id": "R25358xR25317", "comparison_id": "R25358", "paper_id": "R25317", "text": "A Fuzzy-set based Semantic Similarity Matching Algorithm for Web Service A critical step in the process of reusing existing WSDL-specified services for building web-based applications is the discovery of potentially relevant services. However, category-based service discovery, such as UDDI, is clearly insufficient. Semantic Web Services, augmenting Web service descriptions using Semantic Web technology, were introduced to facilitate the publication, discovery, and execution of Web services at the semantic level. Semantic matchmakers enhance the capability of UDDI service registries in the Semantic Web Services architecture by applying matching algorithms between advertisements and requests described in OWL-S to recognize various degrees of matching for Web services. Based on the Semantic Web Service framework, semantic matchmakers, specification matching, and the probabilistic matching approach, this paper proposes a fuzzy-set based semantic similarity matching algorithm for Web Services to support a more automated and accurate service discovery process in the Semantic Web Service Framework." }, { "instance_id": "R25358xR25320", "comparison_id": "R25358", "paper_id": "R25320", "text": "Measuring Similarity of Web Services Based on WSDL Web services have become an important paradigm for web applications. The growing number of services calls for efficiently locating the desired web services. The similarity metric of web services plays an important role in service search and classification. The very small text fragments in the WSDL of web services are unsuitable for applying traditional IR techniques. We describe our approach, which supports the similarity search and classification of service operations. The approach first employs external knowledge to compute the semantic distance of terms from two compared services. The similarity of services is measured upon these distances. 
Previous research treats terms within the same WSDL document as isolated words and neglects the semantic associations among them, thus lowering the accuracy of the similarity metric. We provide a method that tries to reflect the underlying semantics of web services by fully utilizing the terms within the WSDL. The experiments show that our method works well on both service classification and querying." }, { "instance_id": "R25358xR25298", "comparison_id": "R25358", "paper_id": "R25298", "text": "Research on Services Matching and Ranking Based on Fuzzy QoS Ontology Service discovery based on non-functional QoS service features has received increasing attention from the service-oriented computing research community. To enable QoS descriptions to express uncertain knowledge, this paper first proposes a fuzzy QoS ontology. Service matching based on fuzzy QoS descriptions can then be transformed into reasoning in fuzzy description logic, and the soundness of this method is illustrated through an example. Finally, a novel ranking algorithm for the match results is proposed." }, { "instance_id": "R25358xR25280", "comparison_id": "R25358", "paper_id": "R25280", "text": "Discovering semantic Web services via advanced graph-based matching One of the main advantages of Web services is that they can be composed into more complex processes in order to achieve a given business goal. However, this potential cannot be fully exploited until suitable methods and techniques enabling automatic discovery of composed processes are provided. Indeed, nowadays service discovery still focuses on matching atomic services by typically checking the similarity of functional parameters, such as inputs and outputs. However, more profitable process discovery can be achieved if both the internal structure and the component services are taken into account. 
Based on this main intuition, in this paper we describe a method for discovering composite OWL-S processes that rests on the following main contributions: (i) proposing a graph-based representation of composite OWL-S processes; and (ii) introducing an algorithm that matches over such (graph-based) representations and computes their degree of matching by combining the similarity of the atomic services they comprise and the similarity of the control flow among them. Finally, as another contribution of our research, we conducted a comprehensive experimental campaign in which we tested our proposed algorithm, deriving insightful trade-offs between the benefits and limitations of the overall framework for discovering Semantic Web services." }, { "instance_id": "R25358xR25356", "comparison_id": "R25358", "paper_id": "R25356", "text": "WSExpress: A QoS-aware Search Engine for Web Services Web services are becoming prevalent nowadays. Finding desired Web services is becoming an urgent and challenging research problem. In this paper, we present WSExpress (Web Service Express), a novel Web service search engine to expressively find expected Web services. WSExpress ranks the publicly available Web services not only by functional similarities to users\u2019 queries, but also by nonfunctional QoS characteristics of Web services. WSExpress provides three searching styles, which can adapt to the scenario of finding an appropriate Web service and the scenario of automatically replacing a failed Web service with a suitable one. WSExpress is implemented in Java, and large-scale experiments employing real-world Web services are conducted. In total, 3,738 Web services (15,811 operations) from 69 countries are involved in our experiments. The experimental results show that our search engine can find Web services with the desired functional and non-functional requirements. 
Extensive experimental studies are also conducted on a well-known benchmark dataset consisting of 1,000 Web service operations to show the recall and precision performance of our search engine." }, { "instance_id": "R25358xR25314", "comparison_id": "R25358", "paper_id": "R25314", "text": "Consumer-centric QoS-aware selection of web services There exist many web services which exhibit similar functional characteristics. It is imperative to provide service consumers with facilities for selecting required web services according to their non-functional characteristics or quality of service (QoS). However, the selection process is greatly complicated by the distinct views of service providers and consumers on the services' QoS. For instance, they may have distinct views of service reliability, wherein a consumer considers that a service is reliable if its success rate is higher than 99%, while a provider may consider its service as reliable if its success rate is higher than 90%. The aim of this paper is to resolve such conflicts and to ensure consensus on the QoS characteristics in the selection of web services. It proposes a QoS Consensus Moderation Approach (QCMA) in order to perform QoS consensus and to alleviate the differences on QoS characteristics in the selection of web services. The proposed approach is implemented as a prototype tool and is tested on a case study of a hotel booking web service. Experimental results show that the proposed approach greatly improves the service selection process in a dynamic and uncertain environment of web services." }, { "instance_id": "R25358xR25283", "comparison_id": "R25358", "paper_id": "R25283", "text": "BeMatch The capability to easily find useful services (software applications, software components, scientific computations) becomes increasingly critical in several fields. 
Current approaches to service retrieval are mostly limited to matching their inputs/outputs, possibly enhanced with some ontological knowledge. Recent works have demonstrated that this approach is not sufficient to discover relevant components. Motivated by these concerns, we have developed the BeMatch platform for ranking web services based on behavior matchmaking. We developed matching techniques that operate on behavior models and allow the delivery of partial matches and the evaluation of the semantic distance between these matches and the user requirements. Consequently, even if a service satisfying exactly the user requirements does not exist, the most similar ones will be retrieved and proposed for reuse by extension or modification. We exemplify our approach to behavioral service matchmaking by describing two demonstration scenarios for matchmaking BPEL and WSCL protocols, respectively. A demo scenario is also described concerning the tool for evaluating the effectiveness of the behavioral matchmaking method." }, { "instance_id": "R25358xR25338", "comparison_id": "R25358", "paper_id": "R25338", "text": "Efficient Semantic Web Service Discovery in Centralized and P2P Environments Efficient and scalable discovery mechanisms are critical for enabling service-oriented architectures on the Semantic Web. The majority of currently existing approaches focuses on centralized architectures, and deals with efficiency typically by pre-computing and storing the results of the semantic matcher for all possible query concepts. Such approaches, however, fail to scale with respect to the number of service advertisements and the size of the ontologies involved. On the other hand, this paper presents an efficient and scalable index-based method for Semantic Web service discovery that allows for fast selection of services at query time and is suitable for both centralized and P2P environments. 
We employ a novel encoding of the service descriptions, allowing the match between a request and an advertisement to be evaluated in constant time, and we index these representations to prune the search space, reducing the number of comparisons required. Given a desired ranking function, the search algorithm can retrieve the top-k matches progressively, i.e., better matches are computed and returned first, thereby further reducing the search engine's response time. We also show how this search can be performed efficiently in a suitable structured P2P overlay network. The benefits of the proposed method are demonstrated through experimental evaluation on both real and synthetic data." }, { "instance_id": "R25358xR25350", "comparison_id": "R25358", "paper_id": "R25350", "text": "A New Framework for Web Service Discovery Based on Behavior With the rapid expansion of web services over the Internet, discovering related web services is becoming an urgent problem. Traditional methods focus on service interfaces using ontologies, without considering service behavior. In this paper, the Calculus of Communicating Systems (CCS) is exploited to specify web service behavior, and weak equivalence of behavior is used for matchmaking between the advertised service and the requested service. The paper then combines behavior matching with the fuzzy similarity of ontological concepts to propose a new matching algorithm for web services. On this basis, a promising framework for web service discovery is proposed." }, { "instance_id": "R25358xR25346", "comparison_id": "R25358", "paper_id": "R25346", "text": "QoS-aware web services selection with intuitionistic fuzzy set under consumer\u2019s vague perception Appropriate application of service selection based on QoS awareness can bring great benefits to service consumers, as it is able to reduce redundancy in search. It also generates advantages for service providers who deliver valuable services. 
However, non-functional QoS attributes are not easy to measure due to their complexity and the involvement of consumer's fuzzy perceptions of QoS. In this paper, a new decision model under vague information is proposed. It extends Max-Min-Max composition of intuitionistic fuzzy sets (IFS) for selection of web services. Furthermore, an improved fuzzy ranking index is proposed to alleviate the bias of existing approaches. The index aggregates both concord and discord degrees of the decision maker's satisfaction in order to analyze the synthetic satisfaction degree for web services. In addition, an example of QoS-aware web services selection is illustrated to demonstrate the proposed approach. Finally, the proposed method is verified by a sensitivity analysis." }, { "instance_id": "R25358xR25309", "comparison_id": "R25358", "paper_id": "R25309", "text": "iSeM: Approximated Reasoning for Adaptive Hybrid Selection of Semantic Services We present an intelligent service matchmaker, called iSeM, for adaptive and hybrid semantic service selection that exploits the full semantic profile in terms of signature annotations in description logic ${\\mathcal SH}$ and functional specifications in SWRL. In particular, iSeM complements its strict logical signature matching with approximated reasoning based on logical concept abduction and contraction together with information-theoretic similarity and evidential coherence-based valuation of the result, and non-logic-based approximated matching. Besides, it may avoid failures of signature matching only through logical specification plug-in matching of service preconditions and effects. Eventually, it learns the optimal aggregation of its logical and non-logic-based matching filters off-line by means of binary SVM-based service relevance classifier with ranking. We demonstrate the usefulness of iSeM by example and preliminary results of experimental performance evaluation." 
}, { "instance_id": "R25400xR25367", "comparison_id": "R25400", "paper_id": "R25367", "text": "Evolving object oriented design to improve code traceability Traceability is a key issue to ensure consistency among software artifacts of subsequent phases of the development cycle. However, few works have so far addressed the theme of tracing object oriented design into its implementation and evolving it. The paper presents an approach to checking the compliance of OO design with respect to source code and support its evolution. The process works on design artifacts expressed in the OMT notation and accepts C++ source code. It recovers an \"as is\" design from the code, compares recovered design with the actual design and helps the user to deal with inconsistencies. The recovery process exploits the edit distance computation and the maximum match algorithm to determine traceability links between design and code. The output is a similarity measure associated to each matched class, plus a set of unmatched classes. A graphic display of the design with different colors associated to different levels of match is provided as a support to update the design and improve its traceability to the code." }, { "instance_id": "R25400xR25384", "comparison_id": "R25400", "paper_id": "R25384", "text": "A tactic-centric approach for automating traceability of quality concerns The software architectures of business, mission, or safety critical systems must be carefully designed to balance an exacting set of quality concerns describing characteristics such as security, reliability, and performance. Unfortunately, software architectures tend to degrade over time as maintainers modify the system without understanding the underlying architectural decisions. Although this problem can be mitigated by manually tracing architectural decisions into the code, the cost and effort required to do this can be prohibitively expensive. 
In this paper we therefore present a novel approach for automating the construction of traceability links for architectural tactics. Our approach utilizes machine learning methods and lightweight structural analysis to detect tactic-related classes. The detected tactic-related classes are then mapped to a Tactic Traceability Information Model. We train our trace algorithm using code extracted from fifteen performance-centric and safety-critical open source software systems and then evaluate it against the Apache Hadoop framework. Our results show that automatically generated traceability links can support software maintenance activities while helping to preserve architectural qualities." }, { "instance_id": "R25400xR25377", "comparison_id": "R25400", "paper_id": "R25377", "text": "Automatically identifying changes that impact code-to-design traceability during evolution An approach is presented that automatically determines if a given source code change impacts the design (i.e., UML class diagram) of the system. This allows code-to-design traceability to be consistently maintained as the source code evolves. The approach uses lightweight analysis and syntactic differencing of the source code changes to determine if the change alters the class diagram in the context of abstract design. The intent is to support both the simultaneous updating of design documents with code changes and bringing old design documents up to date with current code given the change history. An efficient tool was developed to support the approach and is applied to an open source system. The results are evaluated and compared against manual inspection by human experts. The tool performs better than (error prone) manual inspection. The developed approach and tool were used to empirically investigate and understand how changes to source code (i.e., commits) break code-to-design traceability during evolution and the benefits from such understanding. 
Commits are categorized as having design impact or no impact. The commits of four open source projects over three-year time periods are extracted and analyzed. The results of the study show that most of the code changes do not impact the design, and these commits have fewer changed files and fewer changed lines compared to commits with design impact. The results also show that most bug fixes do not impact design." }, { "instance_id": "R25400xR25363", "comparison_id": "R25400", "paper_id": "R25363", "text": "Experiments in the use of XML to enhance traceability between object-oriented design specifications and source code In this paper we explain how we implemented traceability between a UML design specification and its implementing source code using XML technologies. In our linking framework an XMI file represents a detailed-design specification and a JavaML file represents its source code. These XML-derivative representations were linked using another XML file, an Xlink link-base, containing our linking information. This link-base states which portions of the source code implement which portions of a design specification and vice-versa. We also rendered those links to an HTML file using XSL and traversed from our design specification to its implementing source code. This is the first step in our traceability endeavors where we aim to achieve total traceability among software life-cycle deliverables from requirements to source code." }, { "instance_id": "R25400xR25380", "comparison_id": "R25400", "paper_id": "R25380", "text": "Enabling Automated Traceability Maintenance through the Upkeep of Traceability Relations Traceability is demanded within mature development processes and offers a wide range of advantages. Nevertheless, there are deterrents to establishing traceability: it can be painstaking to achieve initially and then subject to almost instantaneous decay. To be effective, this is clearly an investment that should be retained. 
We therefore focus on reducing the manual effort incurred in performing traceability maintenance tasks. We propose an approach to recognize those changes to structural UML models that impact existing traceability relations and, based upon this knowledge, we provide a mix of automated and semi-automated strategies to update these relations. This paper provides technical details on the update process; it builds upon a previous publication that details how triggers for these updates can be recognized in an automated manner. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided." }, { "instance_id": "R25400xR25398", "comparison_id": "R25400", "paper_id": "R25398", "text": "Architectural point mapping for design traceability AOP can be applied to not only modularization of crosscutting concerns but also other kinds of software development processes. As one of the applications, this paper proposes a design traceability mechanism originating in join points and pointcuts. It is not easy to design software architecture reflecting the intention of developers and implement the result of design as a program while preserving the architectural correctness. To deal with this problem, we propose two novel ideas: Archpoint (Architectural point) and Archmapping (Archpoint Mapping). Archpoints are points for representing the essence of architectural design in terms of behavioral and structural aspects. By defining a set of archpoints, we can describe the inter-component structure and the message interaction among components. Archmapping is a mechanism for checking the bidirectional traceability between design and code. The traceability can be verified by checking whether archpoints in design are consistently mapped to program points in code. For this checking, we use an SMT (Satisfiability Modulo Theories) solver, a tool for deciding the satisfiability of logical formulas. 
The idea of archpoints, program points, and their selection originates in AOP." }, { "instance_id": "R25400xR25371", "comparison_id": "R25400", "paper_id": "R25371", "text": "Automatic Tracing of Decisions to Architecture and Implementation Traceability requires capturing the relations between software artifacts like requirements, architecture and implementation explicitly. Manual discovery and recovery of tracing information by studying documents, architecture documentation and implementation is time-intensive, costly, and may miss important information not found in the analyzed artifacts. Approaches for explicitly capturing traces exist, but either require manual capturing or lack comprehensive tracing to both architecture and implementation. In this paper we present an approach for (semi)automatically capturing traceability relationships from requirements and design decisions to architecture and implementation. Traces are captured in a non-intrusive way during architecture design and implementation. The captured traces are integrated with a semi-formally defined architecture description model and serve as the basis for different kinds of architecture-related activities." }, { "instance_id": "R25447xR25416", "comparison_id": "R25447", "paper_id": "R25416", "text": "Dependability and Rollback Recovery For Composite Web Services In this paper, we propose a service-oriented reliability model that dynamically calculates the reliability of composite web services with rollback recovery based on the real-time reliabilities of the atomic web services of the composition. Our model is a hybrid reliability model based on both path-based and state-based models. Many reliability models assume that failure or error arrival times are exponentially distributed. This is inappropriate for web services as error arrival times are dependent on the operating state including workload of servers where the web service resides. 
In this manuscript, we modify our previous model (for software based on the Doubly Stochastic Model and Renewal Processes) to evaluate the reliability of atomic web services. To make the idea concrete, we develop the case of a simple web service with two states, i.e., idle and active. In real-world applications, where web services may contain quite a large number of atomic services, both the calculations and the computational complexity grow greatly. To limit our computational effort, we chose bounded-set techniques, which we apply using the previously developed stochastic model. As a first type of system combination, we propose to study a scheme based on combining web services into parallel and serial configurations with centralized coordination. In this case, the broker has an acceptance testing mechanism that examines the results returned from a particular web service. If the result is acceptable, the computation continues with the next web service; otherwise, the broker performs a rollback and invokes an alternative web service already specified by a checkpoint algorithm. Finally, since the acceptance test is conducted by the broker, the broker can be considered a single point of failure. To increase the reliability of the broker introduced in our systems and mask out errors at the broker level, we suggest a modified general scheme based on triple modular redundancy and N-version programming. To imitate a real scenario where errors could happen at any stage of our application and improve the quality of service (QoS) of the proposed model, we introduce fault-tolerance techniques using an adaptation of the recovery block technique." }, { "instance_id": "R25447xR25439", "comparison_id": "R25447", "paper_id": "R25439", "text": "Reliability of Component Based systems- a Critical Survey Software reliability is defined as the probability of the failure-free operation of a software system for a specified period of time in a specified environment. 
Software applications are growing more complex day by day, and with a greater emphasis on reuse, Component-Based Software (CBS) applications have emerged. The focus of this paper is to provide an overview of the state of the art in component-based system reliability estimation. In this paper, we discuss various approaches in terms of their scope, models, methods, techniques, and validation schemes. This comparison provides insight into determining the direction of future CBS reliability research." }, { "instance_id": "R25447xR25431", "comparison_id": "R25447", "paper_id": "R25431", "text": "Automatic Reliability Management in SOA-based critical systems A well-known concept for the design and development of distributed software systems is service-orientation. In SOA, an interacting group of autonomous services realizes a dynamic adaptive heterogeneous distributed system. Because of its flexibility, SOA allows easy adaptation to new business requirements. This also makes the service-orientation idea a suitable concept for the development of critical software systems. Reliability is a central parameter for developing critical software systems. SOA brings some additional requirements to the usual reliability models currently being used for standard software solutions. In order to fulfill all requirements and guarantee a certain degree of reliability, a generic reliability management model is needed for SOA-based software systems. This article defines research challenges in this area and gives an approach to solve this problem." }, { "instance_id": "R25447xR25404", "comparison_id": "R25447", "paper_id": "R25404", "text": "A Reliability Evaluation Framework on Composite Web Service The composition of web-based services is a process that usually requires advanced programming skills and vast knowledge about specific technologies. How to carry out web service composition according to functional sufficiency and performance is widely studied. 
Non-functional characteristics like reliability and security play an important role in the selection step of the web service composition process. This paper provides a web service reliability model for atomic web services without structural information, and for composite web services consisting of atomic web services and their redundant services. It outlines a framework based on client feedback to gather trustworthiness attributes into the service registry for reliability evaluation." }, { "instance_id": "R25447xR25433", "comparison_id": "R25447", "paper_id": "R25433", "text": "Component-Based Software Engineering: Technologies, Development Frameworks, and Quality Assurance Schemes\u201d, Asia-Pacific Software Engineering Conference The component-based software development approach is based on the idea of developing software systems by selecting appropriate off-the-shelf components and then assembling them with a well-defined software architecture. Because the new software development paradigm is very different from the traditional approach, quality assurance (QA) for component-based software development is a new topic in the software engineering community. In this paper, we survey current component-based software technologies, describe their advantages and disadvantages, and discuss the features they inherit. We also address QA issues for component-based software. As a major contribution, we propose a QA model for component-based software which covers component requirement analysis, component development, component certification, component customization, and system architecture design, integration, testing and maintenance." }, { "instance_id": "R25447xR25442", "comparison_id": "R25447", "paper_id": "R25442", "text": "Synergies between SOA and Grid computing Service Oriented Architecture (SOA) is an architectural style for developing and integrating enterprise applications to enable an enterprise to deliver self-describing and platform-independent business functionality. 
Grid Computing (GC) is a framework that allows pooling of physical resources to enable virtualisation of distributed computing, enterprise data and enterprise functionality. The two are synergetic in the sense that, whereas SOA can provide a strong basis for GC, a technical framework based on GC provides the optimum foundation for SOA. This paper discusses the two paradigms and provides some useful information for the benefit of large enterprises that wish to embark on the development and implementation of SOA based on Grid Computing." }, { "instance_id": "R25447xR25408", "comparison_id": "R25447", "paper_id": "R25408", "text": "A Rule-Based Approach For Estimating The Reliability Of Component-Based Systems Reliability is one of the most important nonfunctional requirements for software. Accurately estimating reliability for component-based software systems (CBSSs) is not an easy task, and researchers have proposed many approaches to CBSS reliability estimation. Some of these approaches focus on component reliability and others focus on glue code reliability. All of the approaches that have been proposed are mathematical. However, because reliability is a real-world phenomenon with associated real-time issues, it cannot be measured accurately and efficiently with mathematical models. Soft computing techniques that have recently emerged can be used to model the solution of real-world problems that are too difficult to model mathematically. The two basic soft computing techniques are fuzzy computing and probabilistic computing. In this paper, we focus on four factors that have the strongest effect on CBSS reliability. Based on these four factors, we propose a new fuzzy-logic-based model for estimating CBSS reliability. We implemented and validated our proposed model on small applications, and the results confirm the effectiveness of our model." 
}, { "instance_id": "R25447xR25423", "comparison_id": "R25447", "paper_id": "R25423", "text": "Reliability Modeling for SOA Systems Service-oriented architecture (SOA) is a popular paradigm for development of distributed systems by composing the functionality provided by the services exposed on the network. In effect, the services can use functionalities of other services to accomplish their own goals. Although such an architecture provides an elegant solution to simple construction of loosely coupled distributed systems, it also introduces additional concerns. One of the primary concerns in designing a SOA system is the overall system reliability. Since the building blocks are services provided by various third parties, it is often not possible to apply the well established fault removal techniques during the development phases. Therefore, in order to reach desirable system reliability for SOA systems, the focus shifts towards fault prediction and fault tolerance techniques. In this paper an overview of existing reliability modeling techniques for SOA-based systems is given. Furthermore, we present a model for reliability estimation of a service composition using directed acyclic graphs. The model is applied to the service composition based on the orchestration model. A case study for the proposed model is presented by analyzing a simple Web Service composition scenario." }, { "instance_id": "R25447xR25418", "comparison_id": "R25447", "paper_id": "R25418", "text": "Composite web QoS with workflow conditional pathways using bounded sets In our previous work (Dillon and Mansour 2009), a stochastic reliability model of atomic web services was proposed. Using the well-known classic two-state bounded set technique, we developed a service-oriented model that dynamically calculates the reliability of composite web services with rollback recovery (Mansour and Dillon in IEEE Trans Serv Comput 4(4), 2011). 
In order to improve the Quality of Service, fault tolerance techniques have been introduced using recovery block adaptation. Our workflow was based on series-parallel structures that constitute parts of existing structures. It is worth mentioning that major service-oriented systems contain larger and more complex structures than the simple series and parallel ones. This is a limitation of our previous approach. In order to consider more realistic service-oriented systems, other main structures, such as AND, XOR and Loop, should be included in our model. In this article, our previous structures are generalized to include AND, XOR and Loops. In addition to generalized structures, we extended the existing two-state bounded set technique to include three-state systems. This extension was especially motivated by XOR-based structures. A comparative study between bounded set techniques and a new stochastic model is also presented. Our simulation results accurately reflect the performance of the new proposed model and confirm our theoretical studies. Furthermore, Monte Carlo simulations were performed and the results obtained clearly validate our stochastic model." }, { "instance_id": "R25495xR25471", "comparison_id": "R25495", "paper_id": "R25471", "text": "Continuous blood glucose level prediction of Type 1 Diabetes based on Artificial Neural Network Recent technological advancements in diabetes technologies, such as Continuous Glucose Monitoring (CGM) systems, provide reliable sources of blood glucose data. Following this development, a new challenging area in the field of artificial intelligence has been opened and an accurate prediction method of blood glucose levels has been targeted by scientific researchers. This article proposes a new method based on Artificial Neural Networks (ANN) for blood glucose level prediction of Type 1 Diabetes (T1D) using only CGM data as inputs.
To show the efficiency of our method and to validate our ANN, real CGM data of 13 patients were investigated. The accuracy of the strategy is discussed based on statistical criteria such as the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE). The obtained averages of RMSE are 6.43 mg/dL, 7.45 mg/dL, 8.13 mg/dL and 9.03 mg/dL for Prediction Horizons (PH) of 15 min, 30 min, 45 min and 60 min, respectively, and the average MAPE was 3.87% for PH = 15 min; the smaller the RMSE and MAPE, the more accurate the prediction. Experimental results show that the proposed ANN is accurate, adaptive, and very encouraging for a clinical implementation. Furthermore, while other studies have only focused on the prediction accuracy of blood glucose, this work aims to improve the quality of life of T1D patients by using only CGM data as inputs and by limiting human intervention." }, { "instance_id": "R25495xR25483", "comparison_id": "R25495", "paper_id": "R25483", "text": "Adaptive model predictive control for a dual-hormone artificial pancreas We report the closed-loop performance of adaptive model predictive control (MPC) algorithms for a dual-hormone artificial pancreas (AP) intended for patients with type 1 diabetes. The dual-hormone AP measures the interstitial glucose concentration using a subcutaneous continuous glucose monitor (CGM) and administers glucagon and rapid-acting insulin subcutaneously. The discrete-time transfer function models used in the insulin and glucagon MPCs comprise a deterministic part and a stochastic part. The deterministic part of the MPC model is individualized using patient-specific information and describes the glucose-insulin and glucose-glucagon dynamics. The stochastic part of the MPC model describes the uncertainties that are not included in the deterministic part of the MPC model.
Using closed-loop simulation of the MPCs, we evaluate the performance obtained using the different deterministic and stochastic models for the MPC on three virtual patients. We simulate a scenario including meals and daily variations in the model parameters for two settings. In the first setting, we try five different models for the deterministic part of the MPC model and use a fixed model for the stochastic part of the MPC model. In the second setting, we use a second-order model for the deterministic part of the MPC model and estimate the stochastic part of the MPC model adaptively. The results show that the controller is robust to daily variations in the model parameters. The numerical results also suggest that the deterministic part of the MPC model does not play a major role in the closed-loop performance of MPC. This is ascribed to the availability of feedback and the poor prediction capability of the model, i.e. the large disturbances and model-patient mismatch. Moreover, a second-order adaptive model for the stochastic part of the MPC model offers a marginally better performance in closed-loop, in particular if the model-patient mismatch is large." }, { "instance_id": "R25495xR25479", "comparison_id": "R25495", "paper_id": "R25479", "text": "A Long-Term Model of the Glucose-Insulin Dynamics of Type 1 Diabetes A new glucose-insulin model is introduced which fits the clinical data from in- and outpatients for two days. Its stability property is consistent with the glycemia behavior of type 1 diabetes. This is in contrast to traditional glucose-insulin models. Prior models fit clinical data for a few hours only or display some non-natural equilibria. The parameters of this new model are identifiable from standard clinical data such as continuous glucose monitoring, insulin injection, and carbohydrate estimates.
Moreover, it is shown that the parameters of the model allow the computation of the standard tools used in functional insulin therapy, such as the basal rate of insulin and the insulin sensitivity factor. This is a major outcome, as they are required in the therapeutic education of type 1 diabetic patients." }, { "instance_id": "R25495xR25469", "comparison_id": "R25495", "paper_id": "R25469", "text": "Postprandial fuzzy adaptive strategy for a hybrid proportional derivative controller for the artificial pancreas This paper presents a supporting fuzzy adaptive system for a hybrid proportional derivative controller that refines its parameters during postprandial periods to enhance performance. Even though glucose controllers have improved over the last decade, tuning them and keeping them tuned are still major challenges. Changes in a patient\u2019s lifestyle, stress, exercise, or other activities may modify their blood glucose system, making it necessary to retune or change the insulin dosing algorithm. This paper presents a strategy to adjust the parameters of a proportional derivative controller using the so-called safety auxiliary feedback element loop for type 1 diabetic patients. The main parameters, such as the insulin-on-board limit and the proportional gain, are tuned using postprandial performance indexes and the information given by the controller itself. The adaptive and robust performance of the control algorithm was assessed \u201cin silico\u201d on a cohort of virtual patients under challenging realistic scenarios considering mixed meals, circadian variations, time-varying uncertainties, sensor errors, and other disturbances. The results showed that an adaptive strategy can significantly improve the performance of postprandial glucose control, individualizing the tuning by directly taking into account the intra-patient variability of type 1 patients.
Graphical Abstract: Postprandial glycaemia improvement via fuzzy adaptive control. A fuzzy inference engine was implemented within a clinically tested artificial pancreas control system. The aim of the fuzzy system was to adapt controller parameters to improve postprandial blood glucose control while ensuring safety. Results show a significant improvement over time of the postprandial glucose response due to the adaptation, thus demonstrating the usefulness of the fuzzy adaptive system." }, { "instance_id": "R25495xR25455", "comparison_id": "R25495", "paper_id": "R25455", "text": "Rapid Model Identification for Online Subcutaneous Glucose Concentration Prediction for New Subjects With Type I Diabetes Goal: For conventional modeling methods, the work of model identification has to be repeated with sufficient data for each subject, because different subjects may have different responses to exogenous inputs. This may cause repetitive cost and burden for patients and clinicians and requires a lot of modeling effort. Here, to overcome the aforementioned problems, a rapid model development strategy for new subjects is proposed, using the idea of model migration for online glucose prediction. Methods: First, a base model is obtained that can be empirically identified from any subject or constructed from a priori knowledge. Then, the parameters of the inputs in the base model are properly revised based on a small amount of new data from new subjects, so that the updated models can reflect the specific glucose dynamics excited by the inputs for new subjects. These problems are investigated by developing autoregressive models with exogenous inputs (ARX) based on 30 in silico subjects using the UVA/Padova metabolic simulator. Results: The prediction accuracy of the rapid modeling method is comparable to that of the subject-dependent modeling method in some cases. Also, it can present better generalization ability.
Conclusion: The proposed method can be regarded as an effective and economical modeling method, replacing the repetitive subject-dependent modeling method, especially when modeling data are scarce." }, { "instance_id": "R25495xR25467", "comparison_id": "R25495", "paper_id": "R25467", "text": "Performance Analysis of Fuzzy-PID Controller for Blood Glucose Regulation in Type-1 Diabetic Patients This paper presents a Fuzzy-PID (FPID) control scheme for the blood glucose control of type 1 diabetic subjects. A new metaheuristic Cuckoo Search Algorithm (CSA) is utilized to optimize the gains of the FPID controller. CSA provides fast convergence and is capable of handling global optimization of continuous nonlinear systems. The proposed controller is an amalgamation of fuzzy logic and optimization which may provide an efficient solution for complex problems like blood glucose control. The task is to maintain normal glucose levels in the shortest possible time with a minimum insulin dose. The glucose control is achieved by tuning the PID (Proportional Integral Derivative) and FPID controllers with the help of a Genetic Algorithm and CSA for comparative analysis. The designed controllers are tested on the Bergman minimal model to control the blood glucose level in the presence of parameter uncertainties, meal disturbances and sensor noise. The results reveal that the performance of the CSA-FPID controller is superior to that of the other designed controllers." }, { "instance_id": "R25495xR25477", "comparison_id": "R25495", "paper_id": "R25477", "text": "Model Free iPID Control for Glycemia Regulation of Type-1 Diabetes Objective: The objective is to design a fully automated glycemia controller for Type-1 Diabetes (T1D) in both fasting and postprandial phases on a large number of virtual patients. Methods: A model-free intelligent proportional-integral-derivative (iPID) controller is used to infuse insulin. The feasibility of iPID is tested in silico on two simulators, with and without measurement noise.
The first simulator is derived from a long-term linear time-invariant model. The controller is also validated on the UVa/Padova metabolic simulator on 10 adults, with 25 runs/subject for the noise robustness test. Results: It was shown that, without measurement noise, iPID mimicked the normal pancreatic secretion with a relatively fast reaction to meals as compared to a standard PID. With the UVa/Padova simulator, the robustness against CGM noise was tested. A higher percentage of time in target was obtained with iPID as compared to the standard PID, with reduced time spent in hyperglycemia. Conclusion: Tests on two different T1D simulators showed that iPID detects meals and reacts faster to meal perturbations as compared to a classic PID. The intelligent part makes the controller more aggressive immediately after meals without neglecting safety. Further research is suggested to improve the computation of the intelligent part of iPID for such systems under actuator constraints. Any improvement can impact the overall performance of the model-free controller. Significance: The simple-structure iPID is a step forward for PID-like controllers, since it combines the nice properties of the classic PID with new adaptive features." }, { "instance_id": "R25495xR25473", "comparison_id": "R25495", "paper_id": "R25473", "text": "Toward a Run-to-Run Adaptive Artificial Pancreas: In Silico Results Objective: Contemporary and future outpatient long-term artificial pancreas (AP) studies need to cope with the well-known large intra- and interday glucose variability occurring in type 1 diabetic (T1D) subjects. Here, we propose an adaptive model predictive control (MPC) strategy to account for it and test it in silico. Methods: A run-to-run (R2R) approach adapts the subcutaneous basal insulin delivery during the night and the carbohydrate-to-insulin ratio (CR) during the day, based on some performance indices calculated from subcutaneous continuous glucose sensor data.
In particular, R2R aims, first, to reduce the percentage of time in hypoglycemia and, secondarily, to improve the percentage of time in euglycemia and the average glucose. In silico simulations are performed by using the University of Virginia/Padova T1D simulator enriched by incorporating three novel features: intra- and interday variability of insulin sensitivity; different distributions of CR at breakfast, lunch, and dinner; and the dawn phenomenon. Results: After about two months of using the R2R approach with a scenario characterized by a random $\pm$30% variation of the nominal insulin sensitivity, the time in range and the time in tight range are increased by 11.39% and 44.87%, respectively, and the time spent above 180 mg/dl is reduced by 48.74%. Conclusions: An adaptive MPC algorithm based on R2R shows in silico great potential to capture intra- and interday glucose variability by improving both overnight and postprandial glucose control without increasing hypoglycemia. Significance: Making an AP adaptive is key for long-term real-life outpatient studies. These good in silico results are very encouraging and worth testing in vivo." }, { "instance_id": "R25495xR25463", "comparison_id": "R25495", "paper_id": "R25463", "text": "A deep analysis on optimization techniques for appropriate PID tuning to incline efficient artificial pancreas Juvenile diabetes, or type-1 diabetes, is seen in about 5% of the patients affected by this form of the disease. Type-1 diabetes occurs mostly in children and young adults, and it continues to spread all over the world. The development of the artificial pancreas gives hope to the development of glucose monitoring sensors and insulin pumps for those who suffer from a severe lack of insulin generation. On the other hand, taking control of blood sugar is a challenging task, in which specific factors of the body limit the ability of closed-loop systems to perform well.
This paper presents an investigation of an optimized control strategy for the closed-loop artificial pancreas, based on the proportional\u2013integral\u2013derivative (PID) controller. The primary objective of this investigation is to find the best optimized model to maintain the best glucose monitoring and insulin delivery. In order to tune the PID controller to decide on the efficient insulin injection, an investigation was conducted of optimization algorithms [such as the genetic algorithm, gravitational search algorithm, particle swarm optimization, sequential randomized algorithm, brain storm optimization algorithm, class topper optimization, and gray wolf optimization algorithm (GWOA)]. Among these, it is found that the GWOA gives a promising result compared to the others." }, { "instance_id": "R25495xR25459", "comparison_id": "R25495", "paper_id": "R25459", "text": "Model predictive control with integral action for artificial pancreas A Model Predictive Control (MPC) approach with integral action, called Integral MPC (IMPC), for Artificial Pancreas systems is proposed. IMPC ensures beneficial effects in terms of regulation to target in the presence of disturbances and model uncertainties. The proposed approach exploits individualized models identified by Constrained Optimization (CO), as described in Messori et al. (2016). In order to assess the proposed IMPC in comparison with a previously published MPC, in silico experiments are carried out on realistic scenarios performed on the 100 virtual patients of the UVA/PADOVA simulator." 
}, { "instance_id": "R25495xR25475", "comparison_id": "R25495", "paper_id": "R25475", "text": "Modeling Day-to-Day Variability of Glucose\u2013Insulin Regulation Over 12-Week Home Use of Closed-Loop Insulin Delivery Parameters of physiological models of glucose\u2013insulin regulation in type 1 diabetes have previously been estimated using data collected over short periods of time and lack the quantification of day-to-day variability. We developed a new hierarchical model to relate subcutaneous insulin delivery and carbohydrate intake to continuous glucose monitoring over 12 weeks while describing day-to-day variability. Sensor glucose data sampled every 10-min, insulin aspart delivery and meal intake were analyzed from eight adults with type 1 diabetes (male/female 5/3, age ${\\text{39.9}\\,\\pm \\,\\text{9.5}}$ years, BMI $\\text{25.4}\\,\\pm \\,\\text{4.4 kg/ m}^{2}$, HbA1c ${\\text{8.4}\\,\\pm \\,\\text{0.6}}$%) who underwent a 12-week home study of closed-loop insulin delivery. A compartment model comprised of five linear differential equations; model parameters were estimated using the Markov chain Monte Carlo approach within a hierarchical Bayesian model framework. Physiologically, plausible a posteriori distributions of model parameters including insulin sensitivity, time-to-peak insulin action, time-to-peak gut absorption, and carbohydrate bioavailability, and good model fit were observed. Day-to-day variability of model parameters was estimated in the range of 38\u201379% for insulin sensitivity and 27\u201348% for time-to-peak of insulin action. In conclusion, a linear Bayesian hierarchical approach is feasible to describe a 12-week glucose\u2013insulin relationship using conventional clinical data. The model may facilitate in silico testing to aid the development of closed-loop insulin delivery systems." 
}, { "instance_id": "R25495xR25491", "comparison_id": "R25495", "paper_id": "R25491", "text": "Model individualization for artificial pancreas BACKGROUND AND OBJECTIVE The inter-subject variability characterizing the patients affected by type 1 diabetes mellitus makes automatic blood glucose control very challenging. Different patients have different insulin responses, and a control law based on a non-individualized model could be ineffective. The definition of an individualized control law in the context of artificial pancreas is currently an open research topic. In this work we consider two novel identification approaches that can be used for individualizing linear glucose-insulin models to a specific patient. METHODS The first approach belongs to the class of black-box identification and is based on a novel kernel-based nonparametric approach, whereas the second is a gray-box identification technique which relies on a constrained optimization and requires to postulate a model structure as prior knowledge. The latter is derived from the linearization of the average nonlinear adult virtual patient of the UVA/Padova simulator. Model identification and validation are based on in silico data collected during simulations of clinical protocols designed to produce a sufficient signal excitation without compromising patient safety. The identified models are evaluated in terms of prediction performance by means of the coefficient of determination, fit, positive and negative max errors, and root mean square error. RESULTS Both identification approaches were used to identify a linear individualized glucose-insulin model for each adult virtual patient of the UVA/Padova simulator. The resulting model simulation performance is significantly improved with respect to the performance achieved by a linear average model. 
CONCLUSIONS The approaches proposed in this work have shown good potential for identifying glucose-insulin models for designing individualized control laws for the artificial pancreas." }, { "instance_id": "R25495xR25489", "comparison_id": "R25495", "paper_id": "R25489", "text": "Adaptive fuzzy integral sliding mode control of blood glucose level in patients with type 1 diabetes: In silico studies Currently, the artificial pancreas is an alternative treatment to insulin therapy for patients with type 1 diabetes mellitus. Closed-loop control of the blood glucose level (BGL) is one of the difficult tasks in the biomedical engineering field, due to the nonlinear time-varying dynamics of the insulin-glucose relation combined with time delays and model uncertainties. In this paper, we propose a novel adaptive fuzzy integral sliding mode control scheme for BGL regulation. The system dynamics are identified online using fuzzy logic systems. The presented method is evaluated in in silico studies on nine different virtual patients in three different groups for two consecutive days. Simulation results demonstrate the effective performance of the proposed control scheme for BGL regulation in the presence of simultaneous meal and physical exercise disturbances. Comparison of the proposed control method with proportional-integral-derivative (PID) control and model predictive control (MPC) shows the superiority of the adaptive fuzzy integral sliding mode control over these two conventional methods of BGL regulation (PID and MPC) and over sliding mode control." }, { "instance_id": "R25495xR25481", "comparison_id": "R25495", "paper_id": "R25481", "text": "An Ontology-Based Interpretable Fuzzy Decision Support System for Diabetes Diagnosis Diabetes is a serious chronic disease. The importance of clinical decision support systems (CDSSs) to diagnose diabetes has led to extensive research efforts to improve the accuracy, applicability, interpretability, and interoperability of these systems.
However, this problem continues to require optimization. Fuzzy rule-based systems (FRBSs) are suitable for the medical domain, where interpretability is a main concern. The medical domain is data-intensive, and using electronic health record data to build the FRBS knowledge base and fuzzy sets is critical. Multiple variables are frequently required to determine a correct and personalized diagnosis, which usually makes it difficult to arrive at accurate and timely decisions. In this paper, we propose and implement a new semantically interpretable FRBS framework for diabetes diagnosis. The framework uses multiple aspects of knowledge, namely fuzzy inference, ontology reasoning, and a fuzzy analytical hierarchy process (FAHP), to provide a more intuitive and accurate design. First, we build a two-layered hierarchical and interpretable FRBS; then, we improve this by integrating an ontology reasoning process based on the SNOMED CT standard ontology. We incorporate FAHP to determine the relative medical importance of each sub-FRBS. The proposed system offers numerous unique and critical improvements regarding the implementation of an accurate, dynamic, semantically intelligent, and interpretable CDSS. The designed system considers the ontology semantic similarity of diabetes complication and symptom concepts in the fuzzy rules\u2019 evaluation process. The framework was tested using a real data set, and the results indicate how the proposed system helps physicians and patients to accurately diagnose diabetes mellitus." }, { "instance_id": "R25495xR25485", "comparison_id": "R25495", "paper_id": "R25485", "text": "Adaptive sliding mode Gaussian controller for artificial pancreas in TIDM patient Optimal closed-loop control of the blood glucose (BG) level has been a major focus for many years in realizing an artificial pancreas for type-I diabetes mellitus (TIDM) patients.
There is an urgent need to design controlled drug delivery systems with appropriate controllers, not only to regulate the BG level but also for other chronic clinical disorders requiring continuous long-term medication. As a solution to the above problem, a novel sliding mode Gaussian controller with state estimation (SMGC/SE) is proposed, whose gains vary dynamically with respect to the error signal. For the design of the SMGC/SE, a nonlinear TIDM patient model is linearized as a 9th-order state-space model with a micro-insulin dispenser. This controller is evaluated and compared with other recently published control techniques. The obtained results clearly reveal the better performance of the proposed method in regulating the BG level within the normoglycaemic range in terms of accuracy and robustness." }, { "instance_id": "R25495xR25493", "comparison_id": "R25495", "paper_id": "R25493", "text": "A fuzzy-ontology-oriented case-based reasoning framework for semantic diabetes diagnosis OBJECTIVE Case-based reasoning (CBR) is a problem-solving paradigm that uses past knowledge to interpret or solve new problems. It is suitable for experience-based and theory-less problems. Building a semantically intelligent CBR that mimics expert thinking can solve many problems, especially medical ones. METHODS Knowledge-intensive CBR using formal ontologies is an evolution of this paradigm. Ontologies can be used for case representation and storage, and as background knowledge. Using standard medical ontologies, such as SNOMED CT, enhances interoperability and integration with health care systems. Moreover, utilizing vague or imprecise knowledge further improves the CBR's semantic effectiveness. This paper proposes a fuzzy ontology-based CBR framework. It proposes a fuzzy case-base OWL2 ontology, and a fuzzy semantic retrieval algorithm that handles many feature types. MATERIAL This framework is implemented and tested on the diabetes diagnosis problem.
The fuzzy ontology is populated with 60 real diabetic cases. The effectiveness of the proposed approach is illustrated with a set of experiments and case studies. RESULTS The resulting system can answer complex medical queries related to semantic understanding of medical concepts and handling of vague terms. The resulting fuzzy case-base ontology has 63 concepts, 54 (fuzzy) object properties, 138 (fuzzy) datatype properties, 105 fuzzy datatypes, and 2640 instances. The system achieves an accuracy of 97.67%. We compare our framework with existing CBR systems and a set of five machine-learning classifiers; our system outperforms all of these systems. CONCLUSION Building an integrated CBR system can improve its performance. Representing CBR knowledge using the fuzzy ontology and building a case retrieval algorithm that treats different features differently improves the accuracy of the resulting systems." }, { "instance_id": "R25529xR25523", "comparison_id": "R25529", "paper_id": "R25523", "text": "Toward a better understanding of crowdfunding, openness and the consequences for innovation Crowdfunding is now a commonly used tool for innovating entrepreneurs, yet many unresolved questions surrounding crowdfunding\u2019s effect on innovation remain. Often, crowdfunding backers play an active role in the innovation conversation. Thus, crowdfunding can be viewed as one form of open search (actively seeking out ideas from outsiders). Beyond open search, backers also generate word of mouth awareness for the crowdfunded product. Crowdfunding backers can be thought of as the earliest possible adopters, who may be even more valuable than traditional early adopting consumers. 
In this study, data pertaining to crowdfunded products from the Kickstarter platform are coupled with survey data from the respective innovating entrepreneurs to better understand the effects of elements of crowdfunding on the subsequent market success of the crowdfunded product, as well as on the innovation focus of the crowdfunding organization. Results indicate that the amount of funding raised during a crowdfunding campaign does not significantly impact the later market performance of the crowdfunded product, while the number of backers attracted to the campaign does. Open search depth (drawing intensely from external sources) enhances product market performance, while open search breadth (drawing from many external sources) induces a radical innovation focus. Interestingly, adverse effects from over-relying on external knowledge sources are not observed. The small size of the crowdfunding organizations in this study is seen as a boundary condition to previous findings of inverse U-shaped performance effects. Finally, the portion of product development completed at the time of crowdfunding impacts the entrepreneurs\u2019 subsequent focus on radical innovation." }, { "instance_id": "R25529xR25513", "comparison_id": "R25529", "paper_id": "R25513", "text": "The dynamics of crowdfunding: An exploratory study Crowdfunding allows founders of for-profit, artistic, and cultural ventures to fund their efforts by drawing on relatively small contributions from a relatively large number of individuals using the internet, without standard financial intermediaries. Drawing on a dataset of over 48,500 projects with combined funding over $237 M, this paper offers a description of the underlying dynamics of success and failure among crowdfunded ventures. It suggests that personal networks and underlying project quality are associated with the success of crowdfunding efforts, and that geography is related to both the type of projects proposed and successful fundraising. 
Finally, I find that the vast majority of founders seem to fulfill their obligations to funders, but that over 75% deliver products later than expected, with the degree of delay predicted by the level and amount of funding a project receives. These results offer insight into the emerging phenomenon of crowdfunding, and also shed light more generally on the ways that the actions of founders may affect their ability to receive entrepreneurial financing." }, { "instance_id": "R25529xR25497", "comparison_id": "R25529", "paper_id": "R25497", "text": "Crowdfunding to generate crowdsourced R&D: The alternative paradigm of societal problem solving offered by second generation innovation and R&D In a global context of resource scarcity few incentives exist for firms to pursue innovations that provide social externalities if these are not inherently profitable. The purpose of this article is to present an alternative paradigm of societal problem solving entirely premised on second generation innovation processes. Further, a theoretical model of multidimensional, or three dimensional, knowledge creation is offered, together with the notion of a multiplier effect that relates to how knowledge creation can increase exponentially when knowledge is not constrained by proprietary requirements. Second generation innovation is based on probabilistic processes that utilize and maximize economies of scale in pursuit of problem solving. Two processes that contribute to the potential of second generation innovation to solve societal problems are crowdfunding and crowdsourcing. It is argued that the processes required to enable a new paradigm in societal problem solving already exist. A further model is developed based on potential synergies between crowdfunding and crowdsourced research and development. 
This theoretical model predicts that R&D productivity can be accelerated significantly, and if applied in fields such as proteomics or medical research in general can accelerate increases in research output and therefore benefits to society." }, { "instance_id": "R25529xR25507", "comparison_id": "R25529", "paper_id": "R25507", "text": "The formation and interplay of social capital in crowdfunded social ventures The multi-levelled processes taking place in Crowdfunding (CF), when tapping a large heterogeneous crowd for resources, and the often fundamentally different intentions of individual crowd members in the case of highly desirable social ventures with little prospect for economic gains, may lead to a different logic and approach to how entrepreneurship develops. Using this under-institutionalized sphere as both context and subject, the author seeks evidence and a new understanding of entrepreneurial routes by using the sociological perspectives of Bourdieu's four forms of capital as a lens on 36 cases of social ventures. In the cases, opportunity recognition, formation and exploitation could not be distinguished as separate processes. CF and sourcing help form the actual opportunity and disperse information at the same time. In addition, the \u2018nexus\u2019 of opportunity and entrepreneur is breached in CF of social causes through the constant exchange of ideas with the crowd, leading to norm-value pairs between the funders and the entrepreneurs. Issues of identification and control are thus not based upon any formal relationship but based on perceived legitimization and offered democratic participation leading to the transformation of social capital (SC) into economic capital (EC). Success is based upon the SC of the entrepreneurial teams, yet the actual resource exchange and transformation into EC is highly moderated by cultural and symbolic capital that is being built up through the process."
}, { "instance_id": "R25529xR25527", "comparison_id": "R25529", "paper_id": "R25527", "text": "What Goes around Comes Around? Rewards as Strategic Assets in Crowdfunding In crowdfunding, rewards can make or break success. Yet reward design, choice, and planning still occur based on availability rather than strategy. To address this challenge, this article provides an empirically derived crowd-funding reward toolbox offering guidance in strategically selecting rewards. Based on a large-scale analysis of successful and unsuccessful Kickstarter projects, this article classifies rewards that are currently offered along eight dimensions. It identifies emerging patterns and derives five strategic core tools and two add-on tools. Finally, it delivers exploratory insights into the relative effectiveness of different tools that can facilitate decision making and strategic planning for entrepreneurs and individuals who plan to launch a crowdfunding project and who seek ways to reward their supporters." }, { "instance_id": "R25529xR25521", "comparison_id": "R25529", "paper_id": "R25521", "text": "The backer\u2013developer connection: Exploring crowdfunding\u2019s influence on video game production As video game development studios increasingly turn to digital crowdfunding platforms such as Kickstarter for financing, this article explores the ways in which these processes shape production. It examines in particular the interactions that typically occur between studios and players as part of crowdfunded development, analysing the ways in which these activities inform aspects of video game design. By charting the implications of this burgeoning economic model, the article contributes to scholarship concerning video game production and intervenes within more specific discussions concerning the role of the player within development. The article\u2019s case study, which draws from evidence of production concerning multiple Kickstarter projects, is organised into two sections. 
The first ascertains the degrees to which Kickstarter users can influence the details of a proposed project during a crowdfunding campaign; the second looks at how developers involve crowdfunding communities within production once funding is secured." }, { "instance_id": "R25529xR25509", "comparison_id": "R25529", "paper_id": "R25509", "text": "Entrepreneurial implications of crowdfunding as alternative funding source for innovations Crowdfunding (CF) is a form of early-stage financing for innovative ventures, which has seen tremendous growth in the past few years \u2013 partly because it provides a desperately needed alternative to the scarcity of traditional sources of finance during the so called \u2018credit crunch\u2019. CF ranges from a simple form of pre-financing to full grown debt or equity investments, but they are typically small pledges that can add up to incredible amounts. Scholarly literature has only started to examine CF and is still in an early stage when it comes to identifying implications for entrepreneurs apart from often over-simplified anecdotal evidence of success. The authors argue that CF can by no means be seen from a financial perspective only, rather it needs to be addressed as a bundle of processes leading to innovative entrepreneurial business-models. This qualitative study explores four extreme cases from the information and communications technology sphere to find out non-financial implications of CF as alternative funding source for innovative entrepreneurs and their business models." }, { "instance_id": "R25529xR25505", "comparison_id": "R25529", "paper_id": "R25505", "text": "Social finance and crowdfunding for social enterprises: a public\u2013private case study providing legitimacy and leverage The authors work closely with academia and governmental organizations in the UK and abroad to develop new, innovative schemes for social impact investing. 
Such schemes include considerations for public\u2013private collaborations, legislative actions, and especially in this case, for the leveraged use of public and philanthropic funds in Crowdfunding (CF). The relatively new phenomenon of CF can not only provide necessary funds for social enterprises, but may also lead to higher legitimacy for these enterprises through early societal interaction and participation. This legitimacy can be understood as a strong positive signal for further investors. Governmental tax-reliefs and guarantees from venture-philanthropic funds provide additional incentives for investment and endorse future scaling by leveraging additional debt-finance from specialized social banks. This case study identifies idiosyncratic hurdles explaining why an efficient social finance market has yet to be created and examines a schema as a case of how individual players\u2019 strengths and weaknesses can be balanced out by concerted action. The paper discusses the necessary actions, benefits and implications for the involved actors from the public, private and third sector." }, { "instance_id": "R25529xR25503", "comparison_id": "R25529", "paper_id": "R25503", "text": "Product and Pricing Decisions in Crowdfunding This paper studies the optimal product and pricing decisions in a crowdfunding mechanism by which a project between a creator and many buyers will be realized only if the total funds committed by the buyers reach a specified goal. When the buyers are sufficiently heterogeneous in their product valuations, the creator should offer a line of products with different levels of product quality. Compared to the traditional situation where orders are placed and fulfilled individually, with the crowdfunding mechanism, a product line is more likely than a single product to be optimal and the quality gap between products is smaller. This paper also shows the effect of the crowdfunding mechanism on pricing dynamics over time.
Together, these results underscore the substantial influence of the emerging crowdfunding mechanisms on common marketing decisions." }, { "instance_id": "R25583xR25573", "comparison_id": "R25583", "paper_id": "R25573", "text": "Models and mechanisms for implementing playful scenarios Serious games are becoming an increasingly used alternative in technical/professional/academic fields. However, scenario development poses a challenging problem since it is an expensive task, only devoted to computer specialists (game developers, programmers\u2026). The ultimate goal of our work is to propose a new scenario-building approach capable of ensuring a high degree of deployment and reusability. Thus, we will define in this paper a new generation mechanism. This mechanism is built upon a model driven architecture (MDA). We have started up by enriching the existing standards, which resulted in defining a new generic meta-model (CIM). The resulting meta-model is capable of describing and standardizing game scenarios. Then, we have laid down a new transformational mechanism in order to integrate the indexed game components into operational platforms (PSM). Finally, the effectiveness of our strategy was assessed under two separate contexts (target platforms) : the claroline-connect platform and the unity 3D environment." }, { "instance_id": "R25583xR25541", "comparison_id": "R25583", "paper_id": "R25541", "text": "Enabling control of 3D visuals, scenarios and non-linear gameplay in serious game development through model-driven authoring Due to the absence of high-level authoring environments and support for non-technical domain experts to create custom serious games, a model-driven authoring framework is presented in this paper. Through model-driven authoring, non-technical people can manipulate the 3D visuals of their serious game, model the scenarios of the game, and even easily add non-linear narrative to the game. 
The different tools and methods have been implemented and are currently used to build a serious game for the Friendly ATTAC project in order to help youngsters who are confronted with cyberbullying. The presented model-driven authoring framework enables non-technical domain experts to produce serious games easily and quickly, at a lower cost, and therefore lowers the barriers that hinder the production of serious games." }, { "instance_id": "R25583xR25569", "comparison_id": "R25583", "paper_id": "R25569", "text": "Engine- Cooperative Game Modeling (ECGM) Today game engines are popular in commercial game development, as they lower the threshold of game production by providing common technologies and convenient content-creation tools. Game engine based development is therefore the mainstream methodology in the game industry. Model-Driven Game Development (MDGD) is an emerging game development methodology, which applies the Model-Driven Software Development (MDSD) method in the game development domain. This simplifies game development by reducing the gap between game design and implementation. MDGD has to take advantage of the existing game engines in order to be useful in commercial game development practice. However, none of the existing MDGD approaches in literature has convincingly demonstrated good integration of its tools with the game engine tool-chain. In this paper, we propose a hybrid approach named ECGM to address the integration challenges of two methodologies with a focus on the technical aspects. The approach makes a run-time engine the base of the domain framework, and uses the game engine tool-chain together with the MDGD tool-chain. ECGM minimizes the change to the existing workflow and technology, thus reducing the cost and risk of adopting MDGD in commercial game development. Our contribution is one important step towards MDGD industrialization." 
}, { "instance_id": "R25583xR25553", "comparison_id": "R25583", "paper_id": "R25553", "text": "The RPG DSL: A case study of language engineering using MDD for generating RPG games for mobile phones It is typical in the domain of digital games to have many development problems due to its increasing complexity. Those difficulties include: i) little code reuse in order to develop a cross-platform game; and ii) performing the game's verification through extensive and expensive tests. This of course results in low productivity in the development (evolution and maintenance) of game solutions. In this paper, we present a domain-specific language (DSL) for Role-Playing Game (RPG) product lines, which was completely built using a software development technique driven by high-level abstractions, called Model-Driven Development (MDD). Also, we discuss and demonstrate the several benefits of applying MDD in terms of rapid prototyping of cross-platform games, and their evaluation by means of static and dynamic verification of the game's logic properties." }, { "instance_id": "R25583xR25565", "comparison_id": "R25583", "paper_id": "R25565", "text": "Using domain-specific modeling towards computer games development industrialization This paper proposes that computer games development, in spite of its inherently creative and innovative nature, is subject to systematic industrialization targeted at predictability and productivity. The proposed approach encompasses visual domain-specific languages, semantic validators and code generators to make game developers and designers work more productively, with a higher level of abstraction and closer to their application domain. Such concepts were implemented and deployed into a host development environment, and a real-world scenario was developed to illustrate and validate the proposal."
}, { "instance_id": "R25583xR25539", "comparison_id": "R25583", "paper_id": "R25539", "text": "Enabling Educators to Design Serious Games \u2013 A Serious Game Logic and Structure Modeling Language Serious games are applications combining educational content with gameplay by integrating learning objectives into a game-like environment to keep up the player's motivation to continue playing, and hence learning. This characteristic is highly sought after in educational contexts, making serious games a big asset for didactics [1]. Offering new learning contents through a game not only induces higher motivation, employing serious games can also yield higher learning success than presenting material in a classical, non-computer based, way [2]. Only few people having the proper didactical background to tailor the learning objectives to the students' need also have the programming knowledge and game design skills allowing them to develop didactically and technically sound serious games [3, 4]. In this paper, we argue for an approach to enable didactical experts, i.e. educators, to develop serious games adapted to their own learning content. To address this problem we develop a tool allowing educators to visually design their serious games, which is based on model driven development techniques that allow the generation of software from visual models. We describe the first step towards this tool, the development of the underlying domain specific modeling language DSML." }, { "instance_id": "R25583xR25577", "comparison_id": "R25583", "paper_id": "R25577", "text": "How to integrate domain-specific languages into the game development process Domain-specific languages make the relevant details of a domain explicit while omitting the distracting ones. This implies many benefits regarding development speed and quality as well as the exchange of information between expert groups. 
In order to utilize these benefits for game development, we present a language engineering workflow that describes best practices to identify a reasonable domain abstraction, illustrated by means of a language for 2D Point & Click Adventures. We discuss how this process can be integrated into an agile, iterative development process and what thereby needs to be considered." }, { "instance_id": "R25583xR25571", "comparison_id": "R25583", "paper_id": "R25571", "text": "A Model-Based Approach for Designing Location-Based Games Location-Based Games (LBGs) are a subclass of pervasive games that make use of location technologies to consider the players' geographic position in the game rules and mechanics. This research presents a model to describe and represent LBGs. The proposed model decouples location, mechanics, and game content from their implementation. We aim at allowing LBGs to be edited quickly and deployed on many platforms. The core model component is LEGaL, a language derived from NCL (Nested Context Language) to model and represent the game structure and its multimedia contents (e.g., video, audio, 3D objects, etc.). It allows the modelling of mission-based games by supporting spatial and temporal relationships between game elements and multimedia documents. We validated our approach by implementing a LEGaL interpreter, which was coupled to an LBG authoring tool and a Game Server. These tools enabled us to reimplement a real LBG using the proposed model to attest its utility. We also edited the original game by using an external tool to showcase how simple it is to transpose an LBG using the concepts introduced in this work. Results indicate both the model and LEGaL can be used to foster the design of LBGs." }, { "instance_id": "R25583xR25555", "comparison_id": "R25583", "paper_id": "R25555", "text": "Virtual worlds on demand?
Model-driven development of javascript-based virtual world UI components for mobile apps Virtual worlds and avatar-based interactive computer games have been a hype among consumers and researchers for many years now. In recent years, such games on mobile devices also became increasingly important. However, most virtual worlds require the use of proprietary clients and authoring environments and lack portability, which limits their usefulness for targeting wider audiences, e.g. in consumer marketing or sales. Using mobile devices and client-side web technologies such as JavaScript in combination with a more automatic generation of customer-specific virtual worlds could help to overcome these limitations. Here, model-driven software development (MDD) provides a promising approach for automating the creation of user interface (UI) components for games on mobile devices. Therefore, in this paper an approach is proposed for the model-driven generation of UI components for virtual worlds using JavaScript and the upcoming Famo.us framework. The feasibility of the approach is evaluated by implementing a proof-of-concept scenario." }, { "instance_id": "R25583xR25543", "comparison_id": "R25583", "paper_id": "R25543", "text": "Generating Ambient Behaviors in Computer Role-Playing Games To compete in today's market, companies that develop computer role-playing games (CRPGs) must quickly and reliably create realistic, engaging game stories. Indeed, intricate storylines and realism that goes beyond graphics have become major product differentiators. To establish both, it's essential that companies use AI to create nonplayer characters (NPCs) that exhibit near-realistic ambient behaviors. Doing so offers players a rich background tapestry that makes the game more entertaining. Because storylines must come first, however, NPCs that aren't critical to the plot are often added at the end of the game development cycle - if resources are available.
To control NPCs' ambient behaviors, many computer games use custom scripts. Story authors must therefore write computer code fragments for the game world's hundreds or thousands of NPCs. Our approach lets game authors use generative behavior patterns to create scripts of complex NPC behaviors and interactions. To build these patterns, they use our publicly available ScriptEase tool, which lets them create game stories using a high-level, menu-driven programming model" }, { "instance_id": "R25583xR25581", "comparison_id": "R25583", "paper_id": "R25581", "text": "RealCoins A Case Study of Enhanced Model Driven Development for Pervasive Games Model Driven Development (MDD) and Domain Specific Modeling (DSM) have been widely used in information system domains and achieved success in many open or in-house scenarios. But their application in the game domain is rare and immature. In our research, we identified three issues that should be considered carefully in order to play the strength of MDD in the game development environment to a larger extent: 1) structured domain analysis should be done to assure the size and familiarity of the domain; 2) adapted process should be designed to save cost and support evolution; and 3) proper tools (especially language workbenches) should be evaluated and utilized to ease DSM tasks and accelerate iterations. In this paper, we explain these three issues and illustrate our solutions to them by presenting the development details (both technical and procedural) of one pervasive game case. We evaluate the gains and costs of involving MDD in the game development process. We reflect on the issues we have met, and discuss possible future works as well." }, { "instance_id": "R25583xR25557", "comparison_id": "R25583", "paper_id": "R25557", "text": "A visual language for the creation of narrative educational games This paper presents a DSVL that simplifies educational video game development for educators, who do not have programming backgrounds.
Other solutions that reduce the cost and complexity of educational video game development have been proposed, but simple-to-use approaches tailored to the specific needs of educators are still needed. We use a multidisciplinary approach based on visual language and narrative theory concepts to create an easy-to-understand and easy-to-maintain description of games. This language specifically targets games of the adventure point-and-click genre. The resulting DSVL uses an explicit flow representation to help educational game authors (i.e. educators) design the story-flow of adventure games, while providing specific features for the integration of educational characteristics (e.g. student assessment and content adaptation). These highly visual descriptions can then be automatically transformed into playable educational video games." }, { "instance_id": "R25583xR25551", "comparison_id": "R25583", "paper_id": "R25551", "text": "Model-driven development of interactive and integrated 2D and 3D user interfaces using mml While there is a lot of research done in the area of 2D or 3D user interface (UI) construction, comparatively little is known about systematic approaches to designing and developing integrated 2D/3D UIs and applications. The previously developed Multimedia Modeling Language (MML) provides a top-down approach for the model-driven development of 2D/3D UIs and applications. The MML Structure Model and Media Components provide support for including X3D-based content and automatic generation of application skeletons. We use a work instruction manual for a woodchipper as an example to illustrate how to apply MML. We discuss the ramifications of this approach and opportunities for some improvements."
}, { "instance_id": "R25583xR25559", "comparison_id": "R25583", "paper_id": "R25559", "text": "Gade4all: Developing multi-platform videogames based on domain specific languages and model-driven engineering The development of applications for mobile devices is a constantly growing market, and more and more enterprises support the development of applications for this kind of device. In that sense, videogames for mobile devices have become very popular worldwide and are now part of a highly profitable and competitive industry. Due to the diversity of platforms and mobile devices and the complexity of this kind of application, the development time and the number of errors within that development process have increased. The productivity of the developers has also decreased due to the necessity of using many programming languages in the development process. One of the most popular strategies is to employ specialized people to perform the development tasks more efficiently, but this involves an increase in costs, which makes some applications economically unviable. In this article we present the Gade4all Project, consisting of a new platform that aims to facilitate the development of videogames and entertainment software through the use of Domain Specific Languages and Model Driven Engineering. This tool makes it possible for users without previous knowledge in the field of software development to create 2D videogames for multiplatform mobile devices in a simple and innovative way." }, { "instance_id": "R25583xR25533", "comparison_id": "R25583", "paper_id": "R25533", "text": "A Flexible Model-Driven Game Development Approach Game developers are facing an increasing demand for new games every year. Game development tools can be of great help, but require highly specialized professionals. Also, just as any software development effort, game development has some challenges.
Model-Driven Game Development (MDGD) is suggested as a means to solve some of these challenges, but with a loss in flexibility. We propose an MDGD approach that combines multiple domain-specific languages (DSLs) with design patterns to provide flexibility and allow generated code to be integrated with manual code. After experimentation, we observed that, with the approach, less experienced developers can create games faster and more easily, and the product of code generation can be customized with manually written code, providing flexibility. However, with MDGD, developers become less familiar with the code, making manual codification more difficult." }, { "instance_id": "R25583xR25549", "comparison_id": "R25583", "paper_id": "R25549", "text": "Model-Driven Serious Game Development Integration of the Gamification Modeling Language GaML with Unity The development of gamification within non-game information systems as well as serious games has recently gained an important role in a variety of business fields due to promising behavioral or psychological improvements. However, industries still struggle with the high efforts of implementing gameful affordances in non-game systems. In order to decrease factors such as project costs, development cycles, and resource consumption as well as to improve the quality of products, the gamification modeling language has been proposed in prior research. However, the language is on a descriptive level only, i.e., it cannot be used to automatically generate executable software artifacts. In this paper and based on this language, we introduce a model-driven architecture for designing as well as generating building blocks for serious games. Furthermore, we give a validation of our approach by going through the different steps of designing an achievement system in the context of an existing serious game."
}, { "instance_id": "R25629xR25625", "comparison_id": "R25629", "paper_id": "R25625", "text": "Assimilation of agile practices in use Agile method use in information systems development (ISD) has grown dramatically in recent years. The emergence of these alternative approaches was very much industry\u2010led at the outset, and while agile method research is growing, the vast majority of these studies are descriptive and often lack a strong theoretical and conceptual base. Insights from innovation adoption research can provide a new perspective on analysing agile method use. This paper is based on an exploratory study of the application of the innovation assimilation stages to understand the use of agile practices, focusing in particular on the later stages of assimilation, namely acceptance, routinisation and infusion. Four case studies were conducted, and based on the case study findings, the concepts of acceptance, routinisation and infusion were adapted and applied to agile software development. These adapted concepts were used to glean interesting insights into agile practice use. For example, it was shown that the period of use of agile practices does not have a proportional effect on their assimilation depths. We also reflected on the sequential assumption underlying the assimilation stages, showing that adopting teams do not always move through the assimilation stages in a linear manner." }, { "instance_id": "R25629xR25608", "comparison_id": "R25629", "paper_id": "R25608", "text": "Identifying some important success factors in adopting agile software development practices Agile software development (ASD) is an emerging approach in software engineering, initially advocated by a group of 17 software professionals who practice a set of ''lightweight'' methods, and share a common set of values of software development. 
In this paper, we advance the state of the art of research in this area by conducting a survey-based ex-post-facto study for identifying factors from the perspective of the ASD practitioners that will influence the success of projects that adopt ASD practices. In this paper, we describe a hypothetical success factors framework we developed to address our research question, the hypotheses we conjectured, the research methodology, the data analysis techniques we used to validate the hypotheses, and the results we obtained from data analysis. The study was conducted using an unprecedentedly large-scale survey-based methodology, consisting of respondents who practice ASD and who had experience practicing plan-driven software development in the past. The study indicates that nine of the 14 hypothesized factors have a statistically significant relationship with ''Success''. The important success factors that were found are: customer satisfaction, customer collaboration, customer commitment, decision time, corporate culture, control, personal characteristics, societal culture, and training and learning." }, { "instance_id": "R25629xR25591", "comparison_id": "R25629", "paper_id": "R25591", "text": "Usage and Perceptions of Agile Software Development in an Industrial Context: An Exploratory Study Agile development methodologies have been gaining acceptance in the mainstream software development community. While there are numerous studies of agile development in academic and educational settings, there has been little detailed reporting of the usage, penetration and success of agile methodologies in traditional, professional software development organizations. We report on the results of an empirical study conducted at Microsoft to learn about agile development and its perception by people in development, testing, and management.
We found that one-third of the study respondents use agile methodologies to varying degrees, and most view it favorably due to improved communication between team members, quick releases and the increased flexibility of agile designs. The scrum variant of agile methodologies is by far the most popular at Microsoft. Our findings also indicate that developers are most worried about scaling agile to larger projects (greater than twenty members), attending too many meetings and coordinating agile and non-agile teams." }, { "instance_id": "R25629xR25602", "comparison_id": "R25629", "paper_id": "R25602", "text": "Acceptance of software process innovations - The case of extreme programming Extreme programming (XP), arguably the most popular agile development methodology, is increasingly finding favor among software developers. Its adoption and acceptance require significant changes in work habits inculcated by traditional approaches that emphasize planning, prediction, and control. Given the growing interest in XP, it is surprising that there is a paucity of research articles that examine the factors that facilitate or hinder its adoption and eventual acceptance. This study aims to fill this void. Using a case study approach, we provide insights into individual, team, technological, task, and environmental factors that expedite or impede the organization-wide acceptance of XP. In particular, we study widely differing patterns of adherence to XP practices within an organization, and tease out the various issues and challenges posed by the adoption of XP. Based on our findings, we evolve factors and discuss their implications on the acceptance of XP practices." }, { "instance_id": "R25629xR25599", "comparison_id": "R25629", "paper_id": "R25599", "text": "Effects of agile practices on social factors Programmers are living in an age of accelerated change. State of the art technology that was employed to facilitate projects a few years ago is typically obsolete today.
Presently, there are requirements for higher quality software with less tolerance for errors, produced in compressed timelines with fewer people. Therefore, project success is more elusive than ever and is contingent upon many key aspects. One of the most crucial aspects is social factors. These social factors, such as knowledge sharing, motivation, and customer collaboration, can be addressed through agile practices. This paper will demonstrate two successful industrial software projects which are different in all aspects; however, both still apply agile practices to address social factors. The readers will see how agile practices in both projects were adapted to fit each unique team environment. The paper will also provide lessons learned and recommendations based on retrospective reviews and observations. These recommendations can lead to an improved chance of success in a software development project." }, { "instance_id": "R25629xR25619", "comparison_id": "R25629", "paper_id": "R25619", "text": "The Impact of Organizational Culture on Agile Method Use Agile method proponents believe that organizational culture has an effect on the extent to which an agile method is used. Research into the relationship between organizational culture and information systems development methodology deployment has been explored by others using the Competing Values Framework (CVF). However this relationship has not been explored with respect to the agile development methodologies. Based on a multi-case study of nine projects we show that specific organizational culture factors correlate with effective use of an agile method. Our results contribute to the literature on organizational culture and system development methodology use." 
}, { "instance_id": "R25629xR25595", "comparison_id": "R25629", "paper_id": "R25595", "text": "Agile systems development and stakeholder satisfaction: a South African empirical study The high rate of systems development (SD) failure is often attributed to the complexity of traditional SD methodologies (e.g. Waterfall) and their inability to cope with changes brought about by today's dynamic and evolving business environment. Agile methodologies (AM) have emerged to challenge traditional SD and overcome their limitations. Yet empirical research into AM is sparse. This paper develops and tests a research model that hypothesizes the effects of five characteristics of agile systems development (iterative development; continuous integration; test-driven design; feedback; and collective ownership) on two dependent stakeholder satisfaction measures, namely stakeholder satisfaction with the development process and with the development outcome. An empirical study of 59 South African development projects (using self reported data) provided support for all hypothesized relationships and generally supports the efficacy of AM. Iteration and integration together with collective ownership have the strongest effects on the dependent satisfaction measures." }, { "instance_id": "R25629xR25627", "comparison_id": "R25629", "paper_id": "R25627", "text": "Experience Report: The Social Nature of Agile Teams Agile software development is often, but not always, associated with the term dasiaproject chemistry,psila or the positive team climate that can contribute to high performance. A qualitative study involving 22 participants in agile teams sought to explore this connection, and answer the question: what aspects of agile software development are related to team cohesion? The following is a discussion of participant experiences as seen through a socio-psychological lens. 
It draws from social-identity theory and socio-psychological literature to explain, not only how, but why agile methodologies support teamwork and collective progress. Agile practices are shown to produce a socio-psychological environment of high-performance, with many of the practical benefits of agile practices being supported and mediated by social and personal concerns." }, { "instance_id": "R25629xR25593", "comparison_id": "R25629", "paper_id": "R25593", "text": "A survey study of critical success factors in agile software projects While software is so important for all facets of the modern world, software development itself is not a perfect process. Agile software engineering methods have recently emerged as a new and different way of developing software as compared to the traditional methodologies. However, their success has mostly been anecdotal, and research in this subject is still scant in the academic circles. This research study was a survey study on the critical success factors of Agile software development projects using quantitative approach. Based on existing literature, a preliminary list of potential critical success factors of Agile projects were identified and compiled. Subsequently, reliability analysis and factor analysis were conducted to consolidate this preliminary list into a final set of 12 possible critical success factors for each of the four project success categories - Quality, Scope, Time, and Cost. A survey was conducted among Agile professionals, gathering survey data from 109 Agile projects from 25 countries across the world. Multiple regression techniques were used, both at the full regression model and at the optimized regression model via the stepwise screening procedure. The results revealed that only 10 out of 48 hypotheses were supported, identifying three critical success factors for Agile software development projects: (a) Delivery Strategy, (b) Agile Software Engineering Techniques, and (c) Team Capability. 
Limitations of the study are discussed together with interpretations for practitioners. To ensure success of their projects, managers are urged to focus on choosing a high-caliber team, practicing Agile engineering techniques and following Agile-style delivery strategy." }, { "instance_id": "R25629xR25612", "comparison_id": "R25629", "paper_id": "R25612", "text": "Investigating the Long-Term Acceptance of Agile Methodologies: An Empirical Study of Developer Perceptions in Scrum Projects Agile development methodologies have gained great interest in research and practice. As their introduction considerably changes traditional working habits of developers, the long-term acceptance of agile methodologies becomes a critical success factor. Yet, current studies primarily examine the early adoption stage of agile methodologies. To investigate the long-term acceptance, we conducted a study at a leading insurance company that introduced Scrum in 2007. Using a qualitative research design and the Diffusion of Innovations Theory as a lens for analysis, we gained in-depth insights into factors influencing the acceptance of Scrum. Particularly, developers felt Scrum to be more compatible to their actual working practices. Moreover, they perceived the use of Scrum to deliver numerous relative advantages. However, we also identified possible barriers to acceptance since developers felt both the complexity of Scrum and the required discipline to be higher in comparison with traditional development methodologies." }, { "instance_id": "R25629xR25615", "comparison_id": "R25629", "paper_id": "R25615", "text": "The Impact of Methods and Techniques on Outcomes from Agile Software Development Projects Agile software development methods have become increasingly popular since the late 1990s, and may offer improved outcomes for software development projects when compared to more traditional approaches. 
However there has previously been little published empirical evidence to either prove or disprove this assertion. A survey carried out in March 2006 gathered responses from a large number of software development professionals who were using many different methods, both traditional and agile. A statistical analysis of this data reveals that agile methods do indeed improve outcomes from software development projects in terms of quality, satisfaction, and productivity, without a significant increase in cost. However, adoption of methods appears to involve a high degree of adaptivity, with many methods being used in combination and sets of techniques being adopted on an ad hoc basis. In this context, our analysis suggests that choosing specific combinations of methods can be particularly beneficial. However, we also find that successful adoption of an agile method is to some extent dependent on rigorous integration of certain core techniques." }, { "instance_id": "R25629xR25617", "comparison_id": "R25629", "paper_id": "R25617", "text": "Understanding post-adoptive agile usage: An exploratory cross- case analysis The widespread adoption of agile methodologies raises the question of their continued and effective usage in organizations. An agile usage model consisting of innovation, sociological, technological, team, and organizational factors is used to inform an analysis of post-adoptive usage of agile practices in two major organizations. Analysis of the two case studies found that a methodology champion and top management support were the most important factors influencing continued usage, while innovation factors such as compatibility seemed less influential. Both horizontal and vertical usage was found to have significant impact on the effectiveness of agile usage." 
}, { "instance_id": "R25663xR25639", "comparison_id": "R25663", "paper_id": "R25639", "text": "Pick-by-Vision: A first stress test In this paper we report on our ongoing studies around the application of Augmented Reality methods to support the order picking process of logistics applications. Order picking is the gathering of goods out of a prepared range of items following some customer orders. We named the visual support of this order picking process using Head-mounted Displays \u201cPick-by-Vision\u201d. This work presents the case study of bringing our previously developed Pickby-Vision system from the lab to an experimental factory hall to evaluate it under more realistic conditions. This includes the execution of two user studies. In the first one we compared our Pickby-Vision system with and without tracking to picking using a paper list to check picking performance and quality in general. In a second test we had subjects using the Pick-by-Vision system continuously for two hours to gain in-depth insight into the longer use of our system, checking user strain besides the general performance. Furthermore, we report on the general obstacles of trying to use HMD-based AR in an industrial setup and discuss our observations of user behaviour." }, { "instance_id": "R25663xR25631", "comparison_id": "R25663", "paper_id": "R25631", "text": "Reducing Warehouse Employee Errors Using Voice-Assisted Technology That Provided Immediate Feedback Abstract A foodservice distributor in the southeastern United States implemented a voice assisted selecting tool to reduce selector errors by providing immediate feedback when errors occurred. An AB design with a nonequivalent comparison group was used to examine the effects of the voice technology on 132 selectors whose mispicks and shorts were collected over 6 weeks of baseline and 8 weeks of the intervention phase. 
Selector errors were reduced from 2.44 errors per 1,000 cases picked to 0.94 errors per 1,000 cases when voice technology was implemented. Further analysis indicated that the immediate feedback provided by voice had a greater impact on employees who were making the most errors during baseline." }, { "instance_id": "R25663xR25661", "comparison_id": "R25663", "paper_id": "R25661", "text": "A Comparative Study of an Assistance System for Manual Order Picking -- Called Pick-by-Projection -- with the Guiding Systems Pick-by-Paper, Pick-by-Light and Pick-by-Display Changes and innovations are needed in the area of instruction and control of employees to perform reliably and cost effective in order picking. The information must be easily accessible - communicated in a succinct way to overcome intellectual, socio-educational as well as language barriers. This case mainly focuses on conducting research in the field of technical support by assistance systems for impaired people and people with altered performance, but also for people without restrictions. This paper aims at presenting the prototype of a new assistance system for manual order picking. In addition, the prototype was evaluated in a user study involving 24 people with three other methods (pick-by-paper, pick-by-light, pick-by-display), which represent the current state of the art. We report about the number of errors, the task completion time and the cognitive workload for all four approaches. The results show that this new kind of assistance system can have benefits, particularly in the area of error prevention and workload." }, { "instance_id": "R25663xR25635", "comparison_id": "R25663", "paper_id": "R25635", "text": "Augmented & Virtual Reality applications in the field of logistics Changing basic conditions in the field of logistics requires intuitive planning methods as well as new ways of supporting and educating the operative staff. VR and AR show a great promise for that."
}, { "instance_id": "R25663xR25657", "comparison_id": "R25663", "paper_id": "R25657", "text": "Pick from here! Order Picking is not only one of the most important but also most mentally demanding and error-prone tasks in the industry. Both stationary and wearable systems have been introduced to facilitate this task. Existing stationary systems are not scalable because of the high cost and wearable systems have issues being accepted by the workers. In this paper, we introduce a mobile camera-projector cart called OrderPickAR, which combines the benefits of both stationary and mobile systems to support order picking through Augmented Reality. Our system dynamically projects in-situ picking information into the storage system and automatically detects when a picking task is done. In a lab study, we compare our system to existing approaches, i.e, Pick-by-Paper, Pick-by-Voice, and Pick-by-Vision. The results show that using the proposed system, order picking is almost twice as fast as other approaches, the error rate is decreased up to 9 times, and mental demands are reduced up to 50%." }, { "instance_id": "R25663xR25653", "comparison_id": "R25663", "paper_id": "R25653", "text": "A comparison of order picking assisted by head-up display (HUD), cart-mounted display (CMD), light, and paper pick list Wearable and contextually aware technologies have great applicability in task guidance systems. Order picking is the task of collecting items from inventory in a warehouse and sorting them for distribution; this process accounts for about 60% of the total operational costs of these warehouses. Current practice in industry includes paper pick lists and pick-by-light systems. We evaluated order picking assisted by four approaches: head-up display (HUD); cart-mounted display (CMD); pick-by-light; and paper pick list. We report accuracy, error types, task time, subjective task load and user preferences for all four approaches. 
The findings suggest that pick-by-HUD and pick-by-CMD are superior on all metrics to the current practices of pick-by-paper and pick-by-light." }, { "instance_id": "R25663xR25643", "comparison_id": "R25663", "paper_id": "R25643", "text": "Pick-by-vision: there is something to pick at the end of the augmented tunnel We report on the long process of exploring, evaluating and refining augmented reality-based methods to support the order picking process of logistics applications. Order picking means that workers have to pick items out of numbered boxes in a warehouse, according to a work order. To support those workers, we have evaluated different HMD-based visualizations in six user studies, starting in a laboratory setup and continuing later in an industrial environment. This was a challenging task, as we had to conquer different kinds of navigation problems from very coarse to very fine granularity and accuracy. The resulting setup consists of a combined and adaptive visualization to precisely and efficiently guide the user even if the actual picking target is not always in the field of view of the HMD." }, { "instance_id": "R25694xR6515", "comparison_id": "R25694", "paper_id": "R6515", "text": "Formal Linked Data Visualization Model Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. 
We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyses on Linked Data." }, { "instance_id": "R25694xR6519", "comparison_id": "R25694", "paper_id": "R6519", "text": "Payola: Collaborative Linked Data Analysis and Visualization Framework Payola is a framework for Linked Data analysis and visualization. The goal of the project is to provide end users with a tool enabling them to analyze Linked Data in a user-friendly way and without knowledge of SPARQL query language. This goal can be achieved by populating the framework with a variety of domain-specific analysis and visualization plugins. The plugins can be shared and reused among the users as well as the created analyses. The analyses can be executed using the tool and the results can be visualized using a variety of visualization plugins. The visualizations can be further customized according to ontologies used in the resulting data. The framework is highly extensible and uses modern technologies such as HTML5 and Scala. In this paper we show two use cases, one general and one from the domain of public procurement." }, { "instance_id": "R25694xR6531", "comparison_id": "R25694", "paper_id": "R6531", "text": "Using Semantics for Interactive Visual Analysis of Linked Open Data Providing easy to use methods for visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. This paper provides a demonstration of an interactive, Web-based visualisation tool, the \"Vis Wizard\", which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using brushing and linking methods."
}, { "instance_id": "R25694xR6535", "comparison_id": "R25694", "paper_id": "R6535", "text": "LinkDaViz \u2013 Automatic Binding of Linked Data to Visualizations As the Web of Data is growing steadily, the demand for user-friendly means for exploring, analyzing and visualizing Linked Data is also increasing. The key challenge for visualizing Linked Data consists in providing a clear overview of the data and supporting non-technical users in finding suitable visualizations while hiding technical details of Linked Data and visualization configuration. In order to accomplish this, we propose a largely automatic workflow which guides users through the process of creating visualizations by automatically categorizing and binding data to visualization parameters. The approach is based on a heuristic analysis of the structure of the input data and a comprehensive visualization model facilitating the automatic binding between data and visualization parameters. The resulting assignments are ranked and presented to the user. With LinkDaViz we provide a web-based implementation of the approach and demonstrate the feasibility by an extended user and performance evaluation." }, { "instance_id": "R25694xR25683", "comparison_id": "R25694", "paper_id": "R25683", "text": "Information content based ranking metric for linked open vocabularies It is widely accepted that by controlling metadata, it is easier to publish high quality data on the web. Metadata, in the context of Linked Data, refers to vocabularies and ontologies used for describing data. With more and more data published on the web, the need for reusing controlled taxonomies and vocabularies is becoming more and more a necessity. Catalogues of vocabularies are generally a starting point to search for vocabularies based on search terms. Some recent studies recommend that it is better to reuse terms from \"popular\" vocabularies [4]. 
However, there is not yet an agreement on what makes a popular vocabulary since it depends on diverse criteria such as the number of properties, the number of datasets using part or the whole vocabulary, etc. In this paper, we propose a method for ranking vocabularies based on an information content metric which combines three features: (i) the datasets using the vocabulary, (ii) the outlinks from the vocabulary and (iii) the inlinks to the vocabulary. We applied this method to 366 vocabularies described in the LOV catalogue. The results are then compared with other catalogues which provide alternative rankings." }, { "instance_id": "R25694xR6499", "comparison_id": "R25694", "paper_id": "R6499", "text": "Facets and Pivoting for Flexible and Usable Linked Data Exploration The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, what makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, they continue to be monolithic files completely opaque or difficult to explore by making tedious semantic queries. Our objective is to facilitate the user to grasp what kind of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset." 
}, { "instance_id": "R25726xR25698", "comparison_id": "R25726", "paper_id": "R25698", "text": "Node-centric RDF Graph Visualization RDF, visualization, Resource Description Framework, graph, browser, nodecentric This paper describes a node-centric technique for visualizing Resource Description Framework (RDF) graphs. Nodes of interest are discovered by searching over literals. Subgraphs for display are constructed by using the area around selected nodes. Wider views are created by sorting and displaying nodes based on the number of incoming and outgoing arcs." }, { "instance_id": "R25726xR6429", "comparison_id": "R25726", "paper_id": "R6429", "text": "Using Clusters in RDF Visualization Clustered graph visualization techniques are an easy to understand way of hiding complex parts of a visualized graph when they are not needed by the user. When visualizing RDF, there are several situations where such clusters are defined in a very natural way. Using this techniques, we can give the user optional access to some detailed information without unnecessarily occupying space in the basic view of the data. This paper describes algorithms for clustered visualization used in the Trisolda RDF visualizer. Most notable is the newly added clustered navigation technique." }, { "instance_id": "R25726xR6465", "comparison_id": "R25726", "paper_id": "R6465", "text": "Visualizing Populated Ontologies with OntoTrix Research on visualizing Semantic Web data has yielded many tools that rely on information visualization techniques to better support the user in understanding and editing these data. Most tools structure the visualization according to the concept definitions and interrelations that constitute the ontology\u2019s vocabulary. Instances are often treated as somewhat peripheral information, when considered at all. These instances, that populate ontologies, represent an essential part of any knowledge base. 
Understanding instance-level data might be easier for users because of their higher concreteness, but instances will often be orders of magnitude more numerous than the concept definitions that give them machine-processable meaning. As such, the visualization of instance-level data poses different but real challenges. The authors present a visualization technique designed to enable users to visualize large instance sets and the relations that connect them. This visualization uses both node-link and adjacency matrix representations of graphs to visualize different parts of the data depending on their semantic and local structural properties. The technique was originally devised for simple social network visualization. The authors extend it to handle the richer and more complex graph structures of populated ontologies, exploiting ontological knowledge to drive the layout of, and navigation in, the representation embedded in a smooth zoomable environment." }, { "instance_id": "R25726xR6445", "comparison_id": "R25726", "paper_id": "R6445", "text": "ZoomRDF: semantic fisheye zooming on RDF data. With the development of Semantic Web in recent years, an increasing amount of semantic data has been created in form of Resource Description Framework (RDF). Current visualization techniques help users quickly understand the underlying RDF data by displaying its structure in an overview. However, detailed information can only be accessed by further navigation. An alternative approach is to display the global context as well as the local details simultaneously in a unified view. This view supports the visualization and navigation on RDF data in an integrated way. 
In this demonstration, we present ZoomRDF, a framework that: i) adapts a space-optimized visualization algorithm for RDF, which allows more resources to be displayed, thus maximizes the utilization of display space, ii) combines the visualization with a fisheye zooming concept, which assigns more space to some individual nodes while still preserving the overview structure of the data, iii) considers both the importance of resources and the user interaction on them, which offers more display space to those elements the user may be interested in. We implement the framework based on the Gene Ontology and demonstrate that it facilitates tasks like RDF data exploration and editing." }, { "instance_id": "R25726xR25695", "comparison_id": "R25726", "paper_id": "R25695", "text": "GrouseFlocks: Steerable Exploration of Graph Hierarchy Space Several previous systems allow users to interactively explore a large input graph through cuts of a superimposed hierarchy. This hierarchy is often created using clustering algorithms or topological features present in the graph. However, many graphs have domain-specific attributes associated with the nodes and edges, which could be used to create many possible hierarchies providing unique views of the input graph. GrouseFlocks is a system for the exploration of this graph hierarchy space. By allowing users to see several different possible hierarchies on the same graph, the system helps users investigate graph hierarchy space instead of a single fixed hierarchy. GrouseFlocks provides a simple set of operations so that users can create and modify their graph hierarchies based on selections. These selections can be made manually or based on patterns in the attribute data provided with the graph. It provides feedback to the user within seconds, allowing interactive exploration of this space." 
}, { "instance_id": "R25726xR25706", "comparison_id": "R25726", "paper_id": "R25706", "text": "Gephi: An Open Source Software for Exploring and Manipulating Networks Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization." }, { "instance_id": "R25726xR6413", "comparison_id": "R25726", "paper_id": "R6413", "text": "NodeTrix: a Hybrid Visualization of Social Networks The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. 
Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results." }, { "instance_id": "R25726xR25710", "comparison_id": "R25726", "paper_id": "R25710", "text": "A Visualization Service for the Semantic Web The Web presents an opportunity to openly collaborate and share visualizations of semantic web data. Many desktop tools for visually exploring ontologies exist; however, few researchers have investigated how visualizations could be used in an online environment to enhance the semantic web. In this paper, we present our experience with developing a visualization service for the semantic web. We discuss the advantages and challenges with moving to a web-based platform, as well as the features of the service through several case studies. We reflect on this experience and provide recommendations for future work and data integrations." }, { "instance_id": "R25726xR25701", "comparison_id": "R25726", "paper_id": "R25701", "text": "GrOWL: A Tool for Visualization and Editing of OWL Ontologies In an effort to optimize visualization and editing of OWL ontologies we have developed GrOWL: a browser and visual editor for OWL that accurately visualizes the underlying DL semantics of OWL ontologies while avoiding the difficulties of the verbose OWL syntax. In this paper, we discuss GrOWL visualization model and the essential visualization techniques implemented in GrOWL." }, { "instance_id": "R25726xR6421", "comparison_id": "R25726", "paper_id": "R6421", "text": "Browsing Linked Data with Fenfire A wealth of information has recently become available as browsable RDF data on the Web, but the selection of client applications to interact with this Linked Data remains limited. 
We show how to browse Linked Data with Fenfire, a Free and Open Source Software RDF browser and editor that employs a graph view and focuses on an engaging and interactive browsing experience. This sets Fenfire apart from previous table- and outline-based Linked Data browsers." }, { "instance_id": "R25726xR25714", "comparison_id": "R25726", "paper_id": "R25714", "text": "A Novel Approach to Visualizing and Navigating Ontologies There is empirical evidence that the user interaction metaphors used in ontology engineering toolkits are largely inadequate and that novel interactive frameworks for human ontology interaction are needed. Here we present a novel tool for visualizing and navigating ontologies, called KC Viz, which exploits an innovative ontology summarization method to support a \u2019middleout ontology browsing\u2019 approach, where it becomes possible to navigate ontologies starting from the most information-rich nodes (i.e., key concepts). This approach is similar to map-based visualization and navigation in Geographical Information Systems, where, e.g., major cities are displayed more prominently than others, depending on the current level of granularity." }, { "instance_id": "R25726xR6441", "comparison_id": "R25726", "paper_id": "R6441", "text": "Interactive Relationship Discovery via the Semantic Web This paper presents an approach for the interactive discovery of relationships between selected elements via the Semantic Web. It emphasizes the human aspect of relationship discovery by offering sophisticated interaction support. Selected elements are first semi-automatically mapped to unique objects of Semantic Web datasets. These datasets are then crawled for relationships which are presented in detail and overview. Interactive features and visual clues allow for a sophisticated exploration of the found relationships. 
The general process is described and the RelFinder tool as a concrete implementation and proof-of-concept is presented and evaluated in a user study. The application potentials are illustrated by a scenario that uses the RelFinder and DBpedia to assist a business analyst in decision-making. Main contributions compared to previous and related work are data aggregations on several dimensions, a graph visualization that displays and connects relationships also between more than two given objects, and an advanced implementation that is highly configurable and applicable to arbitrary RDF datasets." }, { "instance_id": "R25726xR6417", "comparison_id": "R25726", "paper_id": "R6417", "text": "RDF data exploration and visualization We present Paged Graph Visualization (PGV), a new semi-autonomous tool for RDF data exploration and visualization. PGV consists of two main components: a) the \"PGV explorer\" and b) the \"RDF pager\" module utilizing BRAHMS, our high per-formance main-memory RDF storage system. Unlike existing graph visualization techniques which attempt to display the entire graph and then filter out irrelevant data, PGV begins with a small graph and provides the tools to incrementally explore and visualize relevant data of very large RDF ontologies. We implemented several techniques to visualize and explore hot spots in the graph, i.e. nodes with large numbers of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV explorer obtains the necessary sub-graphs from the RDF pager and enables their incremental visualization leaving the previously laid out sub-graphs intact. We outline the problem of visualizing large RDF data sets, discuss our interface and its implementation, and through a controlled experiment we show the benefits of PGV." 
}, { "instance_id": "R25762xR25746", "comparison_id": "R25762", "paper_id": "R25746", "text": "Mining High Utility Itemsets in Large High Dimensional Data Existing algorithms for utility mining are inadequate on datasets with high dimensions or long patterns. This paper proposes a hybrid method, which is composed of a row enumeration algorithm (i.e., inter-transaction) and a column enumeration algorithm (i.e., two-phase), to discover high utility itemsets from two directions: Two-phase seeks short high utility itemsets from the bottom, while inter-transaction seeks long high utility itemsets from the top. In addition, optimization technique is adopted to improve the performance of computing the intersection of transactions. Experiments on synthetic data show that the hybrid method achieves high performance in large high dimensional datasets." }, { "instance_id": "R25762xR25728", "comparison_id": "R25762", "paper_id": "R25728", "text": "A fast high utility itemsets mining algorithm Association rule mining (ARM) identifies frequent itemsets from databases and generates association rules by considering each item in equal value. However, items are actually different in many aspects in a number of real applications, such as retail marketing, network log, etc. The difference between items makes a strong impact on the decision making in these applications. Therefore, traditional ARM cannot meet the demands arising from these applications. By considering the different values of individual items as utilities, utility mining focuses on identifying the itemsets with high utilities. As \"downward closure property\" doesn't apply to utility mining, the generation of candidate itemsets is the most costly in terms of time and memory space. In this paper, we present a Two-Phase algorithm to efficiently prune down the number of candidates and can precisely obtain the complete set of high utility itemsets. 
In the first phase, we propose a model that applies the \"transaction-weighted downward closure property\" on the search space to expedite the identification of candidates. In the second phase, one extra database scan is performed to identify the high utility itemsets. We also parallelize our algorithm on shared memory multi-process architecture using Common Count Partitioned Database (CCPD) strategy. We verify our algorithm by applying it to both synthetic and real databases. It performs very efficiently in terms of speed and memory cost, and shows good scalability on multiple processors, even on large databases that are difficult for existing algorithms to handle." }, { "instance_id": "R25762xR25750", "comparison_id": "R25762", "paper_id": "R25750", "text": "Efficient Mining of High Utility Itemsets from Large Datasets High utility itemsets mining extends frequent pattern mining to discover itemsets in a transaction database with utility values above a given threshold. However, mining high utility itemsets presents a greater challenge than frequent itemset mining, since high utility itemsets lack the anti-monotone property of frequent itemsets. Transaction Weighted Utility (TWU) proposed recently by researchers has anti-monotone property, but it is an overestimate of itemset utility and therefore leads to a larger search space. We propose an algorithm that uses TWU with pattern growth based on a compact utility pattern tree data structure. Our algorithm implements a parallel projection scheme to use disk storage when the main memory is inadequate for dealing with large datasets. Experimental evaluation shows that our algorithm is more efficient compared to previous algorithms and can mine larger datasets of both dense and sparse data containing long patterns." 
}, { "instance_id": "R25762xR25738", "comparison_id": "R25762", "paper_id": "R25738", "text": "CTU-Mine: An Efficient High Utility Itemset Mining Algorithm Using the Pattern Growth Approach Frequent pattern mining discovers patterns in transaction databases based only on the relative frequency of occurrence of items without considering their utility. For many real world applications, however, utility of itemsets based on cost, profit or revenue is of importance. The utility mining problem is to find itemsets that have higher utility than a user specified minimum. Unlike itemset support in frequent pattern mining, itemset utility does not have the anti-monotone property and so efficient high utility mining poses a greater challenge. Recent research on utility mining has been based on the candidate-generation-and-test approach which is suitable for sparse data sets with short patterns, but not feasible for dense data sets or long patterns. In this paper we propose a new algorithm called CTU-Mine that mines high utility itemsets using the pattern growth approach. We have tested our algorithm on several dense data sets, compared it with the recent algorithms and the results show that our algorithm works efficiently." }, { "instance_id": "R25762xR25740", "comparison_id": "R25762", "paper_id": "R25740", "text": "An efficient algorithm for mining temporal high utility itemsets from data streams Utility of an itemset is considered as the value of this itemset, and utility mining aims at identifying the itemsets with high utilities. The temporal high utility itemsets are the itemsets whose support is larger than a pre-specified threshold in current time window of the data stream. Discovery of temporal high utility itemsets is an important process for mining interesting patterns like association rules from data streams. 
In this paper, we propose a novel method, namely THUI (Temporal High Utility Itemsets)-Mine, for mining temporal high utility itemsets from data streams efficiently and effectively. To the best of our knowledge, this is the first work on mining temporal high utility itemsets from data streams. The novel contribution of THUI-Mine is that it can effectively identify the temporal high utility itemsets by generating fewer candidate itemsets such that the execution time can be reduced substantially in mining all high utility itemsets in data streams. In this way, the process of discovering all temporal high utility itemsets under all time windows of data streams can be achieved effectively with less memory space and execution time. This meets the critical requirements on time and space efficiency for mining data streams. Through experimental evaluation, THUI-Mine is shown to significantly outperform other existing methods like Two-Phase algorithm under various experimental conditions." }, { "instance_id": "R25762xR25734", "comparison_id": "R25762", "paper_id": "R25734", "text": "A Fast Algorithm for Mining Share-Frequent Itemsets Itemset share has been proposed as a measure of the importance of itemsets for mining association rules. The value of the itemset share can provide useful information such as total profit or total customer purchased quantity associated with an itemset in database. The discovery of share-frequent itemsets does not have the downward closure property. Existing algorithms for discovering share-frequent itemsets are inefficient or do not find all share-frequent itemsets. Therefore, this study proposes a novel Fast Share Measure (FSM) algorithm to efficiently generate all share-frequent itemsets. Instead of the downward closure property, FSM satisfies the level closure property. 
Simulation results reveal that the performance of the FSM algorithm is superior to that of the ZSP algorithm by two to three orders of magnitude for minimum share thresholds between 0.2% and 2%." }, { "instance_id": "R25762xR25754", "comparison_id": "R25762", "paper_id": "R25754", "text": "Fuzzy Weighted Association Rule Mining with Weighted Support and Confidence Framework In this paper we extend the problem of mining weighted association rules. A classical model of boolean and fuzzy quantitative association rule mining is adopted to address the issue of invalidation of the downward closure property (DCP) in weighted association rule mining, where each item is assigned a weight according to its significance w.r.t. some user-defined criteria. Most works on the DCP so far struggle with the invalid downward closure property, and some assumptions are made to validate the property. We generalize the problem of the downward closure property and propose a fuzzy weighted support and confidence framework for boolean and quantitative items with weighted settings. The problem of invalidation of the DCP is solved using an improved model of the weighted support and confidence framework for classical and fuzzy association rule mining. Our methodology follows an Apriori algorithm approach and avoids pre- and post-processing as opposed to most weighted ARM algorithms, thus eliminating the extra steps during rule generation. The paper concludes with experimental results and a discussion on evaluating the proposed framework." }, { "instance_id": "R25762xR25742", "comparison_id": "R25762", "paper_id": "R25742", "text": "A transaction mapping algorithm for frequent itemsets mining In this paper, we present a novel algorithm for mining complete frequent itemsets. This algorithm is referred to as the TM (transaction mapping) algorithm from here on.
In this algorithm, transaction ids of each itemset are mapped and compressed to continuous transaction intervals in a different space, and the counting of itemsets is performed by intersecting these interval lists in a depth-first order along the lexicographic tree. When the compression coefficient becomes smaller than the average number of comparisons for interval intersection at a certain level, the algorithm switches to transaction id intersection. We have evaluated the algorithm against two popular frequent itemset mining algorithms, FP-growth and dEclat, using a variety of data sets with short and long frequent patterns. Experimental data show that the TM algorithm outperforms these two algorithms." }, { "instance_id": "R25762xR25744", "comparison_id": "R25762", "paper_id": "R25744", "text": "A Parallel Apriori Algorithm for Frequent Itemsets Mining Finding frequent itemsets is one of the most investigated fields of data mining. The Apriori algorithm is the most established algorithm for frequent itemsets mining (FIM). Several implementations of the Apriori algorithm have been reported and evaluated. One of the implementations, by Bodon, which optimizes the data structure with a trie, catches our attention. The results of Bodon's implementation for finding frequent itemsets appear to be faster than the ones by Borgelt and Goethals. In this paper, we revised Bodon's implementation into a parallel one where input transactions are read by a parallel computer. The effect of a parallel computer on this modified implementation is presented." }, { "instance_id": "R25762xR25752", "comparison_id": "R25762", "paper_id": "R25752", "text": "An effective Fuzzy Healthy Association Rule Mining Algorithm (FHARM) In this paper we propose an effective and efficient new Fuzzy Healthy Association Rule Mining Algorithm (FHARM) that produces more interesting and quality rules by introducing new quality measures.
In this approach, edible attributes are filtered from transactional input data by projections and are then converted to Required Daily Allowance (RDA) numeric values. The averaged RDA database is then converted to a fuzzy database that contains normalized fuzzy attributes comprising different fuzzy sets. Analysis of nutritional information is then performed from the converted normalized fuzzy transactional database. The paper presents various performance tests and interestingness measures to demonstrate the effectiveness of the approach and proposes further work on evaluating our approach with other generic fuzzy association rule algorithms." }, { "instance_id": "R25762xR25730", "comparison_id": "R25762", "paper_id": "R25730", "text": "A Fast Algorithm for Mining Utility-Frequent Itemsets Utility-based data mining is a new research area interested in all types of utility factors in data mining processes and targeted at incorporating utility considerations in both predictive and descriptive data mining tasks. High utility itemset mining is a research area of utilitybased descriptive data mining, aimed at finding itemsets that contribute most to the total utility. A specialized form of high utility itemset mining is utility-frequent itemset mining, which \u2013 in addition to subjectively defined utility \u2013 also takes into account itemset frequencies. This paper presents a novel efficient algorithm FUFM (Fast Utility-Frequent Mining) which finds all utility-frequent itemsets within the given utility and support constraints threshold. It is faster and simpler than the original 2P-UF algorithm (2 Phase Utility-Frequent), as it is based on efficient methods for frequent itemset mining. Experimental evaluation on artificial datasets show that, in contrast with 2P-UF, our algorithm can also be applied to mine large databases." 
}, { "instance_id": "R25857xR25812", "comparison_id": "R25857", "paper_id": "R25812", "text": "MOF-Confined Sub-2 nm Atomically Ordered Intermetallic PdZn Nanoparticles as High-Performance Catalysts for Selective Hydrogenation of Acetylene Controllable synthesis of ultrasmall atomically ordered intermetallic nanoparticles is a challenging task, owing to the high temperature commonly required for the formation of intermetallic phases. Here, a metal-organic framework (MOF)-confined co-reduction strategy is developed for the preparation of sub-2 nm intermetallic PdZn nanoparticles, by employing the well-defined porous structures of calcinated ZIF-8 (ZIF-8C) and an in situ co-reduction therein. HAADF-STEM, HRTEM, and EDS characterizations reveal the homogeneous dispersion of these sub-2 nm intermetallic PdZn nanoparticles within the ZIF-8C frameworks. XRD, XPS, and EXAFS measurements further confirm the atomically ordered intermetallic phase nature of these sub-2 nm PdZn nanoparticles. Selective hydrogenation of acetylene evaluation results show the excellent catalytic properties of the sub-2 nm intermetallic PdZn, which result from the energetically more favorable path for acetylene hydrogenation and ethylene desorption over the ultrasmall particles than over larger-sized intermetallic PdZn as revealed by density functional theory (DFT) calculations. Moreover, this protocol is also extendable for the preparation of sub-2 nm intermetallic PtZn nanoparticles and is expected to provide a novel methodology in synthesizing ultrasmall atomically ordered intermetallic nanomaterials by rationally functionalizing MOFs." 
}, { "instance_id": "R25857xR25820", "comparison_id": "R25857", "paper_id": "R25820", "text": "Atomically Dispersed Pd on Nanodiamond/Graphene Hybrid for Selective Hydrogenation of Acetylene We reported here a strategy to use a defective nanodiamond-graphene (ND@G) to prepare an atomically dispersed metal catalyst, i.e., in the current case atomically dispersed palladium catalyst which is used for selective hydrogenation of acetylene in the presence of abundant ethylene. The catalyst exhibits remarkable performance for the selective conversion of acetylene to ethylene: high conversion (100%), ethylene selectivity (90%), and good stability. The unique structure of the catalyst (i.e., atomically dispersion of Pd atoms on graphene through Pd-C bond anchoring) blocks the formation of unselective subsurface hydrogen species and ensures the facile desorption of ethylene against the overhydrogenation to undesired ethane, which is the key for the outstanding selectivity of the catalyst." }, { "instance_id": "R25857xR25795", "comparison_id": "R25857", "paper_id": "R25795", "text": "Performance of Pd\u2013Ag/Al2O3 catalysts prepared by the selective deposition of Ag onto Pd in acetylene hydrogenation Abstract The performance of Ag-promoted Pd/Al2O3 catalysts, which were prepared by the selective deposition of Ag onto Pd using a surface redox (SR) method, during acetylene hydrogenation was compared with that of catalysts prepared by impregnation. The Pd surface was more effectively modified with Ag added by SR, even when small amounts of Ag were added. The catalyst prepared by SR showed a higher ethylene selectivity than the one prepared by impregnation, because SR allowed both the preferential deposition of Ag on the low-coordination sites of Pd and a greater electronic modification of Pd by Ag." 
}, { "instance_id": "R25857xR25816", "comparison_id": "R25857", "paper_id": "R25816", "text": "Single-Atom Pd1/Graphene Catalyst Achieved by Atomic Layer Deposition: Remarkable Performance in Selective Hydrogenation of 1,3-Butadiene We reported that atomically dispersed Pd on graphene can be fabricated using the atomic layer deposition technique. Aberration-corrected high-angle annular dark-field scanning transmission electron microscopy and X-ray absorption fine structure spectroscopy both confirmed that isolated Pd single atoms dominantly existed on the graphene support. In selective hydrogenation of 1,3-butadiene, the single-atom Pd1/graphene catalyst showed about 100% butenes selectivity at 95% conversion at a mild reaction condition of about 50 \u00b0C, which is likely due to the changes of 1,3-butadiene adsorption mode and enhanced steric effect on the isolated Pd atoms. More importantly, excellent durability against deactivation via either aggregation of metal atoms or carbonaceous deposits during a total 100 h of reaction time on stream was achieved. Therefore, the single-atom catalysts may open up more opportunities to optimize the activity, selectivity, and durability in selective hydrogenation reactions." }, { "instance_id": "R25857xR25772", "comparison_id": "R25857", "paper_id": "R25772", "text": "Detecting the Genesis of a High-Performance Carbon-Supported Pd Sulfide Nanophase and Its Evolution in the Hydrogenation of Butadiene A new procedure for preparation of palladium sulfide nanoparticles, which are deposited and anchored over highly graphitized carbon nanofibers, is presented. The preparation method is based on the use of PdSO4 as metal precursor or alternatively in the previous functionalization of the carbon surfaces with sulfonic groups by treatment with fuming sulfuric acid. 
Using an in situ high-energy X-ray diffraction technique, in both cases it is demonstrated that during the reduction treatment, the initially present palladium hydride is transformed into a palladium sulfide (Pd4S). The catalytic properties of these materials have been tested in the gas-phase butadiene partial reduction to butenes. Although metallic palladium nanoparticles supported in the same carbon fibers produce butane as the principal product, the supported Pd4S nanocrystals mainly yield different isomers of butenes independently of the conversion level. Furthermore, applying the same X-ray diffraction method reveals that this catalytic phase ..." }, { "instance_id": "R25857xR25799", "comparison_id": "R25857", "paper_id": "R25799", "text": "Palladium\u2013gallium intermetallic compounds for the selective hydrogenation of acetylenePart II: Surface characterization and catalytic performance The structurally well-defined intermetallic compounds PdGa and Pd3Ga7 constitute suitable catalysts for the selective hydrogenation of acetylene. The surface properties of PdGa and Pd3Ga7 were characterized by X-ray photoelectron spectroscopy, ion scattering spectroscopy and CO chemi- sorption. Catalytic activity, selectivity and long-term stability of PdGa and Pd3Ga7 were investigated under different acetylene hydrogenation reaction conditions, in absence and in excess of ethylene, in temperature-programmed and isothermal long-term experiments. Chemical treatment with ammonia solution - performed to remove the gallium oxide layer introduced during the milling procedure from the surface of the intermetal- lic compounds - yielded a significant increase in activity. Compared to Pd/Al2O3 and Pd20Ag80 reference catalysts, PdGa and Pd3Ga7 exhibited a similar activity per surface area, but higher selectivity and stability. 
The superior catalytic properties are attributed to the isolation of active Pd sites in the crystallographic structure of PdGa and Pd3Ga7 according to the active-site isolation concept." }, { "instance_id": "R25857xR25842", "comparison_id": "R25857", "paper_id": "R25842", "text": "Cooperative Effects in Ternary Cu\u2212Ni\u2212Fe Catalysts Lead to Enhanced Alkene Selectivity in Alkyne Hydrogenation A new generation of heterogeneous Cu-Ni-Fe catalysts with appropriate metal ratios displayed outstanding alkene selectivity in the gas-phase hydrogenation of propyne (S(C(3)H(6)) up to 100%) and ethyne (S(C(2)H(4)) up to 80%). The design was accomplished by orchestrating key functions in the catalyst: copper is the base hydrogenation metal, nickel increases the hydrogen coverage to minimize oligomerization, and iron acts as structural promoter. In addition to the largely improved alkene selectivity compared to that of the commonly applied Pd catalysts, the ternary Cu-Ni-Fe catalysts promise substantial process advantages, since they do not require CO feeding as selectivity enhancer and they yield high alkene selectivity in a broad window of H(2)/alkyne ratios. The ternary system requires higher operating temperatures compared to those for palladium." }, { "instance_id": "R25857xR25855", "comparison_id": "R25857", "paper_id": "R25855", "text": "Crystal-Facet Effect of \u03b3-Al2O3 on Supporting CrOx for Catalytic Semihydrogenation of Acetylene With the successful preparation of \u03b3-alumina with high-energy external surfaces such as {111} facets, the crystal-facet effect of \u03b3-Al2O3 on surface-loaded CrOx has been explored for semihydrogenation of acetylene. 
Our results indeed demonstrate that the harmonious interaction of CrOx with traditional \u03b3-Al2O3, the external surfaces of which are typically low-energy {110} facets, results in highly efficient performance for semihydrogenation of acetylene over the CrOx/(110)\u03b3-Al2O3 catalyst, whereas the activity of the CrOx/(111)\u03b3-Al2O3 catalyst for acetylene hydrogenation is suppressed dramatically due to the limited formation of active Cr species, restrained by the high-energy {111} facets of \u03b3-Al2O3. Furthermore, the use of inexpensive CrOx as the active component for semihydrogenation of acetylene is an economically friendly alternative relative to commercial precious Pd catalysts. This work sheds light on a strategy for exploiting the crystal-facet effect of the supports to purposefully tailor the catalyti..." }, { "instance_id": "R25857xR25785", "comparison_id": "R25857", "paper_id": "R25785", "text": "Selective hydrogenation of 1,3-butadiene on platinum\u2013copper alloys at the single-atom limit Platinum is ubiquitous in the production sectors of chemicals and fuels; however, its scarcity in nature and high price will limit future proliferation of platinum-catalysed reactions. One promising approach to conserve platinum involves understanding the smallest number of platinum atoms needed to catalyse a reaction, then designing catalysts with the minimal platinum ensembles. Here we design and test a new generation of platinum\u2013copper nanoparticle catalysts for the selective hydrogenation of 1,3-butadiene, an industrially important reaction. Isolated platinum atom geometries enable hydrogen activation and spillover but are incapable of C\u2013C bond scission that leads to loss of selectivity and catalyst deactivation.
\u03b3-Alumina-supported single-atom alloy nanoparticle catalysts with <1 platinum atom per 100 copper atoms are found to exhibit high activity and selectivity for butadiene hydrogenation to butenes under mild conditions, demonstrating transferability from the model study to the catalytic reaction under practical conditions." }, { "instance_id": "R25857xR25801", "comparison_id": "R25857", "paper_id": "R25801", "text": "Synthesis and Catalytic Properties of Nanoparticulate Intermetallic Ga\u2013Pd Compounds A two-step synthesis for the preparation of single-phase and nanoparticulate GaPd and GaPd(2) by coreduction of ionic metal-precursors with LiHBEt(3) in THF without additional stabilizers is described. The coreduction leads initially to the formation of Pd nanoparticles followed by a Pd-mediated reduction of Ga(3+) on their surfaces, requiring an additional annealing step. The majority of the intermetallic particles have diameters of 3 and 7 nm for GaPd and GaPd(2), respectively, and unexpected narrow size distributions as determined by disk centrifuge measurements. The nanoparticles have been characterized by XRD, TEM, and chemical analysis to ensure the formation of the intermetallic compounds. Unsupported nanoparticles possess high catalytic activity while maintaining the excellent selectivity of the ground bulk materials in the semihydrogenation of acetylene. The activity could be further increased by depositing the particles on \u03b1-Al(2)O(3)." }, { "instance_id": "R25857xR25844", "comparison_id": "R25857", "paper_id": "R25844", "text": "Selective hydrogenation of acetylene in an ethylene-rich stream over silica supported Ag-Ni bimetallic catalysts Abstract Semi-hydrogenation of acetylene in an ethylene-rich stream is an industrially important process. Recent work on the purification of ethylene mainly focuses on the modification of Pd catalysts; little attention has been paid to the development of alternative catalysts with low-cost metals. 
Herein, a series of Ag-Ni/SiO2 bimetallic catalysts, with varied Ni/Ag atomic ratios, were prepared by a wetness co-impregnation method. Their activity for the selective hydrogenation of acetylene in an ethylene-rich stream was evaluated, which showed that the introduction of Ag decreased the formation of both ethane and methane, thus increasing the ethylene selectivity. The ethylene selectivity over the AgNi0.25/SiO2 catalyst was increased by >600% when compared with the corresponding monometallic Ni0.25/SiO2 as well as the simple physical mixture of the monometallic Ag/SiO2 and Ni0.25/SiO2 catalysts. As verified by a combination of X-ray diffraction, high-angle annular dark-field scanning transmission electron microscopy, and energy-dispersive X-ray spectroscopy under scanning transmission electron microscopy, decreasing the Ni content induced sintering of the bimetallic nanoparticles while maintaining uniform dispersion. Temperature-programmed reduction results demonstrated that, compared with the corresponding monometallic catalysts, the reduction of both AgOx and NiOx was promoted in the Ag-Ni/SiO2 bimetallic catalysts. In-situ Fourier-transform infrared spectroscopy results also illustrated an obvious interaction between Ag and Ni. The contact between Ag and Ni may account for the enhanced ethylene selectivity." }, { "instance_id": "R25857xR25848", "comparison_id": "R25857", "paper_id": "R25848", "text": "Selective hydrogenation of acetylene on SiO2 supported Ni-In bimetallic catalysts: Promotional effect of In Abstract Ni/SiO2 and the bimetallic NixIn/SiO2 catalysts with different Ni/In ratios were tested for the selective hydrogenation of acetylene, and their physicochemical properties before and after the reaction were characterized by means of N2-sorption, H2-TPR, XRD, TEM, XPS, H2 chemisorption, C2H4-TPD, NH3-TPD, FT-IR of adsorbed pyridine, and TG/DTA and Raman.
A promotional effect of In on the performance of Ni/SiO2 was found, and NixIn/SiO2 with a suitable Ni/In ratio gave much higher acetylene conversion, ethylene selectivity and catalyst stability than Ni/SiO2. This is ascribed to the geometrical isolation of the reactive Ni atoms by the inert In ones and the charge transfer from the In atoms to the Ni ones, both of which are favorable for reducing the adsorption strength of ethylene and restraining the C\u2013C hydrogenolysis and the polymerizations of acetylene and the intermediate compounds. On the whole, Ni6In/SiO2 and Ni10In/SiO2 had better performance. Nevertheless, with increasing In content, the selectivity to the C4+ hydrocarbons tended to increase due to the enhanced catalyst acidity caused by the charge transfer from the In atoms to the Ni ones. As Lewis acid sites, the In sites could promote the polymerization. The catalyst deactivation was also analyzed. We propose that the Ni/SiO2 deactivation is mainly attributed to the phase change from metallic Ni to nickel carbide. The introduction of In inhibited the formation of nickel carbide. However, as the In content increased, the carbonaceous deposit became the main reason for the NixIn/SiO2 deactivation due to the enhanced catalyst acidity." }, { "instance_id": "R25857xR25779", "comparison_id": "R25857", "paper_id": "R25779", "text": "TiO2-modified nano-egg-shell Pd catalyst for selective hydrogenation of acetylene Abstract Pd-based egg-shell nano-catalysts were prepared using porous hollow silica nanoparticles (PHSNs) as support, and the as-prepared catalysts were modified with TiO2 to promote their selectivity for hydrogenation of acetylene. Pd nanoparticles were loaded evenly on the PHSNs and TiO2 was loaded on the active Pd particles. The effects of reduction time and temperature and the amount of TiO2 added on catalytic performance were investigated using a fixed-bed micro-reactor.
It was found that the catalysts showed better performance when reduced at 300 \u00b0C than at 500 \u00b0C, and when reduced for 1 h rather than 3 h. When the amount of Ti added was 6 times that of Pd, the catalyst showed the highest ethylene selectivity." }, { "instance_id": "R25857xR25836", "comparison_id": "R25857", "paper_id": "R25836", "text": "Comparative study of Au/ZrO2 catalysts in CO oxidation and 1,3-butadiene hydrogenation Abstract This work investigates the effects of the Au3+/Au0 ratio, or distribution of gold oxidation states, in Au/ZrO2 catalysts of different gold loadings (0.01\u20130.76% Au) on CO oxidation and 1,3-butadiene hydrogenation by regulating the temperature of catalyst calcination (393\u2013673 K) and pre-reduction with hydrogen (473\u2013523 K). The catalysts were prepared by deposition\u2013precipitation and were characterized with elemental analysis, nitrogen adsorption/desorption, TEM, XPS and TPR. The catalytic data showed that the exposed metallic Au0 atoms at the surface of Au particles were not the only catalytic sites for the two reactions; isolated Au3+ ions at the surface of ZrO2, such as those in the catalysts containing no more than 0.08% Au, were more active by TOF. For the 0.76% Au/ZrO2 catalysts having coexisting Au3+ and Au0, the catalytic activity changed differently with varying Au3+/Au0 ratio in the two reactions. The highest activity for the CO oxidation reaction was observed over the catalyst of Au3+/Au0 = 0.33. However, a catalyst with a higher Au3+/Au0 ratio always showed a higher activity for the hydrogenation reaction; coexistence of Au0 with Au3+ ions lowered the catalyst activity. Moreover, the coexisting Au particles changed the product selectivity of 1,3-butadiene hydrogenation to favor the formation of more trans-2-butene and butane.
It is thus suggested that, for better control of the catalytic performance of Au catalysts, the effect of the Au3+/Au0 ratio on catalytic reactions should be investigated in combination with the particle size effect of Au." }, { "instance_id": "R25857xR25807", "comparison_id": "R25857", "paper_id": "R25807", "text": "Nanosizing Intermetallic Compounds Onto Carbon Nanotubes: Active and Selective Hydrogenation Catalysts Therefore, nanosizing and supporting the annealed metal products remain challenges.
A good catalyst support should be capable of inhibiting sintering and loss of the catalyst during reaction. Fabrication of supported intermetallic catalysts in nanoscale dimensions requires a reliable method that facilitates not only size control but a thermally stable phase under reaction conditions. Since the work of Iijima in 1991," }, { "instance_id": "R25857xR25834", "comparison_id": "R25857", "paper_id": "R25834", "text": "High performance of carbon nanotubes confining gold nanoparticles for selective hydrogenation of 1,3-butadiene and cinnamaldehyde Abstract This work reports a striking enhancement of the catalytic performance of gold nanoparticles (NPs) with an average diameter of 3.2 nm partially (ca. 32\u201340%) confined in the cavity of carbon nanotubes (CNTs) for the gas-phase hydrogenation of 1,3-butadiene (BD), as well as the liquid-phase hydrogenation of cinnamaldehyde (CAL). The reaction rates and turnover frequencies of the CNTs confining gold NPs exceed those with a similar size deposited on the outer surface of CNTs and activated carbon by more than one to two orders of magnitude in both reactions. The selectivity to monobutenes and hydrocinnamaldehyde is up to 100% and 91% at 100% and 95% conversions of BD and CAL, respectively. Au/CNTs catalysts were characterized in depth to establish their structure\u2013property relationship. The peculiar interaction of confined gold NPs with the surface of CNTs facilitates the dissociation/activation of H2, which is the rate-determining step for hydrogenation reactions, as demonstrated by kinetic studies."
}, { "instance_id": "R25857xR25803", "comparison_id": "R25857", "paper_id": "R25803", "text": "Intermetallic Compound Pd 2 Ga as a Selective Catalyst for the Semi-Hydrogenation of Acetylene: From Model to High Performance Systems Selective catalytic hydrogenation has wide applications in both petrochemical and fine chemical industries, however, it remains challenging when two or multiple functional groups coexist in the substrate. To tackle this challenge, the \"active site isolation\" strategy has been proved effective, and various approaches to the site isolation have been developed. In this review, we have summarized these approaches, including adsorption/grafting of N/S-containing organic molecules on the metal surface, partial covering of active metal surface by metal oxides either via doping or through strong metal-support interaction, confinement of active metal nanoparticles in micro- or mesopores of the supports, formation of bimetallic alloys or intermetallics or core@shell structures with a relatively inert metal (IB and IIB) or nonmetal element (B, C, S, etc.), and construction of single-atom catalysts on reducible oxides or inert metals. Both advantages and disadvantages of each approach toward the site isolation have been discussed for three types of chemoselective hydrogenation reactions, including alkynes/dienes to monoenes, \u03b1,\u03b2-unsaturated aldehydes/ketones to the unsaturated alcohols, and substituted nitroarenes to the corresponding anilines. The key factors affecting the catalytic activity/selectivity, in particular, the geometric and electronic structure of the active sites, are discussed with the aim to extract fundamental principles for the development of efficient and selective catalysts in hydrogenation as well as other transformations." 
}, { "instance_id": "R25857xR25789", "comparison_id": "R25857", "paper_id": "R25789", "text": "Ag Alloyed Pd Single-Atom Catalysts for Efficient Selective Hydrogenation of Acetylene to Ethylene in Excess Ethylene Semihydrogenation of acetylene in an ethylene-rich stream is an industrially important process. Conventional supported monometallic Pd catalysts offer high acetylene conversion, but they suffer from very low selectivity to ethylene due to overhydrogenation and the formation of carbonaceous deposits. Herein, a series of Ag alloyed Pd single-atom catalysts, possessing only ppm levels of Pd, supported on silica gel were prepared by a simple incipient wetness coimpregnation method and applied to the selective hydrogenation of acetylene in an ethylene-rich stream under conditions close to the front-end employed by industry. High acetylene conversion and simultaneous selectivity to ethylene was attained over a wide temperature window, surpassing an analogous Au alloyed Pd single-atom system we previously reported. Restructuring of AgPd nanoparticles and electron transfer from Ag to Pd were evidenced by in situ FTIR and in situ XPS as a function of increasing reduction temperature. Microcalorimetry and XANES measurements support both geometric and electronic synergetic effects between the alloyed Pd and Ag. Kinetic studies provide valuable insight into the nature of the active sites within these AgPd/SiO2 catalysts, and hence, they provide evidence for the key factors underpinning the excellent performance of these bimetallic catalysts toward the selective hydrogenation of acetylene under ethylene-rich conditions while minimizing precious metal usage." 
}, { "instance_id": "R25857xR25814", "comparison_id": "R25857", "paper_id": "R25814", "text": "Isolated Single-Atom Pd Sites in Intermetallic Nanostructures: High Catalytic Selectivity for Semihydrogenation of Alkynes Improving the catalytic selectivity of Pd catalysts is of key importance for various industrial processes and remains a challenge so far. Given the unique properties of single-atom catalysts, isolating contiguous Pd atoms into a single-Pd site with another metal to form intermetallic structures is an effective way to endow Pd with high catalytic selectivity and to stabilize the single site with the intermetallic structures. Based on density functional theory modeling, we demonstrate that the (110) surface of Pm3\u0305m PdIn with single-atom Pd sites shows high selectivity for semihydrogenation of acetylene, whereas the (111) surface of P4/mmm Pd3In with Pd trimer sites shows low selectivity. This idea has been further validated by experimental results that intermetallic PdIn nanocrystals mainly exposing the (110) surface exhibit much higher selectivity for acetylene hydrogenation than Pd3In nanocrystals mainly exposing the (111) surface (92% vs 21% ethylene selectivity at 90 \u00b0C). This work provides insight for rational design of bimetallic metal catalysts with specific catalytic properties." }, { "instance_id": "R25857xR25818", "comparison_id": "R25857", "paper_id": "R25818", "text": "Enhancing both selectivity and coking-resistance of a single-atom Pd1/C3N4 catalyst for acetylene hydrogenation Selective hydrogenation is an important industrial catalytic process in chemical upgrading, where Pd-based catalysts are widely used because of their high hydrogenation activities. However, poor selectivity and short catalyst lifetime because of heavy coke formation have been major concerns. In this work, atomically dispersed Pd atoms were successfully synthesized on graphitic carbon nitride (g-C3N4) using atomic layer deposition. 
Aberration-corrected high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) confirmed the dominant presence of isolated Pd atoms without Pd nanoparticle (NP) formation. During selective hydrogenation of acetylene in excess ethylene, the g-C3N4-supported Pd NP catalysts had strikingly higher ethylene selectivities than the conventional Pd/Al2O3 and Pd/SiO2 catalysts. In-situ X-ray photoemission spectroscopy revealed that the considerable charge transfer from the Pd NPs to g-C3N4 likely plays an important role in the catalytic performance enhancement. More impressively, the single-atom Pd1/C3N4 catalyst exhibited both higher ethylene selectivity and higher coking resistance. Our work demonstrates that the single-atom Pd catalyst is a promising candidate for improving both selectivity and coking-resistance in hydrogenation reactions." }, { "instance_id": "R25857xR25838", "comparison_id": "R25857", "paper_id": "R25838", "text": "Gold(III) \u2013 metal organic framework bridges the gap between homogeneous and heterogeneous gold catalysts Abstract A MOF containing an Au(III) Schiff base complex lining the pore walls has been prepared by a post-synthesis method. The Au(III)-containing MOF is highly active and selective for domino coupling and cyclization reactions in liquid phase, the Au(III) species remain after the reaction, and the catalyst is fully recyclable. This gives higher activity than homogeneous and gold-supported catalysts reported up to now. The well-defined Au(III) sites are active for dissociating H2 and proved to be active for the gas-phase selective hydrogenation of 1,3-butadiene into the butenes." 
}, { "instance_id": "R25857xR25840", "comparison_id": "R25857", "paper_id": "R25840", "text": "Partial hydrogenation of propyne over copper-based catalysts and comparison with nickel-based analogues Abstract The partial hydrogenation of propyne was studied over copper-based catalysts derived from Cu\u2013Al hydrotalcite and malachite precursors and compared with supported systems (Cu/Al 2 O 3 and Cu/SiO 2 ). The as-synthesized samples and the materials derived from calcination and reduction were characterized by XRF, XRD, TGA, TEM, N 2 adsorption, H 2 -TPR, XPS, and N 2 O pulse chemisorption. Catalytic tests were carried out in a continuous flow-reactor at ambient pressure and 423\u2013523 K using H 2 :C 3 H 4 ratios of 1\u201312 and were complemented by operando DRIFTS experiments. The propyne conversion and propene selectivity correlated with the copper dispersion, which varied with the type of precursor or support and the calcination and reduction temperatures. The highest exposed copper surface was attained on hydrotalcite-derived catalysts, which displayed C 3 H 6 selectivity up to 80% at full C 3 H 4 conversion and stable performance in long-run tests at T \u2a7e 473 K. Both activated Cu\u2013Al hydrotalcites (this work) and Ni\u2013Al hydrotalcites [S. Abello, D. Verboekend, B. Bridier, J. Perez-Ramirez, J. Catal. 259 (2008) 85] exhibited a relatively high alkene selectivity under optimal operation conditions, but they present a markedly distinctive catalytic behavior with respect to temperature and hydrogen-to-alkyne ratio. The product distribution was assigned through Density Functional Theory (DFT) simulations to the different stability of subsurface phases (carbides, hydrides) and the energies and barriers for the competing reaction mechanisms. The behavior of Cu in partial alkyne hydrogenation resembles that of Au nanoparticles, while Ni is closer to Pd." 
}, { "instance_id": "R25857xR25793", "comparison_id": "R25857", "paper_id": "R25793", "text": "Performance of Cu-promoted Pd catalysts prepared by adding Cu using a surface redox method in acetylene hydrogenation Abstract Cu-promoted Pd/Al 2 O 3 catalysts were prepared by selectively depositing Cu onto the Pd surface using a surface redox (SR) method, and their performance in the selective hydrogenation of acetylene was compared with that of Ag-promoted catalysts prepared by both the SR and the conventional impregnation method. The Cu-promoted catalysts prepared by SR showed higher ethylene selectivity and activity than Ag-promoted catalysts, particularly with small amounts of added promoter. The above results were obtained because Cu added by SR was deposited preferentially onto the low-coordination sites of Pd, which were detrimental to ethylene selectivity but took a small fraction of the Pd surface that was responsible for acetylene conversion, and also because Cu had an intrinsic activity for hydrogenation. The advantages of Cu-promoted catalysts prepared using Cu as a promoter and the SR process as the promoter-addition method were conclusively demonstrated in the present study." }, { "instance_id": "R25857xR25809", "comparison_id": "R25857", "paper_id": "R25809", "text": "PdZn Intermetallic Nanostructure with Pd\u2013Zn\u2013Pd Ensembles for Highly Active and Chemoselective Semi-Hydrogenation of Acetylene Intermetallic alloying of one active metal to another inert metal provides not only the improved dispersion of active centers but also a unique and homogeneous ensemble of active sites, thus offering new opportunities in a variety of reactions. Herein, we report that PdZn intermetallic nanostructure with Pd\u2013Zn\u2013Pd ensembles are both highly active and selective for the semihydrogenation of acetylene to ethylene, which is usually inaccessible due to the sequential hydrogenation to ethane. 
Microcalorimetric measurements and density functional theory calculations demonstrate that the appropriate spatial arrangement of Pd sites in the Pd\u2013Zn\u2013Pd ensembles of the PdZn alloy leads to the moderate \u03c3-bonding mode for acetylene with two neighboring Pd sites while the weak \u03c0-bonding pattern of ethylene adsorption on the single Pd site, which facilitates the chemisorption toward acetylene and promotes the desorption of ethylene from the catalyst surface. As a result, it leads to the kinetic favor of the selective conver..." }, { "instance_id": "R25857xR25850", "comparison_id": "R25857", "paper_id": "R25850", "text": "Ceria in hydrogenation catalysis: high selectivity in the conversion of alkynes to olefins Active and selective: Ceria shows a high activity and selectivity in the gas-phase hydrogenation of alkynes to olefins. This unprecedented behavior has direct impact on the purification of olefin streams and, more importantly, it opens new perspectives for exploring this fascinating oxide as a catalyst for the selective hydrogenation of other functional groups." }, { "instance_id": "R25857xR25797", "comparison_id": "R25857", "paper_id": "R25797", "text": "TiO2 supported Pd@Ag as highly selective catalysts for hydrogenation of acetylene in excess ethylene A novel TiO2 supported core-shell (Pd@Ag) bimetallic catalyst was fabricated via the sequential photodeposition method. The Ag shell effectively blocks the high coordination sites on the Pd core, and therefore pronouncedly enhances the ethylene selectivity for the catalytic hydrogenation of acetylene in excess ethylene." }, { "instance_id": "R25857xR25783", "comparison_id": "R25857", "paper_id": "R25783", "text": "Pd@C core\u2013shell nanoparticles on carbon nanotubes as highly stable and selective catalysts for hydrogenation of acetylene to ethylene

Highly stable and selective Pd-based catalyst was synthesized by covering supported Pd nanoparticles with an N-doped carbon shell for acetylene hydrogenation.

" }, { "instance_id": "R25900xR25884", "comparison_id": "R25900", "paper_id": "R25884", "text": "Design of Core-Pd/Shell-Ag Nanocomposite Catalyst for Selective Semihydrogenation of Alkynes We designed core-Pd/shell-Ag nanocomposite catalyst (Pd@Ag) for highly selective semihydrogenation of alkynes. The construction of the core\u2013shell nanocomposite enables a significant improvement in the low activity of Ag NPs for the selective semihydrogenation of alkynes because hydrogen is supplied from the core-Pd NPs to the shell-Ag NPs in a synergistic manner. Simultaneously, coating the core-Pd NPs with shell-Ag NPs results in efficient suppression of overhydrogenation of alkenes by the Pd NPs. This complementary action of core-Pd and shell-Ag provides high chemoselectivity toward a wide range of alkenes with high Z-selectivity under mild reaction conditions (room temperature and 1 atm H2). Moreover, Pd@Ag can be easily separated from the reaction mixture and is reusable without loss of catalytic activity or selectivity." }, { "instance_id": "R25900xR25878", "comparison_id": "R25900", "paper_id": "R25878", "text": "Selective Semihydrogenation of Alkynes Catalyzed by Pd Nanoparticles Immobilized on Heteroatom-Doped Hierarchical Porous Carbon Derived from Bamboo Shoots Highly dispersed palladium nanoparticles (Pd NPs) immobilized on heteroatom-doped hierarchical porous carbon supports (N,O-carbon) with large specific surface areas are synthesized by a wet chemical reduction method. The N,O-carbon derived from naturally abundant bamboo shoots is fabricated by a tandem hydrothermal-carbonization process without assistance of any templates, chemical activation reagents, or exogenous N or O sources in a simple and ecofriendly manner. The prepared Pd/N,O-carbon catalyst shows extremely high activity and excellent chemoselectivity for semihydrogenation of a broad range of alkynes to versatile and valuable alkenes under ambient conditions. 
The catalyst can be readily recovered for successive reuse with negligible loss in activity and selectivity, and is also applicable for practical gram-scale reactions." }, { "instance_id": "R25900xR25876", "comparison_id": "R25900", "paper_id": "R25876", "text": "A Pd-Cu2O nanocomposite as an effective synergistic catalyst for selective semi-hydrogenation of the terminal alkynes only

A new type of lead-free Pd\u2013Cu2O nanocomposite catalyst shows \u201cdouble\u201d selectivities for hydrogenation of alkynes: only terminal alkynes are hydrogenated and only alkenes are produced, i.e. no internal alkyne is hydrogenated.

" }, { "instance_id": "R25900xR25892", "comparison_id": "R25900", "paper_id": "R25892", "text": "Palladium\u2013gold single atom alloy catalysts for liquid phase selective hydrogenation of 1-hexyne

Silica supported and unsupported PdAu single atom alloys (SAAs) were investigated for the selective hydrogenation of 1-hexyne to hexenes under mild conditions.

" }, { "instance_id": "R25900xR25886", "comparison_id": "R25900", "paper_id": "R25886", "text": "Achieving the Trade-Off between Selectivity and Activity in Semihydrogenation of Alkynes by Fabrication of (Asymmetrical Pd@Ag Core)@(CeO2 Shell) Nanocatalysts via Autoredox Reaction (Asymmetrical Pd@Ag core)@(CeO2 shell) nanostructures are successfully fabricated via a clean and facile modified autoredox reaction by the preaddition of Pd seeds in the growth solution. In a subsequent catalytic test, it is found that the as-obtained bimetallic core@shell nanoparticles exhibit excellent catalytic performance in semihydrogenation of alkynes. The trade-off between selectivity and activity is well realized." }, { "instance_id": "R25900xR25868", "comparison_id": "R25900", "paper_id": "R25868", "text": "Selective Hydrogenation of Polyunsaturated Fatty Acids Using Alkanethiol Self-Assembled Monolayer-Coated Pd/Al2O3 Catalysts Pd/Al2O3 catalysts coated with various thiolate self-assembled monolayers (SAMs) were used to direct the partial hydrogenation of 18-carbon polyunsaturated fatty acids, yielding a product stream enriched in monounsaturated fatty acids (with low saturated fatty acid content), a favorable result for increasing the oxidative stability of biodiesel. The uncoated Pd/Al2O3 catalyst quickly saturated all fatty acid reactants under hydrogenation conditions, but the addition of alkanethiol SAMs markedly increased the reaction selectivity to the monounsaturated product oleic acid to a level of 80\u201390%, even at conversions >70%. This effect, which is attributed to steric effects between the SAMs and reactants, was consistent with the relative consumption rates of linoleic and oleic acid using alkanethiol-coated and uncoated Pd/Al2O3 catalysts. With an uncoated Pd/Al2O3 catalyst, each fatty acid, regardless of its degree of saturation had a reaction rate of \u223c0.2 mol reactant consumed per mole of surface palladium per ..." 
}, { "instance_id": "R25900xR25863", "comparison_id": "R25900", "paper_id": "R25863", "text": "Green, Multi-Gram One-Step Synthesis of Core-Shell Nanocomposites in Water and Their Catalytic Application to Chemoselective Hydrogenations We devise a new and green route for the multi-gram synthesis of core-shell nanoparticles (NPs) in one step under organic-free and pH-neutral conditions. Simply mixing core and shell metal precursors in the presence of solid metal oxides in water allowed for the facile fabrication of small CeO2 -covered Au and Ag nanoparticles dispersed on metal oxides in one step. The CeO2 -covered Au nanoparticles acted as a highly efficient and reusable catalyst for a series of chemoselective hydrogenations, while retaining C=C bonds in diverse substrates. Consequently, higher environmental compatibility and more efficient energy savings were achieved across the entire process, including catalyst preparation, reaction, separation, and reuse." }, { "instance_id": "R25900xR25898", "comparison_id": "R25900", "paper_id": "R25898", "text": "Merging Single-Atom-Dispersed Silver and Carbon Nitride to a Joint Electronic System via Copolymerization with Silver Tricyanomethanide Herein, we present an approach to create a hybrid between single-atom-dispersed silver and a carbon nitride polymer. Silver tricyanomethanide (AgTCM) is used as a reactive comonomer during templated carbon nitride synthesis to introduce both negative charges and silver atoms/ions to the system. The successful introduction of the extra electron density under the formation of a delocalized joint electronic system is proven by photoluminescence measurements, X-ray photoelectron spectroscopy investigations, and measurements of surface \u03b6-potential. At the same time, the principal structure of the carbon nitride network is not disturbed, as shown by solid-state nuclear magnetic resonance spectroscopy and electrochemical impedance spectroscopy analysis. 
The synthesis also results in an improvement of the visible light absorption and the development of higher surface area in the final products. The atom-dispersed AgTCM-doped carbon nitride shows an enhanced performance in the selective hydrogenation of alkynes in comparison with the performance of other conventional Ag-based materials prepared by spray deposition and impregnation-reduction methods, here exemplified with 1-hexyne." }, { "instance_id": "R25900xR25896", "comparison_id": "R25900", "paper_id": "R25896", "text": "A stable single-site palladium catalyst for hydrogenations We report the preparation and hydrogenation performance of a single-site palladium catalyst that was obtained by the anchoring of Pd atoms into the cavities of mesoporous polymeric graphitic carbon nitride. The characterization of the material confirmed the atomic dispersion of the palladium phase throughout the sample. The catalyst was applied for three-phase hydrogenations of alkynes and nitroarenes in a continuous-flow reactor, showing its high activity and product selectivity in comparison with benchmark catalysts based on nanoparticles. Density functional theory calculations provided fundamental insights into the material structure and attributed the high catalyst activity and selectivity to the facile hydrogen activation and hydrocarbon adsorption on atomically dispersed Pd sites." }, { "instance_id": "R25900xR25870", "comparison_id": "R25900", "paper_id": "R25870", "text": "Selective ensembles in supported palladium sulfide nanoparticles for alkyne semi-hydrogenation Ensemble control has been intensively pursued for decades to identify sustainable alternatives to the Lindlar catalyst (PdPb/CaCO3) applied for the partial hydrogenation of alkynes in industrial organic synthesis. Although the geometric and electronic requirements are known, a literature survey illustrates the difficulty of transferring this knowledge into an efficient and robust catalyst. 
Here, we report a simple treatment of palladium nanoparticles supported on graphitic carbon nitride with aqueous sodium sulfide, which directs the formation of a nanostructured Pd3S phase with controlled crystallographic orientation, exhibiting unparalleled performance in the semi-hydrogenation of alkynes in the liquid phase. The exceptional behavior is linked to the multifunctional role of sulfur. Apart from defining a structure integrating spatially-isolated palladium trimers, the active ensembles, the modifier imparts a bifunctional mechanism and weak binding of the organic intermediates. Similar metal trimers are also identified in Pd4S, evidencing the pervasiveness of these selective ensembles in supported palladium sulfides. Developing robust catalysts for alkyne semi-hydrogenation remains a challenge. Here, the authors introduce a scalable protocol to prepare crystal-phase- and orientation-controlled Pd3S nanoparticles supported on carbon nitride, exhibiting unparalleled semi-hydrogenation performance due to a high density of active and selective ensembles." }, { "instance_id": "R25900xR25890", "comparison_id": "R25900", "paper_id": "R25890", "text": "Design of a difunctional Zn-Ti LDHs supported PdAu catalyst for selective hydrogenation of phenylacetylene Abstract To suppress hydrogenation of alkene at complete alkyne conversion, a difunctional Zn-Ti layered double hydroxides (LDHs) supported bimetallic PdAu alloy catalyst with alkalinity was designed and prepared by a photochemical reduction method. On the basis of TEM and XPS results, the formation of Pd-Au alloy was determined. The alloy nanoparticles had incorporated into the interlayer region of the LDHs, giving a strong interaction between them. As expected, the PdAu/ZnTi catalysts exhibited excellent styrene selectivity (over 90%) even when the reaction time was prolonged (6 h) after full conversion of phenylacetylene.
Such excellent selectivity is attributed to the synergistic effect between bimetallic alloy nanoparticles and Zn-Ti LDHs. The selective formation of polar hydrogen species derived from the heterolytic dissociation of H2 at the interface between PdAu alloy and basic sites of Zn-Ti LDHs is more favorably reactive to alkyne compared with alkene. Moreover, the Zn-Ti LDHs supported PdAu catalyst exhibited great recyclability. The difunctional catalyst is expected to be potentially promising for industrial applications." }, { "instance_id": "R25900xR25874", "comparison_id": "R25900", "paper_id": "R25874", "text": "Dual Pd and CuFe2O4 nanoparticles encapsulated in a core/shell silica microsphere for selective hydrogenation of arylacetylenes A dual catalyst containing Pd and CuFe2O4 nanoparticles in a silica shell exhibits >98% conversion of arylacetylenes to related styrenes with selectivity greater than 98%, which are better than those obtained using a commercial Lindlar catalyst. The excellent synergy was likely a result of the proximal interaction between Pd and CuFe2O4 nanoparticles." }, { "instance_id": "R25900xR25865", "comparison_id": "R25900", "paper_id": "R25865", "text": "Pd-Pb Alloy Nanocrystals with Tailored Composition for Semihydrogenation: Taking Advantage of Catalyst Poisoning Metallic nanocrystals (NCs) with well-defined sizes and shapes represent a new family of model systems for establishing structure-function relationships in heterogeneous catalysis. Here in this study, we show that catalyst poisoning can be utilized as an efficient strategy for nanocrystal shape and composition control, as well as a way to tune the catalytic activity of catalysts. Lead species, a well-known poison for noble-metal catalysts, was investigated in the growth of Pd NCs. We discovered that Pb atoms can be incorporated into the lattice of Pd NCs and form Pd-Pb alloy NCs with tunable composition and crystal facets.
As model catalysts, the alloy NCs with different compositions showed different selectivity in the semihydrogenation of phenylacetylene. Pd-Pb alloy NCs with better selectivity than that of the commercial Lindlar catalyst were discovered. This study exemplified that poisoning species in catalysis can be exploited as efficient shape-directing reagents in NC growth, and more importantly, as a strategy to tailor the performance of catalysts with high selectivity." }, { "instance_id": "R25900xR25882", "comparison_id": "R25900", "paper_id": "R25882", "text": "Interstitial modification of palladium nanoparticles with boron atoms as a green catalyst for selective hydrogenation Lindlar catalysts comprising palladium/calcium carbonate modified with lead acetate and quinoline are widely employed industrially for the partial hydrogenation of alkynes. However, their use is restricted, particularly for food, cosmetic and drug manufacture, due to the extremely toxic nature of lead, and the risk of its leaching from the catalyst surface. In addition, the catalysts also exhibit poor selectivities in a number of cases. Here we report that a non-surface modification of palladium gives rise to the formation of an ultra-selective nanocatalyst. Boron atoms are found to take residence in palladium interstitial lattice sites with good chemical and thermal stability. This is favoured due to a strong host-guest electronic interaction when supported palladium nanoparticles are treated with a borane tetrahydrofuran solution. The adsorptive properties of palladium are modified by the subsurface boron atoms and display ultra-selectivity in a number of challenging alkyne hydrogenation reactions, which outclass the performance of Lindlar catalysts."
}, { "instance_id": "R25999xR25979", "comparison_id": "R25999", "paper_id": "R25979", "text": "Syntactic segmentation and labeling of digitized pages from technical journals A method for extracting alternating horizontal and vertical projection profiles from nested sub-blocks of scanned page images of technical documents is discussed. The thresholded profile strings are parsed using the compiler utilities Lex and Yacc. The significant document components are demarcated and identified by the recursive application of block grammars. Backtracking for error recovery and branch and bound for maximum-area labeling are implemented with Unix Shell programs. Results of the segmentation and labeling process are stored in a labeled x-y tree. It is shown that families of technical documents that share the same layout conventions can be readily analyzed. Results from experiments in which more than 20 types of document entities were identified in sample pages from two journals are presented." }, { "instance_id": "R25999xR25981", "comparison_id": "R25999", "paper_id": "R25981", "text": "Document image segmentation and text area ordering A system for document image segmentation and ordering text areas is described and applied to both Japanese and English complex printed page layouts. There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where columns are not rectangular. In this technique, on the bottom-up strategy, the connected components are extracted from the reduced image, and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph.
The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator, making it possible to get good results on various documents. The total system is fast and compact." }, { "instance_id": "R25999xR25991", "comparison_id": "R25999", "paper_id": "R25991", "text": "Logical structure analysis of book document images using contents information Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure."
}, { "instance_id": "R25999xR25975", "comparison_id": "R25999", "paper_id": "R25975", "text": "Modeling documents for structure recognition using generalized N-grams We present and discuss a novel approach to modeling logical structures of documents, based on a statistical representation of patterns in a document class. An efficient and error-tolerant recognition heuristic adapted to the model is proposed. The statistical approach permits easily automated and incremental learning of the model. The approach has been partially evaluated on a prototype. A discussion of the results achieved by the prototype is finally made." }, { "instance_id": "R25999xR25967", "comparison_id": "R25999", "paper_id": "R25967", "text": "Document Image Understanding Despite the expansion of electronic data processing, paper remains the most popular medium for display, storage and transmission of information for persons and organisations. With growing office automation the paper-computer interface becomes increasingly important. To be useful, this interface must be able to handle documents containing text as well as graphics, and convert them into a standardized electronic representation." }, { "instance_id": "R25999xR25985", "comparison_id": "R25999", "paper_id": "R25985", "text": "Knowledge-based derivation of document logical structure The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure.
We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout." }, { "instance_id": "R25999xR25987", "comparison_id": "R25999", "paper_id": "R25987", "text": "Near-wordless document structure classification Automatic derivation of logical document structure from generic layout would enable the development of many highly flexible electronic document manipulation tools. This problem can be divided into the segmentation of text into pieces and the classification of these pieces as particular logical structures. This paper proposes an approach to the classification of logical document structures, according to their distance from predefined prototypes. The prototypes consider linguistic information minimally, thus relying minimally on the accuracy of OCR and decreasing language-dependence. Different classes of logical structures and the differences in the requisite information for classifying them are discussed. A prototype format is proposed, existing prototypes and a distance measurement are described, and performance results are provided." }, { "instance_id": "R25999xR25963", "comparison_id": "R25999", "paper_id": "R25963", "text": "Understanding multi-articled documents A document understanding method based on the tree representation of document structures is proposed. It is shown that documents have an obvious hierarchical structure in their geometry which is represented by a tree. 
A small number of rules are introduced to transform the geometric structure into the logical structure which represents the semantics. The virtual field separator technique is employed to utilize the information carried by special constituents of documents such as field separators and frames, keeping the number of transformation rules small. Experimental results on a variety of document formats have shown that the proposed method is applicable to most of the documents commonly encountered in daily use, although there is still room for further refinement of the transformation rules." }, { "instance_id": "R25999xR25989", "comparison_id": "R25999", "paper_id": "R25989", "text": "Computer understanding of document structure We describe a system which is capable of learning the presentation of document logical structure, as exemplified for business letters. Presenting a set of instances to the system, it clusters them into structural concepts and induces a concept hierarchy. This concept hierarchy is taken as a reference for classifying future input. The article introduces the sequence of learning steps and describes how the resulting concept hierarchy is applied to logical labeling, and reports the results. \u00a9 1996 John Wiley & Sons, Inc." }, { "instance_id": "R26017xR25991", "comparison_id": "R26017", "paper_id": "R25991", "text": "Logical structure analysis of book document images using contents information Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book.
That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure." }, { "instance_id": "R26017xR25963", "comparison_id": "R26017", "paper_id": "R25963", "text": "Understanding multi-articled documents A document understanding method based on the tree representation of document structures is proposed. It is shown that documents have an obvious hierarchical structure in their geometry which is represented by a tree. A small number of rules are introduced to transform the geometric structure into the logical structure which represents the semantics. The virtual field separator technique is employed to utilize the information carried by special constituents of documents such as field separators and frames, keeping the number of transformation rules small. 
Experimental results on a variety of document formats have shown that the proposed method is applicable to most of the documents commonly encountered in daily use, although there is still room for further refinement of the transformation rules." }, { "instance_id": "R26017xR25975", "comparison_id": "R26017", "paper_id": "R25975", "text": "Modeling documents for structure recognition using generalized N-grams We present and discuss a novel approach to modeling logical structures of documents, based on a statistical representation of patterns in a document class. An efficient and error-tolerant recognition heuristic adapted to the model is proposed. The statistical approach permits easily automated and incremental learning of the model. The approach has been partially evaluated on a prototype. A discussion of the results achieved by the prototype is finally made." }, { "instance_id": "R26017xR25997", "comparison_id": "R26017", "paper_id": "R25997", "text": "Automated labeling in document images The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%."
}, { "instance_id": "R26017xR25987", "comparison_id": "R26017", "paper_id": "R25987", "text": "Near-wordless document structure classification Automatic derivation of logical document structure from generic layout would enable the development of many highly flexible electronic document manipulation tools. This problem can be divided into the segmentation of text into pieces and the classification of these pieces as particular logical structures. This paper proposes an approach to the classification of logical document structures, according to their distance from predefined prototypes. The prototypes consider linguistic information minimally, thus relying minimally on the accuracy of OCR and decreasing language-dependence. Different classes of logical structures and the differences in the requisite information for classifying them are discussed. A prototype format is proposed, existing prototypes and a distance measurement are described, and performance results are provided." }, { "instance_id": "R26017xR25983", "comparison_id": "R26017", "paper_id": "R25983", "text": "Using stochastic syntactic analysis for extracting a logical structure from a document image A method of stochastic syntactic analysis is applied to extracting the logical structure of a printed document from its physical layout and keywords indicating logical components. The document is parsed as a sentence consisting of text lines and graphic objects according to a stochastic regular grammar with attributes. By using stochastic analysis, the parser can retain possible results in order of their probability, and thus, if ambiguity occurs, it selects an optimal result more appropriately than deterministic systems. A mark up system applying the method was constructed, and 87% of the logical components of manuals and 82% of those of technical papers are correctly marked up. 
The rate improved to 89% when the second candidates were considered, showing the advantage of the authors' approach over the deterministic approach." }, { "instance_id": "R26017xR25993", "comparison_id": "R26017", "paper_id": "R25993", "text": "Logical structure analysis of document images based on emergent computation A new method for logical structure analysis of document images is proposed in this paper as the basis for a document reader which can extract logical information from various printed documents. The proposed system consists of five basic modules: typography analysis, object recognition, object segmentation, object grouping and object modification. Emergent computation, which is a key concept of artificial life, is adopted for the cooperative interaction among the modules in the system in order to achieve an effective and flexible behavior of the whole system. It has two principal advantages over other methods: adaptive system configuration for various and complex logical structures, and robust document analysis that is tolerant of erroneous feature detection." }, { "instance_id": "R26017xR25977", "comparison_id": "R26017", "paper_id": "R25977", "text": "Page grammars and page parsing. A syntactic approach to document layout recognition Describes a syntactic approach to deducing the logical structure of printed documents from their physical layout. Page layout is described by a two-dimensional grammar, similar to a context-free string grammar, and a chart parser is used to parse segmented page images according to the grammar. This process is part of a system which reads scanned document images and produces computer-readable text in a logical mark-up format such as SGML. 
The system is briefly outlined, the grammar formalism and the parsing algorithm are described in detail, and some experimental results are reported." }, { "instance_id": "R26017xR25989", "comparison_id": "R26017", "paper_id": "R25989", "text": "Computer understanding of document structure We describe a system which is capable of learning the presentation of document logical structure, as exemplified for business letters. Presenting a set of instances to the system, it clusters them into structural concepts and induces a concept hierarchy. This concept hierarchy is taken as a reference for classifying future input. The article introduces the sequence of learning steps and describes how the resulting concept hierarchy is applied to logical labeling, and reports the results. \u00a9 1996 John Wiley & Sons, Inc." }, { "instance_id": "R26063xR26025", "comparison_id": "R26063", "paper_id": "R26025", "text": "Notes for Contributors Andrew Challinor was a founding member of the cross-disciplinary Crops and Climate Group at the University of Reading where, in collaboration with colleagues, he pioneered a new approach to the simulation of the response of crops to climate. Contracts and grants from the Department for Environment, Food and Rural Affairs (DEFRA), the Natural Environment Research Council (NERC), the British Council and the European Union have allowed him to explore the impacts of climate variability and change on crops in India, Africa and China. In May 2007, Andrew began a Lectureship at the University of Leeds, where he continues to work on the knowledge base that strengthens the food security of populations vulnerable to climate variability and global environmental change.
}, { "instance_id": "R26063xR26041", "comparison_id": "R26063", "paper_id": "R26041", "text": "Analysis of Adhesive\u2010Bonded Joints with Nonidentical Adherends In this paper the effect of the deflected configuration of the joint on the static equilibrium of the jointed portion is considered and the two end-binding moments of the joint are deduced from classical beam-plate theory in closed forms. This simplifies the calculation of the stress distribution, and makes feasible closed form solutions of stress-intensity factors. By this method, all boundary stress conditions of the joint can be strictly satisfied and the effects of the bonding material and the physical and dimensional properties of the nonidentical adherends are taken into account. The study results show that the intensities of the normal stress and the shearing stress are always much greater in a small zone at both ends of the joint. It was also found that the maximum shearing stress and the maximum normal stress always occur at different end zones of the joint." }, { "instance_id": "R26063xR26043", "comparison_id": "R26063", "paper_id": "R26043", "text": "Development of a full elasto-plastic adhesive joint design analysis A previous adhesive joint analysis that accommodated non-linear adhesive behaviour is extended to model the elasto-plastic response of the adherends. The resulting analysis models the joint as an adherend-adhesive sandwich capable of sustaining any combination of end load conditions, thus enabling a wide variety of adhesive joints to be modelled. The adhesive is assumed to behave as a coupled set of non-linear shear and tension springs, and the adherends as cylindrically bent plates which yield under the action of combined tension and bending. The complete problem is reduced to a set of six non-linear first-order ordinary differential equations which are solved numerically using a finite-difference method.
In this way a reasonable assessment of adhesive stresses and strains can be obtained easily, without resorting to the complexity of a two-dimensional finite element solution. A comparison between the results from these two methods has been made and is presented in this paper after the outline of the analysis derivations." }, { "instance_id": "R26063xR26033", "comparison_id": "R26063", "paper_id": "R26033", "text": "Bond Thickness Effects upon Stresses in Single-Lap Adhesive Joints Results of an analytical investigation on the influence of bond thickness upon the stress distribution in single-lap adhesive joints are presented. The present work extends the basic approach for bonded joints, originally introduced by Goland and Reissner, through use of a more complete shear-strain/displacement equation for the adhesive layer. This refinement was not found to be included in any of the numerous analytical investigations reviewed. As a result of the approach employed, the present work uncovers several interesting phenomena without adding any significant complication to the analysis. Besides modifying some coefficients in the shear stress equations, completely new terms in the differential equation and boundary conditions for bond peel stress are obtained. In addition, a variation of shear stress through the bond thickness, no matter how thin it may be, is analytically predicted only by the present theory. This through-the-bond-thickness variation of shear stress identifies two antisymmetrical adherend-bond interface points at which the shear stresses are highest. The growth of joint failures originating from these points agrees with results obtained from actual experiments."
}, { "instance_id": "R26063xR26057", "comparison_id": "R26063", "paper_id": "R26057", "text": "Structural Adhesive Joints in Engineering The intention of this textbook is that it should contain everything an engineer needs to know to be able to design and produce adhesively bonded joints which are required to carry significant loads. The advantages and disadvantages of bonding are given, together with a sufficient understanding of the necessary mechanics and chemistry to enable the designer to make a sound judgement in any particular case. The stresses in joints are discussed extensively so that the engineer can get sufficient philosophy or feel for them, or can delve more deeply into the mathematics to obtain quantitative solutions even with elasto-plastic behaviour. A critical description is given of standard methods of testing adhesives, both destructively and non-destructively. The essential chemistry of adhesives and the importance of surface preparation are described and guidance is given for adhesive selection by means of check lists. Service life in terms of creep and the influence of temperature is also considered. For many applications, there will not be a unique adhesive which alone is suitable, and factors such as cost, convenience, production considerations or familiarity may be decisive. The authors wish to increase the confidence of engineers using adhesive bonding in load-bearing applications by the information and experience presented. With increasing experience of adhesives engineering, design will become more elegant as well as more fitted to its products. A list of standard American and British specifications relating to bonded joints, to be found in ASTM and BSI publications, is given in an appendix.
(TRRL)" }, { "instance_id": "R26063xR26045", "comparison_id": "R26063", "paper_id": "R26045", "text": "A Method for the Stress Analysis of Lap Joints Abstract A theory is presented for the adhesive stresses in single and double lap joints under tensile loading, while subjected to thermal stress. The formulation includes the effects of bending, shearing, stretching and hygrothermal deformation in both the adherend and adhesive. All boundary conditions, including shear stress free surfaces, are satisfied. The method is general and therefore applicable to a range of material properties and joint configurations including metal-to-metal, metal-to-CFRP or CFRP-to-CFRP. The solution is numerical and is based on an equilibrium finite element approach. Through the use of an iterative procedure, the solution has been extended to cater for non-linear adhesive materials." }, { "instance_id": "R26063xR26047", "comparison_id": "R26063", "paper_id": "R26047", "text": "Bond strength for adhesive-bonded single-lap joints Summary Arbitrarily nonlinear stress-strain behaviour in both shear and peel for adhesive is utilised to formulate two coupled nonlinear governing equations for an adhesive-adherend sandwich of single-lap type. For a balanced adhesive-adherend sandwich, the two equations can be integrated, and simple formulas for bond strength are developed for characterising pure shear, peel and mixed failure in adhesive. These formulas define the bond strength in terms of the maximum strain energy density in the adhesive. It is shown that the product of the adhesive strain energy density and the adhesive thickness is equal to the energy release rate J of mode I, mode II and mixed fracture."
}, { "instance_id": "R26063xR26053", "comparison_id": "R26063", "paper_id": "R26053", "text": "A two-dimensional stress analysis of single-lap adhesive joints of dissimilar adherends subjected to tensile loads Single-lap adhesive joints of dissimilar adherends subjected to tensile loads are analyzed as a three-body contact problem using the two-dimensional theory of elasticity. In the numerical calculations, the effects of Young's modulus ratio between different adherends, the ratio of the adherend thicknesses, the ratio of the adherend lengths, and the adhesive thickness on the contact stress distributions at the interfaces are examined. As a result, it is found that (1) the stress singularity occurs near the edges of the interfaces and it increases at the edge of the interface of an adherend with smaller Young's modulus; (2) the stress singularity increases at the edge of the interface of an adherend with thinner thickness; (3) the singular stresses increase at the edges of the two interfaces as the ratio of the upper adherend length to the lower one decreases; and (4) the singular stresses increase at the edges of the two interfaces as the adhesive thickness decreases when the adhesive is thin enough, and they also increase as the adhesive thickness increases when the adhesive is thick enough. In addition, the singular stresses obtained from the present analysis are compared with those obtained by Bogy. Fairly good agreement is seen between the present analysis and the results from Bogy. Strain measurement and finite element analysis (FEA) were carried out. The analytical results are in fairly good agreement with the measured and the FEA results." }, { "instance_id": "R26063xR26055", "comparison_id": "R26063", "paper_id": "R26055", "text": "Analysis of adhesive bonded joints: a unified approach Abstract This paper presents a newly developed unified approach for the analysis and design of adhesive bonded joints. 
The adherends are modelled as beams or wide plates in cylindrical bending, and are considered as generally orthotropic laminates using classical laminate theory. Consequently, adherends made as asymmetric and unbalanced composite laminates can be included in the analysis. The adhesive layer is modelled in two ways. The first approach assumes the adhesive layer to be a linear elastic material and the second approach takes into account the inelastic behaviour of many adhesives. The governing equations are formulated in terms of sets of first order ordinary differential equations. The multiple-point boundary value problem constituted by the differential equations together with the imposed boundary conditions is solved numerically by direct integration using the \u2018multi-segment method\u2019 of integration. The approach is validated by comparison with finite element models and a high-order theory approach." }, { "instance_id": "R26107xR26089", "comparison_id": "R26107", "paper_id": "R26089", "text": "Environmental and human factors influencing thermal comfort of office occupants in hot-humid and hot-arid climates The effects of environmental and individual factors on thermal sensation in air-conditioned office environments were analysed for two large, fully compatible thermal comfort field studies in contrasting Australian climates. In the hot-humid location of Townsville, 836 office workers were surveyed; 935 workers participated in hot-arid Kalgoorlie-Boulder. Overall perceived work area temperature and measured indoor operative temperature correlated moderately with thermal sensation for Townsville (T) subjects but only perceived temperature correlated with Kalgoorlie-Boulder (KB) sensation. Multiple regression analyses confirmed that indoor climatic variables (including Predicted Mean Vote) contributed to actual thermal sensation vote (24% T; 15% KB), with operative temperature having more of an effect in T than in KB.
Subsequent analyses of individual characteristics showed no linear contributions to thermal sensation. The remaining variances were significantly related to perceived work area temperature (7% additional explained variance in T; 12% in KB). Mann Whitney analyses (after correction for climatic variables) showed that T subjects with higher job satisfaction had thermal sensations closer to \u2018neutral\u2019. Males, healthier subjects, non-smokers, respondents with earlier survey times and underweight occupants had lower median thermal sensations in KB. Townsville occupants appeared more adapted to their outdoor climatic conditions than Kalgoorlie-Boulder respondents, perhaps due to limited home air-conditioning. Further research into non-thermal impacts on gender-related thermal acceptability is suggested." }, { "instance_id": "R26107xR26087", "comparison_id": "R26107", "paper_id": "R26087", "text": "Second-Level Post-Occupancy Evaluation Analysis Findings from a detailed analysis of post-occupancy evaluation data, sponsored by LRI, which involved thirteen office buildings typical of current design practice, will be discussed. Analysis of the data indicates that occupant satisfaction can be related to type of lighting system, presence of daylight, and patterns of luminance in the office. 15 refs., 9 figs., 3 tabs." }, { "instance_id": "R26107xR26091", "comparison_id": "R26107", "paper_id": "R26091", "text": "Thermal comfort: analysis and applications in environmental engineering This book is basically an account of research undertaken by the author and his colleagues at the Technical University of Denmark and at the Institute for Environmental Research, Kansas State University. 
Although the data in the literature on thermal comfort are extensive, they are disjointed." }, { "instance_id": "R26107xR26105", "comparison_id": "R26107", "paper_id": "R26105", "text": "Subjective indoor air quality in schools in relation to exposure This paper presents data on indoor air quality in schools as perceived by those working in them and relates these data to exposure measurements. Data on subjective air quality, domestic exposures and health aspects were gathered by means of a questionnaire which was sent to all personnel in 38 schools; it was completed by 1410 persons (85.4% of the total). Data on exposure were gathered by exposure measurements in classrooms. The results indicate that 53% of the personnel perceived the indoor air quality as bad or very bad. It was perceived as worse by those who were younger, those who were dissatisfied with their psychosocial work climate and those who were not exposed to tobacco smoke at home. In older school buildings and buildings with displacement ventilation there was less dissatisfaction with the air quality. There were no significant relations between complaints and air exchange rate or concentration of carbon dioxide. The air quality was perceived as worse at higher levels of exposure to a number of airborne compounds including volatile organic compounds, moulds, bacteria and respirable dust. It was concluded that exposure to indoor pollutants affects perception even at the low concentrations normally found indoors in nonindustrial buildings." 
}, { "instance_id": "R26107xR26101", "comparison_id": "R26107", "paper_id": "R26101", "text": "Subjective indoor air quality in schools - the influence of high room temperature, carpeting, fleecy wall materials and volatile organic compounds (VOC)" }, { "instance_id": "R26107xR26103", "comparison_id": "R26107", "paper_id": "R26103", "text": "Combined effects of temperature and noise on human discomfort The trade-off between noise and temperature and their combined effects on discomfort were studied on 108 lightly clothed subjects (0.6 clo), individually exposed for 2 h in a climatic chamber. Every 10 min of the first hour, subjects could modify the experimental conditions by deciding a change in temperature or noise. However, any change in one parameter was experimentally associated with a fixed change in the other parameter according to eight predetermined designs and all trials for thermal improvement were detrimental to acoustic comfort and conversely. Four initial exposures started at thermoneutrality (24 degrees C) in a noisy environment (85 dBA, recorded fan noise), the reduction of noise being linked to a temperature change towards cool or warm climates. The other four conditions started at a low noise level (35 dBA) but in a cool (14 or 19 degrees C) or warm (29 or 34 degrees C) environment, the reduction of thermal discomfort towards 24 degrees C leading to a louder noise. After six possible voluntary changes, the environment was kept constant for 1 h. Ambient parameters, skin temperatures, and subjective estimates were recorded. Results showed that females accepted noisier environments than males, suggesting that thermal comfort is dominant for women. 
Noise was rated as the most unpleasant factor when initial conditions were noisy whereas temperature was the most disturbing factor when subjects began the experiment with thermal conditions far from thermoneutrality. Finally, although the combined effects of noise and temperature did not influence the physiological data, our results suggest that noise may alter thermal pleasantness in warm conditions." }, { "instance_id": "R26107xR26097", "comparison_id": "R26107", "paper_id": "R26097", "text": "Differences in perception of indoor environ- ment between Japanese and non-Japanese workers Field surveys were conducted at an office with multinational workers in Japan to investigate the differences in the way groups of occupants perceive the environment under real working conditions. Returned questionnaires, 406 in total, were classified into three groups according to their nationality and sex. Only 26% of workers reported their working environment to be comfortable. A significant neutral temperature difference of 3.1 \u00b0C was observed between the Japanese female group and the non-Japanese male group under their usual working conditions. Japanese females reported a higher frequency of sick building syndrome related symptoms compared to other groups. Occupant comfort and reported frequency of SBS symptoms were closely related to deviation of the thermal sensation vote from neutral. The thermal environment was found to be a major factor affecting occupant comfort in the concerned office. Differences in the perception of the indoor environment were negatively affecting the ratings of their working environment." }, { "instance_id": "R26127xR26115", "comparison_id": "R26127", "paper_id": "R26115", "text": "The underground economy in the United States: Reply to comments by Feige, Thomas, and Zilberfarb THE FATE that an author should dread the most is to see his/her writings ignored. 
While I have experienced this fate with some of my writings, this is definitely not what has happened to my articles on the underground economy in the United States. For these I have been \"flattered\" by more attention than I would perhaps have liked. The three comments and criticisms discussed here are quite different: they deal in part with the methodology of my work in this area and in part with the empirical results. There are several ways in which I could deal with them but perhaps the simplest is to take the authors' comments alphabetically. I shall allocate far more space to Feige's \"comment\" than to the other two, largely because his is not just a comment on my paper but is also an attempt to \"sell\" his work to the readers of Staff Papers. Thus, I must inevitably discuss his method and results while attempting to answer his specific criticism of my work." }, { "instance_id": "R26146xR26128", "comparison_id": "R26146", "paper_id": "R26128", "text": "Small scale entrepreneurs in Ghana and development planning Summary This article attempts an assessment of entrepreneurial contributions to the solution of some of the objectives of central economic development planning\u2014contributions which are ignored by planners for reasons that are described in this social anthropological study of one aspect of economic development in Ghana. The author wishes to express his gratitude to the Managers of the Smuts\u2019 Memorial Fund for providing much of his financial backing during field\u2010work; also to the Ling Roth fund, the Anthony Wilkin fund, the Bartle Frere Fund, the Mary Euphrasia Mosley fund, the West African Research Unit, and the Warmington fund. He held a Department of Education and Science studentship during the years 1965\u201368. 
The author also wishes to thank Jack Goody, Esther Goody, Enid Schild\u2010Krout and Jeremy Eades for discussing a preliminary draft of this paper; and Marion Pearsall for comments on later versions; Richard Cornes and Mike Faber have also been most helpful. Responsibility for the final draft is entirely ..." }, { "instance_id": "R26146xR26144", "comparison_id": "R26146", "paper_id": "R26144", "text": "The size, origins, and character of Mongolia\u2019s informal sector during the transition The explosion of informal entrepreneurial activity during Mongolia's transition to a market economy represents one of the most visible signs of change in this expansive but sparsely populated Asian country. To deepen our understanding of Mongolia's informal sector during the transition, the author merges anecdotal experience from qualitative interviews with hard data from a survey of 770 informals in Ulaanbaatar, from a national household survey, and from official employment statistics. Using varied sources, the author generates rudimentary estimates of the magnitude of, and trends in, informal activity in Mongolia, estimates that are surprisingly consistent with each other. He evaluates four types of reasons for the burst of informal activity in Mongolia since 1990: 1) The crisis of the early and mid-1990s, during which large pools of labor were released from formal employment. 2) Rural to urban migration. 3) The\"market's\"reallocation of resources toward areas neglected under the old system: services such as distribution and transportation. 4) The institutional environments faced by the formal and informal sectors: hindering growth of the formal sector, facilitating entry for the informal sector. Formal labor markets haven't absorbed the labor made available by the crisis and by migration and haven't fully responded to the demand for new services. The relative ease of entering the informal market explains that market's great expansion. 
The relative difficulty of entering formal markets is not random but is driven by policy. Improving policies in the formal sector could afford the same ease of entry there as is currently being experienced in the informal sector." }, { "instance_id": "R26194xR26175", "comparison_id": "R26194", "paper_id": "R26175", "text": "Stochastic Inventory Routing: Route Design with Stockouts and Route Failures The stochastic inventory routing problem involves the distribution of a commodity such as heating oil over a long period of time to a large set of customers. The customers maintain a local inventory of the commodity which they consume at a daily rate. Their consumption varies daily and seasonally and their exact demand is known only upon the arrival of the delivery vehicle. This paper presentes a detailed analysis of this problem incorporating the stochastic nature of customers' consumptions and the possibility of route failures when the actual demand on a route exceeds the capacity of a vehicle. A number of solution procedures are compared on a large set of real life data for a period of 12 consecutive weeks. The winning strategy, though computationally more expensive, provides the best system performance and reduces (almost eliminates) the stockout phenomena." }, { "instance_id": "R26194xR26184", "comparison_id": "R26194", "paper_id": "R26184", "text": "The Metered Inventory Routing Problem, an integrative heuristic algorithm Abstract The Metered Inventory Routing Problem (MIRP) involves a central warehouse, a fleet of trucks with a finite capacity, and a set of customers, for each of whom there is an estimated consumption rate, and a known storage capacity. The objective is to the determine when to service each customer, as well as the route to be performed by each truck, in order to minimize the total discounted costs. The problem is solved on a rolling horizon basis, taking into consideration holding, transportation, fixed ordering, and stockout costs. 
The algorithm we develop uses the concept of \u2018temporal distances\u2019; in short, the temporal distance between two customers is the cost of moving these customers to a common period. A simulation study is performed to demonstrate the effectiveness of our procedure." }, { "instance_id": "R26194xR26192", "comparison_id": "R26194", "paper_id": "R26192", "text": "Deliveries in an inventory/routing problem using stochastic dynamic programming An industrial gases tanker vehicle visitsn customers on a tour, with a possible ( n + 1)st customer added at the end. The amount of needed product at each customer is a known random process, typically a Wiener process. The objective is to adjust dynamically the amount of product provided on scene to each customer so as to minimize total expected costs, comprising costs of earliness, lateness, product shortfall, and returning to the depot nonempty. Earliness costs are computed by invocation of an annualized incremental cost argument. Amounts of product delivered to each customer are not known until the driver is on scene at the customer location, at which point the customer is either restocked to capacity or left with some residual empty capacity, the policy determined by stochastic dynamic programming. The methodology has applications beyond industrial gases." }, { "instance_id": "R26194xR26165", "comparison_id": "R26194", "paper_id": "R26165", "text": "A computational comparison of algorithms for the inventory routing problem The inventory routing problem is a distribution problem in which each customer maintains a local inventory of a product such as heating oil and consumes a certain amount of that product each day. Each day a fleet of trucks is dispatched over a set of routes to resupply a subset of the customers. In this paper, we describe and compare algorithms for this problem defined over a short planning period, e.g. one week. 
These algorithms define the set of customers to be serviced each day and produce routes for a fleet of vehicles to service those customers. Two algorithms are compared in detail, one which first allocates deliveries to days and then solves a vehicle routing problem and a second which treats the multi-day problem as a modified vehicle routing problem. The comparison is based on a set of real data obtained from a propane distribution firm in Pennsylvania. The solutions obtained by both procedures compare quite favorably with those in use by the firm." }, { "instance_id": "R26194xR26189", "comparison_id": "R26194", "paper_id": "R26189", "text": "A decomposition approach to the inventory routing problem with satellite facilities This paper presents a comprehensive decomposition scheme for solving the inventory routing problem in which a central supplier must restock a subset of customers on an intermittent basis. In this setting, the customer demand is not known with certainty and routing decisions taken over the short run might conflict with the long-run goal of minimizing annual operating costs. A unique aspect of the short-run subproblem is the presence of satellite facilities where vehicles can be reloaded and customer deliveries continued until the closing time is reached. Three heuristics have been developed to solve the vehicle routing problem with satellite facilities (randomized Clarke-Wright, GRASP, modified sweep). After the daily tours are derived, a parametric analysis is conducted to investigate the tradeoff between distance and annual costs. This leads to the development of the efficient frontier from which the decision maker is free to choose the most attractive alternative. The proposed procedures are tested on data sets generated from field experience with a national liquid propane distributor." 
}, { "instance_id": "R26194xR26181", "comparison_id": "R26194", "paper_id": "R26181", "text": "Dynamic allocations for multi-product distribution Consider the problem of allocating multiple products by a distributor with limited capacity (truck size), who has a fixed sequence of customers (retailers) whose demands are unknown. Each time the distributor visits a customer, he gets information about the realization of the demand for this customer, but he does not yet know the demands of the following customers. The decision faced by the distributor is how much to allocate to each customer given that the penalties for not satisfying demand are not identical. In addition, we optimally solve the problem of loading the truck with the multiple products, given the limited storage capacity. This framework can also be used for the general problem of seat allocation in the airline industry. As with the truck in the distribution problem, the airplane has limited capacity. A critical decision is how to allocate the available seats between early and late reservations (sequence of customers), for the different fare classes (multiple products), where the revenues from discount (early) and regular (late) passengers are different." }, { "instance_id": "R26194xR26156", "comparison_id": "R26194", "paper_id": "R26156", "text": "A Combined Vehicle Routing and Inventory Allocation Problem We address the combined problem of allocating a scarce resource among several locations, and planning deliveries using a fleet of vehicles. Demands are random, and holding and shortage costs must be considered in the decision along with transportation costs. We show how to extend some of the available methods for the deterministic vehicle routing problem to this case. Computational results using one such adaptation show that the algorithm is fast enough for practical work, and that substantial cost savings can be achieved with this approach." 
}, { "instance_id": "R26262xR26196", "comparison_id": "R26262", "paper_id": "R26196", "text": "Improving the distribution of industrial gases with an on-line computerized routing and scheduling optimizer For Air Products and Chemicals, Inc., inventory management of industrial gases at customer locations is integrated with vehicle scheduling and dispatching. Their advanced decision support system includes on-line data entry functions, customer usage forecasting, a time/distance network with a shortest path algorithm to compute intercustomer travel times and distances, a mathematical optimization module to produce daily delivery schedules, and an interactive schedule change interface. The optimization module uses a sophisticated Lagrangian relaxation algorithm to solve mixed integer programs with up to 800,000 variables and 200,000 constraints to near optimality. The system, first implemented in October, 1981, has been saving between 6% to 10% of operating costs." }, { "instance_id": "R26262xR26241", "comparison_id": "R26262", "paper_id": "R26241", "text": "Optimizing the periodic pick-up of raw materials for a manufacturer of auto parts Abstract We describe a solution procedure for a special case of the periodic vehicle routing problem (PVRP). Operation managers at an auto parts manufacturer in the north of Spain described the optimization problem to the authors. The manufacturer must pick up parts (raw material) from geographically dispersed locations. The parts are picked up periodically at scheduled times. The problem consists of assigning a pickup schedule to each of its supplier\u2019s locations and also establishing daily routes in order to minimize total transportation costs. The time horizon under consideration may be as long as 90 days. The resulting PVRP is such that the critical decision is the assignment of locations to schedules, because once this is done, the daily routing of vehicles is relatively straightforward. 
Through extensive computational experiments, we show that the metaheuristic procedure described in this paper is capable of finding high-quality solutions within a reasonable amount of computer time. Our main contribution is the development of a procedure that is more effective at handling PVRP instances with long planning horizons when compared to those proposed in the literature." }, { "instance_id": "R26262xR26251", "comparison_id": "R26262", "paper_id": "R26251", "text": "Inventory routing with continuous moves The typical inventory routing problem deals with the repeated distribution of a single product from a single facility with an unlimited supply to a set of customers that can all be reached with out-and-back trips. Unfortunately, this is not always the reality. We introduce the inventory routing problem with continuous moves to study two important real-life complexities: limited product availabilities at facilities and customers that cannot be served using out-and-back tours. We need to design delivery tours spanning several days, covering huge geographic areas, and involving product pickups at different facilities. We develop an innovative randomized greedy algorithm, which includes linear programming based postprocessing technology, and we demonstrate its effectiveness in an extensive computational study." }, { "instance_id": "R26262xR26235", "comparison_id": "R26262", "paper_id": "R26235", "text": "A genetic algorithm approach to the integrated inventory-distribution problem We introduce a new genetic algorithm (GA) approach for the integrated inventory distribution problem (IIDP). We present the developed genetic representation and use a randomized version of a previously developed construction heuristic to generate the initial random population. We design suitable crossover and mutation operators for the GA improvement phase. 
The comparison of results shows the significance of the designed GA over the construction heuristic and demonstrates the capability of reaching solutions within 20% of the optimum on sets of randomly generated test problems." }, { "instance_id": "R26262xR26260", "comparison_id": "R26262", "paper_id": "R26260", "text": "A new model and hybrid approach for large scale inventory routing problems This paper studies an inventory routing problem (IRP) with split delivery and vehicle fleet size constraint. Due to the complexity of the IRP, it is very difficult to develop an exact algorithm that can solve large scale problems in a reasonable computation time. As an alternative, an approximate approach that can quickly and near-optimally solve the problem is developed based on an approximate model of the problem and Lagrangian relaxation. In the approach, the model is solved by using a Lagrangian relaxation method in which the relaxed problem is decomposed into an inventory problem and a routing problem that are solved by a linear programming algorithm and a minimum cost flow algorithm, respectively, and the dual problem is solved by using the surrogate subgradient method. The solution of the model obtained by the Lagrangian relaxation method is used to construct a near-optimal solution of the IRP by solving a series of assignment problems. Numerical experiments show that the proposed hybrid approach can find a high quality near-optimal solution for the IRP with up to 200 customers in a reasonable computation time." }, { "instance_id": "R26262xR26248", "comparison_id": "R26262", "paper_id": "R26248", "text": "Omya Hustadmarmor optimizes its supply chain for delivering calcium carbonate slurry to European paper manufacturers The Norwegian company Omya Hustadmarmor supplies calcium carbonate slurry to European paper manufacturers from a single processing plant, using chemical tank ships of various sizes to transport its products. 
Transportation costs are lower for large ships than for small ships, but their use increases planning complexity and creates problems in production. In 2001, the company faced overwhelming operational challenges and sought operations-research-based planning support. The CEO, Sturla Steinsvik, contacted More Research Molde, which conducted a project that led to the development of a decision-support system (DSS) for maritime inventory routing. The core of the DSS is an optimization model that is solved through a metaheuristic-based algorithm. The system helps planners to make stronger, faster decisions and has increased predictability and flexibility throughout the supply chain. It has saved production and transportation costs close to US$7 million a year. We project additional direct savings of nearly US$4 million a year as the company adds even larger ships to the fleet as a result of the project. In addition, the company has avoided investments of US$35 million by increasing capacity utilization. Finally, the project has had a positive environmental effect by reducing overall oil consumption by more than 10 percent." }, { "instance_id": "R26262xR26233", "comparison_id": "R26262", "paper_id": "R26233", "text": "An integrated model of the periodic delivery problems for vending-machine supply chains In this paper we present a model and solution procedures of the Inventory Routing Problem (IRP) encountered in vending machine supply chains working under vendor-managed inventory (VMI) scheme. The new IRP model is built based on the existing Periodic Vehicle Routing Problem with Time-windows (PVRPTW). The model will be referred to as the Integrated Inventory and Periodic Vehicle Routing Problem with Time-windows (IPVRPTW). The objective of the IPVRPTW is to minimize the sum of the average inventory holding and traveling costs during a given m-day period. The visit frequency is treated as a decision variable instead of as a fixed parameter. 
We attempt to optimize the visit frequency of each retailer and to build vehicle tours simultaneously in order to find the best trade-off between the inventory holding and traveling costs. Computational experiments are conducted on some instances taken from the literature to evaluate the performance of the proposed model." }, { "instance_id": "R26262xR26224", "comparison_id": "R26262", "paper_id": "R26224", "text": "A Decomposition Approach for the Inventory-Routing Problem In this paper, we present a solution approach for the inventory-routing problem. The inventory-routing problem is a variation of the vehicle-routing problem that arises in situations where a vendor has the ability to make decisions about the timing and sizing of deliveries, as well as the routing, with the restriction that customers are not allowed to run out of product. We develop a two-phase approach based on decomposing the set of decisions: A delivery schedule is created first, followed by the construction of a set of delivery routes. The first phase utilizes integer programming, whereas the second phase employs routing and scheduling heuristics. Our focus is on creating a solution methodology appropriate for large-scale real-life instances. Computational experiments demonstrating the effectiveness of our approach are presented." }, { "instance_id": "R26262xR26214", "comparison_id": "R26262", "paper_id": "R26214", "text": "Decomposition of a Combined Inventory and Time Constrained Ship Routing Problem In contrast to vehicle routing problems, little work has been done in ship routing and scheduling, although large benefits may be expected from improving this scheduling process. We will present a real ship planning problem, which is a combined inventory management problem anda routing problem with time windows. A fleet of ships transports a single product (ammonia) between production and consumption harbors. 
The quantities loaded and discharged are determined by the production rates of the harbors, possible stock levels, and the actual ship visiting the harbor. We describe the real problem and the underlying mathematical model. To decompose this model, we discuss some model adjustments. Then, the problem can be solved by a Dantzig\u2013Wolfe decomposition approach including both ship routing subproblems and inventory management subproblems. The overall problem is solved by branch-and-bound. Our computational results indicate that the proposed method works for the real planning problem." }, { "instance_id": "R26262xR26258", "comparison_id": "R26262", "paper_id": "R26258", "text": "An optimization algorithm for the inventory routing problem with continuous moves The typical inventory routing problem deals with the repeated distribution of a single product from a single facility with an unlimited supply to a set of customers that can all be reached with out-and-back trips. Unfortunately, this is not always the reality. We focus on the inventory routing problem with continuous moves, which incorporates two important real-life complexities: limited product availabilities at facilities and customers that cannot be served using out-and-back tours. We need to design delivery tours spanning several days, covering huge geographic areas, and involving product pickups at different facilities. We develop an integer programming based optimization algorithm capable of solving small to medium size instances. This optimization algorithm is embedded in a local search procedure to improve solutions produced by a randomized greedy heuristic. We demonstrate the effectiveness of this approach in an extensive computational study." 
}, { "instance_id": "R26262xR26231", "comparison_id": "R26262", "paper_id": "R26231", "text": "Shipment planning at oil refineries using column generation and valid inequalities In this paper we suggest an optimization model and a solution method for a shipment planning problem. This problem concerns the simultaneous planning of how to route a fleet of ships and the planning of which products to transport in these ships. The ships are used for moving products from oil refineries to storage depots. There are inventory levels to consider both at the refineries and at the depots. The inventory levels are affected by the process scheduling at the refineries and demand at the depots. The problem is formulated using an optimization model including an aggregated representation of the process scheduling at the refineries. Hence, we integrate the shipment planning and the process scheduling at the refineries. We suggest a solution method based on column generation, valid inequalities, and constraint branching. The solution method is tested on data provided by the Nynas oil refinery company and solutions are obtained within 4 hours, for problem instances of up to 3 refineries, 15 depots, and 4 products when considering a time horizon of 42 days." }, { "instance_id": "R26262xR26205", "comparison_id": "R26262", "paper_id": "R26205", "text": "A Dynamic Distribution Model with Warehouse and Customer Replenishment Requirements This paper addresses a multiperiod integrated model that plans deliveries to customers based upon inventories (at warehouse and customer locations) and vehicle routes. The model determines replenishment quantities and intervals at the warehouse, and distribution lots and delivery routes at customer locations. We investigate coordination of customer and warehouse replenishment decisions and illustrate their interdependence. Computational experience on randomly generated problems is reported. 
We show that ordering policy at the warehouse is a function of how goods are distributed to lower echelons and that coordination leads to cost reduction." }, { "instance_id": "R26262xR26255", "comparison_id": "R26262", "paper_id": "R26255", "text": "Delivery strategies for blood products supplies We introduce a problem faced by the blood bank of the Austrian Red Cross for Eastern Austria: how to cost-effectively organize the delivery of blood products to Austrian hospitals? We investigate the potential value of switching from the current vendee-managed inventory set up to a vendor-managed inventory system. We present solution approaches based on integer programming and variable neighborhood search and evaluate their performance." }, { "instance_id": "R26262xR26220", "comparison_id": "R26262", "paper_id": "R26220", "text": "A Lagrangian Relaxation Approach to Multi-Period Inventory/Distribution Planning We consider a multi-period inventory/distribution planning problem (MPIDP) in a one-warehouse multiretailer distribution system where a fleet of heterogeneous vehicles delivers products from a warehouse to several retailers. The objective of the MPIDP is to minimise transportation costs for product delivery and inventory holding costs at retailers over the planning horizon. In this research, the problem is formulated as a mixed integer linear programme and solved by a Lagrangian relaxation approach. A subgradient optimisation method is employed to obtain lower bounds. We develop a Lagrangian heuristic algorithm to find a good feasible solution of the MPIDP. Computational experiments on randomly generated test problems showed that the suggested algorithm gave relatively good solutions in a reasonable amount of computation time." 
}, { "instance_id": "R26352xR26317", "comparison_id": "R26352", "paper_id": "R26317", "text": "Price-Directed Replenishment of Subsets: Methodology and Its Application to Inventory Routing The idea of price-directed control is to use an operating policy that exploits optimal dual prices from a mathematical programming relaxation of the underlying control problem. We apply it to the problem of replenishing inventory to subsets of products/locations, such as in the distribution of industrial gases, so as to minimize long-run time average replenishment costs. Given a marginal value for each product/location, whenever there is a stockout the dispatcher compares the total value of each feasible replenishment with its cost, and chooses one that maximizes the surplus. We derive this operating policy using a linear functional approximation to the optimal value function of a semi-Markov decision process on continuous spaces. This approximation also leads to a math program whose optimal dual prices yield values and whose optimal objective value gives a lower bound on system performance. We use duality theory to show that optimal prices satisfy several structural properties and can be interpreted as estimates of lowest achievable marginal costs. On real-world instances, the price-directed policy achieves superior, near optimal performance as compared with other approaches." }, { "instance_id": "R26352xR26300", "comparison_id": "R26352", "paper_id": "R26300", "text": "Integrating Routing and Inventory Decisions in One-Warehouse Multiretailer Multiproduct Distribution Systems We consider distribution systems with a central warehouse and many retailers that stock a number of different products. Deterministic demand occurs at the retailers for each product. The warehouse acts as a break-bulk center and does not keep any inventory. 
The products are delivered from the warehouse to the retailers by vehicles that combine the deliveries to several retailers into efficient vehicle routes. The objective is to determine replenishment policies that specify the delivery quantities and the vehicle routes used for the delivery, so as to minimize the long-run average inventory and transportation costs. A new heuristic that develops a stationary nested joint replenishment policy for the problem is presented in this paper. Unlike existing methods, the proposed heuristic is capable of solving problems involving distribution systems with multiple products. Results of a computational study on randomly generated single-product problems are also presented." }, { "instance_id": "R26352xR26306", "comparison_id": "R26352", "paper_id": "R26306", "text": "An integrated inventory\u2013transportation system with modified periodic policy for multiple products Abstract Efficient management of a distribution system requires an integrated approach towards various logistical functions. In particular, the fundamental areas of inventory control and transportation planning need to be closely coordinated. Our model deals with an inbound material-collection problem. An integrated inventory\u2013transportation system is developed with a modified periodic-review inventory policy and a travelling-salesman component. This is a multi-item joint replenishment problem, in a stochastic setting, with simultaneous decisions made on inventory and transportation policies. We propose a heuristic decomposition method to solve the problem, minimizing the long-run total average costs (major- and minor-ordering, holding, backlogging, stopover and travel). The decomposition algorithm works by using separate calculations for inventory and routing decisions, and then coordinating them appropriately. A lower bound is constructed and computational experience is reported." 
}, { "instance_id": "R26352xR26297", "comparison_id": "R26352", "paper_id": "R26297", "text": "Fully Loaded Direct Shipping Strategy in One Warehouse/N-Retailer Systems without Central Inventories In this paper, we consider one warehouse/multiple retailer systems with transportation costs. The planning horizon is infinite and the warehouse keeps no central inventory. It is shown that the fully loaded direct shipping strategy is optimal among all possible shipping/allocation strategies if the truck capacity is smaller than a certain quantity, and a bound is provided for the general case." }, { "instance_id": "R26352xR26348", "comparison_id": "R26352", "paper_id": "R26348", "text": "Designing distribution patterns for long-term inventory routing with constant demand rates This paper proposes a practical solution approach for the challenging optimization problem of minimizing overall costs in an integrated distribution and inventory control system. Constant customer demand rates are assumed and therefore a long-term, cyclic planning approach is adopted. The concept of distribution patterns, consisting of vehicles performing multiple tours with possibly different frequencies, is used to extend the traditional concept of a single tour per vehicle. A heuristic is proposed that is capable of solving a cyclical distribution problem involving real-life features, such as customer capacity restrictions, loading and unloading extra times and prespecified minimum times between consecutive deliveries." }, { "instance_id": "R26352xR26302", "comparison_id": "R26352", "paper_id": "R26302", "text": "Probabilistic Analyses and Algorithms for Three-Level Distribution Systems We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of a single outside vendor, a fixed number of warehouses and many geographically dispersed retailers. 
Each retailer faces a constant, retailer specific, demand rate and inventory holding cost is charged at the retailers and the warehouses. We show that, in an effective strategy which minimizes the asymptotic long run average cost, each warehouse receives fully loaded trucks from the vendor but never holds inventory. That is, each warehouse serves only as a coordinator of the frequency, time and sizes of deliveries to the retailers. This insight is used to construct an inventory control policy and vehicle routing strategy for multi-echelon distribution systems. Computational results are also reported." }, { "instance_id": "R26352xR26333", "comparison_id": "R26352", "paper_id": "R26333", "text": "An Efficient Heuristic Algorithm for a Two-Echelon Joint Inventory and Routing Problem With an increasing emphasis on coordination in the supply chain, the inventory and distribution decisions, which in most part had been dealt with independently of each other, need to be considered jointly. This research considers a two-echelon distribution system consisting of one warehouse and N retailers that face external demand at a constant rate. Inventories are kept at retailers as well as at the warehouse. The products are delivered to the retailers by a fleet of vehicles with limited capacity. We develop an efficient heuristic procedure that finds a reorder interval for the warehouse, the replenishment quantities (and associated reorder interval) for each retailer, and the delivery routes so as to minimize the long-run average inventory and transportation costs." }, { "instance_id": "R26352xR26311", "comparison_id": "R26352", "paper_id": "R26311", "text": "Heavy Traffic Analysis of the Dynamic Stochastic Inventory-Routing Problem We analyze three queueing control problems that model a dynamic stochastic distribution system, where a single capacitated vehicle serves a finite number of retailers in a make-to-stock fashion. 
The objective in each of these vehicle routing and inventory problems is to minimize the long run average inventory (holding and backordering) and transportation cost. In all three problems, the controller dynamically specifies whether a vehicle at the warehouse should idle or embark with a full load. In the first problem, the vehicle must travel along a prespecified (TSP) tour of all retailers, and the controller dynamically decides how many units to deliver to each retailer. In the second problem, the vehicle delivers an entire load to one retailer (direct shipping) and the controller decides which retailer to visit next. The third problem allows the additional dynamic choice between the TSP and direct shipping options. Motivated by existing heavy traffic limit theorems, we make a time scale decomposition assumption that allows us to approximate these queueing control problems by diffusion control problems, which are explicitly solved in the fixed route problems, and numerically solved in the dynamic routing case. Simulation experiments confirm that the heavy traffic approximations are quite accurate over a broad range of problem parameters. Our results lead to some new observations about the behavior of this complex system." }, { "instance_id": "R26352xR26340", "comparison_id": "R26352", "paper_id": "R26340", "text": "Using scenario trees and progressive hedging for stochastic inventory routing problems The Stochastic Inventory Routing Problem is a challenging problem, combining inventory management and vehicle routing, as well as including stochastic customer demands. The problem can be described by a discounted, infinite horizon Markov Decision Problem, but it has been shown that this can be effectively approximated by solving a finite scenario tree based problem at each epoch. In this paper the use of the Progressive Hedging Algorithm for solving these scenario tree based problems is examined. 
The Progressive Hedging Algorithm can be suitable for large-scale problems, by giving an effective decomposition, but is not trivially implemented for non-convex problems. Attempting to improve the solution process, the standard algorithm is extended with locking mechanisms, dynamic multiple penalty parameters, and heuristic intermediate solutions. Extensive computational results are reported, giving further insights into the use of scenario trees as approximations of Markov Decision Problem formulations of the Stochastic Inventory Routing Problem." }, { "instance_id": "R26352xR26295", "comparison_id": "R26352", "paper_id": "R26295", "text": "Heuristics for a One-Warehouse Multiretailer Distribution Problem with Performance Bounds We investigate the one warehouse multiretailer distribution problem with traveling salesman tour vehicle routing costs. We model the system in the framework of the more general production/distribution system with arbitrary non-negative monotone joint order costs. We develop polynomial time heuristics whose policy costs are provably close to the cost of an optimal policy. In particular, we show that given a submodular function which is close to the true order cost then we can find a power-of-two policy whose cost is only moderately greater than the cost of an optimal policy. Since such submodular approximations exist for traveling salesman tour vehicle routing costs we present a detailed description of heuristics for the one warehouse multiretailer distribution problem. We formulate a nonpolynomial dynamic program that computes optimal power-of-two policies for the one warehouse multiretailer system assuming only that the order costs are non-negative monotone. Finally, we perform computational tests which compare our heuristics to optimal power of two policies for problems of up to sixteen retailers. We also perform computational tests on larger problems; these tests give us insight into what policies one should employ." 
}, { "instance_id": "R26352xR26284", "comparison_id": "R26352", "paper_id": "R26284", "text": "Minimizing Transportation and Inventory Costs for Several Products on a Single Link This paper deals with the problem of determining the frequencies at which several products have to be shipped on a common link to minimize the sum of transportation and inventory costs. A set of feasible shipping frequencies is given. Transportation costs are supposed to be proportional to the number of journeys performed by vehicles of a given capacity. Vehicles may or may not be supposed to carry out completely all materials available, and products assigned to different frequencies may or may not share the same truck. Integer and mixed integer linear programming models are formulated for each of the resulting four situations, and their properties are investigated. In particular, we show that allowing products to be split among several shipping frequencies makes trucks traveling at high frequencies to be filled up completely. In this situation, trucks may always be loaded with products shipped at the same frequency." }, { "instance_id": "R26352xR26274", "comparison_id": "R26352", "paper_id": "R26274", "text": "Two-echelon distribution systems with vehicle routing costs and central inventory We consider distribution systems with a single depot and many retailers each of which faces external demands for a single item that occurs at a specific deterministic demand rate. All stock enters the systems through the depot where it can be stored and then picked up and distributed to the retailers by a fleet of vehicles, combining deliveries into efficient routes. We extend earlier methods for obtaining low complexity lower bounds and heuristics for systems without central stock. We show under mild probabilistic assumptions that the generated solutions and bounds come asymptotically within a few percentage points of optimality (within the considered class of strategies). 
A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size." }, { "instance_id": "R26352xR26336", "comparison_id": "R26352", "paper_id": "R26336", "text": "The storage constrained, inbound inventory routing problem Purpose This paper aims to describe the storage constrained, inbound inventory routeing problem and presents bounds and heuristics for solutions to this problem. It also seeks to analyze various characteristics of this problem by comparing the solutions generated by the two proposed heuristics with each other and with the lower bound solutions. Design/methodology/approach The proposed heuristics use a sequential decomposition strategy for generating solutions for this problem. These heuristics are evaluated on a set of problem instances which are based on an actual application in the automotive manufacturing industry. Findings The storage space clearly has a significant effect on both the routeing and inventory decisions, and there are complex and interesting interactions between the problem factors and performance measures. Practical implications Facility design decisions for the storage of inbound materials should carefully consider the impact of storage space on transportation and logistics costs. Originality/value This problem occurs in a number of different industrial applications while most of the existing literature addresses outbound distribution. Other papers that address similar problems do not consider all of the practical constraints in the problem or do not adequately benchmark and analyze their proposed solutions." }, { "instance_id": "R26352xR26269", "comparison_id": "R26352", "paper_id": "R26269", "text": "One Warehouse Multiple Retailer Systems with Vehicle Routing Costs We consider distribution systems with a depot and many geographically dispersed retailers each of which faces external demands occurring at constant, deterministic but retailer specific rates. 
All stock enters the system through the depot from where it is distributed to the retailers by a fleet of capacitated vehicles combining deliveries into efficient routes. Inventories are kept at the retailers but not at the depot. We wish to determine feasible replenishment strategies (i.e., inventory rules and routing patterns) minimising infinite horizon long-run average transportation and inventory costs. We restrict ourselves to a class of strategies in which a collection of regions (sets of retailers) is specified which covers all outlets: if an outlet belongs to several regions, a specific fraction of its sales/operations is assigned to each of these regions. Each time one of the retailers in a given region receives a delivery, this delivery is made by a vehicle who visits all other outlets in the region as well in an efficient route. We describe a class of low complexity heuristics and show under mild probabilistic assumptions that the generated solutions are asymptotically optimal within the above class of strategies. We also show that lower and upper bounds on the system-wide costs may be computed and that these bounds are asymptotically tight under the same assumptions. A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size." }, { "instance_id": "R26352xR26321", "comparison_id": "R26352", "paper_id": "R26321", "text": "Dynamic Programming Approximations for a Stochastic Inventory Routing Problem This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. 
We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources." }, { "instance_id": "R26352xR26319", "comparison_id": "R26352", "paper_id": "R26319", "text": "A Price-Directed Approach to Stochastic Inventory/Routing We consider a new approach to stochastic inventory/routing that approximates the future costs of current actions using optimal dual prices of a linear program. We obtain two such linear programs by formulating the control problem as a Markov decision process and then replacing the optimal value function with the sum of single-customer inventory value functions. The resulting approximation yields statewise lower bounds on optimal infinite-horizon discounted costs. We present a linear program that takes into account inventory dynamics and economics in allocating transportation costs for stochastic inventory routing. On test instances we find that these allocations do not introduce any error in the value function approximations relative to the best approximations that can be achieved without them. Also, unlike other approaches, we do not restrict the set of allowable vehicle itineraries in any way. Instead, we develop an efficient algorithm to both generate and eliminate itineraries during solution of the linear programs and control policy. In simulation experiments, the price-directed policy outperforms other policies from the literature." }, { "instance_id": "R26352xR26279", "comparison_id": "R26352", "paper_id": "R26279", "text": "A Markov Decision Model and Decomposition Heuristic for Dynamic Vehicle Dispatching We describe a dynamic and stochastic vehicle dispatching problem called the delivery dispatching problem. This problem is modeled as a Markov decision process. 
Because exact solution of this model is impractical, we adopt a heuristic approach for handling the problem. The heuristic is based in part on a decomposition of the problem by customer, where customer subproblems generate penalty functions that are applied in a master dispatching problem. We describe how to compute bounds on the algorithm's performance, and apply it to several examples with good results." }, { "instance_id": "R26352xR26326", "comparison_id": "R26352", "paper_id": "R26326", "text": "Redesigning distribution operations: a case study on integrating inventory management and vehicle routes design This paper describes a real-world application concerning the distribution in Portugal of frozen products of a world-wide food and beverage company. Its focus is the development of a model to support negotiations between a logistics operator and retailers, establishing a common basis for a co-operative scheme in supply chain management. A periodic review policy is adopted and an optimisation procedure based on the heuristic proposed by Viswanathan and Mathur (Mgmnt Sci., 1997, 43, 294\u2013312) is used to devise guidelines for inventory replenishment frequencies and for the design of routes to be used in the distribution process. This provides an integrated approach of the two logistics functions\u2014inventory management and routing\u2014with the objective of minimising long-term average costs, considering an infinite time horizon. A framework to estimate inventory levels, namely safety stocks, is also presented. The model provides full information concerning the expected performance of the proposed solution, which can be compared against the present situation, allowing each party to assess its benefits and drawbacks." 
}, { "instance_id": "R26352xR26288", "comparison_id": "R26352", "paper_id": "R26288", "text": "A Location Based Heuristic for General Routing Problems We present a general framework for modeling routing problems based on formulating them as a traditional location problem called the capacitated concentrator location problem. We apply this framework to two classical routing problems: the capacitated vehicle routing problem and the inventory routing problem. In the former case, the heuristic is proven to be asymptotically optimal for any distribution of customer demands and locations. Computational experiments show that the heuristic performs well for both problems and, in most cases, outperforms all published heuristics on a set of standard test problems." }, { "instance_id": "R26352xR26265", "comparison_id": "R26352", "paper_id": "R26265", "text": "Analyzing trade-offs between transportation, inventory and production costs on freight networks The purpose of this paper is to determine optimal shipping strategies (i.e. routes and shipment sizes) on freight networks by analyzing trade-offs between transportation, inventory, and production set-up costs. Networks involving direct shipping, shipping via a consolidation terminal, and a combination of terminal and direct shipping are considered. This paper makes three main contributions. First, an understanding is provided of the interface between transportation and production set-up costs, and of how these costs both affect inventory. Second, conditions are identified that indicate when networks involving direct shipments between many origins and destinations can be analyzed on a link-by-link basis. Finally, a simple optimization method is developed that simultaneously determines optimal routes and shipment sizes for networks with a consolidation terminal and concave cost functions. 
This method decomposes the network into separate sub-networks, and determines the optimum analytically without the need for mathematical programming techniques." }, { "instance_id": "R26377xR26248", "comparison_id": "R26377", "paper_id": "R26248", "text": "Omya Hustadmarmor optimizes its supply chain for delivering calcium carbonate slurry to European paper manufacturers The Norwegian company Omya Hustadmarmor supplies calcium carbonate slurry to European paper manufacturers from a single processing plant, using chemical tank ships of various sizes to transport its products. Transportation costs are lower for large ships than for small ships, but their use increases planning complexity and creates problems in production. In 2001, the company faced overwhelming operational challenges and sought operations-research-based planning support. The CEO, Sturla Steinsvik, contacted More Research Molde, which conducted a project that led to the development of a decision-support system (DSS) for maritime inventory routing. The core of the DSS is an optimization model that is solved through a metaheuristic-based algorithm. The system helps planners to make stronger, faster decisions and has increased predictability and flexibility throughout the supply chain. It has saved production and transportation costs close to US$7 million a year. We project additional direct savings of nearly US$4 million a year as the company adds even larger ships to the fleet as a result of the project. In addition, the company has avoided investments of US$35 million by increasing capacity utilization. Finally, the project has had a positive environmental effect by reducing overall oil consumption by more than 10 percent." 
}, { "instance_id": "R26377xR26196", "comparison_id": "R26377", "paper_id": "R26196", "text": "Improving the distribution of industrial gases with an on-line computerized routing and scheduling optimizer For Air Products and Chemicals, Inc., inventory management of industrial gases at customer locations is integrated with vehicle scheduling and dispatching. Their advanced decision support system includes on-line data entry functions, customer usage forecasting, a time/distance network with a shortest path algorithm to compute intercustomer travel times and distances, a mathematical optimization module to produce daily delivery schedules, and an interactive schedule change interface. The optimization module uses a sophisticated Lagrangian relaxation algorithm to solve mixed integer programs with up to 800,000 variables and 200,000 constraints to near optimality. The system, first implemented in October, 1981, has been saving between 6% to 10% of operating costs." }, { "instance_id": "R26377xR26231", "comparison_id": "R26377", "paper_id": "R26231", "text": "Shipment planning at oil refineries using column generation and valid inequalities In this paper we suggest an optimization model and a solution method for a shipment planning problem. This problem concerns the simultaneous planning of how to route a fleet of ships and the planning of which products to transport in these ships. The ships are used for moving products from oil refineries to storage depots. There are inventory levels to consider both at the refineries and at the depots. The inventory levels are affected by the process scheduling at the refineries and demand at the depots. The problem is formulated using an optimization model including an aggregated representation of the process scheduling at the refineries. Hence, we integrate the shipment planning and the process scheduling at the refineries. We suggest a solution method based on column generation, valid inequalities, and constraint branching. 
The solution method is tested on data provided by the Nynas oil refinery company and solutions are obtained within 4 hours, for problem instances of up to 3 refineries, 15 depots, and 4 products when considering a time horizon of 42 days." }, { "instance_id": "R26377xR26359", "comparison_id": "R26377", "paper_id": "R26359", "text": "Reducing Logistics Costs at General Motors Automobile and truck production at General Motors involves shipping a broad variety of materials, parts, and components from 20,000 supplier plants to over 160 GM plants. To help reduce logistics costs at GM, the decision tool TRANSPART was developed. In its initial application for GM's Delco Electronics Division, TRANSPART identified a 26 percent logistics cost savings opportunity ($2.9 million per year). Today, TRANSPART II---a commercial version of the tool---is being used in more than 40 GM plants." }, { "instance_id": "R26377xR26238", "comparison_id": "R26377", "paper_id": "R26238", "text": "Inventory constrained maritime routing and scheduling for multi-commodity liquid bulk, Part I: Applications and model Abstract This paper formulates a model for finding a minimum cost routing in a network for a heterogeneous fleet of ships engaged in pickup and delivery of several liquid bulk products. The problem is frequently encountered by maritime chemical transport companies, including oil companies serving an archipelago of islands. The products are assumed to require dedicated compartments in the ship. The problem is to decide how much of each product should be carried by each ship from supply ports to demand ports, subject to the inventory level of each product in each port being maintained between certain levels that are set by the production rates, the consumption rates, and the storage capacities of the various products in each port. This important and challenging inventory constrained multi-ship pickup\u2013delivery problem is formulated as a mixed-integer nonlinear program. 
We show that the model can be reformulated as an equivalent mixed-integer linear program with special structure. Over 100 test problems are randomly generated and solved using CPLEX 7.5. The results of our numerical experiments illuminate where problem structure can be exploited in order to solve larger instances of the model. Part II of the sequel will deal with new algorithms that take advantage of model properties." }, { "instance_id": "R26377xR26161", "comparison_id": "R26377", "paper_id": "R26161", "text": "Analysis of a large scale vehicle routing problem with an inventory component Description of an integrated, optimization-based distribution system currently being developed for a large commercial firm. The distribution problem is such that it is necessary to forecast customer demands, select a subset of customers, and generate daily routes for the vehicles" }, { "instance_id": "R26377xR26255", "comparison_id": "R26377", "paper_id": "R26255", "text": "Delivery strategies for blood products supplies We introduce a problem faced by the blood bank of the Austrian Red Cross for Eastern Austria: how to cost-effectively organize the delivery of blood products to Austrian hospitals? We investigate the potential value of switching from the current vendee-managed inventory set up to a vendor-managed inventory system. We present solution approaches based on integer programming and variable neighborhood search and evaluate their performance." }, { "instance_id": "R26377xR26241", "comparison_id": "R26377", "paper_id": "R26241", "text": "Optimizing the periodic pick-up of raw materials for a manufacturer of auto parts Abstract We describe a solution procedure for a special case of the periodic vehicle routing problem (PVRP). Operation managers at an auto parts manufacturer in the north of Spain described the optimization problem to the authors. 
The manufacturer must pick up parts (raw material) from geographically dispersed locations. The parts are picked up periodically at scheduled times. The problem consists of assigning a pickup schedule to each of its supplier\u2019s locations and also establishing daily routes in order to minimize total transportation costs. The time horizon under consideration may be as long as 90 days. The resulting PVRP is such that the critical decision is the assignment of locations to schedules, because once this is done, the daily routing of vehicles is relatively straightforward. Through extensive computational experiments, we show that the metaheuristic procedure described in this paper is capable of finding high-quality solutions within a reasonable amount of computer time. Our main contribution is the development of a procedure that is more effective at handling PVRP instances with long planning horizons when compared to those proposed in the literature." }, { "instance_id": "R26377xR26214", "comparison_id": "R26377", "paper_id": "R26214", "text": "Decomposition of a Combined Inventory and Time Constrained Ship Routing Problem In contrast to vehicle routing problems, little work has been done in ship routing and scheduling, although large benefits may be expected from improving this scheduling process. We will present a real ship planning problem, which is a combined inventory management problem and a routing problem with time windows. A fleet of ships transports a single product (ammonia) between production and consumption harbors. The quantities loaded and discharged are determined by the production rates of the harbors, possible stock levels, and the actual ship visiting the harbor. We describe the real problem and the underlying mathematical model. To decompose this model, we discuss some model adjustments. Then, the problem can be solved by a Dantzig-Wolfe decomposition approach including both ship routing subproblems and inventory management subproblems. 
The overall problem is solved by branch-and-bound. Our computational results indicate that the proposed method works for the real planning problem." }, { "instance_id": "R26421xR26393", "comparison_id": "R26421", "paper_id": "R26393", "text": "Reaction mechanism of chitosanase from Streptomyces sp. N174 Chitosanase was produced by the strain of Streptomyces lividans TK24 bearing the csn gene from Streptomyces sp. N174, and purified by S-Sepharose and Bio-Gel A column chromatography. Partially (25-35%) N-acetylated chitosan was digested by the purified chitosanase, and structures of the products were analysed by NMR spectroscopy. The chitosanase produced heterooligosaccharides consisting of D-GlcN and GlcNAc in addition to glucosamine oligosaccharides [(GlcN)n, n = 1, 2 and 3]. The reducing- and non-reducing-end residues of the heterooligosaccharide products were GlcNAc and GlcN respectively, indicating that the chitosanase can split the GlcNAc-GlcN linkage in addition to that of GlcN-GlcN. Time-dependent 1H-NMR spectra showing hydrolysis of (GlcN)6 by the chitosanase were obtained in order to determine the anomeric form of the reaction products. The chitosanase was found to produce only the alpha-form; therefore it is an inverting enzyme. Separation and quantification of (GlcN)n was achieved by HPLC, and the time course of the reaction catalysed by the chitosanase was studied using (GlcN)n (n = 4, 5 and 6) as the substrate. The chitosanase hydrolysed (GlcN)6 in an endo-splitting manner producing (GlcN)2, (GlcN)3 and (GlcN)4, and did not catalyse transglycosylation. Product distribution was (GlcN)3 >> (GlcN)2 > (GlcN)4. Cleavage to (GlcN)3 + (GlcN)3 predominated over that to (GlcN)2 + (GlcN)4. Time courses showed a decrease in rate of substrate degradation from (GlcN)6 to (GlcN)5 to (GlcN)4. 
It is most likely that the substrate-binding cleft of the chitosanase can accommodate at least six GlcN residues, and that the cleavage point is located at the midpoint of the binding cleft." }, { "instance_id": "R26421xR26403", "comparison_id": "R26421", "paper_id": "R26403", "text": "Purification and Mode of Action of Chitosanolytic Enzyme from Enterobacter sp. G-1 A chitosanolytic enzyme was purified from Enterobacter sp. G-1 by fractionation of 30% saturation with ammonium sulfate, isoelectric focusing, and Sephadex G-100 gel chromatography. The purified enzyme showed a single band on sodium dodecyl sulfate polyacrylamide gel electrophoresis, and the molecular mass was estimated to be 50 kDa. The enzyme degraded N-acetyl-chitooligosaccharides, glycol chitin, colloidal chitin, and colloidal chitosan (about 80% deacetylated), but did not degrade chitooligosaccharides, colloidal chitosan (100% deacetylated), or Micrococcus lysodeikticus cell walls. It hydrolyzed GlcNAc4\u20136 and colloidal chitin to GlcNAc2, finally. The main cleavage site with GlcNAc3\u20136 was the second linkage from the non-reducing end, based on the pattern of pNp-GlcNAc2\u20135. Colloidal chitosan was hydrolyzed to GlcNAc2 and to similar partially N-acetylated chitooligosaccharides." }, { "instance_id": "R26421xR26383", "comparison_id": "R26421", "paper_id": "R26383", "text": "Action Pattern of Bacillus SP. No. 7-M Chitosanase on Partially N-Acetylated Chitosan The hydrolyzate of partially N-acetylated chitosan by Bacillus sp. No. 7-M chitosanase was separated by gel filtration on Bio-Gel P-2. Sugar compositions and sequences of the oligosaccharides were identified by exo-splitting with beta-GlcNase, fast atom bombardment mass spectroscopy, and proton NMR spectroscopy. In addition to chitooligosaccharides, (GlcN)2, (GlcN)3, and (GlcN)4, hetero-chitooligosaccharides such as (GlcN)2.GlcNAc.(GlcN)2, GlcN.GlcNAc.(GlcN)3, (GlcN)2.GlcNAc.(GlcN)3, and GlcN.GlcNAc.(GlcN)4 were detected. 
These results indicate that Bacillus sp. No. 7-M chitosanase is absolutely specific toward the GlcN.GlcN bonds in partially N-acetylated chitosan and at least three GlcN residues were necessary to the hydrolysis of chitosan by chitosanase." }, { "instance_id": "R26421xR26413", "comparison_id": "R26421", "paper_id": "R26413", "text": "An Aspergillus chitosanase with potential for large-scale preparation of chitosan oligosaccharides A chitosan\u2010degrading fungus, designated Aspergillus sp. Y2K, was isolated from soil. The micro\u2010organism was used for producing chitosanase (EC 3.2.1.132) in a minimal medium containing chitosan as the sole carbon source. The induced chitosanase was purified to homogeneity from the culture filtrate by concentration and cationic SP\u2010Sepharose chromatography. The purified enzyme is a monomer with an estimated molecular mass of 25 kDa by SDS/PAGE and of 22 kDa by gel\u2010filtration chromatography. pI, optimum pH and optimum temperature values were 8.4, 6.5 and 65\u201370 \u00b0C, respectively. The chitosanase is stable in the pH range from 4 to 7.5 at 55 \u00b0C. Higher deacetylated chitosan is a better substrate. Chitin, xylan, 6\u2010O \u2010sulphated chitosan and O \u2010carboxymethyl chitin were indigestible by the purified enzyme. By endo\u2010splitting activity, the chitosanase hydrolysed chitosan to form chitosan oligomers with chitotriose, chitotetraose and chitopentaose as the major products. The enzyme hydrolyses chitohexaose to form chitotriose, while the chitopentaose and shorter oligomers remain intact. The N\u2010terminal amino acid sequence of the enzyme was determined as YNLPNNLKQIYDDHK, which provides useful information for further gene cloning of this enzyme. A 275 g\u2010scale hydrolysis of chitosan was performed. The product distribution was virtually identical to that of the small\u2010scale reaction. 
Owing to the simple purification process and high stability of the enzyme, it is potentially valuable for industrial applications." }, { "instance_id": "R26421xR26395", "comparison_id": "R26421", "paper_id": "R26395", "text": "Novel Chitosanase from Streptomyces griseus HUT 6037 with Transglycosylation Activity Streptomyces griseus HUT 6037 inducibly produced two chitosanases when grown on chitosan. To elucidate the mechanism of degradation of chitinous compound by this strain, chitosanases I and II of S. griseus HUT 6037 were purified and characterized. The purified enzymes had a molecular mass of 34 kDa. Their optimum pH was 5.7, and their optimum temperature was 60\u00b0C. They hydrolyzed not only partially deacetylated chitosan, but also carboxymethylcellulose. Time-dependent 1H-NMR spectra showing hydrolysis of (GlcN)6 by the chitosanases were obtained for identification of the anomeric form of the reaction products. Both chitosanases produced the \u03b2-form specifically, indicating that they were retaining enzymes. These enzymes catalyzed a glycosyltransfer reaction in the hydrolysis of chitooligosaccharides. The N-terminal and internal amino acid sequences of chitosanase II were identified. A PCR fragment corresponding to these amino acid sequences was used to screen a genomic library for the entire gene encoding chitosanase II. Sequencing of the choII gene showed an open reading frame encoding a protein with 359 amino acid residues. The deduced primary structure was similar to endoglucanase E-5 of Thermomonospora fusca, which enzyme belongs to family 5 of the glycosyl hydrolases. This is the first report of a family 5 chitosanase with transglycosylation activity." }, { "instance_id": "R26421xR26397", "comparison_id": "R26421", "paper_id": "R26397", "text": "Chitosanase from Streptomyces griseus Publisher Summary This chapter describes the assay method and procedure for the purification of chitosanase from Streptomyces griseus. 
The assay is based on the estimation of amino sugars produced in the hydrolysis of glycol chitosan, a water-soluble derivative of chitosan, by the method of Rondle and Morgan, using glucosamine as a reference compound. The purified chitosanases are classified into two groups: the enzymes hydrolyzing only chitosan and the enzymes hydrolyzing both chitosan and carboxymethylcellulose. A strain of Streptomyces griseus has produced chitosanase in culture broth with chitosan as the single carbon and nitrogen source. The purified enzyme is able to hydrolyze chitosan and carboxymethylcellulose, and produces glucosamine oligomers in the hydrolysis of chitosan." }, { "instance_id": "R26421xR26409", "comparison_id": "R26421", "paper_id": "R26409", "text": "Purification and Properties of a Chitosanase from Pseudomonas sp. H-14 Kazutoshi Yoshihara, Jun Hosokawa, Takamasa Kubo, Masashi Nishiyama & Yojiro Koba (1992), Bioscience, Biotechnology, and Biochemistry, 56:6, 972-973, DOI: 10.1271/bbb.56.972" }, { "instance_id": "R26421xR26419", "comparison_id": "R26421", "paper_id": "R26419", "text": "Characterization of Two Chitinase Genes and One Chitosanase Gene Encoded by Chlorella Virus PBCV-1 Chlorella virus PBCV-1 encodes two putative chitinase genes, a181/182r and a260r, and one chitosanase gene, a292l. The three genes were cloned and expressed in Escherichia coli. The recombinant A181/182R protein has endochitinase activity, recombinant A260R has both endochitinase and exochitinase activities, and recombinant A292L has chitosanase activity. 
Transcription of a181/182r, a260r, and a292l genes begins at 30, 60, and 60 min p.i., respectively; transcription of all three genes continues until the cells lyse. A181/182R, A260R, and A292L proteins are first detected by Western blots at 60, 90, and 120 min p.i., respectively. Therefore, a181/182r is an early gene and a260r and a292l are late genes. All three genes are widespread in chlorella viruses. Phylogenetic analyses indicate that the ancestral condition of the a181/182r gene arose from the most recent common ancestor of a gene found in tobacco, whereas the genealogical position of the a260r gene could not be unambiguously resolved." }, { "instance_id": "R26421xR26381", "comparison_id": "R26421", "paper_id": "R26381", "text": "Crystal Structure of Chitosanase from Bacillus circulans MH-K1 at 1.6-\u00c5 Resolution and Its Substrate Recognition Mechanism Chitosanase from Bacillus circulans MH-K1 is a 29-kDa extracellular protein composed of 259 amino acids. The crystal structure of chitosanase from B. circulans MH-K1 has been determined by multiwavelength anomalous diffraction method and refined to crystallographic R = 19.2% (R free = 23.5%) for the diffraction data at 1.6-\u212b resolution collected by synchrotron radiation. The enzyme has two globular upper and lower domains, which generate the active site cleft for the substrate binding. The overall molecular folding is similar to chitosanase from Streptomyces sp. N174, although there is only 20% identity at the amino acid sequence level between both chitosanases. However, there are three regions in which the topology is remarkably different. In addition, the disulfide bridge between Cys50 and Cys124 joins the \u03b21 strand and the \u03b17 helix, which is not conserved among other chitosanases. The orientation of two backbone helices, which connect the two domains, is also different and is responsible for the differences in size and shape of the active site cleft in these two chitosanases. 
This structural difference in the active site cleft is the reason why the enzymes specifically recognize different substrates and catalyze different types of chitosan degradation." }, { "instance_id": "R26421xR26389", "comparison_id": "R26421", "paper_id": "R26389", "text": "A new chitosanase gene from a Nocardioides sp. is a third member of glycosyl hydrolase family 46 Strain N106, a newly isolated soil actinomycete classified in the genus Nocardioides on the basis of its chemotaxonomy, produced an extracellular chitosanase and was highly active in chitosan degradation. A gene library of Nocardioides sp. N106 was constructed in the shuttle vector pFD666 and recombinant plasmids carrying the chitosanase gene (csnN106) were identified using the 5'-terminal portion of the chitosanase gene from Streptomyces sp. N174 as a hybridization probe. One plasmid, pCSN106-2, was used to transform Streptomyces lividans TK24. The chitosanase produced by S. lividans (pCSN106-2) is a protein of 29.5 kDa, with a pI 8.1, and hydrolyses chitosan by an endo-mechanism giving a mixture of dimers and trimers as end-products. N-terminal sequencing revealed that the mature chitosanase is a mixture of two enzyme forms differing by one N-terminal amino acid. The csnN106 gene is 79.5% homologous to the csn gene from Streptomyces sp. N174. At the amino acid level, both chitosanases are homologous at 74.4% and hydrophobic cluster analysis revealed a strict conservation of structural features. This chitosanase is the third known member of family 46 of glycosyl hydrolases." }, { "instance_id": "R26421xR26415", "comparison_id": "R26421", "paper_id": "R26415", "text": "Cloning and characterization of a chitosanase gene from the plant pathogenic fungus Fusarium solani The plant pathogenic fungus Fusarium solani f. sp. phaseoli SUF386 secretes a chitosanase in the absence of exogenous chitosan. 
Based on partial amino acid sequences of the purified chitosanase, two degenerate oligonucleotides were synthesized and used as reverse transcription-mediated PCR (RT-PCR) primers to amplify a 500-bp fragment of corresponding cDNA. The PCR product was used as a probe to isolate the genomic copy of the gene (csn). F. solani csn has an open reading frame encoding a polypeptide of 304 amino acid residues with a calculated molecular mass of 31,876 Da and containing a putative 19-amino acid residue signal sequence. Comparison between the genomic and cDNA sequences revealed that three introns are present in the coding region. Southern blot analysis results indicated that csn is present as a single copy in the genome of F. solani SUF386. The cDNA fragment corresponding to the mature enzyme was introduced into E. coli using an expression vector driven by the T7 promoter. The resulting E. coli transformant overproduced proteins with chitosanolytic activity." }, { "instance_id": "R26421xR26387", "comparison_id": "R26421", "paper_id": "R26387", "text": "Purification and hydrolytic action of a chitosanase from Nocardia orientalis Chitosanase from the culture filtrate of Nocardia orientalis was purified to apparent homogeneity by precipitation with ammonium sulfate followed by CM-Sephadex chromatography, biospecific affinity chromatography on a Sepharose CL-4B with immobilized chitotriose and by gel filtration on Sephadex G-75. The enzyme specifically acted on chitooligosaccharides and chitosan to yield chitobiose and chitotriose as final products. The mode of action of the chitosanase on chitooligosaccharides and their corresponding alcohols suggests that the enzyme requires substrates with four or more glucosamine residues for the expression of activity and it shows maximum activity on chitohexaose and chitoheptaose. 
In the hydrolysis of chitosans of varying N-acetyl content, the enzyme cleaved about 30% acetylated chitosan with maximum activity and the enzyme activity decreased with increasing the degree of deacetylation of chitosans tested. The analysis of products formed from 33% acetylated chitosan shows the chitosanase is capable of cleaving between glucosamine and glucosamine or N-acetylglucosamine, but not cleaving between N-acetylglucosamine and glucosamine. On the basis of the results, the whole pathway of enzymatic degradation of partially acetylated chitosan by a combination of chitosanase, exo-beta-D-glucosaminidase and beta-N-acetylhexosaminidase is proposed." }, { "instance_id": "R26421xR26401", "comparison_id": "R26421", "paper_id": "R26401", "text": "Molecular cloning and characterization of a chitosanase from the chitosanolytic bacterium Burkholderia gladioli strain CHB101 A chitosanase was purified from the culture fluid of the chitino- and chitosanolytic bacterium Burkholderia gladioli strain CHB101. The purified enzyme (chitosanase A) had a molecular mass of 28 kDa, and catalyzed the endo-type cleavage of chitosans having a low degree of acetylation (0\u201330%). The enzyme hydrolyzed glucosamine oligomers larger than a pentamer, but did not exhibit any activity toward N-acetyl-glucosamine oligomers and colloidal chitin. The gene coding for chitosanase A (csnA) was isolated and its nucleotide sequence determined. B. gladioli csnA has an ORF encoding a polypeptide of 355 amino acid residues. Analysis of the N-terminal amino acid sequence of the purified chitosanase A and comparison with that deduced from the csnA ORF suggests post-translational processing of a putative signal peptide and a possible substrate-binding domain. 
The deduced amino acid sequence corresponding to the mature protein showed 80% similarity to the sequences reported from Bacillus circulans strain MH-K1 and Bacillus ehimensis strain EAG1, which belong to family 46 glycosyl hydrolases." }, { "instance_id": "R26421xR26385", "comparison_id": "R26421", "paper_id": "R26385", "text": "Specificity of chitosanase from Bacillus pumilus Partially (25-35%) N-acetylated chitosan was digested by chitosanase from Bacillus pumilus BN-262, and structures of the products, partially N-acetylated chitooligosaccharides, were analyzed in order to investigate the specificity of the chitosanase. The chitosanase produced glucosamine (GlcN) oligosaccharides abundantly, indicating that the chitosanase splits the beta-1,4-glycosidic linkage of GlcN-GlcN. The chitosanase also produced hetero-oligosaccharides consisting of glucosamine and N-acetyl-D-glucosamine (GlcNAc). Three types of the hetero-oligosaccharides purified by cation-exchange chromatography and HPLC were found to have GlcNAc residue at their reducing end and GlcN residue at their non-reducing end, indicating that the chitosanase can also split the linkage of GlcNAc-GlcN. The determination of the mode of action toward partially N-acetylated chitosan enables a classification of chitosanases according to their specificities and a more precise definition of chitosanases." }, { "instance_id": "R26550xR26498", "comparison_id": "R26550", "paper_id": "R26498", "text": "A synergistic chlorhexidine/chitosan combination for improved antiplaque strategies BACKGROUND The minor efficacy of chlorhexidine (CHX) on other cariogenic bacteria than mutans streptococci such as Streptococcus sanguinis may contribute to uneffective antiplaque strategies. METHODS AND RESULTS In addition to CHX (0.1%) as positive control and saline as negative control, two chitosan derivatives (0.2%) and their CHX combinations were applied to planktonic and attached sanguinis streptococci for 2 min. 
In a preclinical biofilm model, the bacteria suspended in human sterile saliva were allowed to attach to human enamel slides for 60 min under flow conditions mimicking human salivation. The efficacy of the test agents on streptococci was screened by the following parameters: vitality status, colony-forming units (CFU)/ml and cell density on enamel. The first combination reduced the bacterial vitality to approximately 0% and yielded a strong CFU reduction of 2-3 log(10) units, much stronger than CHX alone. Furthermore, the first chitosan derivative showed a significant decrease of the surface coverage with these treated streptococci after attachment to enamel. CONCLUSIONS Based on these results, a new CHX formulation would be beneficial unifying the bioadhesive properties of chitosan with the antibacterial activity of CHX synergistically resulting in a superior antiplaque effect than CHX alone." }, { "instance_id": "R26550xR26438", "comparison_id": "R26550", "paper_id": "R26438", "text": "Chitosan as Tear Substitute: A Wetting Agent Endowed with Antimicrobial Efficacy A cationic biopolymer, chitosan, is proposed for use in artificial tear formulations. It is endowed with good wetting properties as well as an antibacterial effect that are desirable in cases of dry eye, which is often complicated by secondary infections. Solutions containing 0.5% w/v of a low molecular weight (M(w)) chitosan (160 kDa) were assessed for antibacterial efficacy against E. coli and S. aureus by using the usual broth-dilution technique. The in vitro evaluation showed that concentrations of chitosan as low as 0.0375% still exert a bacteriostatic effect against E. coli. Minimal inhibitory concentration (MIC) values of chitosan were calculated to be as low as 0.375 mg/ml for E. coli and 0.15 mg/ml for S. aureus. 
Gamma scintigraphic studies demonstrated that chitosan formulations remain on the precorneal surface as long as commonly used commercial artificial tears (Protagent collyrium and Protagent-SE unit-dose) having a 5-fold higher viscosity." }, { "instance_id": "R26550xR26530", "comparison_id": "R26550", "paper_id": "R26530", "text": "Development and evaluation of an edible antimicrobial film based on yam starch and chitosan Edible antimicrobial films are an innovation within the biodegradable active packaging concept. They have been developed in order to reduce and/or inhibit the growth of microorganisms on the surface of foods. This study developed an edible antimicrobial film based on yam starch (Dioscorea alata) and chitosan and investigated its antimicrobial efficiency on Salmonella enteritidis. A solution of yam starch (4%) and glycerol (2%) was gelatinized in a viscoamilograph and chitosan added at concentrations of 1%, 3% and 5%. Films with and without chitosan were produced by the cast method. To evaluate the antimicrobial activity of the films, two suspensions of S. enteritidis were used in BHI medium, corresponding to counts of 2 \u00d7 10^8 and 1.1 \u00d7 10^6 CFU/ml. The suspensions (50 ml) were poured into flasks. The films were cut into 5 \u00d7 5 and 5 \u00d7 10 cm rectangles to be used at ratios of 1 : 1 (1 cm2/ml microorganism suspension) and 2 : 1 (2 cm2/ml). The films were 30 \u00b5m thick on average. As a control, pure chitosan at an amount corresponding to that contained in the 3% and 5% films (5 \u00d7 10 cm) was added to flasks containing the microorganism suspension. Also, flasks containing only a suspension of S. enteritidis were used as control. The suspensions, in flasks, were kept at 37\u00b0C in a waterbath with agitation. Suspension aliquots were removed every hour for reading the optic density (OD595) and plating onto PCA medium. The results showed that chitosan has a bactericidal effect upon S. enteritidis. 
Films treated with chitosan at different concentrations showed similar antimicrobial efficiency, in addition to being dependent on diffusion. The chitosan-treated films caused a reduction of one to two log cycles in the number of microorganisms, whereas the pure chitosan presented a reduction of four to six log cycles compared with the control and starch film. The films showed good flexibility." }, { "instance_id": "R26550xR26480", "comparison_id": "R26550", "paper_id": "R26480", "text": "Anticoagulant activity of heterochitosans and their oligosaccharide sulfates Three kinds of partially deacetylated heterochitosans, 90% deacetylated chitosan, 75% deacetylated chitosan and 50% deacetylated chitosan, were prepared from crab chitin by N-deacetylation with 40% sodium hydroxide solution for different durations. Nine kinds of heterochitooligosaccharides (hetero-COSs) with relatively high molecular weights (5,000\u201310,000 Da; 90-HMWCOSs, 75-HMWCOSs, and 50-HMWCOSs), medium molecular weights (1,000\u20135,000 Da; 90-MMWCOSs, 75-MMWCOSs, and 50-MMWCOSs), and low molecular weights (below 1,000 Da; 90-LMWCOSs, 75-LMWCOSs, and 50-LMWCOSs) were prepared using an ultrafiltration membrane reactor system, respectively. In addition, their sulfated derivatives were prepared by a method using a trimethylamine-sulfur trioxide, and the anticoagulant properties of the heterochitosans and their COS sulfates with different chain lengths and degrees of deacetylation were investigated. Clotting times in thrombin-time assay were prolonged in the presence of various concentrations of the heterochitosans and their COS sulfates using normal human plasma. The 90% deacetylated chitosan sulfate exhibited the highest anticoagulant activity among all the heterochitosans and their COS sulfates." 
}, { "instance_id": "R26550xR26429", "comparison_id": "R26550", "paper_id": "R26429", "text": "Chitosan-alginate films prepared with chitosans of different molecular weights Chitosan-alginate polyelectrolyte complex (CS-AL PEC) is water insoluble and more effective in limiting the release of encapsulated materials compared to chitosan or alginate. Coherent CS-AL PEC films have been prepared in our laboratory by casting and drying suspensions of chitosan-alginate coacervates. The objective of this study was to evaluate the properties of the CS-AL PEC films prepared with chitosans of different molecular weights. Films prepared with low-molecular-weight chitosan (Mv 1.30 x 10(5)) were twice as thin and transparent, as well as 55% less permeable to water vapor, compared to films prepared with high-molecular-weight chitosan (Mv 10.0 x 10(5)). It may be inferred that the low-molecular-weight chitosan reacted more completely with the sodium alginate (M(v) 1.04 x 10(5)) than chitosan of higher molecular weight. A threshold molecular weight may be required, because chitosans of Mv 10.0 x 10(5) and 5.33 x 10(5) yielded films with similar physical properties. The PEC films exhibited different surface properties from the parent films, and contained a higher degree of chain alignment with the possible formation of new crystal types. The PEC films exhibited good in vitro biocompatibility with mouse and human fibroblasts, suggesting that they can be further explored for biomedical applications." }, { "instance_id": "R26550xR26503", "comparison_id": "R26550", "paper_id": "R26503", "text": "Comparative Study of Protective Effects of Chitin, Chitosan, and N-Acetyl Chitohexaose against Pseudomonas aeruginosa and Listeria monocytogenes Infections in Mice We conducted a comparative study of the protective effects of chitin, chitosan, and N-acetyl chitohexaose (NACOS-6) against mice infected intravenously or intraperitoneally with Pseudomonas aeruginosa and Listeria monocytogenes. 
Mice pretreated with chitin, chitosan, and NACOS-6 showed resistance to intraperitoneal infections by both microbes. Only mice pretreated with chitin and chitosan showed resistance to intravenous infections by both microbes. The number, active oxygen generation, and myeloperoxidase activity of peritoneal exudate cells (PEC) in the chitin, chitosan, and NACOS-6-treated mice were greater than those of the untreated mice. Also, these PEC factors from mice pretreated with chitin and chitosan were greater than those from the NACOS-6-treated mice." }, { "instance_id": "R26550xR26456", "comparison_id": "R26550", "paper_id": "R26456", "text": "Chitosan and its derivatives: potential excipients for peroral peptide delivery systems In the 1990s chitosan turned out to be a useful excipient in various pharmaceutical formulations. By modifications of the primary amino group at the 2-position of this poly(beta1-->4 D-glucosamine), the features of chitosan can even be optimised according to a given task in drug delivery systems. For peroral peptide delivery these tasks focus on overcoming the absorption (I) and enzymatic barrier (II) of the gut. On the one hand, even unmodified chitosan proved to display a permeation enhancing effect for peptide drugs. On the other hand, a protective effect for polymer embedded peptides towards degradation by intestinal peptidases can be achieved by the immobilisation of enzyme inhibitors on the polymer. Whereas serine proteases are inhibited by the covalent attachment of competitive inhibitors such as the Bowman-Birk inhibitor, metallo-peptidases are inhibited by chitosan derivatives displaying complexing properties such as chitosan-EDTA conjugates. In addition, because of the mucoadhesive properties of chitosan and most of its derivatives, a presystemic metabolism of peptides on the way between the dosage form and the absorption membrane can be strongly reduced. 
Based on these unique features, the co-administration of chitosan and its derivatives leads to a strongly improved bioavailability of many perorally given peptide drugs such as insulin, calcitonin and buserelin. These polymers are therefore useful excipients for the peroral administration of peptide drugs." }, { "instance_id": "R26550xR26495", "comparison_id": "R26550", "paper_id": "R26495", "text": "A review of chitin and chitosan applications Chitin is the most abundant natural amino polysaccharide and is estimated to be produced annually almost as much as cellulose. It has become of great interest not only as an underutilized resource, but also as a new functional material of high potential in various fields, and recent progress in chitin chemistry is quite noteworthy. The purpose of this review is to take a closer look at chitin and chitosan applications. Based on current research and existing products, some new and futuristic approaches in this fascinating area are thoroughly discussed." }, { "instance_id": "R26550xR26524", "comparison_id": "R26550", "paper_id": "R26524", "text": "Antimicrobial actions of degraded and native chitosan against spoilage organisms in laboratory media and foods ABSTRACT The objective of this study was to determine whether chitosan (poly-\u03b2-1,4-glucosamine) and hydrolysates of chitosan can be used as novel preservatives in foods. Chitosan was hydrolyzed by using oxidative-reductive degradation, crude papaya latex, and lysozyme. Mild hydrolysis of chitosan resulted in improved microbial inactivation in saline and greater inhibition of growth of several spoilage yeasts in laboratory media, but highly degraded products of chitosan exhibited no antimicrobial activity. 
In pasteurized apple-elderflower juice stored at 7\u00b0C, addition of 0.3 g of chitosan per liter eliminated yeasts entirely for the duration of the experiment (13 days), while the total counts and the lactic acid bacterial counts increased at a slower rate than they increased in the control. Addition of 0.3 or 1.0 g of chitosan per kg had no effect on the microbial flora of houmous, a chickpea dip; in the presence of 5.0 g of chitosan per kg, bacterial growth but not yeast growth was substantially reduced compared with growth in control dip stored at 7\u00b0C for 6 days. Improved antimicrobial potency of chitosan hydrolysates like that observed in the saline and laboratory medium experiments was not observed in juice and dip experiments. We concluded that native chitosan has potential for use as a preservative in certain types of food but that the increase in antimicrobial activity that occurs following partial hydrolysis is too small to justify the extra processing involved." }, { "instance_id": "R26550xR26511", "comparison_id": "R26550", "paper_id": "R26511", "text": "Anti-biofilm properties of chitosan-coated surfaces Surfaces coated with the naturally-occurring polysaccharide chitosan (partially deacetylated poly N-acetyl glucosamine) resisted biofilm formation by bacteria and yeast. Reductions in biofilm viable cell numbers ranging from 95% to 99.9997% were demonstrated for Staphylococcus epidermidis, Staphylococcus aureus, Klebsiella pneumoniae, Pseudomonas aeruginosa and Candida albicans on chitosan-coated surfaces over a 54-h experiment in comparison to controls. For instance, chitosan-coated surfaces reduced S. epidermidis surface-associated growth more than 5.5 log10 units (99.9997%) compared to a control surface. As a comparison, coatings containing a combination of the antibiotics minocycline and rifampin reduced S. epidermidis growth by 3.9 log10 units (99.99%) and coatings containing the antiseptic chlorhexidine did not significantly reduce S. 
epidermidis surface associated growth as compared to controls. The chitosan effects were confirmed with microscopy. Using time-lapse fluorescence microscopy and fluorescent-dye-loaded S. epidermidis, the permeabilization of these cells was observed as they alighted on chitosan-coated surfaces. This suggests chitosan disrupts cell membranes as microbes settle on the surface. Chitosan offers a flexible, biocompatible platform for designing coatings to protect surfaces from infection." }, { "instance_id": "R26550xR26489", "comparison_id": "R26550", "paper_id": "R26489", "text": "EFFECTS OF SHRIMP (MACROBRACIUM ROSENBERGII)-DERIVED CHITOSAN ON PLASMA LIPID PROFILE AND LIVER LIPID PEROXIDE LEVELS IN NORMO- AND HYPERCHOLESTEROLAEMIC RATS 1 The effects of chitosan (CS) derived from the exoskeleton of the shrimp Macrobracium rosenbergii on bodyweight, plasma lipid profile, fatty acid composition, liver lipid peroxide (LPO) levels and plasma levels of glutamate pyruvate transaminase (GPT) were determined in normocholesterolaemic (NC) and hypercholesterolaemic (HC) Long Evans rats. 2 The NC rats were fed a diet containing 2% CS and the HC rats were fed a diet containing 2 and 4% CS for 8 weeks. Chitosan significantly reduced bodyweight gain only in HC + 4% CS rats compared with HC rats, but not in NC + 2% CS or HC + 2% CS rats. 3 Chitosan reduced plasma total cholesterol in the HC + 2% CS, HC + 4% CS and NC + 2% CS rats; however, low density lipoprotein\u2013cholesterol decreased only in the first two groups. High\u2010density lipoprotein\u2013cholesterol (HDL\u2010C) increased in the HC + 4% CS rats by 24% compared with the HC + 2% CS group and by 30% compared with HC rats; however, HDL\u2010C did not increase in the NC + 2% CS group compared with NC rats. The level of plasma triglycerides decreased significantly only in HC + 2% CS rats compared with HC rats. 
4 Chitosan significantly decreased plasma levels of arachidonic acid in the HC + 2% CS and HC + 4% CS groups, with a concurrent increase in the molar ratio of total unsaturated fatty acid (TUFA) to total saturated fatty acid (TSFA). 5 Moreover, CS increased liver LPO levels without affecting plasma levels of GPT. Liver LPO levels were positively correlated with the TUFA/TSFA molar ratio. 6 The present study suggests that dietary CS decreases the atherogenic lipid profiles of both NC and HC rats and reduces the bodyweight gain of HC rats." }, { "instance_id": "R26550xR26527", "comparison_id": "R26550", "paper_id": "R26527", "text": "Antimicrobial Edible Films and Coatings Increasing consumer demand for microbiologicallysafer foods, greater convenience,smaller packages, and longer product shelf life is forcing the industry to develop new food-processing,cooking, handling, and packaging strategies. Nonfluid ready-to-eat foods are frequently exposed to postprocess surface contamination, leading to a reduction in shelf life. The food industry has at its disposal a wide range of nonedible polypropylene- and polyethylene-based packaging materials and various biodegradable protein- and polysaccharide-based edible films that can potentially serve as packaging materials. Research on the use of edible films as packaging materials continues because of the potential for these films to enhance food quality, food safety, and product shelf life. Besides acting as a barrier against mass diffusion (moisture, gases, and volatiles), edible films can serve as carriers for a wide range of food additives, including flavoring agents, antioxidants, vitamins, and colorants. When antimicrobial agents such as benzoic acid, sorbic acid, propionic acid, lactic acid, nisin, and lysozyme have been incorporated into edible films, such films retarded surface growth of bacteria, yeasts, and molds on a wide range of products, including meats and cheeses. 
Various antimicrobial edible films have been developed to minimize growth of spoilage and pathogenic microorganisms, including Listeria monocytogenes, which may contaminate the surface of cooked ready-to-eat foods after processing. Here, we review the various types of protein-based (wheat gluten, collagen, corn zein, soy, casein, and whey protein), polysaccharide-based (cellulose, chitosan, alginate, starch, pectin, and dextrin), and lipid-based (waxes, acylglycerols, and fatty acids) edible films and a wide range of antimicrobial agents that have been or could potentially be incorporated into such films during manufacture to enhance the safety and shelf life of ready-to-eat foods." }, { "instance_id": "R26550xR26453", "comparison_id": "R26550", "paper_id": "R26453", "text": "Chitosan microparticles for oral vaccination: preparation, characterization and preliminary in vivo uptake studies in murine Peyer's patches Although oral vaccination has numerous advantages over parenteral injection, degradation of the vaccine in the gut and low uptake in the lymphoid tissue of the gastrointestinal tract still complicate the development of oral vaccines. In this study chitosan microparticles were prepared and characterized with respect to size, zeta potential, morphology and ovalbumin-loading and -release. Furthermore, the in vivo uptake of chitosan microparticles by murine Peyer's patches was studied using confocal laser scanning microscopy (CLSM). Chitosan microparticles were made according to a precipitation/coacervation method, which was found to be reproducible for different batches of chitosan. The chitosan microparticles were 4.3+/-0.7 microm in size and positively charged (20+/-1 mV). Since only microparticles smaller than 10 microm can be taken up by M-cells of Peyer's patches, these microparticles are suitable to serve as vaccination systems. 
CLSM visualization studies showed that the model antigen ovalbumin was entrapped within the chitosan microparticles and not only associated to their outer surface. These results were verified using field emission scanning electron microscopy, which demonstrated the porous structure of the chitosan microparticles, thus facilitating the entrapment of ovalbumin in the microparticles. Loading studies of the chitosan microparticles with the model compound ovalbumin resulted in loading capacities of about 40%. Subsequent release studies showed only a very low release of ovalbumin within 4 h and most of the ovalbumin (about 90%) remained entrapped in the microparticles. Because the prepared chitosan microparticles are biodegradable, this entrapped ovalbumin will be released after intracellular digestion in the Peyer's patches. Initial in vivo studies demonstrated that fluorescently labeled chitosan microparticles can be taken up by the epithelium of the murine Peyer's patches. Since uptake by Peyer's patches is an essential step in oral vaccination, these results show that the presently developed porous chitosan microparticles are a very promising vaccine delivery system." }, { "instance_id": "R26550xR26492", "comparison_id": "R26550", "paper_id": "R26492", "text": "Control of wound infections using a bilayer chitosan wound dressing with sustainable antibiotic delivery A novel bilayer chitosan membrane was prepared by a combined wet/dry phase inversion method and evaluated as a wound dressing. This new type of bilayer chitosan wound dressing, consisting of a dense upper layer (skin layer) and a sponge-like lower layer (sublayer), is very suitable for use as a topical delivery of silver sulfadiazine (AgSD) for the control of wound infections. Physical characterization of the bilayer wound dressing showed that it has excellent oxygen permeability, that it controls the water vapor transmission rate, and that it promotes water uptake capability. 
AgSD dissolved from bilayer chitosan dressings to release silver and sulfadiazine. The release of sulfadiazine from the bilayer chitosan dressing displayed a burst release on the first day and then tapered off to a much slower release. However, the release of silver from the bilayer chitosan dressing displayed a slow release profile with a sustained increase of silver concentration. The cultures of Pseudomonas aeruginosa and Staphylococcus aureus in agar plates showed effective antimicrobial activity for 1 week. In vivo antibacterial tests confirmed that this wound dressing is effective for long-term inhibition of the growth of Pseudomonas aeruginosa and Staphylococcus aureus at an infected wound site. The results in this study indicate that the AgSD-incorporated bilayer chitosan wound dressing may be a material with potential antibacterial capability for the treatment of infected wounds." }, { "instance_id": "R26550xR26468", "comparison_id": "R26550", "paper_id": "R26468", "text": "Anti-ulcerogenic effect of chitin and chitosan on mucosal antioxidant defence system in HCl-ethanol-induced ulcer in rats Abstract The anti-ulcerogenic effect of chitin and chitosan against ulcer induced by HCl-ethanol in male Wistar rats was studied. Levels of acid output, pepsin, protein, lipid peroxides and reduced glutathione and the activity of glutathione peroxidase (GPx), glutathione-S-transferase (GST), catalase (CAT) and superoxide dismutase (SOD) were determined in the gastric mucosa of normal and experimental groups of rats. A significant increase in volume and acidity of the gastric juice was observed in the ulcer-induced group of rats. Peptic activity was significantly decreased as compared with that of normal controls. In the rats pre-treated with chitin and chitosan 2% along with feed, the volume and acid output and peptic activity of gastric mucosa were maintained at near normal levels. 
The level of lipid peroxidation was significantly higher in the ulcerated mucosa when compared with that of normal controls. This was paralleled by a decline in the level of reduced glutathione and in the activity of antioxidant enzymes like GPx, GST, CAT and SOD in the gastric mucosa of ulcer-induced rats. Also, the levels of mucosal proteins and glycoprotein components were significantly depleted in ulcerated mucosa. The pre-treatment with chitin and chitosan was found to exert a significant anti-ulcer effect by preventing all the HCl-ethanol-induced ulcerogenic effects in experimental rats." }, { "instance_id": "R26550xR26423", "comparison_id": "R26550", "paper_id": "R26423", "text": "Characterization of chitosan acetate as a binder for sustained release tablets A chitosan derivative as an acetate salt was successfully prepared by using a spray drying technique. Physicochemical characteristics and micromeritic properties of spray-dried chitosan acetate (SD-CSA) were studied as well as drug-polymer and excipient-polymer interaction. SD-CSA was spherical agglomerates with rough surface and less than 75 microm in diameter. The salt was an amorphous solid with slight to moderate hygroscopicity. The results of Fourier transform infrared (FTIR) and solid-state (13)C NMR spectroscopy demonstrated the functional groups of an acetate salt in its molecular structure. DSC and TGA thermograms of SD-CSA as well as FTIR and NMR spectrum of the salt, heated at 120 degrees C for 12 h, revealed the evidence of the conversion of chitosan acetate molecular structure to N-acetylglucosamine at higher temperature. No interaction of SD-CSA with either drugs (salicylic acid and theophylline) or selected pharmaceutical excipients were observed in the study using DSC method. 
As a wet granulation binder, SD-CSA gave theophylline granules with good flowability (according to the value of angle of repose, Carr's index, and Hausner ratio) and an excellent compressibility profile comparable to a pharmaceutical binder, PVP K30. In vitro release study of theophylline from the tablets containing 3% w/w SD-CSA as a binder demonstrated sustained drug release in all media. Cumulative drug released in 0.1 N HCl, pH 6.8 phosphate buffer and distilled water was nearly 100% within 6, 16 and 24 h, respectively. It was suggested that the simple incorporation of spray-dried chitosan acetate as a tablet binder could give rise to controlled drug delivery systems exhibiting sustained drug release." }, { "instance_id": "R26550xR26459", "comparison_id": "R26550", "paper_id": "R26459", "text": "Chitosan Nanoparticles for Non-Viral Gene Therapy Orthopedic Research Laboratory Hopital du Sacre-C\u0153ur de Montreal Universite de Montreal, Montreal, Que. H4J 1C5" }, { "instance_id": "R26550xR26544", "comparison_id": "R26550", "paper_id": "R26544", "text": "Application of chitosan, a natural aminopolysaccharide, for dye removal from aqueous solutions by adsorption processes using batch studies: A review of recent literature Application of chitinous products in wastewater treatment has received considerable attention in recent years in the literature. In particular, the development of chitosan-based materials as useful adsorbent polymeric matrices is an expanding field in the area of adsorption science. This review highlights some of the notable examples in the use of chitosan and its grafted and crosslinked derivatives for dye removal from aqueous solutions. It summarizes the key advances and results that have been obtained in their decolorizing application as biosorbents. The review provides a summary of recent information obtained using batch studies and deals with the various adsorption mechanisms involved. 
The effects of parameters such as the chitosan characteristics, the process variables, the chemistry of the dye and the solution conditions used in batch studies on the biosorption capacity and kinetics are presented and discussed. The review also summarizes and attempts to compare the equilibrium and kinetic models, and the thermodynamic studies reported for biosorption onto chitosan." }, { "instance_id": "R26550xR26514", "comparison_id": "R26550", "paper_id": "R26514", "text": "Application of Glutaraldehyde-Crosslinked Chitosan as a Scaffold for Hepatocyte Attachment. The effectiveness of chitosan, a biocompatible polymer derived by the deacetylation of chitin, as a scaffold of hepatocyte attachment, was examined. Since chitosan gel was too fragile to use for cell culture, its free amino groups were crosslinked by glutaraldehyde to increase its strength. Rat hepatocytes seeded onto glutaraldehyde-crosslinked chitosan (GA-chitosan) gel could stably attach to the surface, retaining its spherical form, the same as in vivo, and then release a very small amount of lactate dehydrogenase during the 5 d culture period. By contrast, hepatocytes on a collagen-coated surface spread flat, and they released much more lactate dehydrogenase than those on the GA-chitosan gel. Hepatocytes on GA-chitosan also retained higher urea synthesis activity, a liver-specific function, than those on the collagen-coated surface. These results indicate that chitosan is a promising biopolymer as a scaffold of hepatocyte attachment, which can be applied to an effective bioartificial liver support system." }, { "instance_id": "R26550xR26477", "comparison_id": "R26550", "paper_id": "R26477", "text": "Chitosan Induces Apoptosis via Caspase-3 Activation in Bladder Tumor Cells Recently, because of its low toxicity and biological effects, chitosan has been widely used in the medical and pharmaceutical fields, e.g., for nasal or oral delivery of peptide or polar drug delivery. 
Here, we report a growth\u2010inhibitory effect of chitosan on tumor cells. The growth inhibition was examined by WST\u20101 colorimetric assay and cell counting. We also observed DNA fragmentation, which is characteristic of apoptosis, and elevated caspase\u20103\u2010like activity in chitosan\u2010treated cancer cells. The findings suggest that chitosan may have potential value in cancer therapy." }, { "instance_id": "R26550xR26426", "comparison_id": "R26550", "paper_id": "R26426", "text": "Therapeutic efficacy of sustained drug release from chitosan gel on local inflammation The model anti-inflammatory drug prednisolone (PS) was retained in chitosan (CS) gel beads, which were prepared in a 10% aqueous amino acid solution (pH 9.0). Sustained release of PS from the CS gel beads was observed. Carrageenan solution was injected into air pouches (AP), which were prepared subcutaneously on the dorsal surface of mice, in order to induce local inflammation. CS gel beads retaining PS were then implanted into the AP to investigate the therapeutic efficacy of sustained PS release against local inflammation. In vivo PS release from CS gel beads was governed by both diffusion of the drug and degradation of the gel matrix. Sustained drug release by CS gel beads allowed the supply of the minimum effective dose and facilitated prolonged periods of local drug presence. Inflammation indexes were significantly reduced after implantation of CS gel beads when compared with injection of PS suspension. Thus, extension of the duration of drug activity by CS gel beads resulted in improved therapeutic efficacy. These observations indicate that CS gel beads are a promising biocompatible and biodegradable vehicle for treatment of local inflammation." 
}, { "instance_id": "R26550xR26474", "comparison_id": "R26550", "paper_id": "R26474", "text": "Antioxidant activity of water-soluble chitosan derivatives Water-soluble chitosan derivatives were prepared by graft copolymerization of maleic acid sodium onto hydroxypropyl chitosan and carboxymethyl chitosan sodium. Their scavenging activities against hydroxyl radical *OH were investigated by chemiluminescence technique. They exhibit IC(50) values ranging from 246 to 498 microg/mL, which should be attributed to their different contents of hydroxyl and amino groups and different substituting groups." }, { "instance_id": "R26550xR26447", "comparison_id": "R26550", "paper_id": "R26447", "text": "The use of chitosan formulations in cancer therapy With the advent of the theranostics era in biomedical research, gene therapy is poised to offer more, provided that more efficient delivery vehicles are discovered and developed. Chitosan is a biomatrix that is abundant, biocompatible, biodegradable, versatile, inexpensive and safe. These features have paved the way for its use in gene therapy, mainly for delivery of therapeutic plasmids and more recently for siRNA. Recent studies show that chitosan per se exhibits anticancer properties both in vitro and in animal models, most probably through the p21/Cip and p27/Kip pathways. This review looks at the in vivo studies using chitosan technology towards cancer gene therapy, drawing some support from non-cancer studies. The future of this promising technology lies in the evolution of new ideas for enhanced nucleic acid drug pharmacokinetics and, consequently, pharmacodynamics for cancer patients." }, { "instance_id": "R26608xR26590", "comparison_id": "R26608", "paper_id": "R26590", "text": "Distributed Energy-Efficient Hierarchical Clustering for Wireless Sensor Networks Since nodes in a sensor network have limited energy, prolonging the network lifetime and improving scalability become important. 
In this paper, we propose a distributed weight-based energy-efficient hierarchical clustering protocol (DWEHC). Each node first locates its neighbors (in its enclosure region), then calculates its weight which is based on its residual energy and distance to its neighbors. The largest weight node in a neighborhood may become a clusterhead. Neighboring nodes will then join the clusterhead hierarchy. The clustering process terminates in O(1) iterations, and does not depend on network topology or size. Simulations show that DWEHC clusters have good performance characteristics." }, { "instance_id": "R26608xR26606", "comparison_id": "R26608", "paper_id": "R26606", "text": "PEACH: Power-efficient and adaptive clustering hierarchy protocol for wireless sensor networks The main goal of this research is concerning clustering protocols to minimize the energy consumption of each node, and maximize the network lifetime of wireless sensor networks. However, most existing clustering protocols consume large amounts of energy, incurred by cluster formation overhead and fixed-level clustering, particularly when sensor nodes are densely deployed in wireless sensor networks. In this paper, we propose PEACH protocol, which is a power-efficient and adaptive clustering hierarchy protocol for wireless sensor networks. By using overhearing characteristics of wireless communication, PEACH forms clusters without additional overhead and supports adaptive multi-level clustering. In addition, PEACH can be used for both location-unaware and location-aware wireless sensor networks. The simulation results demonstrate that PEACH significantly minimizes energy consumption of each node and extends the network lifetime, compared with existing clustering protocols. The performance of PEACH is less affected by the distribution of sensor nodes than other clustering protocols." 
}, { "instance_id": "R26608xR26582", "comparison_id": "R26608", "paper_id": "R26582", "text": "Design and analysis of a fast local clustering service for wireless sensor networks We present a fast local clustering service, FLOC, that partitions a multi-hop wireless network into nonoverlapping and approximately equal-sited clusters. Each cluster has a clusterhead such that all nodes within unit distance of the clusterhead belong to the cluster but no node beyond distance m from the clusterhead belongs to the cluster. By asserting m /spl ges/ 2, FLOC achieves locality: effects of cluster formation and faults/changes at any part of the network are contained within most m units. By taking unit distance to be the reliable communication radius and m to be the maximum communication radius, FLOC exploits the double-band nature of wireless radio-model and achieves clustering in constant time regardless of the network size. Through simulations and experiments with actual deployments, we analyze the tradeoffs between clustering time and the quality of clustering, and suggest suitable parameters for FLOC to achieve a fast completion time without compromising the quality of the resulting clustering." }, { "instance_id": "R26608xR26558", "comparison_id": "R26608", "paper_id": "R26558", "text": "An application-specific protocol architecture for wireless microsensor networks Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. 
We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches." }, { "instance_id": "R26608xR26594", "comparison_id": "R26608", "paper_id": "R26594", "text": "Cluster-Head Election Using Fuzzy Logic for Wireless Sensor Networks Wireless sensor networks (WSNs) present a new generation of real-time embedded systems with limited computation, energy and memory resources that are being used in a wide variety of applications where traditional networking infrastructure is practically infeasible. Appropriate cluster-head node election can drastically reduce the energy consumption and enhance the lifetime of the network. In this paper, a fuzzy logic approach to cluster-head election is proposed based on three descriptors-energy, concentration and centrality. Simulation shows that depending upon network configuration, a substantial increase in network lifetime can be accomplished as compared to probabilistically selecting the nodes as cluster-heads using only local information." 
}, { "instance_id": "R26608xR26574", "comparison_id": "R26608", "paper_id": "R26574", "text": "GS3: scalable self-configuration and self-healing in wireless sensor networks We present GS3, a distributed algorithm for scalable self-configuration and self-healing in multi-hop wireless sensor networks. The algorithm enables network nodes in a 2D plane to configure themselves into a cellular hexagonal structure where cells have tightly bounded geographic radius and the overlap between neighboring cells is low. The structure is self-healing under various perturbations, such as node joins, leaves, deaths, movements, and state corruptions. For instance, the structure slides as a whole if nodes in many cells die at the same rate. Moreover, its configuration and healing are scalable in three respects: first, local knowledge enables each node to maintain only limited information with respect to a constant number of nearby nodes; second, local self-healing guarantees that all perturbations are contained within a tightly bounded region with respect to the perturbed area and dealt with in the time taken to diffuse a message across the region; third, only local coordination is needed in both configuration and self-healing." }, { "instance_id": "R26608xR26586", "comparison_id": "R26608", "paper_id": "R26586", "text": "Distributed clustering in ad-hoc sensor networks: a hybrid, energy-efficient approach Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. 
HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation." }, { "instance_id": "R26608xR26562", "comparison_id": "R26608", "paper_id": "R26562", "text": "TEEN: a routing protocol for enhanced efficiency in wireless sensor networks Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols." 
}, { "instance_id": "R26608xR26554", "comparison_id": "R26608", "paper_id": "R26554", "text": "Energy-efficient communication protocol for wireless microsensor networks Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster based station (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show the LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional outing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated." }, { "instance_id": "R26654xR26554", "comparison_id": "R26654", "paper_id": "R26554", "text": "Energy-efficient communication protocol for wireless microsensor networks Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. 
Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated." }, { "instance_id": "R26654xR26637", "comparison_id": "R26654", "paper_id": "R26637", "text": "A two-levels hierarchy for low-energy adaptive clustering hierarchy (TL-LEACH) Wireless sensor networks with thousands of tiny sensor nodes are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper we propose a modification to a well-known protocol for sensor networks called Low Energy Adaptive Clustering Hierarchy (LEACH). The latter is designed for sensor networks where the end-user wants to remotely monitor the environment. In such a situation, the data from the individual nodes must be sent to a central base station, often located far from the sensor network, through which the end-user can access the data. In this context our contribution is represented by building a two-level hierarchy to realize a protocol that better conserves energy. 
Our TL-LEACH uses random rotation of local cluster base stations (primary cluster-heads and secondary cluster-heads). In this way we build, where it is possible, a two-level hierarchy. This permits better distribution of the energy load among the sensors in the network, especially when the network density is higher. TL-LEACH uses localized coordination to enable scalability and robustness. We evaluated the performance of our protocol with NS-2 and observed that our protocol outperforms LEACH in terms of energy consumption and lifetime of the network." }, { "instance_id": "R26654xR26624", "comparison_id": "R26654", "paper_id": "R26624", "text": "APTEEN: a hybrid protocol for efficient routing and comprehensive information retrieval in wireless sensor networks Wireless sensor networks with thousands of tiny sensor nodes are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper, we propose a hybrid routing protocol (APTEEN) which allows for comprehensive information retrieval. The nodes in such a network not only react to time-critical situations, but also give an overall picture of the network at periodic intervals in a very energy efficient manner. Such a network enables the user to request past, present and future data from the network in the form of historical, one-time and persistent queries respectively. We evaluated the performance of these protocols and observed that they outperform existing protocols in terms of energy consumption and longevity of the network." }, { "instance_id": "R26654xR26634", "comparison_id": "R26654", "paper_id": "R26634", "text": "SEP: A Stable Election Protocol for clustered heterogeneous wireless sensor networks We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. 
In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources\u2014this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, and that the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they cannot take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneous-aware protocol to prolong the time interval before the death of the first node (which we refer to as the stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields a longer stability region for higher values of extra energy brought by more powerful nodes." }, { "instance_id": "R26654xR26562", "comparison_id": "R26654", "paper_id": "R26562", "text": "TEEN: a routing protocol for enhanced efficiency in wireless sensor networks Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. 
In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols." }, { "instance_id": "R26654xR26631", "comparison_id": "R26654", "paper_id": "R26631", "text": "Low energy adaptive clustering hierarchy with deterministic cluster-head selection This paper focuses on reducing the power consumption of wireless microsensor networks. Therefore, a communication protocol named LEACH (low-energy adaptive clustering hierarchy) is modified. We extend LEACH's stochastic cluster-head selection algorithm by a deterministic component. Depending on the network configuration an increase of network lifetime by about 30% can be accomplished. Furthermore, we present a new approach to define lifetime of microsensor networks using three new metrics FND (First Node Dies), HNA (Half of the Nodes Alive), and LND (Last Node Dies)." }, { "instance_id": "R26654xR26558", "comparison_id": "R26654", "paper_id": "R26558", "text": "An application-specific protocol architecture for wireless microsensor networks Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. 
We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches." }, { "instance_id": "R26654xR26640", "comparison_id": "R26654", "paper_id": "R26640", "text": "An energy-efficient unequal clustering mechanism for wireless sensor networks Unequal clustering mechanism, in combination with inter-cluster multihop routing, provides a new effective way to balance the energy dissipation among nodes and prolong the lifetime of wireless sensor networks. In this paper, a distributed energy-efficient unequal clustering mechanism (DEEUC) is proposed and evaluated. By a time based competitive clustering algorithm, DEEUC partitions all nodes into clusters of unequal size, in which the clusters closer to the base station have smaller size. The cluster heads of these clusters can preserve some more energy for the inter-cluster relay traffic and the \u201chot-spots\u201d problem can be avoided. For inter-cluster communication, DEEUC adopts an energy-aware multihop routing to reduce the energy consumption of the cluster heads. 
Simulation results demonstrate that the protocol can efficiently slow the rate at which nodes die and prolong the network lifetime." }, { "instance_id": "R26654xR26652", "comparison_id": "R26654", "paper_id": "R26652", "text": "Distance based thresholds for cluster head selection in wireless sensor networks Central to the cluster-based routing protocols is the cluster head (CH) selection procedure that allows even distribution of energy consumption among the sensors, thereby prolonging the lifespan of a sensor network. We propose a distributed CH selection algorithm that takes into account the distances from sensors to a base station and optimally balances the energy consumption among the sensors. NS-2 simulations show that our proposed scheme outperforms existing algorithms in terms of the average node lifespan and the time to first node death." }, { "instance_id": "R26729xR26694", "comparison_id": "R26729", "paper_id": "R26694", "text": "Energy efficient clustering algorithm based on neighbors for wireless sensor networks In this paper, an energy efficient clustering algorithm based on neighbors (EECABN) for wireless sensor networks is proposed. In the algorithm, an optimized weight of nodes is introduced to determine the priority of the clustering procedure. As an improvement, the weight measures not only energy and node degree but also the distance from neighbors, the distance to the sink node, and other factors. To prevent low-energy nodes from being exhausted, stronger nodes are given more opportunities to act as cluster heads during the clustering procedure. The simulation results show that the algorithm can effectively prolong the whole network lifetime. In particular, the stage at which some nodes in the network begin to die can be postponed by using the algorithm."
}, { "instance_id": "R26729xR26558", "comparison_id": "R26729", "paper_id": "R26558", "text": "An application-specific protocol architecture for wireless microsensor networks Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches." }, { "instance_id": "R26729xR26602", "comparison_id": "R26729", "paper_id": "R26602", "text": "WSN16-5: Distributed Formation of Overlapping Multi-hop Clusters in Wireless Sensor Networks Clustering is a standard approach for achieving efficient and scalable performance in wireless sensor networks. Most of the published clustering algorithms strive to generate the minimum number of disjoint clusters. However, we argue that guaranteeing some degree of overlap among clusters can facilitate many applications, like inter-cluster routing, topology discovery and node localization, recovery from cluster head failure, etc. 
We formulate the overlapping multi-hop clustering problem as an extension to the k-dominating set problem. Then we propose MOCA, a randomized distributed multi-hop clustering algorithm for organizing the sensors into overlapping clusters. We validate MOCA in a simulated environment and analyze the effect of different parameters, e.g. node density and network connectivity, on its performance. The simulation results demonstrate that MOCA is scalable, introduces low overhead and produces approximately equal-sized clusters." }, { "instance_id": "R26729xR26672", "comparison_id": "R26729", "paper_id": "R26672", "text": "A probabilistic clustering algorithm in wireless sensor networks A wireless sensor network consists of nodes that can communicate with each other via wireless links. One way to support efficient communication between sensors is to organize the network into several groups, called clusters, with each cluster electing one node as the head of cluster. The paper describes a constant time clustering algorithm that can be applied on wireless sensor networks. This approach is an extension to the Younis and Fahmy method (1). The simulation results show that the extension can generate a small number of cluster heads in relatively few rounds, especially in sparse networks." }, { "instance_id": "R26729xR26578", "comparison_id": "R26729", "paper_id": "R26578", "text": "ACE: An Emergent Algorithm for Highly Uniform Cluster Formation The efficient subdivision of a sensor network into uniform, mostly non-overlapping clusters of physically close nodes is an important building block in the design of efficient upper layer network functions such as routing, broadcast, data aggregation, and query processing."
}, { "instance_id": "R26729xR26721", "comparison_id": "R26729", "paper_id": "R26721", "text": "Energy Efficient and Balanced Cluster-Based Data Aggregation Algorithm for Wireless Sensor Networks Abstract Data aggregation is an effectual approach for wireless sensor networks (WSNs) to save energy and prolong network lifetime. Cluster-based data aggregation algorithms are most popular because they have the advantages of high flexibility and reliability. But the problem of unbalanced energy dissipation is the inherent disadvantage in clusterbased WSNs. This paper addresses this problem in cluster-based and homogeneous WSNs in which cluster heads transmit data to base station by one-hop communication, and proposes an energy efficient and balanced cluster-based data aggregation algorithm (EEBCDA). It divides the network into rectangular grids with unequal size and makes cluster heads rotate among the nodes in each grid respectively, the grid whose cluster head consumes more energy has more sensor nodes to take part in cluster head rotation and share energy load, by this way, it is able to balance energy dissipation. Besides, it adopts some measures to save energy. The results of simulation show that EEBCDA can remarkably enhance energy efficiency, balance energy dissipation and prolong network lifetime." }, { "instance_id": "R26729xR26691", "comparison_id": "R26729", "paper_id": "R26691", "text": "Distributed Clustering-Based Aggregation Algorithm for Spatial Correlated Sensor Networks In wireless sensor networks, it is already noted that nearby sensor nodes monitoring an environmental feature typically register similar values. This kind of data redundancy due to the spatial correlation between sensor observations inspires the research of in-network data aggregation. In this paper, an \u03b1 -local spatial clustering algorithm for sensor networks is proposed. 
By measuring the spatial correlation between data sampled by different sensors, the algorithm constructs a dominating set as the sensor network backbone used to realize the data aggregation based on the information description/summarization performance of the dominators. In order to evaluate the performance of the algorithm, a pattern recognition scenario over environmental data is presented. The evaluation shows that the resulting network achieved by our algorithm can provide environmental information with higher accuracy than other algorithms." }, { "instance_id": "R26729xR26682", "comparison_id": "R26729", "paper_id": "R26682", "text": "EEMC: An energy-efficient multi-level clustering algorithm for large-scale wireless sensor networks Wireless sensor networks can be used to collect surrounding data by multi-hop. As sensor nodes have limited, non-rechargeable energy resources, energy efficiency is an important design issue for network topology. In this paper, we propose a distributed algorithm, EEMC (energy-efficient multi-tier clustering), that generates multi-tier clusters for long-lived sensor networks. EEMC terminates in O(log log N) iterations given N nodes, and incurs low energy consumption and latency across the network. Simulation results demonstrate that our proposed algorithm is effective in prolonging the large-scale network lifetime and achieving greater power reductions." }, { "instance_id": "R26729xR26566", "comparison_id": "R26729", "paper_id": "R26566", "text": "A clustering scheme for hierarchical control in multi-hop wireless networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices, whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy.
All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters." }, { "instance_id": "R26729xR26724", "comparison_id": "R26729", "paper_id": "R26724", "text": "Load-balanced clustering algorithm with distributed self-organization for wireless sensor networks Wireless sensor networks (WSNs) are composed of a large number of inexpensive power-constrained wireless sensor nodes, which detect and monitor physical parameters around them through self-organization. Utilizing clustering algorithms to form a hierarchical network topology is a common method of implementing network management and data aggregation in WSNs. Assuming that the residual energy of nodes follows a random distribution, we propose a load-balanced clustering algorithm for WSNs on the basis of their distance and density distribution, making it essentially different from previous clustering algorithms. Simulated tests indicate that the new algorithm can build a more balanced clustering structure and enhance the network life cycle." }, { "instance_id": "R26729xR26664", "comparison_id": "R26729", "paper_id": "R26664", "text": "An energy efficient hierarchical clustering algorithm for wireless sensor networks A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors.
Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center." }, { "instance_id": "R26729xR26704", "comparison_id": "R26729", "paper_id": "R26704", "text": "A centralized energy-efficient routing protocol for wireless sensor networks Wireless sensor networks consist of small battery powered devices with limited energy resources. Once deployed, the small sensor nodes are usually inaccessible to the user, and thus replacement of the energy source is not feasible. Hence, energy efficiency is a key design issue that needs to be enhanced in order to improve the life span of the network. Several network layer protocols have been proposed to improve the effective lifetime of a network with a limited energy supply. In this article we propose a centralized routing protocol called base-station controlled dynamic clustering protocol (BCDCP), which distributes the energy dissipation evenly among all sensor nodes to improve network lifetime and average energy savings. The performance of BCDCP is then compared to clustering-based schemes such as low-energy adaptive clustering hierarchy (LEACH), LEACH-centralized (LEACH-C), and power-efficient gathering in sensor information systems (PEGASIS). Simulation results show that BCDCP reduces overall energy consumption and improves network lifetime over its comparatives." 
}, { "instance_id": "R26729xR26687", "comparison_id": "R26729", "paper_id": "R26687", "text": "TASC: topology adaptive spatial clustering for sensor networks The ability to extract topological regularity out of large randomly deployed sensor networks holds the promise to maximally leverage correlation for data aggregation and also to assist with sensor localization and hierarchy creation. This paper focuses on extracting such regular structures from physical topology through the development of a distributed clustering scheme. The topology adaptive spatial clustering (TASC) algorithm presented here is a distributed algorithm that partitions the network into a set of locally isotropic, non-overlapping clusters without prior knowledge of the number of clusters, cluster size and node coordinates. This is achieved by deriving a set of weights that encode distance measurements, connectivity and density information within the locality of each node. The derived weights form the terrain for holding a coordinated leader election in which each node selects the node closer to the center of mass of its neighborhood to become its leader. The clustering algorithm also employs a dynamic density reachability criterion that groups nodes according to their neighborhood's density properties. Our simulation results show that the proposed algorithm can trace locally isotropic structures in non-isotropic network and cluster the network with respect to local density attributes. We also found out that TASC exhibits consistent behavior in the presence of moderate measurement noise levels" }, { "instance_id": "R26729xR26727", "comparison_id": "R26729", "paper_id": "R26727", "text": "LCM: A Link-Aware Clustering Mechanism for Energy-Efficient Routing in Wireless Sensor Networks In wireless sensor networks, nodes in the area of interest must report sensing readings to the sink, and this report always satisfies the report frequency required by the sink. 
This paper proposes a link-aware clustering mechanism, called LCM, to determine an energy-efficient and reliable routing path. The LCM primarily considers node status and link condition, and uses a novel clustering metric called the predicted transmission count (PTX) to evaluate the qualification of nodes as clusterheads and gateways to construct clusters. Each clusterhead or gateway candidate depends on the PTX to derive its priority, and the candidate with the highest priority becomes the clusterhead or gateway. Simulation results validate that the proposed LCM significantly outperforms clustering mechanisms that use random selection or consider only link quality and residual energy, in terms of packet delivery ratio, energy consumption, and delivery latency." }, { "instance_id": "R26729xR26586", "comparison_id": "R26729", "paper_id": "R26586", "text": "Distributed clustering in ad-hoc sensor networks: a hybrid, energy-efficient approach Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network.
A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation." }, { "instance_id": "R26729xR26676", "comparison_id": "R26729", "paper_id": "R26676", "text": "Automatic Decentralized Clustering for Wireless Sensor Networks We propose a decentralized algorithm for organizing an ad hoc sensor network into clusters. Each sensor uses a random waiting timer and local criteria to determine whether to form a new cluster or to join a current cluster. The algorithm operates without a centralized controller, it operates asynchronously, and does not require that the location of the sensors be known a priori. Simplified models are used to estimate the number of clusters formed, and the energy requirements of the algorithm are investigated. The performance of the algorithm is described analytically and via simulation." }, { "instance_id": "R26729xR26708", "comparison_id": "R26729", "paper_id": "R26708", "text": "A dynamic clustering and energy efficient routing technique for sensor networks In the development of various large-scale sensor systems, a particularly challenging problem is how to dynamically organize the sensors into a wireless communication network and route sensed information from the field sensors to a remote base station. This paper presents a new energy-efficient dynamic clustering technique for large-scale sensor networks. By monitoring the received signal power from its neighboring nodes, each node estimates the number of active nodes in realtime and computes its optimal probability of becoming a cluster head, so that the amount of energy spent in both intra- and inter-cluster communications can be minimized. 
Based on the clustered architecture, this paper also proposes a simple multihop routing algorithm that is designed to be both energy-efficient and power-aware, so as to prolong the network lifetime. The new clustering and routing algorithms scale well and converge fast for large-scale dynamic sensor networks, as shown by our extensive simulation results." }, { "instance_id": "R26729xR26718", "comparison_id": "R26729", "paper_id": "R26718", "text": "Topology-controlled adaptive clustering for uniformity and increased lifetime in wireless sensor networks Owing to the dynamic nature of sensor network applications, the adoption of adaptive cluster-based topologies has many untapped desirable benefits for wireless sensor networks. In this study, the authors explore such a possibility and present an adaptive clustering algorithm to increase the network's lifetime while maintaining the required network connectivity. The proposed scheme features the capability of cluster heads to adjust their power level to achieve an optimal degree and maintain this value throughout the network operation. Under the proposed method, topology control allows an optimal degree, which results in better distributed sensors and a well-balanced clustering system, enhancing the network's lifetime. The simulation results show that the proposed clustering algorithm maintains the required degree for inter-cluster connectivity over many more rounds compared with hybrid energy-efficient distributed clustering (HEED), energy-efficient clustering scheme (EECS), low-energy adaptive clustering hierarchy (LEACH) and energy-based LEACH." }, { "instance_id": "R26775xR26766", "comparison_id": "R26775", "paper_id": "R26766", "text": "Multihop Routing Protocol with Unequal Clustering for Wireless Sensor Networks In order to prolong the lifetime of wireless sensor networks, this paper presents a multihop routing protocol with unequal clustering (MRPUC).
On the one hand, cluster heads deliver the data to the base station via relaying to reduce energy consumption. On the other hand, MRPUC uses many measures to balance the energy of nodes. First, it selects the nodes with more residual energy as cluster heads, and clusters closer to the base station have smaller sizes to preserve some energy during intra-cluster communication for inter-cluster packet forwarding. Second, when regular nodes join clusters, they consider not only the distance to cluster heads but also the residual energy of cluster heads. Third, cluster heads choose as relay nodes those nodes that have minimum energy consumption for forwarding and maximum residual energy, so as to avoid dying earlier. Simulation results show that MRPUC performs much better than similar protocols." }, { "instance_id": "R26775xR26739", "comparison_id": "R26775", "paper_id": "R26739", "text": "PRODUCE: A Probability-Driven Unequal Clustering Mechanism for Wireless Sensor Networks There has been a proliferation of research on distributing the energy consumption among nodes in each cluster and between cluster heads to extend the network lifetime. However, such work hardly considers the hot spots problem caused by heavy forwarded relay traffic. In this paper, we propose a distributed and randomized clustering algorithm that forms unequal-sized clusters. The cluster heads closer to the base station may focus more on inter-cluster communication while distant cluster heads concentrate more on intra-cluster communication. As a result, it nearly guarantees that no communication in the network spans an excessively long distance that would significantly attenuate signal strength. Simulation results show that our algorithm achieves substantial improvement in terms of coverage time and network lifetime, especially when the density of distributed nodes is high."
}, { "instance_id": "R26775xR26773", "comparison_id": "R26775", "paper_id": "R26773", "text": "An energy aware fuzzy unequal clustering algorithm for wireless sensor networks In order to gather information more efficiently, wireless sensor networks (WSNs) are partitioned into clusters. The most of the proposed clustering algorithms do not consider the location of the base station. This situation causes hot spots problem in multi-hop WSNs. Unequal clustering mechanisms, which are designed by considering the base station location, solve this problem. In this paper, we introduce a fuzzy unequal clustering algorithm (EAUCF) which aims to prolong the lifetime of WSNs. EAUCF adjusts the cluster-head radius considering the residual energy and the distance to the base station parameters of the sensor nodes. This helps decreasing the intra-cluster work of the sensor nodes which are closer to the base station or have lower battery level. We utilize fuzzy logic for handling the uncertainties in cluster-head radius estimation. We compare our algorithm with some popular algorithms in literature, namely LEACH, CHEF and EEUC, according to First Node Dies (FND), Half of the Nodes Alive (HNA) and energy-efficiency metrics. Our simulation results show that EAUCF performs better than the other algorithms in most of the cases. Therefore, EAUCF is a stable and energy-efficient clustering algorithm to be utilized in any real time WSN application." }, { "instance_id": "R26775xR26760", "comparison_id": "R26775", "paper_id": "R26760", "text": "An energy-balancing clustering approach for gradient-based routing in wireless sensor networks Clustering significantly reduces the energy consumption of each individual sensor in a wireless sensor network (WSN), and also increases the communication load on cluster heads. Because of the unbalanced energy consumption among cluster heads, the hot spots problem will arise when using the multihop forwarding model for the intercluster communication. 
Unequal clustering is an effective way to balance the energy consumption of cluster heads. In this paper, we present an Energy-Balancing unequal Clustering Approach for Gradient-based routing (EBCAG) in wireless sensor networks. It partitions the nodes into clusters of unequal size, and each sensor node maintains a gradient value, which is defined as its minimum hop count to the sink. The size of a cluster is decided by the gradient value of its cluster head, and the data gathered from the cluster members should follow the direction of descending gradient to reach the sink. Simulation results show that EBCAG balances the energy consumption among the cluster heads, and significantly improves the network lifetime." }, { "instance_id": "R26775xR26640", "comparison_id": "R26775", "paper_id": "R26640", "text": "An energy-efficient unequal clustering mechanism for wireless sensor networks Unequal clustering mechanism, in combination with inter-cluster multihop routing, provides a new effective way to balance the energy dissipation among nodes and prolong the lifetime of wireless sensor networks. In this paper, a distributed energy-efficient unequal clustering mechanism (DEEUC) is proposed and evaluated. By a time based competitive clustering algorithm, DEEUC partitions all nodes into clusters of unequal size, in which the clusters closer to the base station have smaller size. The cluster heads of these clusters can preserve some more energy for the inter-cluster relay traffic and the \u201chot-spots\u201d problem can be avoided. For inter-cluster communication, DEEUC adopts an energy-aware multihop routing to reduce the energy consumption of the cluster heads. 
Simulation results demonstrate that the protocol can efficiently slow the rate at which nodes die and prolong the network lifetime." }, { "instance_id": "R26775xR26754", "comparison_id": "R26775", "paper_id": "R26754", "text": "An Energy-Aware Distributed Unequal Clustering Protocol for Wireless Sensor Networks Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes network partitions and shortens the lifetime of the network. This phenomenon is called the \u201chot spot\u201d or \u201cenergy hole\u201d problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the time and control-message complexity is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly." }, { "instance_id": "R26775xR26598", "comparison_id": "R26775", "paper_id": "R26598", "text": "Prolonging the Lifetime of Wireless Sensor Networks via Unequal Clustering Organizing wireless sensor networks into clusters enables the efficient utilization of the limited energy resources of the deployed sensor nodes. However, the problem of unbalanced energy consumption exists, and it is tightly bound to the role and to the location of a particular node in the network. If the network is organized into heterogeneous clusters, where some more powerful nodes take on the cluster head role to control network operation, it is important to ensure that energy dissipation of these cluster head nodes is balanced. Oftentimes the network is organized into clusters of equal size, but such equal clustering results in an unequal load on the cluster head nodes.
Instead, we propose an unequal clustering size (UCS) model for network organization, which can lead to more uniform energy dissipation among the cluster head nodes, thus increasing network lifetime. Also, we expand this approach to homogeneous sensor networks and show that UCS can lead to more uniform energy dissipation in a homogeneous network as well." }, { "instance_id": "R26775xR26748", "comparison_id": "R26775", "paper_id": "R26748", "text": "An Energy-Efficient Clustering Solution for Wireless Sensor Networks Hot spots in a wireless sensor network emerge as locations under heavy traffic load. Nodes in such areas quickly deplete energy resources, leading to disruption in network services. This problem is common for data collection scenarios in which Cluster Heads (CH) have a heavy burden of gathering and relaying information. The relay load on CHs especially intensifies as the distance to the sink decreases. To balance the traffic load and the energy consumption in the network, the CH role should be rotated among all nodes and the cluster sizes should be carefully determined at different parts of the network. This paper proposes a distributed clustering algorithm, Energy-efficient Clustering (EC), that determines suitable cluster sizes depending on the hop distance to the data sink, while achieving approximate equalization of node lifetimes and reduced energy consumption levels. We additionally propose a simple energy-efficient multihop data collection protocol to evaluate the effectiveness of EC and calculate the end-to-end energy consumption of this protocol; yet EC is suitable for any data collection protocol that focuses on energy conservation. Performance results demonstrate that EC extends network lifetime and achieves energy equalization more effectively than two well-known clustering algorithms, HEED and UCR." 
}, { "instance_id": "R26850xR26839", "comparison_id": "R26850", "paper_id": "R26839", "text": "Valid inequalities for the fleet size and mix vehicle routing problem with fixed costs In the well-known Vehicle Routing Problem (VRP) a set of identical vehicles located at a central depot is to be optimally routed to supply customers with known demands subject to vehicle capacity constraints. An important variant of the VRP arises when a mixed fleet of vehicles, characterized by different capacities and costs, is available for distribution activities. The problem is known as Fleet Size and Mix VRP with Fixed Costs (FSMF) and has several practical applications. In this paper, we present a new mixed integer programming formulation for FSMF based on a two\u2013commodity network flow approach. New valid inequalities are proposed to strengthen the linear programming relaxation of the mathematical formulation. The effectiveness of the proposed cuts is extensively tested on benchmark instances." }, { "instance_id": "R26850xR26844", "comparison_id": "R26850", "paper_id": "R26844", "text": "A Robust Branch-Cut-and-Price Algorithm for the Heterogeneous Fleet Vehicle Routing Problem This paper presents a robust branch-cut-and-price algorithm for the Heterogeneous Fleet Vehicle Routing Problem (HFVRP), vehicles may have various capacities and fixed costs. The columns in the formulation are associated to q-routes, a relaxation of capacitated elementary routes that makes the pricing problem solvable in pseudopolynomial time. Powerful new families of cuts are also proposed, which are expressed over a very large set of variables. Those cuts do not increase the complexity of the pricing subproblem. Experiments are reported where instances up to 75 vertices were solved to optimality, a major improvement with respect to previous algorithms." 
}, { "instance_id": "R26850xR26782", "comparison_id": "R26850", "paper_id": "R26782", "text": "The fleet size and mix problem for capacitated arc routing Abstract There have been several attempts to solve the capacitated arc routing problem with m vehicles starting their tours from a central node. The objective has been to minimize the total distance travelled. In the problem treated here we also have the fixed costs of the vehicles included in the objective function. A set of vehicle capacities with their respective costs are used. Thus the objective function becomes a combination of fixed and variable costs. The solution procedure consists of four phases. In the first phase, a Chinese or rural postman problem is solved depending on whether all or some of the arcs in the network demand service with the objective of minimizing the total distance travelled. It results in a tour called the giant tour. In the second phase, the giant tour is partitioned into single vehicle subtours feasible with respect to the constraints. A new network is constructed with the node set corresponding to the arcs of the giant tour and with the arc set consisting of the subtours of the giant tour. The arc costs include both the fixed and variable costs of the subtours. The third phase consists of solving the shortest path problem on this new network to result in the least cost set of subtours represented on the new network. In the last phase a postprocessor is applied to the solution to improve it. The procedure is repeated for different giant tours to improve the final solution. The problem is extended to the case where there can be upper bounds on the number of vehicles with given capacities using a branch and bound method. Extension to directed networks is given. Some computational results are reported." 
}, { "instance_id": "R26850xR26830", "comparison_id": "R26850", "paper_id": "R26830", "text": "A heuristic for vehicle fleet mix problem using tabu search and set partitioning The vehicle fleet mix problem is a special case of the vehicle routing problem where customers are served by a heterogeneous fleet of vehicles with various capacities. An efficient heuristic for determining the composition of a vehicle fleet and travelling routes was developed using tabu search and by solving set partitioning problems. Two kinds of problems have appeared in the literature, concerning fixed cost and variable cost, and these were tested for evaluation. Initial solutions were found using the modified sweeping method. Whenever a new solution in an iteration of the tabu search was obtained, optimal vehicle allocation was performed for the set of routes, which are constructed from the current solution by making a giant tour. Experiments were performed for the benchmark problems that appeared in the literature and new best-known solutions were found." }, { "instance_id": "R26850xR26787", "comparison_id": "R26850", "paper_id": "R26787", "text": "Vehicle routing considerations in distribution system design Abstract In Distribution System Design, one minimizes total costs related to the number, locations and sizes of warehouses, and the assignment of warehouses to customers. The resulting system, while optimal in a strategic sense, may not be the best choice if operational aspects such as vehicle routing are also considered. We formulate a multicommodity, capacitated distribution planning model as a non-linear, mixed integer program. Distribution from factories to customers is two-staged via depots (warehouses) whose number and location must be chosen. Vehicle routes from depots to customers are established by considering the \u201cfleet size and mix\u201d problem, which also incorporates strategic decisions on fleet makeup and vehicle numbers of each type. 
This problem is solved as a generalized assignment problem, within an algorithm for the overall distribution/routing problem that is based on Benders decomposition. We furnish two versions of our algorithm, denoted Techniques I and II. The latter is an enhancement of the former and is employed at the user's discretion. Computer solution of test problems is discussed." }, { "instance_id": "R26850xR26814", "comparison_id": "R26850", "paper_id": "R26814", "text": "A sweep-based algorithm for the fleet size and mix vehicle routing problem Abstract This paper presents a new sweep-based heuristic for the fleet size and mix vehicle routing problem. This problem involves two kinds of decisions: the selection of a mix of vehicles among the available vehicle types and the routing of the selected fleet. The proposed algorithm first generates a large number of routes that are serviced by one or two vehicles. The selection of routes and vehicles to be used is then made by solving to optimality, in polynomial time, a set-partitioning problem having a special structure. Results on a set of benchmark test problems show that the proposed heuristic produces excellent solutions in short computing times. Having a fast but good solution method is needed for transportation companies that rent a significant part of their fleet and consequently can take advantage of frequent changes in fleet composition. Finally, the proposed heuristic produced new best-known solutions for three of the test problems; these solutions are reported." }, { "instance_id": "R26850xR26803", "comparison_id": "R26850", "paper_id": "R26803", "text": "A parallel evolutionary algorithm for the vehicle routing problem with heterogeneous fleet Nowadays genetic algorithms stand as a trend to solve NP-complete and NP-hard problems. 
In this paper, we present a new hybrid metaheuristic which uses Parallel Genetic Algorithms and Scatter Search coupled with a decomposition-into-petals procedure for solving a class of Vehicle Routing and Scheduling Problems. The parallel genetic algorithm presented is based on the island model and was run on a cluster of workstations. Its performance is evaluated for a heterogeneous fleet problem, which is considered a problem much harder to solve than the homogeneous vehicle routing problem." }, { "instance_id": "R26850xR26833", "comparison_id": "R26850", "paper_id": "R26833", "text": "Formulations and Valid Inequalities for the Heterogeneous Vehicle Routing Problem We consider the vehicle routing problem where one can choose among vehicles with different costs and capacities to serve the trips. We develop six different formulations: the first four based on Miller-Tucker-Zemlin constraints and the last two based on flows. We compare the linear programming bounds of these formulations. We derive valid inequalities and lift some of the constraints to improve the lower bounds. We generalize and strengthen subtour elimination and generalized large multistar inequalities." }, { "instance_id": "R26850xR26801", "comparison_id": "R26850", "paper_id": "R26801", "text": "An evolutionary hybrid metaheuristic for solving the vehicle routing problem with heterogeneous fleet Nowadays genetic algorithms stand as a trend to solve NP-complete and NP-hard problems. In this paper, we present a new hybrid metaheuristic which combines Genetic Algorithms and Scatter Search coupled with a decomposition-into-petals procedure for solving a class of Vehicle Routing and Scheduling Problems. Its performance is evaluated for a heterogeneous fleet model, which is considered a problem much harder to solve than the homogeneous vehicle routing problem." 
}, { "instance_id": "R26850xR26823", "comparison_id": "R26850", "paper_id": "R26823", "text": "A memetic algorithm for the heterogeneous fleet vehicle routing problem Abstract The Heterogeneous Fleet Vehicle Routing Problem is a variant of the classical Vehicle Routing Problem in which customers are served by a heterogeneous fleet of vehicles with various capacities, fixed and variable costs. Due to its complexity, no exact algorithm is known for this problem. This paper describes a genetic algorithm hybridized with two known heuristic methods for this problem. Hybrid genetic algorithms are often referred to as memetic algorithms. Computational experiments presenting the performance of the proposed algorithm concerning quality of solution and processing time are reported. On a set of benchmark instances, it produces high-quality solutions, including several new best-known solutions." }, { "instance_id": "R26850xR26808", "comparison_id": "R26850", "paper_id": "R26808", "text": "A heuristic column generation method for the heterogeneous fleet VRP This paper presents a heuristic column generation method for solving vehicle routing problems with a heterogeneous fleet of vehicles. The method may also solve the fleet size and composition vehicle routing problem and new best known solutions are reported for a set of classical problems. Numerical results show that the method is robust and efficient, particularly for medium and large size problem instances." }, { "instance_id": "R26850xR26828", "comparison_id": "R26850", "paper_id": "R26828", "text": "An Integrated Model and Solution Approach for Fleet Sizing with Heterogeneous Assets This paper addresses a fleet-sizing problem in the context of the truck-rental industry. Specifically, trucks that vary in capacity and age are utilized over space and time to meet customer demand. 
Operational decisions (including demand allocation and empty truck repositioning) and tactical decisions (including asset procurements and sales) are explicitly examined in a linear programming model to determine the optimal fleet size and mix. The method uses a time-space network, common to fleet-management problems, but also includes capital cost decisions, wherein assets of different ages carry different costs, as is common to replacement analysis problems. A two-phase solution approach is developed to solve large-scale instances of the problem. Phase I allocates customer demand among assets through Benders decomposition with a demand-shifting algorithm assuring feasibility in each subproblem. Phase II uses the initial bounds and dual variables from Phase I and further improves the solution convergence without increasing computer memory requirements through the use of Lagrangian relaxation. Computational studies are presented to show the effectiveness of the approach for solving large problems within reasonable solution gaps." }, { "instance_id": "R26850xR26789", "comparison_id": "R26850", "paper_id": "R26789", "text": "A new heuristic for the fleet size and mix vehicle routing problem Abstract In this paper we address the problem of simultaneously selecting the composition and routing of a fleet of vehicles in order to service efficiently customers with known demands from a central depot. This problem is called the fleet size and mix vehicle routing problem (FSMVRP). The vehicle fleet may be heterogeneous. The objective is to find the fleet composition and a set of routes with minimum total cost, which includes routing cost and vehicle cost. We present a new savings heuristic based on successive route fusion. At each iteration, the best fusion is selected by solving a weighted matching problem. This provides a less myopic criterion than the usual savings heuristics. This algorithm is also very easy to implement. 
Computational results are provided for a number of benchmark problems in order to compare its performance to that of various other methods." }, { "instance_id": "R26850xR26819", "comparison_id": "R26850", "paper_id": "R26819", "text": "The Heterogeneous Vehicle-Routing Game In this paper, we study a cost-allocation problem that arises in a distribution-planning situation at the Logistics Department at Norsk Hydro Olje AB, Stockholm, Sweden. We consider the routes from one depot during one day. The total distribution cost for these routes is to be divided among the customers that are visited. This cost-allocation problem is formulated as a vehicle-routing game (VRG), allowing the use of vehicles with different capacities. Cost-allocation methods based on different concepts from cooperative game theory, such as the core and the nucleolus, are discussed. A procedure that can be used to investigate whether the core is empty or not is presented, as well as a procedure to compute the nucleolus. Computational results for the Norsk Hydro case are presented and discussed." }, { "instance_id": "R26850xR26811", "comparison_id": "R26850", "paper_id": "R26811", "text": "A Gids Metaheuristic Approach to the Fleet Size and Mix Vehicle Routing Problem This paper presents a new metaheuristic approach, i.e., the GIDS (Generic Intensification and Diversification Search), as well as its performance for solving the FSMVRP (Fleet Size and Mix Vehicle Routing Problem). The GIDS integrates the use of some recently developed generic search methods such as TA (Threshold Accepting) and GDA (Great Deluge Algorithm), and the meta-strategies of intensification and diversification for intelligent search. The GIDS includes three components: (1) MIC, multiple initialization constructor; (2) GSI, generic search for intensification; (3) PSD, perturbation search for diversification. A bank of twenty FSMVRP benchmark instances was tested by several different implementations of GIDS. 
All programs were coded in UNIX C and implemented on a SPARC 10 SUN workstation. Results are very encouraging. We have updated the best-known solutions for two of the twenty benchmark instances; the average deviation from the twenty best solutions is merely 0.598%." }, { "instance_id": "R26850xR26794", "comparison_id": "R26850", "paper_id": "R26794", "text": "Incorporating vehicle routing into the vehicle fleet composition problem An efficient heuristic for determining the composition of a vehicle fleet is developed which considers delivery routes whilst evaluating fleet mixes. The aim is to include a perturbation procedure within existing or constructed routes to reduce the total cost of routing and acquisition by improving the utilisation of the vehicles. This approach has been tested on twenty problems given in the literature and new best results are reported here. An extensive literature review is also given." }, { "instance_id": "R26850xR26821", "comparison_id": "R26850", "paper_id": "R26821", "text": "Ferry service network design: optimal fleet size, routing, and scheduling The study formulated a ferry network design problem by considering the optimal fleet size, routing, and scheduling for both direct and multi-stop services. The objective function combines both the operator and passengers' performance measures. Mathematically, the model is formulated as a mixed integer multiple origin-destination network flow problem with ferry capacity constraints. To solve this problem of practical size, this study developed a heuristic algorithm that exploits the polynomial-time performance of shortest path algorithms. Two scenarios of ferry services in Hong Kong were solved to demonstrate the performance of the heuristic algorithm. The results showed that the heuristic produced solutions that were within 1.3% from the CPLEX optimal solutions. The computational time is within tens of seconds even for problem size that is beyond the capability of CPLEX." 
}, { "instance_id": "R26850xR26836", "comparison_id": "R26850", "paper_id": "R26836", "text": "Routing a Heterogeneous Fleet of Vehicles In the well-known Vehicle Routing Problem (VRP) a set of identical vehicles, based at a central depot, is to be optimally routed to supply customers with known demands subject to vehicle capacity constraints." }, { "instance_id": "R26850xR26797", "comparison_id": "R26850", "paper_id": "R26797", "text": "The mixed fleet stochastic vehicle routing problem The mixed fleet stochastic vehicle routing problem is considered in the paper. It is assumed that operations are performed by vehicles of different types. The model developed is based on the \u201croute first \u2014 cluster second\u201d approach. A heuristic algorithm based on space-filling curves is used to produce a giant Travelling Salesman tour. The giant tour is divided into smaller parts using the generalized Floyd algorithm. The final set of routes may be chosen after making a suitable multi-attribute decision making analysis." }, { "instance_id": "R26881xR26863", "comparison_id": "R26881", "paper_id": "R26863", "text": "A heuristic algorithm for the truckload and less-than-truckload problem Abstract The delivery of goods from a warehouse to local customers is an important and practical problem of a logistics manager. In reality, we are facing the fluctuation of demand. When the total demand is greater than the whole capacity of owned trucks, the logistics managers may consider using an outsider carrier. Logistics managers can make a selection between a truckload (a private truck) and a less-than-truckload carrier (an outsider carrier). Selecting the right mode to transport a shipment may bring significant cost savings to the company. In this paper, we address the problem of routing a fixed number of trucks with limited capacity from a central warehouse to customers with known demand. 
The objective of this paper is to develop a heuristic algorithm to route the private trucks and to make a selection of less-than-truckload carriers by minimizing a total cost function. Both the mathematical model and the heuristic algorithm are developed. Finally, some computational results and suggestions for future research are presented." }, { "instance_id": "R26881xR26867", "comparison_id": "R26881", "paper_id": "R26867", "text": "A new intuitional algorithm for solving heterogeneous fixed fleet routing problems: Passenger pickup algorithm Fixed-fleet heterogeneous vehicle routing is a type of vehicle routing problem that aims to provide service to a specific customer group with minimum cost, with a limited number of vehicles with different capacities. In this study, a new intuitional algorithm, which can divide the demands at the stops for fixed heterogeneous vehicle routing, is developed and tested on test samples. The algorithm is compared to the BATA Algorithm available in the literature in relation to the number of vehicles, fixed cost, variable cost and total cost." }, { "instance_id": "R26881xR26857", "comparison_id": "R26881", "paper_id": "R26857", "text": "Efficient heuristics for the heterogeneous fleet multitrip VRP with application to a large-scale real case The basic Vehicle Routing Problem (VRP) consists of computing a set of trips of minimum total cost, to deliver fixed amounts of goods to customers with a fleet of identical vehicles. Few papers address the case with several types of vehicles (heterogeneous fleet). Most of them assume an unlimited number of vehicles of each type, to dimension the fleet from a strategic point of view. This paper tackles the more realistic tactical or operational case, with a fixed number of vehicles of each type, and the optional possibility for each vehicle to perform several trips. 
It describes several heuristics, including a very efficient one that progressively merges small starting trips, while ensuring that they can be performed by the fleet. This heuristic seeks to minimize the number of required vehicles as a secondary objective. It outperforms classical VRP heuristics, can easily handle various constraints, and gives very good initial solutions for a tabu search method. The real case of a French manufacturer of furniture with 775 destination stores is presented." }, { "instance_id": "R26881xR26859", "comparison_id": "R26881", "paper_id": "R26859", "text": "A list based threshold accepting metaheuristic for the heterogeneous fixed fleet vehicle routing problem In real life situations most companies that deliver or collect goods own a heterogeneous fleet of vehicles. Their goal is to find a set of vehicle routes, each starting and ending at a depot, making the best possible use of the given vehicle fleet such that total cost is minimized. The specific problem can be formulated as the Heterogeneous Fixed Fleet Vehicle Routing Problem (HFFVRP), which is a variant of the classical Vehicle Routing Problem. This paper describes a variant of the threshold accepting heuristic for the HFFVRP. The proposed metaheuristic has a remarkably simple structure, it is lean and parsimonious and it produces high quality solutions over a set of published benchmark instances. Improvement over several of previous best solutions also demonstrates the capabilities of the method and is encouraging for further research." }, { "instance_id": "R26881xR26873", "comparison_id": "R26881", "paper_id": "R26873", "text": "A record-to-record travel algorithm for solving the heterogeneous fleet vehicle routing problem In the heterogeneous fleet vehicle routing problem (HVRP), several different types of vehicles can be used to service the customers. The types of vehicles differ with respect to capacity, fixed cost, and variable cost. 
We assume that the number of vehicles of each type is fixed and equal to a constant. We must decide how to make the best use of the fixed fleet of heterogeneous vehicles. In this paper, we review methods for solving the HVRP, develop a variant of our record-to-record travel algorithm for the standard vehicle routing problem that takes a heterogeneous fleet into account, and report computational results on eight benchmark problems. Finally, we generate a new set of five test problems that have 200-360 customers and solve each new problem using our record-to-record travel algorithm." }, { "instance_id": "R26881xR26879", "comparison_id": "R26881", "paper_id": "R26879", "text": "A New Capacitated Vehicle Routing Problem with Split Service for Minimizing Fleet Cost by Simulated Annealing We address a capacitated vehicle routing problem (CVRP) in which the demand of a node can be split among several vehicles, called split services, assuming a heterogeneous fixed fleet. The objective is to minimize the fleet cost and total distance traveled. The fleet cost is dependent on the number of vehicles used and the total unused capacity. In most practical cases, especially in urban transportation, it often occurs that several vehicles transit the same demand point. Thus, the split services can aid to minimize the number of used vehicles by maximizing the capacity utilization. This paper presents a mixed-integer linear model of a CVRP with split services and heterogeneous fleet. This model is then solved by using a simulated annealing (SA) method. Our analysis suggests that the proposed model enables users to establish routes to serve all given customers using the minimum number of vehicles and maximum capacity. Our proposed method can also find very good solutions in a reasonable amount of time. To illustrate these solutions further, a number of test problems in small and large sizes are solved and computational results are reported in the paper." 
}, { "instance_id": "R26881xR26855", "comparison_id": "R26881", "paper_id": "R26855", "text": "A meta-heuristic algorithm for the efficient distribution of perishable foods Abstract A fast and robust algorithm for solving the fresh milk distribution problem for one of the biggest dairy companies in Greece was developed. This particular problem was formulated as a Heterogeneous Fixed Fleet Vehicle Routing Problem (HFFVRP) for which, due to its high computational complexity, no exact algorithm has ever been used to solve it. In this study, a threshold-accepting based algorithm was developed aiming to satisfy the needs of the company that plans to use this methodology repeatedly to schedule their distribution many times a week. For this purpose, the proposed formulation was implemented in an efficient and reliable computer code. The algorithm manages to provide practical solutions and the early findings indicate considerable improvements in the operational performance of the company." }, { "instance_id": "R26881xR26865", "comparison_id": "R26881", "paper_id": "R26865", "text": "Synchronized routing of seasonal products through a production/distribution network This paper presents a multi-period vehicle routing problem for a large-scale production and distribution network. The vehicles must be routed in such a way as to minimize travel and inventory costs over a multi-period horizon, while also taking retailer demands and the availability of products at a central production facility into account. The network is composed of one distribution center and hundreds of retailers. Each retailer has its demand schedule representing the total number of units of a given product that should have been received on a given day. Many high value products are distributed. Product availability is determined by the production facility, whose production schedule determines how many units of each product must be available on a given day. 
To distribute these products, the routes of a heterogeneous fleet must be determined for a multiple period horizon. The objective of our research is to minimize the cost of distributing products to the retailers and the cost of maintaining inventory at the facility. In addition to considering product availability, the routing schedule must respect many constraints, such as capacity restrictions on the routes and the possibility of multiple vehicle trips over the time horizon. In the situation studied, no more than 20 product units could be carried by a single vehicle, which generally limited the number of retailers that could be supplied to one or two per route. This article proposes a mathematical formulation, as well as some heuristics, for solving this single-retailer-route vehicle routing problem. Extensions are then proposed to deal with the multiple-retailer-route situation." }, { "instance_id": "R26881xR26836", "comparison_id": "R26881", "paper_id": "R26836", "text": "Routing a Heterogeneous Fleet of Vehicles In the well-known Vehicle Routing Problem (VRP) a set of identical vehicles, based at a central depot, is to be optimally routed to supply customers with known demands subject to vehicle capacity constraints." }, { "instance_id": "R26918xR26886", "comparison_id": "R26918", "paper_id": "R26886", "text": "The Vehicle Scheduling Problem with Multiple Vehicle Types The vehicle scheduling problem is specified in terms of a set of tasks to be executed with a fleet of multiple vehicle types. The purpose of this paper is to formulate the problem and to show that the heuristic and exact methods developed for the vehicle scheduling problem with time windows and with a single type of vehicle can be extended in a straightforward fashion to the multiple-vehicle-types problem." 
}, { "instance_id": "R26918xR26916", "comparison_id": "R26918", "paper_id": "R26916", "text": "A reactive variable neighborhood tabu search for the heterogeneous fleet vehicle routing problem with time windows Abstract This paper presents a solution methodology for the heterogeneous fleet vehicle routing problem with time windows. The objective is to minimize the total distribution costs, or similarly to determine the optimal fleet size and mix that minimizes both the total distance travelled by vehicles and the fixed vehicle costs, such that all problem\u2019s constraints are satisfied. The problem is solved using a two-phase solution framework based upon a hybridized Tabu Search, within a new Reactive Variable Neighborhood Search metaheuristic algorithm. Computational experiments on benchmark data sets yield high quality solutions, illustrating the effectiveness of the approach and its applicability to realistic routing problems." }, { "instance_id": "R26918xR26893", "comparison_id": "R26918", "paper_id": "R26893", "text": "Minimum Vehicle Fleet Size Under Time-Window Constraints at a Container Terminal Products can be transported in containers from one port to another. At a container terminal these containers are transshipped from one mode of transportation to another. Cranes remove containers from a ship and put them at a certain time (i.e., release time) into a buffer area with limited capacity. A vehicle lifts a container from the buffer area before the buffer area is full (i.e., in due time) and transports the container from the buffer area to the storage area. At the storage area the container is placed in another buffer area. The advantage of using these buffer areas is the resultant decoupling of the unloading and transportation processes. 
We study the case in which each container has a time window [release time, due time] in which the transportation should start. The objective is to minimize the vehicle fleet size such that the transportation of each container starts within its time window. No literature has been found studying this relevant problem. We have developed an integer linear programming model to solve the problem of determining vehicle requirements under time-window constraints. We use simulation to validate the estimates of the vehicle fleet size by the analytical model. We test the ability of the model under various conditions. From these numerical experiments we conclude that the results of the analytical model are close to the results of the simulation model. Furthermore, we conclude that the analytical model performs well in the context of a container terminal." }, { "instance_id": "R26918xR26890", "comparison_id": "R26918", "paper_id": "R26890", "text": "New heuristics for the Fleet Size and Mix Vehicle Routing Problem with Time Windows In the Fleet Size and Mix Vehicle Routing Problem with Time Windows (FSMVRPTW) customers need to be serviced in their time windows at minimal costs by a heterogeneous fleet. In this paper new heuristics for the FSMVRPTW are developed. The performance of the heuristics is shown to be significantly higher than that of any previous heuristic approach and therefore likely to achieve better solutions to practical routing problems." }, { "instance_id": "R26918xR26906", "comparison_id": "R26918", "paper_id": "R26906", "text": "Heuristic Approaches for the Fleet Size and Mix Vehicle Routing Problem with Time Windows The fleet size and mix vehicle routing problem with time windows (FSMVRPTW) is the problem of determining, at the same time, the composition and the routing of a fleet of heterogeneous vehicles aimed to serve a given set of customers.
The routing problem requires us to design a set of minimum-cost routes originating and terminating at a central depot and serving customers with known demands, within given time windows. This paper develops a constructive insertion heuristic and a metaheuristic algorithm for FSMVRPTW. Extensive computational experiments on benchmark instances show that the proposed method is robust and efficient, and outperforms the previously published results." }, { "instance_id": "R26918xR26903", "comparison_id": "R26918", "paper_id": "R26903", "text": "A goal programming approach to vehicle routing problems with soft time windows The classical vehicle routing problem involves designing a set of routes for a fleet of vehicles based at one central depot that is required to serve a number of geographically dispersed customers, while minimizing the total travel distance or the total distribution cost. Each route originates and terminates at the central depot and customers demands are known. In many practical distribution problems, besides a hard time window associated with each customer, defining a time interval in which the customer should be served, managers establish multiple objectives to be considered, like avoiding underutilization of labor and vehicle capacity, while meeting the preferences of customers regarding the time of the day in which they would like to be served (soft time windows). This work investigates the use of goal programming to model these problems. To solve the model, an enumeration-followed-by-optimization approach is proposed which first computes feasible routes and then selects the set of best ones. Computational results show that this approach is adequate for medium-sized delivery problems." 
}, { "instance_id": "R26918xR26888", "comparison_id": "R26918", "paper_id": "R26888", "text": "The Fleet Size and Mix Vehicle Routing Problem with Time Windows This paper describes several insertion-based savings heuristics for the fleet size and mix vehicle routing problem with time window constraints. A certain number of candidate fleet compositions are recorded in the construction phase, followed by applying a composite improvement scheme on them to enhance the solution quality. Computational results on 168 sample problems are reported. We found that heuristics with the consideration of a sequential route construction parameter yielded very good results. In addition, results on the 20 benchmarking problems for the fleet and mix vehicle routing problem with no time window constraints also demonstrate the effectiveness of our heuristics." }, { "instance_id": "R26918xR26910", "comparison_id": "R26918", "paper_id": "R26910", "text": "An Effective Multirestart Deterministic Annealing Metaheuristic for the Fleet Size and Mix Vehicle-Routing Problem with Time Windows This paper presents a new deterministic annealing metaheuristic for the fleet size and mix vehicle-routing problem with time windows. The objective is to service, at minimal total cost, a set of customers within their time windows by a heterogeneous capacitated vehicle fleet. First, we motivate and define the problem. We then give a mathematical formulation of the most studied variant in the literature in the form of a mixed-integer linear program. We also suggest an industrially relevant, alternative definition that leads to a linear mixed-integer formulation. The suggested metaheuristic solution method solves both problem variants and comprises three phases. In Phase 1, high-quality initial solutions are generated by means of a savings-based heuristic that combines diversification strategies with learning mechanisms. 
In Phase 2, an attempt is made to reduce the number of routes in the initial solution with a new local search procedure. In Phase 3, the solution from Phase 2 is further improved by a set of four local search operators that are embedded in a deterministic annealing framework to guide the improvement process. Some new implementation strategies are also suggested for efficient time window feasibility checks. Extensive computational experiments on the 168 benchmark instances have shown that the suggested method outperforms the previously published results and found 167 best-known solutions. Experimental results are also given for the new problem variant." }, { "instance_id": "R26982xR26966", "comparison_id": "R26982", "paper_id": "R26966", "text": "Fleet Size and Mix Optimization for Paratransit Services Most paratransit agencies use a mix of different types of vehicles ranging from small sedans to large converted vans as a cost-effective way to meet the diverse travel needs and seating requirements of their clients. Currently, decisions on what types of vehicles and how many vehicles to use are mostly made by service managers on an ad hoc basis without much systematic analysis and optimization. The objective of this research is to address the underlying fleet size and mix problem and to develop a practical procedure that can be used to determine the optimal fleet mix for a given application. A real-life example illustrates the relationship between the performance of a paratransit service system and the size of its service vehicles. A heuristic procedure identifies the optimal fleet mix that maximizes the operating efficiency of a service system. A set of recommendations is offered for future research; the most important is the need to incorporate a life-cycle cost framework into the paratransit service planning process." 
}, { "instance_id": "R26982xR26977", "comparison_id": "R26982", "paper_id": "R26977", "text": "A survey of models and algorithms for winter road maintenance. Part IV: Vehicle routing and fleet sizing for plowing and snow disposal This is the last part of a four-part survey of optimization models and solution algorithms for winter road maintenance planning. The two first parts of the survey address system design problems for winter road maintenance. The third part concentrates mainly on vehicle routing problems for spreading operations. The aim of this paper is to provide a comprehensive survey of optimization models and solution methodologies for the routing of vehicles for plowing and snow disposal operations. We also review models for the fleet sizing and fleet replacement problems." }, { "instance_id": "R26982xR26973", "comparison_id": "R26982", "paper_id": "R26973", "text": "Solving a vehicle-routing problem arising in soft-drink distribution The problem studied in this article arises from the distribution of soft drinks and collection of recyclable containers in a Quebec-based company. It can be modelled as a variant of the vehicle routing problem with a heterogeneous vehicle fleet, time windows, capacity and volume constraints, and an objective function combining routing costs and the revenue resulting from the sale of recyclable material. Three construction heuristics and an improvement procedure are developed for the problem. Comparative tests are performed on a real-life instance and on 10 randomly generated instances." }, { "instance_id": "R26982xR26950", "comparison_id": "R26982", "paper_id": "R26950", "text": "A tabu search algorithm for the multi-trip vehicle routing and scheduling problem Abstract This paper describes a novel tabu search heuristic for the multi-trip vehicle routing and scheduling problem (MTVRSP). The method was developed to tackle real distribution problems, taking into account most of the constraints that appear in practice. 
In the MTVRSP, besides the constraints that are common to the basic vehicle routing problem, the following ones are present: during each day a vehicle can make more than one trip; the customers impose delivery time windows; the vehicles have different capacities considered in terms of both volume and weight; the access to some customers is restricted to some vehicles; the drivers' schedules must respect the maximum legal driving time per day and the legal time breaks; the unloading times are considered." }, { "instance_id": "R26982xR26980", "comparison_id": "R26982", "paper_id": "R26980", "text": "Coca-Cola Enterprises Optimizes Vehicle Routes for Efficient Product Delivery In 2004 and 2005, Coca-Cola Enterprises (CCE)---the world's largest bottler and distributor of Coca-Cola products---implemented ORTEC's vehicle-routing software. Today, over 300 CCE dispatchers use this software daily to plan the routes of approximately 10,000 trucks. In addition to handling nonstandard constraints, the implementation is notable for its progressive transition from the prior business practice. CCE has realized an annual cost saving of $45 million and major improvements in customer service. This approach has been so successful that Coca-Cola has extended it beyond CCE to other Coca-Cola bottling companies and beer distributors." }, { "instance_id": "R26982xR26970", "comparison_id": "R26982", "paper_id": "R26970", "text": "A model for the fleet sizing of demand responsive transportation services with time windows We study the problem of determining the number of vehicles needed to provide a demand responsive transit service with a predetermined quality for the user in terms of waiting time at the stops and maximum allowed detour. We propose a probabilistic model that requires only the knowledge of the distribution of the demand over the service area and the quality of the service in terms of time windows associated of pickup and delivery nodes. 
This methodology can be much more effective and straightforward compared to a simulation approach whenever detailed data on demand patterns are not available. Computational results under a fairly broad range of test problems show that our model can provide an estimation of the required size of the fleet in several different scenarios." }, { "instance_id": "R26982xR26929", "comparison_id": "R26982", "paper_id": "R26929", "text": "A Decision Support System for Fleet Management: A Linear Programming Approach This paper describes a successful implementation of a decision support system that is used by the fleet management division at North American Van Lines to plan fleet configuration. At the heart of the system is a large linear programming (LP) model that helps management decide what type of tractors to sell to owner/operators or to trade in each week. The system is used to answer a wide variety of \u201cWhat if\u201d questions, many of which have significant financial impact." }, { "instance_id": "R26982xR26936", "comparison_id": "R26982", "paper_id": "R26936", "text": "Vehicle fleet planning the road transportation industry Planning the composition of a vehicle fleet in order to satisfy transportation service demands is an important resource management activity for any trucking company. Its complexity is such, however, that formal fleet management cannot be done adequately without the help of a decision support system. An important part of such a system is the generation of minimal discounted cost plans covering the purchase, replacement, sale, and/or rental of the vehicles necessary to deal with a seasonal stochastic demand. A stochastic programming model is formulated to address this problem. It reduces to a separable program based on information about the service demand, the state of the current fleet, and the cash flows generated by an acquisition/disposal plan. An efficient algorithm for solving the model is also presented. 
The discussion concerns the operations of a number of Canadian road carriers." }, { "instance_id": "R26982xR26943", "comparison_id": "R26982", "paper_id": "R26943", "text": "Solving real-life vehicle routing problems efficiently using tabu search This paper presents a tabu search based method for finding good solutions to a real-life vehicle routing problem. The problem considered deals with some new features beyond those normally associated with the classical problems of the literature: in addition to capacity constraints for vehicles and time windows for deliveries, it takes the heterogeneous character of the fleet into account, in the sense that utilization costs are vehicle-dependent and that some accessibility restrictions have to be fulfilled. It also deals with the use of trailers. In spite of the intricacy of the problem, the proposed tabu search approach is easy to implement and can be easily adapted to many other applications. An emphasis is placed on means that have to be used to speed up the search. In a few minutes of computation on a personal workstation, our approach obtains solutions that are significantly better than those previously developed and implemented in practice." }, { "instance_id": "R27039xR27005", "comparison_id": "R27039", "paper_id": "R27005", "text": "Fleet management models and algorithms for an oil-tanker routing and scheduling problem This paper explores models and algorithms for routing and scheduling ships in a maritime transportation system. The principal thrust of this research effort is focused on the Kuwait Petroleum Corporation (KPC) Problem. This problem is of great economic significance to the State of Kuwait, whose economy has been traditionally dominated to a large extent by the oil sector, and any enhancement in the existing ad-hoc scheduling procedure has the potential for significant savings. A mixed-integer programming model for the KPC problem is constructed in this paper.
The resulting mathematical formulation is rather complex to solve due to the integrality conditions and the overwhelming size of the problem for a typical demand contract scenario. Consequently, an alternate aggregate model that retains the principal features of the KPC problem is formulated. The latter model is computationally far more tractable than the initial model, and a specialized rolling horizon heuristic is developed to solve it. The proposed heuristic procedure enables us to derive solutions for practical sized problems that could not be handled by directly solving even the aggregate model. The initial formulation is solved using CPLEX-4.0-MIP capabilities for a number of relatively small-sized test cases, whereas for larger problem instances, the aggregate formulation is solved using CPLEX-4.0-MIP in concert with the developed rolling horizon heuristic, and related results are reported. An ad-hoc routing procedure that is intended to simulate the current KPC scheduling practice is also described and implemented. The results demonstrate that the proposed approach substantially improves upon the results obtained using the current scheduling practice at KPC." }, { "instance_id": "R27039xR27032", "comparison_id": "R27039", "paper_id": "R27032", "text": "Ship Scheduling with Recurring Visits and Visit Separation Requirements This chapter discusses an application of advanced planning support in designing a sea-transport system. The system is designed for Norwegian companies who depend on sea-transport between Norway and Central Europe. They want to achieve faster and more frequent transport by combining tonnage. This requires the possible construction of up to 15 new ships with potential investments of approximately 150 mill US dollars. The problem is a variant of the general pickup and delivery problem with multiple time windows. In addition, it includes requirements for recurring visits, separation between visits and limits on transport lead-time. 
It is solved by a heuristic branch-and-price algorithm." }, { "instance_id": "R27039xR26992", "comparison_id": "R27039", "paper_id": "R26992", "text": "Optimal liner fleet routeing strategies The objective of this paper is to suggest practical optimization models for routing strategies for liner fleets. Many useful routing and scheduling problems have been studied in the transportation literature. As for ship scheduling or routing problems, relatively less effort has been devoted, in spite of the fact that sea transportation involves large capital and operating costs. This paper suggests two optimization models that can be useful to liner shipping companies. One is a linear programming model of profit maximization, which provides an optimal routing mix for each ship available and optimal service frequencies for each candidate route. The other model is a mixed integer programming model with binary variables which not only provides optimal routing mixes and service frequencies but also best capital investment alternatives to expand fleet capacity. This model is a cost minimization model." }, { "instance_id": "R27039xR27015", "comparison_id": "R27039", "paper_id": "R27015", "text": "Strategic fleet size planning for maritime refrigerated containers In the present economic climate, it is often the case that profits can only be improved, or for that matter maintained, by improving efficiency and cutting costs. This is particularly notorious in the shipping business, where it has been seen that the competition is getting tougher among carriers, thus alliances and partnerships are resulting for cost effective services in recent years. In this scenario, effective planning methods are important not only for strategic but also operating tasks, covering their entire transportation systems. Container fleet size planning is an important part of the strategy of any shipping line. 
This paper addresses the problem of fleet size planning for refrigerated containers, to achieve cost-effective services in a competitive maritime shipping market. An analytical model is first discussed to determine the optimal size of an operator-owned dry container fleet. Then, this is extended for an operator-owned refrigerated container fleet, which is the case when an extremely unbalanced trade represents one of the major investment decisions to be taken by liner operators. Next, a simulation model is developed for fleet sizing in a more practical situation and, by using this, various scenarios are analysed to determine the most convenient composition of refrigerated fleet between own and leased containers for the transpacific cargo trade." }, { "instance_id": "R27039xR27002", "comparison_id": "R27039", "paper_id": "R27002", "text": "Scheduling short-term marine transport of bulk products A multinational company uses a personal computer to schedule a fleet of coastal tankers and barges transporting liquid bulk products among plants, distribution centres (tank farms), and industrial customers. A simple spreadsheet interface cloaks a sophisticated optimization-based decision support system and makes this system usable via a variety of natural languages. The dispatchers, whose native language is not English, and some of whom presumably speak no English at all, communicate via the spreadsheet, and view recommended schedules displayed in Gantt charts, both internationally familiar tools. Inside the spreadsheet, a highly detailed simulation can generate every feasible alternate vessel employment schedule, and an integer linear set partitioning model selects one schedule for each vessel so that all loads and deliveries are completed at minimal cost while satisfying all operational requirements. The optimized fleet employment schedule is displayed graphically with hourly time resolution over a planning horizon of 2-3 weeks.
Each vessel will customarily make several voyages and many port calls to load and unload products during this time." }, { "instance_id": "R27039xR27018", "comparison_id": "R27039", "paper_id": "R27018", "text": "Robust ship scheduling with multiple time windows We present a ship scheduling problem concerned with the pickup and delivery of bulk cargoes within given time windows. As the ports are closed for service at night and during weekends, the wide time windows can be regarded as multiple time windows. Another issue is that the loading/discharging times of cargoes may take several days. This means that a ship will stay idle much of the time in port, and the total time at port will depend on the ship's arrival time. Ship scheduling is associated with uncertainty due to bad weather at sea and unpredictable service times in ports. Our objective is to make robust schedules that are less likely to result in ships staying idle in ports during the weekend, and impose penalty costs for arrivals at risky times (i.e., close to weekends). A set partitioning approach is proposed to solve the problem. The columns correspond to feasible ship schedules that are found a priori. They are generated taking the uncertainty and multiple time windows into account. The computational results show that we can increase the robustness of the schedules at the sacrifice of increased transportation costs. \u00a9 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 611\u2013625, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10033" }, { "instance_id": "R27039xR26995", "comparison_id": "R27039", "paper_id": "R26995", "text": "Ferry Traffic in the Aegean Islands: A Simulation Study A simulation of ferry traffic in the Aegean islands, which is being used as a decision aiding system for regional development, is presented. 
The model has been developed using SIMSCRIPT II.5 and its major variables and parameters, for which data are available, consist of types of vessels, harbour layouts, weather conditions, passenger and vehicle demand, types of vehicles, and loading and unloading times. The model has inbuilt flexibility to consider additional variables and parameters depending on data availability and scenarios to be examined. The graphics interface is based on a chart of the Aegean Islands and the various types of vessels appear as dynamic entities on the screen either moving or queuing outside busy ports. Itineraries are defined through the graphics editor on the basis of coordinates on the chart. It has been used to study scenarios comparing many combinations of various types of vessels, various types of harbour layout, differing routes, passenger and vehicle demands, and even the establishment of new ports. It can also be used to aid decision making about non-profit making itineraries which could then qualify for government subsidies." }, { "instance_id": "R27039xR27029", "comparison_id": "R27039", "paper_id": "R27029", "text": "A computer-based decision support system for vessel fleet scheduling\u2014experience and future research Most ocean shipping companies do the planning of fleet schedules manually based on their experience. Only a few use optimization-based decision support systems (DSS). This paper presents TurboRouter, a decision support system for vessel fleet scheduling, as well as some of the experience gathered from a research project to develop commercial software that is now used by several shipping companies. Perhaps the most important experience is that when designing such systems, the focus should be directed more to the interaction between the user and the system than the optimization algorithm, which has often been the case."
}, { "instance_id": "R27039xR27009", "comparison_id": "R27039", "paper_id": "R27009", "text": "Optimal policies for maintaining a supply service in the Norwegian Sea This paper considers the real problem of determining an efficient policy for a supply operation in the Norwegian Sea, where a number of offshore installations are serviced from an onshore depot by supply vessels. The purpose of the study was to evaluate the effect on the total supply cost of having some or all of the offshore installations closed for service during night, and to determine an optimal routing policy. Six scenarios were developed, varying in the opening hours and the number of weekly services of the installations, as well as an algorithm to find an optimal routing policy (which vessels to operate and the coherent weekly schedules) for each of the scenarios. By also evaluating the qualitative aspects of the solution of each scenario, a routing policy was recommended, involving potential savings of 7 million dollars. The proposed policy has been implemented and the experience so far is good." }, { "instance_id": "R27039xR26984", "comparison_id": "R27039", "paper_id": "R26984", "text": "Transporting Sludge to the 106-Mile Site: An Inventory/Routing Model for Fleet Sizing and Logistics System Design This paper develops a model that is being used by the City of New York to design a new logistics system to transport municipal sewage sludge from city-operated wastewater treatment plants to a new ocean dumping site 106 miles offshore. The model provides an integrative framework for considering such strategic planning issues as fleet sizing, choice of vessel size, sizing local inventory holding capacities, and analyzing system behavior with and without transshipment. 
A unique feature of the model is that plant visitation frequencies are determined naturally by the characteristics of the problem (vessel size, inventory holding capacities, statistics of sludge production, proximity of other plants), rather than stated as exogenous constraints. The formulation should be useful in a more general class of depot-to-customer distribution systems, including the distribution of industrial gases. The paper concludes with a description of additional research that is required in refining both the assumptions and the mechanisms of execution of the model." }, { "instance_id": "R27039xR27013", "comparison_id": "R27039", "paper_id": "R27013", "text": "A Scheduling Model for a High Speed Containership Service: A Hub and Spoke Short-Sea Application Advances in ship technology must be demonstrably beneficial and profitable before shipowners invest. Since the capital costs are large, investment in new technology will tend to be incremental rather than radical and will be affected by the financial viability of the service in which the ship is employed. While operating costs depend on the technology used for a given freight task, revenue from operations depends on transit time, frequency of service, freight rates, and volume of containers carried. Although high speed vessels (40 knots+) carry small payloads over short distances, this disadvantage can be offset by the greater number of round voyages achievable over a given period. After examining factors influencing the demand for fast cargo services, a high speed cargo ship design is described along with appropriate cargo handling and terminal operations. Using a mixed integer programming approach, an optimisation model is used to determine the profitability of a short-haul hub and spoke feeder operation based on Singapore.
The model is used to calculate the optimum number of ships required to meet the given distribution task, the most profitable deployment of the fleet and the profitability over the planning horizon." }, { "instance_id": "R27039xR27034", "comparison_id": "R27039", "paper_id": "R27034", "text": "A multi-start local search heuristic for ship scheduling\u2014a computational study We present a multi-start local search heuristic for a typical ship scheduling problem. A large number of initial solutions are generated by an insertion heuristic with random elements. The best initial solutions are improved by a local search heuristic that is split into a quick and an extended version. The quick local search is used to improve a given number of the best initial solutions. The extended local search heuristic is then used to further improve some of the best solutions found. The multi-start local search heuristic is compared with an optimization-based solution approach with respect to computation time and solution quality. The computational study shows that the multi-start local search method consistently returns optimal or near-optimal solutions to real-life instances of the ship scheduling problem within a reasonable amount of computation time." }, { "instance_id": "R27039xR27007", "comparison_id": "R27039", "paper_id": "R27007", "text": "Optimal fleet design in a ship routing problem The problem of deciding an optimal fleet (the type of ships and the number of each type) in a real liner shipping problem is considered. The liner shipping problem is a multi-trip vehicle routing problem, and consists of deciding weekly routes for the selected ships. A solution method consisting of three phases is presented. In phase 1, all feasible single routes are generated for the largest ship available. Some of these routes will use only a small portion of the ship\u2019s capacity and can be performed by smaller ships at less cost.
This fact is used when calculating the cost of each route. In phase 2, the single routes generated in phase 1 are combined into multiple routes. By solving a set partitioning problem (phase 3), where the columns are the routes generated in phases 1 and 2, we find both the optimal fleet and the coherent routes for the fleet." }, { "instance_id": "R27039xR26990", "comparison_id": "R27039", "paper_id": "R26990", "text": "Hierarchical resource planning for shipping companies The problem of resource management for a merchant fleet is addressed. The problem solution involves decisions on the purchase and use of ships in order to satisfy customers' demands. In particular, shipping companies employing their own ships are considered. These companies can maximize their profits on a highly competitive quality-sensitive market by optimizing the purchase and use of their resources (ships). Decision makers must maximize company profits by acting on the number and type of ships, shipping routes, type of service, and dates of sailing. A hierarchical model for the problem considered has been developed, and heuristic techniques which solve problems at different decision levels are described." }, { "instance_id": "R27061xR27051", "comparison_id": "R27061", "paper_id": "R27051", "text": "Definition methodology for the smart cities model Nowadays, the large and small districts are proposing a new city model, called \u201cthe smart city\u201d, which represents a community of average technology size, interconnected and sustainable, comfortable, attractive and secure. The landscape requirements and the solutions to local problems are the critical factors. The cities consume 75% of worldwide energy production and generate 80% of CO2 emissions. Thus, a sustainable urban model, \u201cthe smart city\u201d, is sustained by the European Commission. In this paper, a model for computing \u201cthe smart city\u201d indices is proposed. 
The chosen indicators are not homogeneous, and contain high amount of information. The paper deals with the computation of assigned weights for the considered indicators. The proposed approach uses a procedure based on fuzzy logic and defines a model that allows us to estimate \u201cthe smart city\u201d, in order to access European funding. The proposed innovative system results in a more extended comprehension and simple use. Thus, the model could help in policy making process as starting point of discussion between stakeholders, as well as citizens in final decision of adoption measures and best evaluated options." }, { "instance_id": "R27061xR27043", "comparison_id": "R27061", "paper_id": "R27043", "text": "Smart cities ranking: an effective instrument for the positioning of the cities? Due to different reasons cities are increasingly challenged to improve their competitiveness. Different strategic efforts are discussed in planning sciences, new approaches and instruments are elaborated and applied, steering the positioning of cities in a competitive urban world. As one specific consequence city rankings have experienced a remarkable boom. However, there is some evidence that public attention of city rankings is mainly concentrated simply on the ranks themselves totally neglecting its meaning as an instrument for strategic planning. In order to elaborate this potential meaning of rankings the paper gives an overview of different types and introduces an own approach called \u2018Smart City ranking\u2019. Based on this ranking approach and corresponding experiences of different cities reacting on its dissemination in the second part the paper shows how this approach can be used as an effective instrument detecting strengths and weaknesses and improving a city\u2019s competitiveness through relevant strategic efforts." 
}, { "instance_id": "R27061xR27041", "comparison_id": "R27061", "paper_id": "R27041", "text": "Foundations for Smarter Cities This paper describes the information technology (IT) foundation and principles for Smarter Cities\u2122. Smarter Cities are urban areas that exploit operational data, such as that arising from traffic congestion, power consumption statistics, and public safety events, to optimize the operation of city services. The foundational concepts are instrumented, interconnected, and intelligent. Instrumented refers to sources of near-real-time real-world data from both physical and virtual sensors. Interconnected means the integration of those data into an enterprise computing platform and the communication of such information among the various city services. Intelligent refers to the inclusion of complex analytics, modeling, optimization, and visualization in the operational business processes to make better operational decisions. This approach enables the adaptation of city services to the behavior of the inhabitants, which permits the optimal use of the available physical infrastructure and resources, for example, in sensing and controlling consumption of energy and water, managing waste processing and transportation systems, and applying optimization to achieve new efficiencies among these resources. Additional roles exist in intelligent interaction between the city and its inhabitants and further contribute to operational efficiency while maintaining or enhancing quality of life." }, { "instance_id": "R27061xR27053", "comparison_id": "R27061", "paper_id": "R27053", "text": "A vision of smarter cities: how cities can lead the way into a prosperous and sustainable future Tamara Kulesa: Hello. This is Tamara Kulesa, Worldwide Marketing Manager for IBM Global Business Services for the Global Government Industry. I am here today with Susanne Dirks, Manager of the IBM Institute for Business Values Global Center for Economic Development in Ireland. 
Susanne is responsible for the research and writing of the newly published report, \"A Vision of Smarter Cities: How Cities Can Lead the Way into a Prosperous and Sustainable Future.\" Susanne, thank you for joining me today." }, { "instance_id": "R27235xR27151", "comparison_id": "R27235", "paper_id": "R27151", "text": "The effects of exchange rate trends and volatility on export prices: Industry examples from Japan, Germany, and the United States Abstract How do the exchange rate trends and fluctuations facing exporters differ across countries, and how do they influence export pricing strategies? The author examines transaction-price data for exports of five highly disaggregated industrial products shipped from the United States, Japan, and West Germany. She finds that German firms have experienced smaller exchange-rate trend movements and fluctuations in their export markets than the American and Japanese exporters of these products. Nevertheless, German exporters appear to react most sensitively to exchange-rate trends and fluctuations in their pricing policies. Neither the trend nor the volatility of the dollar is particularly important for the pricing behavior of U.S. firms; Japanese firms are affected most by the fluctuations and trend movements of the yen." }, { "instance_id": "R27235xR27228", "comparison_id": "R27235", "paper_id": "R27228", "text": "Exchange-Rate Volatility in Latin America and its Impact on Foreign Trade Abstract This paper investigates empirically the impact of real exchange-rate volatility on the export flows of eight Latin American countries over the quarterly period 1973\u20132004. Estimates of the cointegrating relations are obtained using different cointegration techniques. Estimates of the short-run dynamics are obtained for each country utilizing the error-correction technique. The major results show that increases in the volatility of the real effective exchange rate, approximating exchange-rate uncertainty, exert a significant negative effect upon export demand in both the short-run and the long-run in each of the eight Latin American countries. These effects may result in significant reallocation of resources by market participants." }, { "instance_id": "R27235xR27156", "comparison_id": "R27235", "paper_id": "R27156", "text": "The Effect of Real Exchange Rate Uncertainty on Exports: Empirical Evidence Unless very specific assumptions are made, theory alone cannot determine the sign of the relation between real exchange rate uncertainty and exports. On the one hand, convexity of the profit function with respect to prices implies that an increase in price uncertainty raises the expected returns in the export sector.
On the other, potential asymmetries in the cost of adjusting factors of production (for example, investment irreversibility) and risk aversion tend to make the uncertainty-exports relation negative. This article examines these issues using a simple risk-aversion model. Export equations allowing for uncertainty are then estimated for six developing countries. Contrary to the ambiguity of the theory, the empirical relation is strongly negative. Estimates indicate that a 5 percent increase in the annual standard deviation of the real exchange rate can reduce exports by 2 to 30 percent in the short run. These effects are substantially magnified in the long run." }, { "instance_id": "R27235xR27200", "comparison_id": "R27235", "paper_id": "R27200", "text": "Does exchange rate volatility impede the volume of Japan's bilateral trade? Abstract The purpose of this paper has been to examine the extent to which exchange rate volatility impedes Japan's bilateral trade flows. In addition to exchange rate volatility, other factors that were posited to affect trade flows include data on real economic activity, costs, and prices which feature in the theoretical framework. The empirical analysis differs from the majority of previous research by appropriately specifying the models in terms of the order of integration of the data and in terms of the equation dynamics. The major finding of the paper is that exchange rate volatility is at least as likely to raise trade flows as it is to impede them." }, { "instance_id": "R27235xR27163", "comparison_id": "R27235", "paper_id": "R27163", "text": "The effects of exchange rate volatility on exports Investigations of the impact of exchange rate volatility on exports have not allowed for the integrated non-stationary of the variables employed in estimation. The results reported in this note are suggestive that, in the context of an error correction framework, real exchange rate volatility has a significant impact on exports." 
}, { "instance_id": "R27235xR27165", "comparison_id": "R27235", "paper_id": "R27165", "text": "Exchange rate variability and trade: why is it so difficult to find any empirical relationship? This paper discusses why previous literature has found little evidence of any effect of exchange rate variability on international trade. Methodological and statistical issues are discussed. In particular, comparisons are made of estimations based on different specifications or using different data sets and changes in the results depending on the method used are shown." }, { "instance_id": "R27235xR27198", "comparison_id": "R27235", "paper_id": "R27198", "text": "The impact of exchange rate volatility on Australian trade flows Abstract This paper analyses the impact of exchange rate volatility on Australian trade flows. ARCH models are used to generate a measure of exchange rate volatility which is then tested in a model of Australian imports and exports. This paper differs from many of the papers previously published as special attention is given to the export and import trade data sets used. Not only is aggregate trade data tested for the effects of volatility, but disaggregate sectoral trade data is also analysed. Testing sectoral trade data allows us to detect whether the direction or magnitude of the impact of volatility differs depending on the nature of the market in which the goods are traded. If the effect of exchange rate volatility does differ by market, then testing aggregate trade data convolutes the true nature of the relationship and may prevent a significant relationship from being derived. The results obtained in this paper suggest that the impact of exchange rate volatility does differ between traded good sectors although it remains difficult to firmly establish the nature of the relationship." 
}, { "instance_id": "R27235xR27134", "comparison_id": "R27235", "paper_id": "R27134", "text": "The Impact of Exchange Rate volatility on Export Growth: Some Theoretical considerations and empirical results Abstract This paper (i) examines the theoretical relationship between exchange-rate volatility and export growth and (ii) tests for the empirical impact of such volatility on real export growth of 11 OECD countries. We argue that, theoretically, exchange-rate volatility can have an impact on trade in either a positive or negative direction. Empirical results are provided for the managed-rate and flexible-rate periods. Both nominal and real measures of exchange rates are used in two specifications of volatility: absolute percentage changes and standard deviations. Of 33 regressions presented, only three support the hypothesis that exchange-rate volatility impedes export performance." }, { "instance_id": "R27235xR27136", "comparison_id": "R27235", "paper_id": "R27136", "text": "Trade and Investment Under Floating Rates: the U.S. Experience Since the move to a managed floating exchange rate system in 1973, world financial markets have been characterized by large movements in nominal exchange rates. These movements have been accompanied by large swings in real exchange rates, reflecting the fact that nominal exchange rate variations have not closely followed changes in relative prices of traded goods. The short-run variability of exchange rates \u2014 whether measured in real or nominal terms, in bilateral or effective terms \u2014 has been substantially higher in the post-1973 period than it was under the Bretton Woods system (Frenkel and Goldstein 1986). Moreover, exchange rate variations have been much greater than the early advocates of floating had expected. For example, in an influential article, Harry Johnson (1969, pp.
19\u201320) argued that the allegation that a flexible-rate system would result in unstable rates ignored \u201cthe crucial point that a rate that is free to move under influences of changes in demand and supply is not forced to move erratically, but instead will move only in response to such changes in demand and supply... and normally will move only slowly and predictably.\u201d1" }, { "instance_id": "R27235xR27127", "comparison_id": "R27235", "paper_id": "R27127", "text": "Effects of Exchange Rate Uncertainty on German studies have used, we find that exchange rate uncertainty has a significant impact on imports and exports of Germany and of the United States. In addition, we argue that the estimated effects are likely to understate the impact of exchange rate uncertainty on trade. In this article, we first discuss the problem of defining exchange rate uncertainty and its relationship to observed variability of exchange rates. We then outline the various direct and indirect ways through which uncertainty might affect the volume of trade. Finally, we review our empirical results, and attempt to quantify the total impact that exchange rate uncertainty has had on German and U.S. trade in recent years.2" }, { "instance_id": "R27235xR27158", "comparison_id": "R27235", "paper_id": "R27158", "text": "Exchange rate volatility and U.S. multilateral trade flows Abstract This paper investigates the relation between exchange rate volatility, international trade and the macroeconomy in the context of a VAR model. The model is estimated for U.S. multilateral trade over the current floating rate period and includes a moving standard deviation measure of real exchange volatility. There is some evidence of a statistically significant relationship between volatility and trade, but the moving average representation of the system suggests that the effects are quantitatively small. We do find that exchange rate volatility is influenced by the state of the economy." 
}, { "instance_id": "R27235xR27185", "comparison_id": "R27235", "paper_id": "R27185", "text": "Emerging Currency Blocs Using the gravity model to examine bilateral trade patterns throughout the world, we find clear evidence of trading blocs in Europe, the Western Hemisphere, East Asia and the Pacific. In Europe, it is the EC that operates as a bloc, not including EFTA. Two EC members trade an extra 55 per cent more with each other, beyond what can be explained by proximity, size, and GNP/capita. We also find slight evidence of trade-diversion in 1990. Even though the blocs fall along natural geographic lines, they may actually be \"super-natural.\" Turning to the possibility of currency blocs, we find a degree of intra-regional stabilization of exchange rates, especially in Europe. Not surprisingly, the European currencies link to the DM, and Western Hemisphere countries peg to the dollar. East Asian countries, however, link to the dollar, not the yen. We also find some tentative cross-section evidence that bilateral exchange rate stability may have a (small) effect on trade. A sample calculation suggests that if real exchange rate variability within Europe were to double, as it would if it returned from the 1990 level to the 1980 level, the volume of intra-regional trade might fall by an estimated 0.7 per cent." }, { "instance_id": "R27235xR27209", "comparison_id": "R27235", "paper_id": "R27209", "text": "Estimating the impact of exchange rate volatility on exports: evidence from Asian countries The paper examines the impact of exchange rate volatility on the exports of five Asian countries. The countries are Turkey, South Korea, Malaysia, Indonesia and Pakistan. The impact of a volatility term on exports is examined by using an Engle-Granger residual-based cointegrating technique. The results indicate that the exchange rate volatility reduced real exports for these countries. This might mean that producers in these countries are risk-averse.
The producers will prefer to sell in domestic markets rather than foreign markets if the exchange rate volatility increases." }, { "instance_id": "R27235xR27230", "comparison_id": "R27235", "paper_id": "R27230", "text": "Exchange Rate Volatility and Trade Flows of the U.K. in 1990s This paper examines the impact of exchange rate volatility on trade flows in the U.K. over the period 1990\u20132000. According to the conventional approach, exchange rate volatility clamps down trade volumes. This paper, however, identifies the existence of a positive relationship between exchange rate volatility and imports in the U.K. in the 1990s by using a bivariate GARCH-in-mean model. It highlights a possible emergence of a polarized version with conventional proposition that ERV works as an impediment factor on trade flows." }, { "instance_id": "R27235xR27217", "comparison_id": "R27235", "paper_id": "R27217", "text": "Exchange Rate Volatility and Trade among the Asia Pacific The purpose of this paper is to investigate the impact of exchange rate volatility on exports among 14 Asia Pacific countries, where various measures to raise the intra-region trade are being implemented. Specifically, this paper estimates a gravity model, in which the dependent variable is the product of the exports of two trading countries. In addition, it also estimates a unilateral exports model, in which the dependent variable is not the product of the exports of two trading countries but the exports from one country to another. By doing this, the depreciation rate of the exporting country's currency value can be included as one of the explanatory variables affecting the volume of exports. As the explanatory variables of the export volume, the gravity model adopts the product of the GDPs of two trading counties, their bilateral exchange rate volatility, their distance, a time trend and dummies for the share of the border line, the use of the same language, and the APEC membership. 
In the case of the unilateral exports model, the product of the GDPs is replaced by the GDP of the importing country, and the depreciation rate of the exporting country's currency value is added. In addition, considering that the export volume will also depend on various conditions of the exporting country, dummies for exporting countries are also included as an explanatory variable. The empirical tests, using annual data for the period from 1980 to 2002, detect a significant negative impact of exchange rate volatility on the volume of exports. In addition, various tests using the data for sub-sample periods indicate that the negative impact had been weakened since 1989, when APEC was launched, and surged again from 1997, when the Asian financial crisis broke out. This finding implies that the impact of exchange rate volatility is time-dependent and that it is significantly negative at least in the present time. This phenomenon is noticed regardless of which estimation model is adopted. In addition, the test results show that the GDP of the importing country, the depreciation of the exporting country's currency value, the use of the same language and the membership of APEC have positive impacts on exports, while the distance between trading countries has a negative impact. Finally, it turns out that the negative impact of exchange rate volatility is much weaker among OECD countries than among non-OECD countries." }, { "instance_id": "R27235xR27193", "comparison_id": "R27235", "paper_id": "R27193", "text": "Exchange Rate Variability and the Flow of International Trade Abstract This study uses a GARCH (Generalized Autoregressive Conditional Heteroskedasticity) model to test if real exchange rate volatility has an adverse effect on the value of U.S. imports from Canada. We find that exchange rate uncertainty has a negative and statistically significant effect on trade flows."
}, { "instance_id": "R27235xR27160", "comparison_id": "R27235", "paper_id": "R27160", "text": "The effect of exchange rate variability on trade: The case of the West African Monetary Union's imports Abstract The West African Monetary Union, during the period of this study, 1976\u20131982, comprised six countries \u2014 Benin, Burkina Faso, C\u00f4te d'Ivoire, Niger, Senegal, and Togo. Since their common currency, the CFA franc, has been pegged to the French franc at an unchanged rate since 1948, the entire nominal exchange rate variability that traders in the region face is due to movements in the French franc with respect to other currencies. In the face of exchange rate risk that is not always coverable, trade should be adversely affected. Using three measures of variability in the nominal effective exchange rate index, this paper finds that exchange rate variability has not affected the Union's real imports, or the diversification of trade away from the franc zone." }, { "instance_id": "R27235xR27172", "comparison_id": "R27235", "paper_id": "R27172", "text": "Exchange rate volatility and Pakistan's exports to the developed world, 1974\u201385 Abstract The paper attempts to estimate the impact of exchange rate uncertainty on Pakistan's exports to the developed world for 1974\u201385, discovering strong evidence to suggest that exports were adversely affected by the increased variability of its bilateral exchange rates. Unlike much of the recent evidence on developed countries, however, it is the variability in nominal rather than real exchange rates that is significant. The computations also suggest the presence of strong third-country effects." }, { "instance_id": "R27235xR27183", "comparison_id": "R27235", "paper_id": "R27183", "text": "Exchange rate variability and the level of international trade There have been numerous theoretical and empirical studies of the effect of exchange rate variability on the level of international trade.
Most theoretical studies have concluded that under reasonable assumptions exchange rate variability ought to depress the level of trade. Empirical studies generally have not identified a significant effect of exchange rate variability on trade flows. This paper builds a theoretical model in which exchange rate variability has a negative effect on the level of trade. The model is calibrated to observed trade flows and real exchange rates. Simulation of the model demonstrates that the effect of increasing exchange rate variability on trade flows is very small. These results are not sensitive to a wide range of parameter values. Moreover, reasonable extensions of the model only serve to minimize further the effect of exchange rate variability on trade flows." }, { "instance_id": "R27235xR27220", "comparison_id": "R27235", "paper_id": "R27220", "text": "On the Trade Impact of Nominal Exchange Rate Volatility What is the effect of nominal exchange rate variability on trade? I argue that the methods conventionally used to answer this perennial question are plagued by a variety of sources of systematic bias. I propose a novel approach that simultaneously addresses all of these biases, and present new estimates from a broad sample of countries from 1970 to 1997. The answer to the question is: Not much." }, { "instance_id": "R27235xR27225", "comparison_id": "R27235", "paper_id": "R27225", "text": "Exchange Rate Uncertainty in Turkey and its Impact on Export Volume This paper investigates the impact of real exchange rate volatility on Turkey\u2019s exports to its most important trading partners using quarterly data for the period 1982 to 2001. Cointegration and error correction modeling approaches are applied, and estimates of the cointegrating relations are obtained using Johansen\u2019s multivariate procedure. Estimates of the short-run dynamics are obtained through the error correction technique. 
Our results indicate that exchange rate volatility has a significant positive effect on export volume in the long run. This result may indicate that firms operating in a small economy, like Turkey, have little option for dealing with increased exchange rate risk." }, { "instance_id": "R27235xR27175", "comparison_id": "R27235", "paper_id": "R27175", "text": "The Impact of Exchange Rate Variability on Trade Flows: Further Results on U.S. Imports from Canada Abstract In this study, we provide additional empirical evidence on the effects of exchange rate variability on trade flows. We focus on trade between Canada and the United States in five sectors with substantial shares of bilateral trade between the two countries. Beside our sectoral approach, our contribution is to address econometric problems raised by the use of proxies for risk using the non-parametric approach suggested by Pagan and Ullah (1988). We adopt risk measures based on three-month forecast errors on the forward market. Our results generally indicate that exchange rate variability has not significantly depressed the volume of trade between Canada and the United States during the present floating rate period." }, { "instance_id": "R27264xR27254", "comparison_id": "R27264", "paper_id": "R27254", "text": "Kick-Starting Robot Programming Using ROS The last three chapters discussed the prerequisites for programming a robot using the Robot Operating System (ROS). We discussed the basics of Ubuntu/Linux, bash commands, the basic concepts of C++ programming, and the basics of Python programming. In this chapter, we start working with ROS. Before discussing ROS concepts, let\u2019s discuss robot programming and how we do it. After this, we learn more about ROS, how to install ROS, and its architecture." }, { "instance_id": "R27264xR27259", "comparison_id": "R27264", "paper_id": "R27259", "text": "Cyberbotics ltd. webots professional mobile robot simulation Cyberbotics Ltd. 
develops Webots\u2122, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots\u2122 lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots\u2122 has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd." }, { "instance_id": "R27380xR27291", "comparison_id": "R27380", "paper_id": "R27291", "text": "Changes in residual stress during the tension fatigue of normalized and peened SAE 1040 steel Abstract During the tension-tension fatigue of normalized SAE 1040 steel under applied stress control, compressive residual stresses develop, but only in or near deformation markings. However, if this steel is shot peened prior to fatigue, the compressive residual stresses produced by peening are eliminated during fatigue and replaced by tensile residual stresses, when the applied (fatigue) stress is above the endurance limit." }, { "instance_id": "R27380xR27374", "comparison_id": "R27380", "paper_id": "R27374", "text": "Relaxation of residual stresses induced by turning and shot peening on steels Experiments on the relaxation of residual stresses in steels by fatigue loading are described.
This question is of interest because it is well known that compressive residual stresses are often induced by special surface treatments (such as shot peening) to improve the fatigue life of metal parts; however, if cyclic relaxation occurs, the beneficial effects can, in part, vanish during service. Two hardened and tempered steels of grade C45 and 39NiCrMo3 were used in the tests. For both materials, different specimens were given two surface treatments: simple turning without successive surface treatment, inducing on the surface a moderate tensile residual stress state, and shot peening, inducing high residual compressive stresses. The specimens were submitted to constant-amplitude tension-compression fatigue loading, and the surface residual stresses were measured after 0, 1, 10 cycles and more. Results show that relaxation occurs from the very first cycle; the amount of residual stress relaxation depends on many parameters and on the type of steel. The results are in agreement with data obtained by other researchers." }, { "instance_id": "R27380xR27318", "comparison_id": "R27380", "paper_id": "R27318", "text": "Fatigue Fracture and Residual Stress Relaxation in Shot-Peened Components This work is devoted to highlighting the beneficial effects of compressive residual stresses, their relaxation and the associated fatigue crack initiation and propagation in two contrasting materials. Accordingly, rotating bend fatigue tests were conducted on essentially un-notched specimens made from 7075 T7351 aluminium zinc alloy and 080M40 medium carbon steel. The tests involved assessing the life of un-peened, peened, and re-peened specimens where re-peening was applied after the exhaustion of a proportion of the specimens anticipated fatigue life. Whilst the steel demonstrated a very significant recovery of fatigue life associated with the re-peening treatment, this was not the case for the aluminium even though peening had proved highly beneficial. 
Residual stress measurements and fractographic examination were used to elucidate this discrepancy." }, { "instance_id": "R27380xR27354", "comparison_id": "R27380", "paper_id": "R27354", "text": "Contact fatigue of automotive gears: evolution and effects of residual stresses introduced by surface treatments Helical gears from an automotive gearbox, previously subjected to the surface treatments of carbo-nitriding and shot-peening, were submitted to contact fatigue tests. The X-ray diffraction technique was used to characterize the evolution of different mechanical and metallurgical parameters as a function of gear damage. Particular attention was paid to residual stress relief. A numerical model was developed to predict residual stress relaxation and estimate the most likely localization of contact fatigue crack initiation. The stress\u2013strain laws of the surface-treated layers were determined by means of two separate experimental methods, based on locally measured parameters. The Dang Van multiaxial fatigue criterion was used to analyse the failure of the gears, taking into account the effects of friction and roughness." }, { "instance_id": "R27380xR27362", "comparison_id": "R27380", "paper_id": "R27362", "text": "Influence of Optimized Warm Peening on Residual Stress Stability and Fatigue Strength of AISI 4140 in Different Material States Using a modified air blasting machine, warm peening at 20 \u00b0C < T \u2264 410 \u00b0C was feasible. An optimized peening temperature of about 310 \u00b0C was identified for a 450 \u00b0C quenched and tempered steel AISI 4140. Warm peening was also investigated for a normalized, a 650 \u00b0C quenched and tempered, and a martensitically hardened material state. The quasi-static surface compressive yield strengths as well as the cyclic surface yield strengths were determined from residual stress relaxation tests conducted at different stress amplitudes and numbers of loading cycles. 
Dynamic and static strain aging effects acting during and after warm peening clearly increased the residual stress stability and the alternating bending strength for all material states." }, { "instance_id": "R27380xR27347", "comparison_id": "R27380", "paper_id": "R27347", "text": "Influence of the shot peening temperature on the relaxation behaviour of residual stresses during cyclic bending Shot peening of steels at elevated temperatures (warm peening) can improve the fatigue behaviour of workpieces. For the steel AISI 4140 (German grade 42CrMo4) in a quenched and tempered condition, it is shown that this is not only caused by the higher compressive residual stresses induced but also due to an enlarged stability of these residual stresses during cyclic bending. This can be explained by strain aging effects during shot peening, which cause different and more stable dislocation structures." }, { "instance_id": "R27380xR27297", "comparison_id": "R27380", "paper_id": "R27297", "text": "Relaxation of Shot Peening Induced Compressive Stress During Fatigue of Notched Steel Samples Abstract This paper presents an experimental investigation of the surface residual stress relaxation behaviour of a shot peened 0.4% carbon low alloy steel under fatigue loading. A round specimen with a circumferential notch and a notch factor Kt = 1.75 was fatigue loaded in both shot peened and ground conditions. Loading conditions included axial fatigue with stress ratio R = \u22121 and R = 0 and also R = \u22121 with an additional peak overload applied at 10^6 cycles. Plain unnotched shot peened specimens were also fatigue loaded with stress ratio R = \u22121. The results show how the relaxation is dependent on load level, how the peak load changes the surface residual stress state, and that relaxation of the smooth and notched conditions is similar. Two different shot peening conditions were used, one with Almen intensity of 30\u201335 A (mm/100) and another of 50\u201355 A (mm/100)." 
}, { "instance_id": "R27380xR27366", "comparison_id": "R27380", "paper_id": "R27366", "text": "Consideration of shot peening treatment applied to a high strength aeronautical steel with different hardnesses One of the most important components in an aircraft is its landing gear, due to the high load that it is submitted to during, principally, the take off and landing. For this reason, the AISI 4340 steel is widely used in the aircraft industry for fabrication of structural components, in which strength and toughness are fundamental design requirements [1]. Fatigue is an important parameter to be considered in the behavior of mechanical components subjected to constant and variable amplitude loading. One of the known ways to improve fatigue resistance is by using the shot peening process to induce a compressive residual stress in the surface layers of the material, making the nucleation and propagation of fatigue cracks more difficult [2,3]. The shot peening results depend on various parameters. These parameters can be grouped in three different classes according to K. Fathallah et al. [4]: parameters describing the treated part, parameters of stream energy produced by the process and parameters describing the contact conditions. Furthermore, relaxation of the CRSF induced by shot peening has been observed during the fatigue process [5-7]. In the present research the gain in fatigue life of AISI 4340 steel, obtained by shot peening treatment, is evaluated under the two different hardnesses used in landing gear. Rotating bending fatigue tests were conducted and the CRSF was measured by X-ray tensometry prior to and during fatigue tests. The evaluation of fatigue life due to the shot peening in relation to the relaxation of CRSF, crack source position and roughness variation is done." 
}, { "instance_id": "R27380xR27351", "comparison_id": "R27380", "paper_id": "R27351", "text": "Effects of warm peening on fatigue life and relaxation behaviour of residual stresses in AISI 4140 steel A new device has been built which allows shot peening in an air blast machine at elevated temperatures. The effects of conventional shot peening and peening at elevated temperatures on the characteristics of regions close to the surface, on the stability of residual stresses and half widths of X-ray interference lines and on the fatigue strength are presented for a quenched and tempered AISI 4140 steel (German grade 42CrMo4). The alternating bending strength is increased by warm peening compared with conventional shot peening. Additional investigations of samples conventionally peened and then annealed confirm that these effects are due to the stability of the dislocation structure, which is highly affected by strain ageing effects. This causes an additional benefit owing to higher stability of the residual stresses induced." }, { "instance_id": "R27380xR27320", "comparison_id": "R27380", "paper_id": "R27320", "text": "Crack propagation in the presence of shot-peening residual stresses Abstract This investigation examines the respective effects of peening and re-peening upon the fatigue fracture behaviour of metallic components in the presence of shot-peening residual stresses. Specifically, the work is devoted to highlighting the beneficial effects of compressive surface residual stresses, their relaxation and the associated crack initiation and propagation behaviours of two contrasting materials. Accordingly, room temperature rotating bending fatigue tests were conducted on cylindrical specimens made in aluminium zinc alloy and medium carbon steel. These specimens were first peened and then partially fatigued to a \u201cknown\u201d proportion of their anticipated fatigue life. 
A number of these specimens were then re-peened and tested to ultimate fracture; the remainder were used for supplementary tests. The above tests revealed a substantial fatigue life improvement for both the steel and aluminium specimens. However, when re-peened, after being partially fatigued, only the steel specimens showed further life enhancement. Residual stress measurements using X-ray diffraction techniques and fractography using scanning electron microscopy and optical methods were employed, at different stages of the work, to elucidate this discrepancy." }, { "instance_id": "R27380xR27295", "comparison_id": "R27380", "paper_id": "R27295", "text": "Relaxation of shot peening induced compressive stress during fatigue of notched steel samples Abstract This paper presents an experimental investigation of the surface residual stress relaxation behaviour of a shot peened 0.4% carbon low alloy steel under fatigue loading. A round specimen with a circumferential notch and a notch factor Kt = 1.75 was fatigue loaded in both shot peened and ground conditions. Loading conditions included axial fatigue with stress ratio R = \u22121 and R = 0 and also R = \u22121 with an additional peak overload applied at 10^6 cycles. Plain unnotched shot peened specimens were also fatigue loaded with stress ratio R = \u22121. The results show how the relaxation is dependent on load level, how the peak load changes the surface residual stress state, and that relaxation of the smooth and notched conditions is similar. Two different shot peening conditions were used, one with Almen intensity of 30\u201335 A (mm/100) and another of 50\u201355 A (mm/100)." 
}, { "instance_id": "R27380xR27369", "comparison_id": "R27380", "paper_id": "R27369", "text": "An evaluation of shot peening, residual stress and stress relaxation on the fatigue life of AISI 4340 steel Shot peening is a method widely used to improve the fatigue strength of materials, through the creation of a compressive residual stress field (CRSF) in their surface layers. In the present research the gain in fatigue life of AISI 4340 steel, used in landing gear, is evaluated under four shot peening conditions. Rotating bending fatigue tests were conducted and the CRSF was measured by X-ray tensometry prior to and during fatigue tests. It was observed that relaxation of the CRSF occurred due to the fatigue process. In addition, the fractured fatigue specimens were investigated using a scanning electron microscope in order to obtain information about the crack initiation points. The evaluation of fatigue life, relaxation of CRSF and crack sources are discussed." }, { "instance_id": "R27380xR27325", "comparison_id": "R27380", "paper_id": "R27325", "text": "Effect of Cooling and Shot Peening on Residual Stresses and Fatigue Performance of Milled Inconel 718 The present study highlights the effect of cooling and post-machining surface treatment of shot peening on the residual stresses and corresponding fatigue life of milled superalloy Inconel 718. It ..." }, { "instance_id": "R27380xR27341", "comparison_id": "R27380", "paper_id": "R27341", "text": "Residual stress relaxation in an AISI 4140 steel due to quasistatic and cyclic loading at higher temperatures Abstract Residual stresses can be relaxed by supplying sufficiently high amounts of thermal and/or mechanical energy, which converts the residual elastic strains to microplastic strains. 
In order to better understand this relaxation behavior, shot peening induced residual stresses in normalized condition and in quenched and tempered condition of the steel AISI 4140 (German grade 42 CrMo 4) were investigated in annealing experiments, quasistatic loading experiments and bending fatigue experiments at 25, 250 and 400\u00b0C. The residual stress relaxation during alternating bending occurs in different regimes. First, thermal relaxation reduces the residual stresses during specimen heating. The relaxation during the first cycle can be discussed on the basis of the effects due to quasistatic loading, if the inhomogeneous distribution of the loading stress is taken into account. Differences in the behavior after the two heat treatments result from the Bauschinger-effect and effects of dynamic strain ageing. Owing to cyclic creep effects, the interval between the first cycle (N=1) and the number of cycles to crack initiation Ni is characterized by residual stresses which decrease linearly with the logarithm of N. Finally for N>Ni the reduction of residual stresses with the logarithm of N is stronger than linear." }, { "instance_id": "R27380xR27311", "comparison_id": "R27380", "paper_id": "R27311", "text": "Deceleration of small crack growth by shot-peening The influence of shot-peening on the growth behaviour of surface cracks was observed on carbon structural steel specimens with a relatively large drilled groove under rotating bending." }, { "instance_id": "R27380xR27331", "comparison_id": "R27380", "paper_id": "R27331", "text": "Effect of shot peening on residual stress and fatigue life of a spring steel This study describes shot peening effects such as shot hardness, shot size and shot projection pressure, on the residual stress distribution and fatigue life in reversed torsion of a 60SC7 spring steel. 
There appears to be a correlation between the fatigue strength and the area under the residual stress distribution curve. The biggest shot shows the best fatigue life improvement. However, for a shorter time of shot peening, small hard shot showed the best performance. Moreover, the superficial residual stresses and the amount of work hardening (characterised by the width of the X-ray diffraction line) do not remain stable during fatigue cycling. Indeed they decrease and their reduction rate is a function of the cyclic stress level and an inverse function of the depth of the plastically deformed surface layer." }, { "instance_id": "R27461xR27457", "comparison_id": "R27461", "paper_id": "R27457", "text": "Measurement of the Heat Transfer Coefficient in Plate Heat Exchangers Using a Temperature Oscillation Technique Abstract Thermal parameters of plate type heat exchangers are experimentally evaluated using a temperature oscillation technique. A mathematical model with axial dispersion has been utilised to evaluate heat transfer coefficient and dispersion coefficients characterised by NTU and Peclet number, respectively. Special reference has been made to the deviation from plug flow due to the phase lag effect in a U-type plate heat exchanger. A mathematical model for correcting the thermal penetration effect in plate edgings and thick end plates has been presented. The experimental results obtained by using the temperature oscillation technique have been compared with those obtained by traditional steady-state experiment. A series of developments have been suggested to make the method more suitable for plate type heat exchangers." 
}, { "instance_id": "R27461xR27407", "comparison_id": "R27461", "paper_id": "R27407", "text": "Experimental Study of Turbulent Flow Heat Transfer and Pressure Drop in a Plate Heat Exchanger With Chevron Plates Experimental heat transfer and isothermal pressure drop data for single-phase water flows in a plate heat exchanger (PHE) with chevron plates are presented. In a single-pass U-type counterflow PHE, three different chevron plate arrangements are considered: two symmetric plate arrangements with \u03b2 = 30 deg/30 deg and 60 deg/60 deg, and one mixed-plate arrangement with \u03b2 = 30 deg/60 deg. For water (2 < Pr < 6) flow rates in the 600 < Re < 10^4 regime, data for Nu and f are presented. The results show significant effects of both the chevron angle \u03b2 and surface area enlargement factor \u03c6. As \u03b2 increases, and compared to a flat-plate pack, up to two to five times higher Nu are obtained; the concomitant f, however, are 13 to 44 times higher. Increasing \u03c6 also has a similar, though smaller effect. Based on experimental data for Re \u2265 7000 and 30 deg \u2264 \u03b2 \u2264 60 deg, predictive correlations of the form Nu = C1(\u03b2) D1(\u03c6) Re^p1(\u03b2) Pr^1/3 (\u03bc/\u03bcw)^0.14 and f = C2(\u03b2) D2(\u03c6) Re^p2(\u03b2) are devised. Finally, at constant pumping power, and depending upon Re, \u03b2, and \u03c6, the heat transfer is found to be enhanced by up to 2.8 times that in an equivalent flat-plate channel." }, { "instance_id": "R27620xR27593", "comparison_id": "R27620", "paper_id": "R27593", "text": "Energy consumption, carbon emissions, and economic growth in China This paper investigates the existence and direction of Granger causality between economic growth, energy consumption, and carbon emissions in China, applying a multivariate model of economic growth, energy use, carbon emissions, capital and urban population. 
Empirical results for China over the period 1960-2007 suggest a unidirectional Granger causality running from GDP to energy consumption, and a unidirectional Granger causality running from energy consumption to carbon emissions in the long run. Evidence shows that neither carbon emissions nor energy consumption leads economic growth. Therefore, the government of China can pursue a conservative energy policy and a carbon emissions reduction policy in the long run without impeding economic growth." }, { "instance_id": "R27620xR27595", "comparison_id": "R27620", "paper_id": "R27595", "text": "Energy consumption and economic growth in New Zealand: Results of trivariate and multivariate models This study examines the energy consumption-growth nexus in New Zealand. Causal linkages between energy and macroeconomic variables are investigated using trivariate demand-side and multivariate production models. Long run and short run relationships are estimated for the period 1960-2004. The estimated results of the demand model reveal a long run relationship between energy consumption, real GDP and energy prices. The short run results indicate that real GDP Granger-causes energy consumption without feedback, consistent with the proposition that energy demand is a derived demand. Energy prices are found to be significant for energy consumption outcomes. Production model results indicate a long run relationship between real GDP, energy consumption and employment. The Granger-causality is found from real GDP to energy consumption, providing additional evidence to support the neoclassical proposition that energy consumption in New Zealand is fundamentally driven by economic activities. Inclusion of capital in the multivariate production model shows short run causality from capital to energy consumption. Also, changes in real GDP and employment have significant predictive power for changes in real capital." 
}, { "instance_id": "R27620xR27486", "comparison_id": "R27620", "paper_id": "R27486", "text": "Interfuel substitution and energy consumption in the industrial sector This paper examines the possibilities for fuel substitution in the industrial sector. First, we determine the total demand for fuel and power for the industrial sector from 1955 to 1972. We then examine fuel substitution possibilities for electricity and eight major fossil fuels consumed by the industrial sector. These are coal, natural gas, residual oil, distillate oil, kerosene, liquefied petroleum gas, still gas and petroleum coke. The analysis includes an estimation of the fuel split equations, the dynamic simulation of the industrial sector demands for fuel and the computation of short- and long-run demand elasticities for each fuel." }, { "instance_id": "R27620xR27584", "comparison_id": "R27620", "paper_id": "R27584", "text": "Energy consumption, economic growth, and carbon emissions: challenges faced by an EU candidate member This paper investigates the long run Granger causality relationship between economic growth, carbon dioxide emissions and energy consumption in Turkey, controlling for gross fixed capital formation and labor. The most interesting result is that carbon emissions seem to Granger cause energy consumption, but the reverse is not true. The lack of a long run causal link between income and emissions may be implying that to reduce carbon emissions, Turkey does not have to forgo economic growth." }, { "instance_id": "R27620xR27609", "comparison_id": "R27620", "paper_id": "R27609", "text": "Interpreting the dynamic nexus between energy consumption and economic growth: empirical evidence from Russia Research on the nexus between energy consumption and economic growth is a fundamental topic for energy policy making and low-carbon economic development. 
Russia has been the third largest energy-consuming country in the world in recent years, yet little research has so far shed light on its energy consumption, especially its energy-growth nexus. Therefore, this paper empirically investigates the dynamic nexus of the two variables in Russia based on the state space model. The results indicate that, first of all, Russia's energy consumption is cointegrated with its economic growth in a time-varying way though they do not have a static or average cointegration relationship. Hence it is unsuitable to merely portray the nexus in an average manner. Second, ever since the year 2000, Russia's energy efficiency has improved considerably compared with previous decades, mainly due to industrial structure adjustment and technological progress. Third, among BRIC countries, the consistency of Russia's energy consumption and economic growth appears the worst, which suggests the complexity of the energy-growth nexus in Russia. Finally, there exists bi-directional causality between Russia's energy consumption and economic growth, though their quantitative proportional relation does not have a solid foundation according to the cointegration theory." }, { "instance_id": "R27620xR27575", "comparison_id": "R27620", "paper_id": "R27575", "text": "The causality between energy consumption and economic growth in Turkey Abstract This paper applies the causality test to examine the causal relationship between primary energy consumption (EC) and real Gross National Product (GNP) for Turkey during 1970\u20132006. We employ unit root tests, the augmented Dickey\u2013Fuller (ADF) and the Phillips\u2013Perron (PP), the Johansen cointegration test, and the pair-wise Granger causality test to examine the relation between EC and GNP. Our empirical results indicate that the two series are found to be non-stationary. However, first differences of these series lead to stationarity. 
Further, the results indicate that EC and GNP are cointegrated and there is bidirectional causality running from EC to GNP and vice versa. This means that an increase in EC directly affects economic growth and that economic growth also stimulates further EC. This bidirectional causality relationship between EC and GNP determined for Turkey for the 1970\u20132006 period is in accordance with the ones reported in the literature for similar countries. Consequently, we conclude that energy is a limiting factor to economic growth in Turkey and, hence, shocks to energy supply will have a negative impact on economic growth." }, { "instance_id": "R27620xR27598", "comparison_id": "R27620", "paper_id": "R27598", "text": "A multivariate causality test of carbon dioxide emissions, energy consumption and economic growth in China This paper uses multivariate co-integration Granger causality tests to investigate the correlations between carbon dioxide emissions, energy consumption and economic growth in China. Some researchers have argued that the adoption of a reduction in carbon dioxide emissions and energy consumption as a long term policy goal will result in a closed-form relationship, to the detriment of the economy. Therefore, a perspective that can make allowances for the fact that the exclusive pursuit of economic growth will increase energy consumption and CO2 emissions is required; to the extent that such growth will have adverse effects with regard to global climate change." }, { "instance_id": "R27620xR27611", "comparison_id": "R27620", "paper_id": "R27611", "text": "Causal Relationships between Energy Consumption and Economic Growth Abstract The relationship between energy consumption and economic growth has long been the focus at home and abroad. The general view is: energy consumption promotes economic growth; economic growth affects energy consumption on the other hand. 
In light of the original theory, this paper adopts the statistical data of Shandong Province from 1980 to 2008, which include Gross Domestic Product (GDP), energy consumption, fixed asset investment and employees. Using unit root, co-integration and Granger causality tests, we examined the relationship between energy consumption and economic growth in Shandong Province. The results show that energy consumption and economic growth have a long-term trend relation, and there is two-way causality between them. The econometric model is estimated using the Generalized Least Squares (GLS) method. The conclusions are as follows: energy consumption and economic growth are positively correlated in Shandong Province, and economic growth has depended heavily on energy consumption." }, { "instance_id": "R27620xR27553", "comparison_id": "R27620", "paper_id": "R27553", "text": "Structural breaks, energy consumption, and economic growth revisited: evidence from Taiwan Abstract This paper studies the stability between energy consumption and GDP for Taiwan during 1954\u20132003. We use aggregate as well as various disaggregate data of energy consumption, including coal, oil, gas, and electricity, to employ the unit root tests and the cointegration tests allowing for structural breaks. Our main findings are: First, though gas consumption seems to have structural breaks in the 1960s, after considering the structural breaks, the series is a stationary variable when Taiwan adopted its expansionary export trade policy. Second, we find that different directions of causality exist between GDP and various kinds of energy consumption. The empirical result shows unanimously in the long run that energy acts as an engine of economic growth, and that energy conservation may harm economic growth. Third, the cointegration between energy consumption and GDP is unstable, and some economic events may affect the stability. 
Overall, we do find the structural breakpoints, and they look to match clearly with the corresponding critical economic incidents." }, { "instance_id": "R27620xR27580", "comparison_id": "R27620", "paper_id": "R27580", "text": "An econometric study of CO2 emissions, energy consumption, income and foreign trade in Turkey This study attempts to examine empirically dynamic causal relationships between carbon emissions, energy consumption, income, and foreign trade in the case of Turkey using the time series data for the period 1960-2005. This research tests the interrelationship between the variables using the bounds testing approach to cointegration. The bounds test results indicate that there exist two forms of long-run relationships between the variables. In the case of the first form of long-run relationship, carbon emissions are determined by energy consumption, income and foreign trade. In the case of the second long-run relationship, income is determined by carbon emissions, energy consumption and foreign trade. An augmented form of Granger causality analysis is conducted amongst the variables. The long-run relationship of CO2 emissions, energy consumption, income and foreign trade equation is also checked for the parameter stability. The empirical results suggest that income is the most significant variable in explaining the carbon emissions in Turkey which is followed by energy consumption and foreign trade. Moreover, there exists a stable carbon emissions function. The results also provide important policy recommendations." 
Given the weakness associated with the bivariate causality framework, the current study performs a multivariate causality framework by incorporating capital and labor variables into the model between energy consumption and economic growth based on neo-classical aggregate production theory. Using the recently developed autoregressive distributed lag (ARDL) bounds testing approach, a long-run equilibrium cointegration relationship has been found to exist between economic growth and the explanatory variables: energy consumption, capital and employment. Empirical results reveal that the long-run parameter of energy consumption on economic growth in China is approximately 0.15, through a long-run static solution of the estimated ARDL model, and that for the short-run is approximately 0.12 by the error correction model. The study also indicates the existence of short-run and long-run causality running from energy consumption, capital and employment to economic growth. The estimation results imply that energy serves as an important source of economic growth, thus more vigorous energy use and economic development strategies should be adopted for China." }, { "instance_id": "R27620xR27582", "comparison_id": "R27620", "paper_id": "R27582", "text": "On the dynamics of energy consumption and output in the US This note employs US annual data from 1949 to 2006 to compare the causal relationship between renewable and non-renewable energy consumption and real GDP, respectively. Given the sample size of the study, the Toda-Yamamoto causality tests reveal the absence of Granger-causality between renewable or non-renewable energy consumption and real GDP which supports the neutrality hypothesis." 
}, { "instance_id": "R27620xR27489", "comparison_id": "R27620", "paper_id": "R27489", "text": "Dynamic Modeling using Advanced Time-series Techniques: Energy-GNP and Energy-Employment Interactions Using advanced time-series techniques, this chapter discusses the relationship between energy and two important macro-variables, gross national product (GNP) and employment. In assessing the interaction between gross energy consumption (GEC) and GNP, the regression technique of Sims is used, and the method of Box and Jenkins is applied in the analysis of causal relations between energy and employment. The advanced time-series methods yield univariate and bivariate models that are subjected to rigorous methodological procedures. The time-series models provide predictions that are more accurate than those of the large macroeconomic models. The procedure utilized is the following. First, each of the two time-series is separately treated to isolate the random shocks from the systematic behavior of the series. The random shocks or innovations are those parts of series behavior that cannot be predicted from the variable's own past history. Second, the shocks in the two series are related through their cross-correlation function to determine the degree to which the shocks in the energy series explain those in the employment series, and vice versa. This information is used to determine the direction of causality between the original series, according to a criterion developed by Granger, and to model the lag structure of the relation. The estimated residuals from the model are then analyzed to check the adequacy of the causal order selected and the fit obtained." 
}, { "instance_id": "R27620xR27549", "comparison_id": "R27620", "paper_id": "R27549", "text": "Disaggregated industrial energy consumption and GDP: the case of Shanghai Abstract This paper investigates the causal relationship between various kinds of industrial energy consumption and GDP in Shanghai for the period 1952\u20131999 using a modified version of the Granger (1969) causality test proposed by Toda and Yamamoto (J. Econ. 66 (1995) 225). The empirical evidence from disaggregated energy series seems to suggest that there was a uni-directional Granger causality running from coal, coke, electricity and total energy consumption to real GDP but no Granger causality running in any direction between oil consumption and real GDP." }, { "instance_id": "R27620xR27527", "comparison_id": "R27620", "paper_id": "R27527", "text": "The relationship between energy consumption and economic growth in Pakistan Energy is substantial for economic development. This study aims to unveil the causal relationship and long-term association between economic growth and energy consumption in Pakistan. The Granger-Causality test finds that natural gas consumption, electricity consumption and coal consumption have a uni-directional causal relationship with economic growth (GC, EC and CC\u2192GDP); however, GDP growth rate, natural gas consumption and coal consumption unilaterally Granger cause inflation (GDP, GC and CC\u2192CPI), and lastly coal consumption\u2192natural gas consumption (GC), electricity consumption (EC)\u2192GC. The ARDL estimations indicate that natural gas consumption and oil consumption have a positive and a negative association with the GDP growth rate, respectively, and may have significant long-term impacts on the economic growth of Pakistan." 
}, { "instance_id": "R27620xR27591", "comparison_id": "R27620", "paper_id": "R27591", "text": "Energy consumption and economic growth nexus in Tanza- nia: an ARDL bounds testing approach In this paper, we examine the intertemporal causal relationship between energy consumption and economic growth in Tanzania during the period of 1971-2006. Unlike the majority of the previous studies, we employ the newly developed autoregressive distributed lag (ARDL)-bounds testing approach by Pesaran et al. [2001. Bounds testing approaches to the analysis of level relationships. Journal of Applied Econometrics 16, 289-326] to examine this linkage. We also use two proxies of energy consumption, namely total energy consumption per capita and electricity consumption per capita. The results of the bounds test show that there is a stable long-run relationship between each of the proxies of energy consumption and economic growth. The results of the causality test, on the other hand, show that there is a unidirectional causal flow from total energy consumption to economic growth and a prima-facie causal flow from electricity consumption to economic growth. Overall, the study finds that energy consumption spurs economic growth in Tanzania." }, { "instance_id": "R27620xR27536", "comparison_id": "R27620", "paper_id": "R27536", "text": "Energy consumption and economic growth: assessing the evidence from Greece Abstract This paper attempts to shed light into the empirical relationship between energy consumption and economic growth, for Greece (1960\u20131996) employing the vector error-correction model estimation. The vector specification includes energy consumption, real GDP and price developments, the latter taken to represent a measure of economic efficiency. The empirical evidence suggests that there is a long-run relationship between the three variables, supporting the endogeneity of energy consumption and real output. 
These findings have important policy implications, since the adoption of suitable structural policies aiming at improving economic efficiency can induce energy conservation without impeding economic growth." }, { "instance_id": "R27620xR27520", "comparison_id": "R27620", "paper_id": "R27520", "text": "A multivariate cointegration analysis of the role of energy in the US macroeconomy This paper extends my previous analysis of the causal relationship of GDP and energy use in the USA in the post-war period to a cointegration analysis of that relationship. It is found that the majority of the relevant variables are integrated justifying a cointegration analysis. The results show that cointegration does occur and that energy input cannot be excluded from the cointegration space. The results are plausible in terms of macroeconomic dynamics. The results are similar to my previous Granger Causality results and contradict claims in the literature (based on bivariate models) that there is no cointegration between energy and output." }, { "instance_id": "R27620xR27600", "comparison_id": "R27620", "paper_id": "R27600", "text": "Causality between energy consumption and output growth in the Indian cement industry: an application of the panel vector error correction model (VECM) The aim of this paper is to examine the existence and direction of the causal relationship between energy consumption and output growth in the Indian cement industry for the period 1979-80 to 2004-05. The most recently developed panel unit root, a heterogeneous panel cointegration and panel-based error correction model, is applied within a multivariate framework. The empirical results confirm a positive, long-run cointegrated relationship between output and energy consumption when heterogeneous state effects are taken into account. 
We also found a long-run, bi-directional relationship between energy consumption and output growth in the Indian cement industry for the study period, implying that an increase in energy consumption directly affects the growth of this sector and that growth stimulates further energy consumption. These empirical findings imply that energy consumption and output are jointly determined and affect each other. The empirical evidence also suggests the implementation of energy conservation policies oriented toward improving energy-use efficiency to avoid any negative impacts of the conservation policies on the growth of this industry." }, { "instance_id": "R27620xR27604", "comparison_id": "R27620", "paper_id": "R27604", "text": "CO2 emissions, energy consumption and economic growth in China: a panel data analysis This paper examines the causal relationships between carbon dioxide emissions, energy consumption and real economic output using panel cointegration and panel vector error correction modeling techniques based on the panel data for 28 provinces in China over the period 1995\u20132007. Our empirical results show that CO2 emissions, energy consumption and economic growth have appeared to be cointegrated. Moreover, there exists bidirectional causality between CO2 emissions and energy consumption, and also between energy consumption and economic growth. It has also been found that energy consumption and economic growth are the long-run causes for CO2 emissions and CO2 emissions and economic growth are the long-run causes for energy consumption. The results indicate that China's CO2 emissions will not decrease in a long period of time and reducing CO2 emissions may handicap China's economic growth to some degree. Some policy implications of the empirical results have finally been proposed." 
}, { "instance_id": "R27620xR27500", "comparison_id": "R27620", "paper_id": "R27500", "text": "Cointegration tests of energy consumption, income, and employment Abstract A recently developed methodology of the cointegration test is employed to determine whether energy consumption has a long-run equilibrium relationship with the level of income or employment. It is found that the long-run equilibrium relationship fails to exist in either case. The finding implies a long-run neutrality of energy consumption, which is consistent with the short-run neutrality found in the literature. The results are further confirmed by splitting the sample into two sub-periods." }, { "instance_id": "R27620xR27505", "comparison_id": "R27620", "paper_id": "R27505", "text": "An investigation of cointegration and causality between energy consumption and economic growth This paper reexamines the causality between energy consumption and economic growth with both bivariate and multivariate models by applying the recently developed methods of cointegration and Hsiao`s version of the Granger causality to transformed U.S. data for the period 1947-1990. The Phillips-Perron (PP) tests reveal that the original series are not stationary and, therefore, a first differencing is performed to secure stationarity. The study finds no causal linkages between energy consumption and economic growth. Energy and gross national product (GNP) each live a life of its own. The results of this article are consistent with some of the past studies that find no relationship between energy and GNP but are contrary to some other studies that find GNP unidirectionally causes energy consumption. Both the bivariate and trivariate models produce the similar results. We also find that there is no causal relationship between energy consumption and industrial production. The United States is basically a service-oriented economy and changes in energy consumption can cause little or no changes in GNP. 
In other words, an implementation of energy conservation policy may not impair economic growth." }, { "instance_id": "R27620xR27587", "comparison_id": "R27620", "paper_id": "R27587", "text": "Energy consumption and GDP in Tunisia: Cointegration and causality analysis In this paper, the Johansen cointegration technique is used to examine the causal relationship between per capita energy consumption (PCEC) and per capita gross domestic product (PCGDP) for Tunisia during the 1971-2004 period. In order to test for Granger causality in the presence of cointegration among the variables, a vector error correction model (VECM) is used instead of a vector autoregressive (VAR) model. Our estimation results indicate that the PCGDP and PCEC for Tunisia are related by one cointegrating vector and that there is a long-run bi-directional causal relationship between the two series and a short-run unidirectional causality from energy to gross domestic product (GDP). The source of causation in the long-run is found to be the error-correction terms in both directions. Hence, an important policy implication resulting from this analysis is that energy can be considered as a limiting factor to GDP growth in Tunisia. Conclusions for Tunisia may also be relevant for a number of countries that have to go through a similar development path of increasing pressure on already scarce energy resources." }, { "instance_id": "R27620xR27578", "comparison_id": "R27620", "paper_id": "R27578", "text": "The causal relationship between U.S. energy consumption and real output: A disaggregated analysis This study utilizes U.S. annual data from 1949 to 2006 to examine the causal relationship between energy consumption and real GDP using aggregate and sectoral primary energy consumption measures within a multivariate framework. The Toda-Yamamoto long-run causality tests reveal that the relationship between energy consumption and real GDP is not uniform across sectors.
Granger-causality is absent between total and transportation primary energy consumption and real GDP, respectively. Bidirectional Granger-causality is present between commercial and residential primary energy consumption and real GDP, respectively. Finally, the results indicate that industrial primary energy consumption Granger-causes real GDP. The results suggest that prudent energy and environmental policies should recognize the differences in the relationship between energy consumption and real GDP by sector." }, { "instance_id": "R27620xR27538", "comparison_id": "R27620", "paper_id": "R27538", "text": "Structural break, unit root, and the causality between energy consumption and GDP in Turkey Abstract This paper applies a series of unit root and causality tests to detect causality between the GDP and energy consumption in Turkey employing Hsiao's version of the Granger causality method for the 1950\u20132000 period. The conventional unit root tests indicate the series are I(1), whereas the endogenous break unit root tests proposed by Zivot and Andrews [Zivot, E. and Andrews, D.W.K., 1992, Further evidence on the great crash, the oil price shock, and the unit root hypothesis, Journal of Business and Economics Statistics 10, 251\u2013270.] and Perron [Perron, P., 1997, Further evidence on breaking trend functions in macroeconomic variables, Journal of Econometrics 80, 355\u2013385.] reveal that the series are trend stationary with a structural break. Therefore, it is inappropriate to take the first difference of the data to achieve stationarity. The main conclusion of this study is that there is no evidence of causality between energy consumption and GDP in Turkey based on the detrended data."
}, { "instance_id": "R27620xR27545", "comparison_id": "R27620", "paper_id": "R27545", "text": "Causality between energy consumption and economic growth in India: a note on conflicting results Abstract This note examines the different direction of causal relation between energy consumption and economic growth in India. Applying Engle\u2013Granger cointegration approach combined with the standard Granger causality test on Indian data for the period 1950\u20131996, we find that bi-directional causality exists between energy consumption and economic growth. Further, we apply Johansen multivariate cointegration technique on the different set of variables. The same direction of causality exists between energy consumption and economic growth. This is different from the results obtained in earlier studies." }, { "instance_id": "R27620xR27518", "comparison_id": "R27620", "paper_id": "R27518", "text": "Causality between energy consumption and economic growth in India: an application of cointegration and error-correction modeling Applying the Johansen cointegration test, this study finds that energy consumption, economic growth, capital and labour are cointegrated. However, this study detects no causality from energy consumption to economic growth using Hsiao's version of the Granger causality method with the aid of cointegration and error correction modelling. Interestingly, it is discerned that causality runs from economic growth to energy consumption both in the short run and in the long run and causality flows from capital to economic growth in the short run." }, { "instance_id": "R27620xR27572", "comparison_id": "R27620", "paper_id": "R27572", "text": "Economic development, pollutant emissions and energy con- sump- tion in Malaysia The objective of this paper is to examine the long-run relationship between output, pollutant emissions, and energy consumption in Malaysia during the period 1971-1999. 
To supplement the findings of the cointegration analysis, we assess the causal relationships between the variables using recent causality tests available in the literature. The results indicate that pollution and energy use are positively related to output in the long-run. We found strong support for causality running from economic growth to energy consumption growth, both in the short-run and long-run." }, { "instance_id": "R27620xR27511", "comparison_id": "R27620", "paper_id": "R27511", "text": "Cointegration, error correction and the relationship between GDP and energy: the case of South Korea and Singapore This paper examines the causality issue between energy consumption and GDP for South Korea and Singapore, with the aid of cointegration and error-correction modeling. Results of the cointegration and error-correction models indicate bidirectional causality between GDP and energy consumption for both South Korea and Singapore. However, results of the standard Granger causality tests show no causal relationship between GDP and energy consumption for South Korea and a unidirectional causal relationship from energy consumption to GDP for Singapore." }, { "instance_id": "R27620xR27618", "comparison_id": "R27620", "paper_id": "R27618", "text": "The causal relationship between energy consumption and economic growth in Lebanon This paper investigates the dynamic causal relationship between energy consumption and economic growth in Lebanon over the period 1980\u20132009. Within a bivariate framework, imposed on us due to data limitations, and in an effort to increase the robustness of our results, we employ a variety of causality tests, namely, Hsiao, Toda-Yamamoto, and vector error correction based Granger causality tests. We find strong evidence of a bidirectional relationship both in the short-run and in the long-run, indicating that energy is a limiting factor to economic growth in Lebanon.
From a policy perspective, the confirmation of the feedback hypothesis warns against the use of policy instruments geared towards restricting energy consumption, as these may lead to adverse effects on economic growth. Consequently, there is a pressing need to revise the current national energy policy that calls for a 5% energy conservation target. Also, to shield the country from external supply shocks, given its substantial dependence on energy imports, policymakers should emphasize the development of domestic energy resources. Further, the most pertinent implication is that relaxing the present electric capacity shortages should be made a national priority, in view of its potential positive effect on the economy." }, { "instance_id": "R27705xR27627", "comparison_id": "R27705", "paper_id": "R27627", "text": "Electricity consumption and economic growth in India Abstract This paper tries to examine the Granger causality between electricity consumption per capita and Gross Domestic Product (GDP) per capita for India using annual data covering the period 1950\u201351 to 1996\u201397. Phillips\u2013Perron tests reveal that both the series, after logarithmic transformation, are non-stationary and individually integrated of order one. This study finds the absence of long-run equilibrium relationship among the variables but there exists unidirectional Granger causality running from economic growth to electricity consumption without any feedback effect. So, electricity conservation policies can be initiated without deteriorating economic side effects." 
}, { "instance_id": "R27705xR27667", "comparison_id": "R27705", "paper_id": "R27667", "text": "Energy consumption and economic growth: evidence from China at both aggregated and disaggregated levels Using a neo-classical aggregate production model where capital, labor and energy are treated as separate inputs, this paper tests for the existence and direction of causality between output growth and energy use in China at both aggregated total energy and disaggregated levels as coal, oil and electricity consumption. Using the Johansen cointegration technique, the empirical findings indicate that there exists long-run cointegration among output, labor, capital and energy use in China at both aggregated and all three disaggregated levels. Then using a VEC specification, the short-run dynamics of the interested variables are tested, indicating that there exists Granger causality running from electricity and oil consumption to GDP, but does not exist Granger causality running from coal and total energy consumption to GDP. On the other hand, short-run Granger causality exists from GDP to total energy, coal and oil consumption, but does not exist from GDP to electricity consumption. We thus propose policy suggestions to solve the energy and sustainable development dilemma in China as: enhancing energy supply security and guaranteeing energy supply, especially in the short run to provide adequate electric power supply and set up national strategic oil reserve; enhancing energy efficiency to save energy; diversifying energy sources, energetically exploiting renewable energy and drawing out corresponding policies and measures; and finally in the long run, transforming development pattern and cut reliance on resource- and energy-dependent industries." 
}, { "instance_id": "R27705xR27614", "comparison_id": "R27705", "paper_id": "R27614", "text": "Energy consumption, carbon emissions and economic growth nexus in Bangladesh: cointegration and dynamic causality analysis The paper investigates the possible existence of dynamic causality between energy consumption, electricity consumption, carbon emissions and economic growth in Bangladesh. First, we have tested cointegration relationships using the Johansen bi-variate cointegration model. This is complemented with an analysis of an auto-regressive distributed lag model to examine the results' robustness. Then, the Granger short-run, the long-run and strong causality are tested with a vector error correction modelling framework. The results indicate that uni-directional causality exists from energy consumption to economic growth both in the short and the long-run while a bi-directional long-run causality exists between electricity consumption and economic growth but no causal relationship exists in short-run. The strong causality results indicate bi-directional causality for both the cases. A uni-directional causality runs from energy consumption to CO2 emission for the short-run but feedback causality exists in the long-run. CO2 Granger causes economic growth both in the short and in the long-run. An important policy implication is that energy (electricity as well) can be considered as an important factor for the economic growth in Bangladesh. Moreover, as higher energy consumption also means higher pollution in the long-run, policy makers should stimulate alternative energy sources for meeting up the increasing energy demand." 
}, { "instance_id": "R27705xR27630", "comparison_id": "R27705", "paper_id": "R27630", "text": "Cointegration and causality between electricity consumption and GDP: empirical evidence from Malawi Abstract The Granger-causality (GC) and error correction (ECM) techniques were applied on 1970\u20131999 data for Malawi to examine cointegration and causality between electricity consumption (kWh) and, respectively, overall GDP, agricultural-GDP (AGDP) and non-agricultural-GDP (NGDP). Cointegration was established between kWh and, respectively, GDP and NGDP, but not with AGDP. The GC results detect bi-directional causality between kWh and GDP suggesting that kWh and GDP are jointly determined, but one-way causality running from NGDP to kWh. The ECM results detect causality running one-way from GDP (also from NGDP) to kWh suggesting that a permanent rise in GDP may cause a permanent growth in electricity consumption." }, { "instance_id": "R27705xR27661", "comparison_id": "R27705", "paper_id": "R27661", "text": "Residential electricity demand dynamics in Turkey Abstract This article provides fresh empirical evidences for the income and price elasticies of the residential energy demand both in the short-run and long-run for Turkey over the period 1968\u20132005, using the bounds testing procedure to cointegration. The computed elasticities of income and price are consistent with the previous studies and, as expected, the long-run elasticities are greater than the short-run elasticities. An augmented form of Granger causality analysis is implemented among residential electricity, income, price and urbanization. In the long-run, causality runs interactively through the error-correction term from income, price and urbanization to residential energy but the short-run causality tests are inconclusive The parameter stability of the short-run as well as long-run coefficients in the residential energy demand function are tested. The results of these tests display a stable pattern." 
}, { "instance_id": "R27705xR27650", "comparison_id": "R27705", "paper_id": "R27650", "text": "An empirical analysis of electricity con- sumption in Cyprus The paper presents the first empirical analysis of electricity consumption in Cyprus. Using annual data from 1960 to 2004, we have examined electricity use in the residential and the services sectors, which are the fastest-growing electricity consumers in the island, and its interaction with income, prices and the weather. The analysis was performed with the aid of time series analysis techniques such as unit root tests with and without a structural break in levels, cointegration tests, Vector Error Correction models, Granger causality tests and impulse response functions. Results show long-term elasticities of electricity use above unity for income, and of the order of -0.3 to -0.4 for prices. In the short term electricity consumption is rather inelastic, mostly affected by weather fluctuations. Granger causality tests confirm exogeneity of electricity prices and bidirectional causality between residential electricity consumption and private income. The commercial sector is less elastic and reverts faster to equilibrium than the residential sector. Despite the relatively small sample size, results reported here are quite robust and can be used for forecasts and policy analyses." }, { "instance_id": "R27705xR27563", "comparison_id": "R27705", "paper_id": "R27563", "text": "A dynamic equilibrium of electricity consumption and GDP in Hong Kong: an empirical investigation Public debates on electricity policy in Hong Kong focus on the regulation regime but sel- dom discuss the macroeconomic impact. 
In this paper, we use the novel dataset on electricity consumption and report the following findings: (1) there is a long run equilibrium relationship between real GDP and electricity consumption, (2) a one-way causal effect exists from electricity consumption to real GDP, (3) a significant adjustment process occurs when equilibrium is interrupted, (4) there exists possible structural change in the relationship between electricity consumption and economic activities in the 1990s." }, { "instance_id": "R27705xR27703", "comparison_id": "R27705", "paper_id": "R27703", "text": "The dynamics of electricity consumption and economic growth: a revisit study of their causality in Pakistan This study revisits the relationship between electricity consumption and economic growth in Pakistan by controlling and investigating the effects of two major production factors \u2013 capital and labor. The empirical evidence confirms the cointegration among the variables and indicates that electricity consumption has a positive effect on economic growth. Moreover, bi-directional Granger causality between electricity consumption and economic growth has been found. The finding suggests that adoption of electricity conservation policies to conserve energy resources may unwittingly reduce economic growth, and the lower growth rate will in turn further decrease the demand for electricity. Therefore, governments contemplating such conservationist policies should instead explore and develop alternate sources of energy as a strategy rather than just increasing electricity production per se in order to meet the rising demand for electricity in their quest towards sustaining development in the country."
}, { "instance_id": "R27705xR27538", "comparison_id": "R27705", "paper_id": "R27538", "text": "Structural break, unit root, and the causality between energy consumption and GDP in Turkey Abstract This paper tries to investigate a series of unit root and causality tests to detect causality between the GDP and energy consumption in Turkey employing Hsiao's version of Granger causality method for the 1950\u20132000 period. The conventional unit root tests indicate the series are I(1), whereas the endogenous break unit root tests proposed by Zivot and Andrews [Zivot, E. and Andrews, D.W.K., 1992, Further evidence on the great crash, the oil price shock, and the unit root hypothesis, Journal of Business and Economics Statistics 10, 251\u2013270.] and Perron [Perron, P., 1997, Further evidence on breaking trend functions in macroeconomic variables, Journal of Econometrics 80, 355\u2013385.] reveal that the series are trend stationary with a structural break. Therefore, it is inappropriate to take the first difference of the data to achieve stationarity. The main conclusion of this study is that there is no evidence of causality between energy consumption and GDP in Turkey based on the detrended data." }, { "instance_id": "R27705xR27591", "comparison_id": "R27705", "paper_id": "R27591", "text": "Energy consumption and economic growth nexus in Tanza- nia: an ARDL bounds testing approach In this paper, we examine the intertemporal causal relationship between energy consumption and economic growth in Tanzania during the period of 1971-2006. Unlike the majority of the previous studies, we employ the newly developed autoregressive distributed lag (ARDL)-bounds testing approach by Pesaran et al. [2001. Bounds testing approaches to the analysis of level relationships. Journal of Applied Econometrics 16, 289-326] to examine this linkage. We also use two proxies of energy consumption, namely total energy consumption per capita and electricity consumption per capita. 
The results of the bounds test show that there is a stable long-run relationship between each of the proxies of energy consumption and economic growth. The results of the causality test, on the other hand, show that there is a unidirectional causal flow from total energy consumption to economic growth and a prima-facie causal flow from electricity consumption to economic growth. Overall, the study finds that energy consumption spurs economic growth in Tanzania." }, { "instance_id": "R27705xR27701", "comparison_id": "R27705", "paper_id": "R27701", "text": "Electricity consumption and economic growth empirical evidence from Pakistan The present article uses the Autoregressive Distributed Lag (ARDL) bounds testing procedure to identify the long run equilibrium relationship between electricity consumption and economic growth. Toda Yamamoto and Wald-test causality tests have identified the direction of the causal relationship between these two variables in the case of Pakistan in the period between 1971 and 2008. Ng-Perron and Clement-Montanes-Reyes unit root tests are used to handle the problem of integrating orders for variables. The results suggest that the two variables are in a long run equilibrium relationship and economic growth leads to electricity consumption and not vice versa." }, { "instance_id": "R27705xR27638", "comparison_id": "R27705", "paper_id": "R27638", "text": "Electricity consumption and economic growth: evidence from Korea The paper investigates the relationship between electricity consumption and economic growth in Poland for the period 2000 to 2012. Understanding the behavior of electricity consumption in relation to the economy is very important for improving stable economic growth and development. The obtained results indicate that there is a causal relationship between electricity consumption and economic growth in Poland and the relationship is bi-directional.
We also discovered bi-directional causality between capital and economic growth. On the basis of the causality results we estimated a one-sector aggregate production function, where electricity consumption was one of the input variables. The evaluated growth model showed that electricity consumption is a pro-growth variable, so the results indicate that the economic growth of Poland is electricity-dependent. That allows us to state that electricity is a limiting factor to the economic growth of Poland." }, { "instance_id": "R27705xR27593", "comparison_id": "R27705", "paper_id": "R27593", "text": "Energy consumption, carbon emissions, and economic growth in China This paper investigates the existence and direction of Granger causality between economic growth, energy consumption, and carbon emissions in China, applying a multivariate model of economic growth, energy use, carbon emissions, capital and urban population. Empirical results for China over the period 1960-2007 suggest a unidirectional Granger causality running from GDP to energy consumption, and a unidirectional Granger causality running from energy consumption to carbon emissions in the long run. Evidence shows that neither carbon emissions nor energy consumption leads economic growth. Therefore, the government of China can pursue a conservative energy policy and carbon emissions reduction policy in the long run without impeding economic growth." }, { "instance_id": "R27705xR27632", "comparison_id": "R27705", "paper_id": "R27632", "text": "Electricity consumption and economic growth in China Abstract This paper applies the error-correction model to examine the causal relationship between electricity consumption and real GDP for China during 1971\u20132000. Our estimation results indicate that real GDP and electricity consumption for China are cointegrated and there is unidirectional Granger causality running from electricity consumption to real GDP but not vice versa.
In order to overcome the constraints on electricity consumption, the Chinese government has to speed up the nation-wide interconnection of power networks, to upgrade urban and rural distribution grids, and to accelerate rural electrification." }, { "instance_id": "R27705xR27694", "comparison_id": "R27705", "paper_id": "R27694", "text": "The importance of electrical energy for economic growth in Barbados Using a neo-classical aggregate production model where capital, labour, technology, and energy are treated as separate inputs, this paper tests for the existence and direction of causality between output growth and electrical energy use in Barbados, analysed as a whole and in sectors respectively. Results indicate the presence of a long-run relationship between growth and electricity consumption; specifically we find that the non-residential sector is a key driver of growth. In addition, the evidence reveals a bidirectional causal relationship between electrical energy consumption and real GDP in the long run, but only a unidirectional causal relationship from energy to output in the short run. Forecasts indicate increasing consumption of electrical energy, particularly by the residential sector. We suggest that plans by the Government to liberalise the sector should encourage efficiency and innovation in production and distribution which should result in lower prices, as independent suppliers compete to maintain their market shares. Changes in the regulatory environment will also be necessary if such plans materialise. Policymakers will need to pay greater attention to the expected increase in the rate of consumption by the residential sector, as this will help to reduce the imports of oil and depletion of scarce foreign exchange resources by a sector that does not spur economic growth. 
An increase in energy capacity should be encouraged as contingency planning in the event of a technical or political disruption to fuel imports will be critical, notwithstanding the drive to use more renewable sources of energy." }, { "instance_id": "R27705xR27642", "comparison_id": "R27705", "paper_id": "R27642", "text": "Electricity consumption, employment and real income in Australia evidence from multivariate Granger causality tests This paper examines the relationship between electricity consumption, employment and real income in Australia within a cointegration and causality framework. We find that electricity consumption, employment and real income are cointegrated and that in the long-run employment and real income Granger cause electricity consumption, while in the short run there is weak unidirectional Granger causality running from income to electricity consumption and from income to employment." }, { "instance_id": "R27705xR27625", "comparison_id": "R27705", "paper_id": "R27625", "text": "A note on the causal relationship between energy and GDP in Taiwan Abstract This paper re-examines the causality between energy consumption and GDP by using updated Taiwan data for the period 1954\u20131997. As a secondary contribution, we investigate the causal relationship between GDP and the aggregate as well as several disaggregate categories of energy consumption, including coal, oil, natural gas, and electricity. Applying Granger\u2019s technique, we find bidirectional causality between total energy consumption and GDP. We find further that different directions of cause exist between GDP and various kinds of energy consumption." }, { "instance_id": "R27705xR27685", "comparison_id": "R27705", "paper_id": "R27685", "text": "Structural breaks, electricity consumption and economic growth: evidence from Turkey This study examines the causal relationship between electricity consumption and economic growth for Ghana during the period 1970 to 2010. 
The study employed unit root and cointegration tests taking into account structural breaks. The following findings were made: First, a plot of the series indicated a trend pattern. The series also experienced structural breaks in 1979 and 1983, but after taking structural breaks into account they became stationary. Second, the series exhibited one cointegration vector, implying a long-run relationship between them. Third, the results revealed the presence of unidirectional Granger causality running from economic growth to electricity consumption. In general, the study identified the presence of structural break dates which corresponded with the critical economic events in Ghana. Key words: Electricity consumption, economic growth, structural break, vector error correction model (VECM), causality." }, { "instance_id": "R27705xR27688", "comparison_id": "R27705", "paper_id": "R27688", "text": "Electricity consumption\u2013growth nexus: The case of Malaysia The goal of this paper is to model the relationship between electricity consumption and real gross domestic product (GDP) for Malaysia in a bivariate and multivariate framework. We use time series data for the period 1971-2003 and apply the bounds testing approach to search for a long-run relationship. Our results reveal that electricity consumption, real GDP and price share a long-run relationship. The results of the autoregressive distributed lag (ARDL) estimates of long-run elasticity of electricity consumption on GDP are found to be around 0.7 and statistically significant. Finally, in the short-run, the results of the causality test show that there is a unidirectional causal flow from electricity consumption to economic growth in Malaysia. From these findings we conclude that Malaysia is an energy-dependent country, leading us to draw some policy implications. This paper adds support and validity, thus reducing the policy makers' concern about the ambiguity of the electricity and growth nexus in Malaysia." 
}, { "instance_id": "R27705xR27622", "comparison_id": "R27705", "paper_id": "R27622", "text": "Electricity consumption and economic growth in Jamaica Abstract This study examines the relationship between electricity consumption and economic growth in Jamaica during 1970\u201386, a period of rapid increase in energy prices. The results show that the aggregate demand for electricity is slightly income elastic, electricity has a significant impact on economic growth, the electricity intensity has increased over time, residential demand is fairly income elastic, commercial demand is price inelastic, and the rate of adjustment is slow. These results suggest that conservation policies could be ineffective. Therefore, indigenous sources of electricity are important for Jamaica to be less dependent on imported energy." }, { "instance_id": "R27705xR27646", "comparison_id": "R27705", "paper_id": "R27646", "text": "Electricity generation and economic growth in Indonesia To cope with the increasing electricity demand and to overcome the supply shortage of electricity, it is imminent that investments be made on the electricity generation sector on a large scale in Indonesia. This paper attempts to investigate the causal relationship between electricity generation and economic growth in Indonesia, using time-series techniques for the period of 1971\u20132002. The results indicate that there is a uni-directional causality running from economic growth to electricity generation without any feedback effect. Thus, economic growth stimulates further electricity generation, and policies for reducing electricity generation can be initiated without deteriorating economic side effects in Indonesia." 
}, { "instance_id": "R27705xR27682", "comparison_id": "R27705", "paper_id": "R27682", "text": "Electricity consumption, income, foreign direct investment, and population in Malaysia: new evidence from multivariate framework analysis Purpose - This study attempts to re-investigate the electricity consumption function for Malaysia through the cointegration and causality analyses over the period 1970 to 2005. Design/methodology/approach - The study employed the bounds-testing procedure for cointegration to examine the potential long-run relationship, while an autoregressive distributed lag model is used to derive the short- and long-run coefficients. The Granger causality test is applied to determine the causality direction between electricity consumption and its determinants. Findings - New evidence is found in this study: first, electricity consumption, income, foreign direct investment, and population in Malaysia are cointegrated. Second, the influx of foreign direct investment and population growth are positively related to electricity consumption in Malaysia and the Granger causality evidence indicates that electricity consumption, income, and foreign direct investment are of bilateral causality. Originality/value - The estimated multivariate electricity consumption function for Malaysia implies that Malaysia is an energy-dependent country; thus energy-saving policies may have an inverse effect on current and also future economic development in Malaysia." }, { "instance_id": "R27705xR27656", "comparison_id": "R27705", "paper_id": "R27656", "text": "Electricity consumption and economic growth in China: cointegration and co-feature analysis Abstract This paper applies the cointegration theory to examine the causal relationship between electricity consumption and real GDP (Gross Demostic Product) for China during 1978\u20132004. 
Our estimation results indicate that real GDP and electricity consumption for China are cointegrated and there is only unidirectional Granger causality running from electricity consumption to real GDP but not vice versa. Then the Hodrick\u2013Prescott (HP) filter is applied to decompose the trend and fluctuation components of the GDP and electricity consumption series. The estimation results indicate that there is cointegration between not only the trend components, but also the cyclical components of the two series, which implies that the Granger causality is probably related to the business cycle. The estimation results have policy implications for the development of the electric sector in China." }, { "instance_id": "R27835xR27743", "comparison_id": "R27835", "paper_id": "R27743", "text": "Experimental Validation of the Learning Effect for a Pedagogical Game on Computer Fundamentals The question/answer-based computer game Age of Computers was introduced to replace traditional weekly paper exercises in a course in computer fundamentals in 2003. Questionnaire evaluations and observation of student behavior have indicated that the students found the game more motivating than paper exercises and that a majority of the students also perceived the game to have a higher learning effect than paper exercises or textbook reading. This paper reports on a controlled experiment to compare the learning effectiveness of game play with traditional paper exercises, as well as with textbook reading. The results indicated that with equal time being spent on the various learning activities, the effect of game play was only equal to that of the other activities, not better. Yet this result is promising enough, as the increased motivation means that students work harder in the course. Also, the results indicate that the game has potential for improvement, in particular with respect to its feedback on the more complicated questions." 
}, { "instance_id": "R27835xR27826", "comparison_id": "R27835", "paper_id": "R27826", "text": "Empirical evaluation of an educational game on software measurement Software measurement is considered important in improving the software process. However, teaching software measurement remains a challenging issue. Although, games and simulations are regarded powerful tools for learning, their learning effectiveness is not rigorously established. This paper describes the results of an explorative study to investigate the learning effectiveness of a game prototype on software measurement in order to make an initial judgment about its potential as an educational tool as well as to analyze its appropriateness, engagement and strengths & weaknesses as guidance for further evolution. Within the study, a series of experiments was conducted in parallel in three master courses in Brazil. Results of the study reveal that the participants consider the content and structure of the game appropriate, but no indication for a significant difference on learning effectiveness could be shown." }, { "instance_id": "R27835xR27819", "comparison_id": "R27835", "paper_id": "R27819", "text": "Collaborative Game\u2010play as a Site for Participation and Situated Learning of a Second Language This paper addresses additional language learning as rooted in participation in the social activity of collaborative game\u2010play. Building on a social\u2010interactional view of learning, it analyses some of the detailed practices through which players attend to a video game as the material and semiotic structure that shapes play and creates affordances for additional language learning. We describe how players engage with the language resources offered by the game, drawing on the vocabulary, constructions, prosodic features and utterances modelled on game dialogue, in building their own actions during collaborative play. 
With these resources, the players display their ongoing engagement with the game as well as their competences in recognising, reproducing and creatively reshaping the available linguistic resources in their own activities." }, { "instance_id": "R27835xR27735", "comparison_id": "R27835", "paper_id": "R27735", "text": "A video game improves behavioral outcomes in adolescents and young adults with cancer: A randomized trial OBJECTIVE. Suboptimal adherence to self-administered medications is a common problem. The purpose of this study was to determine the effectiveness of a video-game intervention for improving adherence and other behavioral outcomes for adolescents and young adults with malignancies including acute leukemia, lymphoma, and soft-tissue sarcoma. METHODS. A randomized trial with baseline and 1- and 3-month assessments was conducted from 2004 to 2005 at 34 medical centers in the United States, Canada, and Australia. A total of 375 male and female patients who were 13 to 29 years old, had an initial or relapse diagnosis of a malignancy, and currently undergoing treatment and expected to continue treatment for at least 4 months from baseline assessment were randomly assigned to the intervention or control group. The intervention was a video game that addressed issues of cancer treatment and care for teenagers and young adults. Outcome measures included adherence, self-efficacy, knowledge, control, stress, and quality of life. For patients who were prescribed prophylactic antibiotics, adherence to trimethoprim-sulfamethoxazole was tracked by electronic pill-monitoring devices (n = 200). Adherence to 6-mercaptopurine was assessed through serum metabolite assays (n = 54). RESULTS. Adherence to trimethoprim-sulfamethoxazole and 6-mercaptopurine was greater in the intervention group. Self-efficacy and knowledge also increased in the intervention group compared with the control group. 
The intervention did not affect self-report measures of adherence, stress, control, or quality of life. CONCLUSIONS. The video-game intervention significantly improved treatment adherence and indicators of cancer-related self-efficacy and knowledge in adolescents and young adults who were undergoing cancer therapy. The findings support current efforts to develop effective video-game interventions for education and training in health care." }, { "instance_id": "R27835xR27790", "comparison_id": "R27835", "paper_id": "R27790", "text": "\u201cNew Directions for Traditional Lessons\u201d: Can Handheld Game Consoles Enhance Mental Mathematics Skills? This paper reports on a pilot study that compared the use of commercial off-the-shelf (COTS) handheld game consoles (HGCs) with traditional teaching methods to develop the automaticity of mathematical calculations and self-concept towards mathematics for year 4 students in two metropolitan schools. One class conducted daily sessions using the HGCs and Dr Kawashima\u2019s Brain Training software to enhance their mental maths skills while the comparison class engaged in mental maths lessons using more traditional classroom approaches. Students were assessed using standardised tests at the beginning and completion of the term, and findings indicated that students who undertook the Brain Training pilot study using the HGCs showed significant improvement in both the speed and accuracy of their mathematical calculations and self-concept compared to students in the control school. An exploration of the intervention, discussion of methodology and the implications of the use of HGCs in the primary classroom are presented." 
}, { "instance_id": "R27835xR27783", "comparison_id": "R27835", "paper_id": "R27783", "text": "Computer games application within alternative classroom goal structures: cognitive, metacognitive, and affective evaluation This article reports findings on a study of educational computer games used within various classroom situations. Employing an across-stage, mixed method model, the study examined whether educational computer games, in comparison to traditional paper-and-pencil drills, would be more effective in facilitating comprehensive math learning outcomes, and whether alternative classroom goal structures would enhance or reduce the effects of computer games. The findings indicated that computer games, compared with paper-and-pencil drills, were significantly more effective in promoting learning motivation but not significantly different in facilitating cognitive math test performance and metacognitive awareness. Additionally, this study established that alternative classroom goal structures mediated the effects of computer games on mathematical learning outcomes. Cooperative goal structure, as opposed to competitive and individualistic structures, significantly enhanced the effects of computer games on attitudes toward math learning." }, { "instance_id": "R27835xR27815", "comparison_id": "R27835", "paper_id": "R27815", "text": "Idea Storming Cube: A Game-based System to Support Creative Thinking This paper describes a collaborative game-based creativity support system, idea storming cube, in support of creative thinking. It aims to make people form a creative and perspective-shift thinking habit. The system acquires knowledge from domain expert, user inputs history, and individuals in the current brainstorming group, and then provides user-, goal- and context-sensitive supports. Comparing to classic tutoring systems, it focuses more on stimulating divergent thinking. 
The system can be put into the basic mode or the idea generation mode in order to support different gaming objectives. A case study for preliminary evaluation of the proposed HCI tool for collaborative idea generation is also reported in this paper." }, { "instance_id": "R27835xR27788", "comparison_id": "R27835", "paper_id": "R27788", "text": "My-Mini-Pet: a handheld pet-nurturing game to engage students in arithmetic practices In the last decade, more and more games have been developed for handheld devices. Furthermore, the popularity of handheld devices and increase of wireless computing can be taken advantage of to provide students with more learning opportunities. Games also could bring promising benefits \u2010 specifically, motivating students to learn/play, sustaining their interest, reflecting their learning/playing status, and facilitating the learning/playing progress. However, most of these have been designed for entertainment rather than education. Hence, in this study we incorporate game elements into a learning environment. The My-Mini-Pet system is a handheld pet-nurturing game environment, in which students learn with an animal learning companion, their My-Mini-Pet. Three design strategies are adopted. First, the pet-nurturing strategy, which simulates the relationship between the pet and its owner, the My-Mini-Pet becomes a motivator/sustainer of learning. Second, the pet appearance-changing strategy, which externalizes the learning status of the student. In other words, the My-Mini-Pet plays the role of a reflector. Third, the pet feedback strategy, which links the behaviours of the student and his/her pet, the My-Mini-Pet acts as a facilitator of learning. A pilot study was also conducted to preliminarily investigate the effectiveness and experiences of the strategies on allowing the student to understand arithmetic practices. The results showed that the strategy was effective, encouraging the students to engage in learning activities. 
Furthermore, the game attracted the students\u2019 attention and stimulated discussion between peers. Some implications about further developments are also discussed." }, { "instance_id": "R27835xR27730", "comparison_id": "R27835", "paper_id": "R27730", "text": "Using ubiquitous games in an English listening and speaking course: Impact on learning outcomes and motivation This paper reports the results of a study which aimed to investigate how ubiquitous games influence English learning achievement and motivation through a context-aware ubiquitous learning environment. An English curriculum was conducted on a school campus by using a context-aware ubiquitous learning environment called the Handheld English Language Learning Organization (HELLO). HELLO helps students to engage in learning activities based on the ARCS motivation theory, involving various educational strategies, including ubiquitous game-based learning, collaborative learning, and context-aware learning. Two groups of students participated in the learning activities prescribed in a curriculum by separately using ubiquitous game-based learning and non-gaming learning. The curriculum, entitled 'My Campus', included three learning activities, namely 'Campus Environment', 'Campus Life' and 'Campus Story'. Participants included high school teachers and juniors. During the experiment, tests, a survey, and interviews were conducted for the students. The evaluation results of the learning outcomes and learning motivation demonstrated that incorporating ubiquitous games into the English learning process could achieve better learning outcomes and motivation than using a non-gaming method. They further revealed a positive relationship between learning outcomes and motivation." 
}, { "instance_id": "R27835xR27792", "comparison_id": "R27835", "paper_id": "R27792", "text": "A Study on Exploiting Commercial Digital Games into School Context Digital game-based learning is a research field within the context of technology-enhanced learning that has attracted significant research interest. Commercial off-the-shelf digital games have the potential to provide concrete learning experiences and allow for drawing links between abstract concepts and real-world situations. The aim of this paper is to provide evidence for the effect of a general-purpose commercial digital game (namely, the \u201cSims 2-Open for Business\u201d) on the achievement of standard curriculum Mathematics educational objectives as well as general educational objectives as defined by standard taxonomies. Furthermore, students\u2019 opinions about their participation in the proposed game-supported educational scenario and potential changes in their attitudes toward math teaching and learning in junior high school are investigated. The results of the conducted research showed that: (i) students engaged in the game-supported educational activities achieved the same results with those who did not, with regard to the subject matter educational objectives, (ii) digital gamesupported educational activities resulted in better achievement of the general educational objectives, and (iii) no significant differences were observed with regard to students\u2019 attitudes towards math teaching and learning." }, { "instance_id": "R27835xR27822", "comparison_id": "R27835", "paper_id": "R27822", "text": "Building virtual cities, inspiring intelligent citizens: Digital games for developing students\u2019 problem solving and learning motivation This study investigates the effectiveness digital game-based learning (DGBL) on students' problem solving, learning motivation, and academic achievement. 
In order to provide substantive empirical evidence, a quasi-experimental design was implemented over the course of a full semester (23 weeks). Two ninth-grade Civics and Society classes, with a total of 44 students (15-16 years old), were randomly assigned to one of two conditions: an experimental group (incorporating DGBL) and a comparison group (taught using traditional instruction). Two-way mixed ANOVA was employed to evaluate changes in problem solving ability and compare the effectiveness of the two strategies, while ANCOVA was used to analyze the effects on learning motivation and academic achievement. The results of this study are summarized as follows: (1) The DGBL strategy was clearly effective in promoting students' problem solving skills, while the control group showed no improvement. Additionally, data from the mid-test and post-test demonstrate that, as a higher order thinking skill, problem-solving requires a full semester to develop. (2) DGBL resulted in better learning motivation for students in the experimental group as compared to learners receiving TI. (3) Contrary to some suggestions that digital games could inhibit academic achievement, no statistically significant difference was found between the two groups. Most importantly, the quantitative improvement in problem-solving and learning motivation suggests that DGBL can be exploited as a useful and productive tool to support students in effective learning while enhancing the classroom atmosphere. Future research in DGBL should emphasize the evaluation of other higher order elements of the cognitive domain in terms of academic achievement outcomes and skills, such as critical and creative thinking." 
}, { "instance_id": "R27835xR27797", "comparison_id": "R27835", "paper_id": "R27797", "text": "Beyond Nintendo: design and assessment of educational video games for first and second grade students The main objective of this study was to evaluate the effects of the introduction of educational videogames into the classroom, on learning, motivation, and classroom dynamics. These effects were studied using a sample of 1274 students from economically disadvantaged schools in Chile. The videogames were specifically designed to address the educational goals of the first and second years of school, for basic mathematics and reading comprehension. The sample was divided into experimental groups (EG), internal control groups (IC) and external control groups (EC). Students in the EG groups, used the experimental video games during an average of 30 h over a 3-month period. They were evaluated on their acquisition of reading comprehension, spelling, and mathematical skills, and on their motivation to use video games. Teachers' expectations of change due to the use of video games, their technological transfer, and handling of classroom dynamics, were assessed through ad hoc tests and classroom observations. The results show significant differences between the EG and IC groups in relation to the EC group in Math, Reading Comprehension and Spelling, but no significant differences in these aspects were found between the EG and the IC groups. Teacher reports and classroom observations confirm an improvement in motivation to learn, and a positive technological transfer of the experimental tool. Although further studies regarding the effects of learning through videogame use are imperative, positive effects on motivation and classroom dynamics, indicate that the introduction of educational video games can be a useful tool in promoting learning within the classroom." 
}, { "instance_id": "R27835xR27769", "comparison_id": "R27835", "paper_id": "R27769", "text": "An alternate reality game for language learning: ARGuing for multilingual motivation Over the last decade, Alternate Reality Games (ARGs), a form of narrative often involving multiple media and gaming elements to tell a story that might be affected by participants' actions, have been used in the marketing and promotion of a number of entertainment related products such as films, computer games and music. This paper discusses the design, development and evaluation of an ARG aimed at increasing the motivations of secondary school level students across Europe in the learning of modern foreign languages. The ARG was developed and implemented as part of a European Commission Comenius project and involved 6 project partners, 328 secondary school students and 95 language teachers from 17 European countries. The collaborative nature of ARGs provides a potentially useful vehicle for developing collaborative activities within an educational context. This paper describes the educational value of ARGs, in particular the ARG for supporting the teaching of modern European languages and the specific activities that were developed around Web 2.0 and gaming that underpinned the ARG and helped promote cooperation and learning within an educational environment. An evaluation of the ARG was conducted using an experimental design of pre-test -> ARG intervention -> post-test. 105 students completed the pre-test, 92 students completed the post-test and 45 students completed both the pre-test and post-test questionnaires. In general, student attitudes towards the ARG were very positive with evidence suggesting that the ARG managed to deliver the motivational experience expected by the students. The majority of students who completed the post-test either agreed or strongly agreed that they would be willing to play the game over a prolonged period of time as part of a foreign language course. 
In addition, through using the ARG, students believed that they obtained skills relating to cooperation, collaboration and teamwork." }, { "instance_id": "R27835xR27795", "comparison_id": "R27835", "paper_id": "R27795", "text": "Designing multimedia games for young children\u2019s taxonomic concept development This study aimed to design and evaluate multimedia games which were based on the theories of children's development of taxonomic concepts. Factors that might affect children's classification skills, such as use of single physical characteristics of objects, competition between thematic and taxonomic relationships, difficulty in forming hierarchical categories, were identified. Several strategies for overcoming the above disadvantages, such as verbal hints, linguistic labeling, exemplar comparison, and explicit statements were implemented in the Software for Rebuilding Taxonomy (SoRT) for improving children's taxonomic concept learning. Sixty children, aged 4 and 5, participated in the evaluation of SoRT. The results showed that the SoRT was helpful to improve children's distinction between thematic and taxonomic relationships and their learning of hierarchical taxonomic concepts." }, { "instance_id": "R27835xR27833", "comparison_id": "R27835", "paper_id": "R27833", "text": "Learning blood management in orthopedic surgery through gameplay Orthopedic surgery treats the musculoskeletal system, in which bleeding is common and can be fatal. To help train future surgeons in this complex practice, researchers designed and implemented a serious game for learning orthopedic surgery. The game focuses on teaching trainees blood management skills, which are critical for safe operations. Using state-of-the-art graphics technologies, the game provides an interactive and realistic virtual environment. It also integrates game elements, including task-oriented and time-attack scenarios, bonuses, game levels, and performance evaluation tools. 
To study the system's effect, the researchers conducted experiments on player completion time and off-target contacts to test their learning of psychomotor skills in blood management." }, { "instance_id": "R27835xR27777", "comparison_id": "R27835", "paper_id": "R27777", "text": "Computer games for the math achievement of diverse students Introduction As a way to improve student academic performance, educators have begun paying special attention to computer games (Gee, 2005; Oblinger, 2006). Reflecting the interests of the educators, studies have been conducted to explore the effects of computer games on student achievement. However, there has been no consensus on the effects of computer games: Some studies support computer games as educational resources to promote students' learning (Annetta, Mangrum, Holmes, Collazo, & Cheng, 2009; Vogel et al., 2006). Other studies have found no significant effects on the students' performance in school, especially in math achievement of elementary school students (Ke, 2008). Researchers have also been interested in the differential effects of computer games between gender groups. While several studies have reported various gender differences in the preferences of computer games (Agosto, 2004; Kinzie & Joseph, 2008), a few studies have indicated no significant differential effect of computer games between genders and asserted generic benefits for both genders (Vogel et al., 2006). To date, the studies examining computer games and gender interaction are far from conclusive. Moreover, there is a lack of empirical studies examining the differential effects of computer games on the academic performance of diverse learners. These learners included linguistic minority students who speak languages other than English. Recent trends in the K-12 population feature the increasing enrollment of linguistic minority students, whose population reached almost four million (NCES, 2004). 
These students have been a grave concern for American educators because of their reported low performance. In response, this study empirically examined the effects of math computer games on the math performance of 4th-graders with focused attention on differential effects for gender and linguistic groups. To achieve greater generalizability of the study findings, the study utilized a US nationally representative database--the 2005 National Assessment of Educational Progress (NAEP). The following research questions guided the current study: 1. Are computer games in math classes associated with the 4th-grade students' math performance? 2. How does the relationship differ by linguistic group? 3. How does the association vary by gender? 4. Is there an interaction effect of computer games on linguistic and gender groups? In other words, how does the effect of computer games on linguistic groups vary by gender group? Literature review Academic performance and computer games According to DeBell and Chapman (2004), of 58,273,000 students of nursery and K-12 school age in the USA, 56% of students played computer games. Along with the popularity among students, computer games have received a lot of attention from educators as a potential way to provide learners with effective and fun learning environments (Oblinger, 2006). Gee (2005) agreed that a game would turn out to be good for learning when the game is built to incorporate learning principles. Some researchers have also supported the potential of games for affective domains of learning and fostering a positive attitude towards learning (Ke, 2008; Ke & Grabowski, 2007; Vogel et al., 2006). For example, based on the study conducted on 1,274 1st- and 2nd-graders, Rosas et al. (2003) found a positive effect of educational games on the motivation of students. 
Although there is overall support for the idea that games have a positive effect on affective aspects of learning, there have been mixed research results regarding the role of games in promoting cognitive gains and academic achievement. In their meta-analysis, Vogel et al. (2006) examined 32 empirical studies and concluded that the inclusion of games for students' learning resulted in significantly higher cognitive gains compared with traditional teaching methods without games. \u2026" }, { "instance_id": "R27835xR27810", "comparison_id": "R27835", "paper_id": "R27810", "text": "Developing and evaluating dialogue games for collaborative e-learning This paper argues that developments in collaborative e-learning dialogue should be based on pedagogically sound principles of discourse, and therefore, by implication, there is a need to develop methodologies which transpose \u2014 typically informal \u2014 models of educational dialogue into cognitive tools that are suitable for students. A methodology of 'investigation by design' is described which has been used to design computer-based dialogue games supporting conceptual change and development in science \u2014 based on the findings of empirical studies. An evaluation of two dialogue games for collaborative interaction, a facilitating game and an elicit-inform game, has shown that they produce significant improvements in students' conceptual understanding, and they are differentially successful \u2014 depending on the nature of the conceptual difficulties experienced by the learners. The implications this study has for the role of collaborative dialogue in learning and designing computer-based and computer-mediated collaborative interaction are discussed." 
}, { "instance_id": "R27835xR27806", "comparison_id": "R27835", "paper_id": "R27806", "text": "Web-based quiz-game-like formative assessment: Development and evaluation This research aims to develop a multiple-choice Web-based quiz-game-like formative assessment system, named GAM-WATA. The unique design of 'Ask-Hint Strategy' turns the Web-based formative assessment into an online quiz game. 'Ask-Hint Strategy' is composed of 'Prune Strategy' and 'Call-in Strategy'. 'Prune Strategy' removes one incorrect option and turns the original 4-option item into a 3-option one. 'Call-in Strategy' provides the rate at which other test takers choose each option when answering a question. This research also compares the effectiveness of three different types of formative assessment in an e-Learning environment: paper-and-pencil test (PPT), normal Web-based test (N-WBT) and GAM-WATA. In total, 165 fifth grade elementary students (from six classes) in central Taiwan participated in this research. The six classes of students were then divided into three groups and each group was randomly assigned one type of formative assessment. Overall results indicate that different types of formative assessment have significant impacts on e-Learning effectiveness and that the e-Learning effectiveness of the students in the GAM-WATA group appears to be better. Students in the GAM-WATA group more actively participate in Web-based formative assessment to do self-assessment than students in the N-WBT group. The effectiveness of formative assessment will not be significantly improved only by replacing the paper-and-pencil test with Web-based test. The strategies included in GAM-WATA are recommended to be taken into consideration when researchers design Web-based formative assessment systems in the future." 
}, { "instance_id": "R27835xR27786", "comparison_id": "R27835", "paper_id": "R27786", "text": "A computer card game for the learning of basic aspects of the binary system in primary education: Design and pilot evaluation This paper presents the design, features and pilot evaluation study of a computer card game for the learning of basic aspects of the binary system (BS) by primary level education pupils. This design was based on modern social and constructivist theories of learning, in combination with basic game design principles. Pupils are asked to play against the computer with cards featuring Binary Numbers (BNs). To engage successfully with the game, pupils are provided with opportunities to review their previous knowledge of the decimal system and, subsequently, to use analogical reasoning to make connections between this knowledge and basic aspects of the BS. Several scaffolding elements are also provided for the pupils to construct, verify, extend and generalize their knowledge, at the same time using essential learning competencies. The game was piloted in the field using real pupils (20 6th Grade pupils) with encouraging results. Finally, an attempt has been made to address essential points of this game that have contributed to its becoming a successful learning environment. Addressing these points could be useful for both designers of educational computer games for Computer Science (CS) education and educators in Computing." }, { "instance_id": "R27835xR27808", "comparison_id": "R27835", "paper_id": "R27808", "text": "The application of an occupational therapy nutrition education programme for children who are obese The aim of this study was to evaluate an occupational therapy nutrition education programme for children who are obese with the use of two interactive games. A quasi-experimental study was carried out at a municipal school in Fortaleza, Brazil. A convenient sample of 200 children ages 8-10 years old participated in the study. 
Data collection comprised a semi-structured interview, direct and structured observation, and focus group, comparing two interactive games based on the food pyramid (video game and board game) used individually and then combined. Both play activities were efficient in the mediation of nutritional concepts, with a preference for the board game. In the learning strategies, intrinsic motivation and metacognition were analysed. The attention strategy was most applied at the video game. We concluded that both games promoted the learning of nutritional concepts. We confirmed the effectiveness of the simultaneous application of interactive games in an interdisciplinary health environment. It is recommended that a larger sample should be used in evaluating the effectiveness of play and video games in teaching healthy nutrition to children in a school setting." }, { "instance_id": "R27835xR27761", "comparison_id": "R27835", "paper_id": "R27761", "text": "Blending video games with learning: Issues and challenges with classroom implementations in the Turkish context The research design for this study focuses on examining the core issues and challenges when video games are used in the classroom. For this purpose three naturalistic contexts in Turkey were examined in which educational video games were used as the basis for teaching units on world continents and countries, first aid, and basic computer hardware and peripherals, in primary, secondary and higher education contexts respectively. Methods employed in the data collection include observing lessons, taking field notes, interviewing students and teachers, saving online discourse data, and collecting student artifacts and reflections. Findings identified issues related to (1) the design of the video game environment, (2) school infrastructure, (3) the nature of learning, the role of the teacher and classroom culture, and (4) engagement." 
}, { "instance_id": "R27835xR27774", "comparison_id": "R27835", "paper_id": "R27774", "text": "Deal or No Deal: using games to improve student learning, retention and decision-making Student understanding and retention can be enhanced and improved by providing alternative learning activities and environments. Education theory recognizes the value of incorporating alternative activities (games, exercises and simulations) to stimulate student interest in the educational environment, enhance transfer of knowledge and improve learned retention with meaningful repetition. In this case study, we investigate using an online version of the television game show, \u2018Deal or No Deal\u2019, to enhance student understanding and retention by playing the game to learn expected value in an introductory statistics course, and to foster development of critical thinking skills necessary to succeed in the modern business environment. Enhancing the thinking process of problem solving using repetitive games should also improve a student's ability to follow non-mathematical problem-solving processes, which should improve the overall ability to process information and make logical decisions. Learning and retention are measured to evaluate the success of the students\u2019 performance." }, { "instance_id": "R27835xR27748", "comparison_id": "R27835", "paper_id": "R27748", "text": "Effect of computer-based video games on children: An experimental study This experimental study investigated whether computer-based video games facilitate children's cognitive learning. In comparison to traditional computer-assisted instruction (CAI), this study explored the impact of the varied types of instructional delivery strategies on children's learning achievement. One major research null hypothesis was tested: no statistically significant differences in students' achievement when they receive two different instructional treatments: (1) traditional CAI; and (2) a computer-based video game. 
One hundred and eight third-graders from a middle/high socio-economic standard school district in Taiwan participated in the study. Results indicate that computer-based video game playing not only improves participants' fact/recall processes (F=5.288, p<.05), but also promotes problem-solving skills by recognizing multiple solutions for problems (F=5.656, p<.05)." }, { "instance_id": "R27835xR27829", "comparison_id": "R27835", "paper_id": "R27829", "text": "Surgical experience correlates with performance on a virtual reality simulator for shoulder arthroscopy Background The traditional process of surgical education is being increasingly challenged by economic constraints and concerns about patient safety. Sophisticated computer-based devices have become available to simulate the surgical experience in a protected environment. As with any new educational tool, these devices have generated controversy about the validity of the training experience. Hypothesis Performance on a virtual reality simulator correlates with actual surgical experience. Study Design Controlled laboratory study. Methods Forty-three test subjects of various experience levels in shoulder arthroscopy were tested on an arthroscopy simulator according to a standardized protocol. Subjects were evaluated for time to completion, distance traveled with the tip of the simulated probe compared with a computer-determined optimal distance, average probe velocity, and number of probe collisions with the tissues. Results Subjects were grouped according to prior experience with shoulder arthroscopy. Comparing the least experienced with most experienced groups, the average time to completion decreased by 62% from 128.8 seconds to 49.2 seconds; path length and hook collisions were more than halved from 8.2 to 3.8 and 34.1 to 16.8, respectively; and average probe velocity more than doubled from 0.18 to 0.4 cm/second. 
There were no significant differences for any parameter tested between subjects with video game experience compared to those without. Conclusions The study demonstrated a close and statistically significant correlation between simulator results and surgical experience, thus confirming the hypothesis. Conversely, experience with video games was not associated with improved simulator performance. This indicates that the skill set tested may be similar to the one developed in the operating room, thus suggesting its use as a potential tool for future evaluation of surgical trainees. Clinical Relevance The results have implications for the future of orthopaedic surgical training programs, the majority of which have not embraced virtual reality technology for physician education." }, { "instance_id": "R27835xR27751", "comparison_id": "R27835", "paper_id": "R27751", "text": "Effects of constructing versus playing an educational game on student motivation and deep learning strategy use In this study the effects of two different interactive learning tasks, in which simple games were included were described with respect to student motivation and deep strategy use. The research involved 235 students from four elementary schools in The Netherlands. One group of students (N = 128) constructed their own memory 'drag and drop' game, whereas the other group (N = 107) played an existing 'drag and drop' memory game. Analyses of covariance demonstrated a significant difference between the two conditions both on intrinsic motivation and deep strategy use. The large effect sizes for both motivation and deep strategy use were in favour of the construction condition. The results suggest that constructing a game might be a better way to enhance student motivation and deep learning than playing an existing game. Despite the promising results, the low level of complexity of the games used is a study limitation." 
}, { "instance_id": "R27835xR27831", "comparison_id": "R27835", "paper_id": "R27831", "text": "Individual Skill Progression on a Virtual Reality Simulator for Shoulder Arthroscopy A 3-Year Follow-up Study Background Previous studies have demonstrated a correlation between surgical experience and performance on a virtual reality arthroscopy simulator but only provided single time point evaluations. Additional longitudinal studies are necessary to confirm the validity of virtual reality simulation before these teaching aids can be more fully recommended for surgical education. Hypothesis Subjects will show improved performance on simulator retesting several years after an initial baseline evaluation, commensurate with their advanced surgical experience. Study Design Controlled laboratory study. Methods After gaining further arthroscopic experience, 10 orthopaedic residents underwent retesting 3 years after initial evaluation on a Procedicus virtual reality arthroscopy simulator. Using a paired t test, simulator parameters were compared in each subject before and after additional arthroscopic experience. Subjects were evaluated for time to completion, number of probe collisions with the tissues, average probe velocity, and distance traveled with the tip of the simulated probe compared to an optimal computer-determined distance. In addition, to evaluate consistency of simulator performance, results were compared to historical controls of equal experience. Results Subjects improved significantly (P < .02 for all) in the 4 simulator parameters: completion time (\u221251%), probe collisions (\u221229%), average velocity (+122%), and distance traveled (\u221232%). With the exception of probe velocity, there were no significant differences between the performance of this group and that of a historical group with equal experience, indicating that groups with similar arthroscopic experience consistently demonstrate equivalent scores on the simulator. 
Conclusion Subjects significantly improved their performance on simulator retesting 3 years after initial evaluation. Additionally, across independent groups with equivalent surgical experience, similar performance can be expected on simulator parameters; thus it may eventually be possible to establish simulator benchmarks to indicate likely arthroscopic skill. Clinical Relevance These results further validate the use of surgical simulation as an important tool for the evaluation of surgical skills." }, { "instance_id": "R28099xR27913", "comparison_id": "R28099", "paper_id": "R27913", "text": "Vision based autonomous vehicle navigation with self-organizing map feature matching technique Vision is becoming more and more common in applications such as localization, autonomous navigation, path finding and many other computer vision applications. This paper presents an improved technique for feature matching in the stereo images captured by the autonomous vehicle. The Scale Invariant Feature Transform (SIFT) algorithm is used to extract distinctive invariant features from images but this algorithm has a high complexity and a long computational time. In order to reduce the computation time, this paper proposes a SIFT improvement technique based on a Self-Organizing Map (SOM) to perform the matching procedure more efficiently for feature matching problems. Experimental results on real stereo images show that the proposed algorithm performs feature group matching with lower computation time than the original SIFT algorithm. The results showing improvement over the original SIFT are validated through matching examples between different pairs of stereo images. The proposed algorithm can be applied to stereo vision based autonomous vehicle navigation for obstacle avoidance, as well as many other feature matching and computer vision applications." 
}, { "instance_id": "R28099xR27967", "comparison_id": "R28099", "paper_id": "R27967", "text": "Stereo matching by using the global edge constraint Stereo matching, the key problem in the field of computer vision has long been researched for decades. However, constructing an accurate dense disparity map is still very challenging for both local and global algorithms, especially when dealing with the occlusions and disparity discontinuities. In this paper, by exploring the characteristics of the color edges, a novel constraint named the global edge constraint (GEC) is proposed to discriminate the locations of potential occlusions and disparity discontinuities. The initial disparity map is estimated by using a local algorithm, in which the GEC could guarantee that the optimal support windows would not cross the occlusions. Then a global optimization framework is adopted to improve the accuracy of the disparity map. The data term of the energy function is constructed by using the reliable correspondences selected from the initial disparity map; and the smooth term incorporates the GEC as a soft constraint to handle the disparity discontinuities. Optimal solution can be approximated via existing energy minimization approaches such as Graph cuts used in this paper. Experimental results using the Middlebury Stereo test bed demonstrate the superior performance of the proposed approach." }, { "instance_id": "R28099xR27862", "comparison_id": "R28099", "paper_id": "R27862", "text": "Region-based dense depth extraction from multi-view video A novel multi-view region-based dense depth map estimation problem is presented, based on a modified plane-sweeping strategy. In this approach, the whole scene is assumed to be region-wise planar. These planar regions are defined by back-projections of the over-segmented homogenous color regions on the images and the plane parameters are determined by angle-sweeping at different depth levels. 
The position and rotation of the plane patches are estimated robustly by minimizing a segment-based cost function, which considers occlusions, as well. The quality of depth map estimates is measured via reconstruction quality of the conjugate views, after warping segments into these views by the resulting homographies. Finally, a greedy-search algorithm is applied to refine the reconstruction quality and update the plane equations with visibility constraint. Based on the simulation results, it is observed that the proposed algorithm handles large un-textured regions, depth discontinuities at object boundaries, slanted surfaces, as well as occlusions." }, { "instance_id": "R28099xR27891", "comparison_id": "R28099", "paper_id": "R27891", "text": "Comparison of FPGA and GPU implementations of real-time stereo vision Real-time stereo vision systems have many applications - from autonomous navigation for vehicles through surveillance to materials handling. Accurate scene interpretation depends on an ability to process high resolution images in real-time, but, although the calculations for stereo matching are basically simple, a practical system needs to evaluate at least 10^9 disparities every second - beyond the capability of a single processor. Stereo correspondence algorithms have high degrees of inherent parallelism and are thus good candidates for parallel implementations. In this paper, we compare the performance obtainable with an FPGA and a GPU to understand the trade-off between the flexibility but relatively low speed of an FPGA and the high speed and fixed architecture of the GPU. Our comparison highlights the relative strengths and limitations of the two systems. Our experiments show that, for a range of image sizes, the GPU manages 2 \u00d7 10^9 disparities per second, compared with 2.6 \u00d7 10^9 disparities per second for an FPGA." 
}, { "instance_id": "R28099xR28083", "comparison_id": "R28099", "paper_id": "R28083", "text": "Efficient edge-awareness propagation via single-map filtering for edge-preserving stereo matching In this paper, we propose an efficient framework for edge-preserving stereo matching. Local methods for stereo matching are more suitable than global methods for real-time applications. Moreover, we can obtain accurate depth maps by using edge-preserving filter for the cost aggregation process in local stereo matching. The computational cost is high, since we must perform the filter for every number of disparity ranges if the order of the edge-preserving filter is constant time. Therefore, we propose an efficient iterative framework which propagates edge-awareness by using single time edge preserving filtering. In our framework, box filtering is used for the cost aggregation, and then the edge-preserving filtering is once used for refinement of the obtained depth map from the box aggregation. After that, we iteratively estimate a new depth map by local stereo matching which utilizes the previous result of the depth map for feedback of the matching cost. Note that the kernel size of the box filter is varied as coarse-to-fine manner at each iteration. Experimental results show that small and large areas of incorrect regions are gradually corrected. Finally, the accuracy of the depth map estimated by our framework is comparable to the state-of-the-art of stereo matching methods with global optimization methods. Moreover, the computational time of our method is faster than the optimization based method." }, { "instance_id": "R28099xR27971", "comparison_id": "R28099", "paper_id": "R27971", "text": "Efficient GPU-Based Graph Cuts for Stereo Matching Although graph cuts (GC) is popularly used in many computer vision problems, slow execution time due to its high complexity hinders wide usage. Manycore solution using Graphics Processing Unit (GPU) may solve this problem. 
However, conventional GC implementation does not fully exploit GPU's computing power. To address this issue, a new GC algorithm which is suitable for the GPU environment is presented in this paper. First, we present a novel graph construction method that accelerates the convergence speed of GC. Next, a repetitive block-based push and relabel method is used to increase the data transfer efficiency. Finally, we propose a low-overhead global relabeling algorithm to increase the GPU occupancy ratio. Experiments on the Middlebury stereo dataset show that a 5.2X speedup can be achieved over the baseline implementation, with identical GPU platform and parameters." }, { "instance_id": "R28099xR28067", "comparison_id": "R28099", "paper_id": "R28067", "text": "Hardware-Efficient Design of Real-Time Profile Shape Matching Stereo Vision Algorithm on FPGA A variety of platforms, such as micro-unmanned vehicles, are limited in the amount of computational hardware they can support due to weight and power constraints. An efficient stereo vision algorithm implemented on an FPGA would be able to minimize payload and power consumption in microunmanned vehicles, while providing 3D information and still leaving computational resources available for other processing tasks. This work presents a hardware design of the efficient profile shape matching stereo vision algorithm. Hardware resource usage is presented for the targeted micro-UV platform, Helio-copter, that uses the Xilinx Virtex 4 FX60 FPGA. Less than a fifth of the resources on this FPGA were used to produce dense disparity maps for image sizes up to 450 \u00d7 375, with the ability to scale up easily by increasing BRAM usage. A comparison is given of accuracy, speed performance, and resource usage of a census transform-based stereo vision FPGA implementation by Jin et al. 
Results show that the profile shape matching algorithm is an efficient real-time stereo vision algorithm for hardware implementation for resource limited systems such as microunmanned vehicles." }, { "instance_id": "R28099xR27920", "comparison_id": "R28099", "paper_id": "R27920", "text": "Dense Disparity Real-Time Stereo Vision Algorithm for Resource-Limited Systems It is evident that the accuracy of stereo vision algorithms has continued to increase based on commonly used quantitative evaluations of the resulting disparity maps. This paper focuses on the development of promising stereo vision algorithms that efficiently tradeoff accuracy for large reductions in required computational resources. An intensity profile shape-matching algorithm is introduced as an example of an algorithm that makes such tradeoffs. The proposed algorithm is compared to both a basic sum-of-absolute-differences (SAD) block-matching algorithm, as well as a stereo vision algorithm that is highly ranked for its accuracy based on the Middlebury evaluation criteria. This comparison shows that the proposed algorithm's accuracy on the commonly used Tsukuba stereo image pair is lower than many published stereo vision algorithms, but that for unrectified stereo image pairs that have even the slightest differences in brightness, it is potentially more robust than algorithms that rely on SAD block matching. An example application that requires 3-D information is implemented to show that the accuracy of the proposed algorithm is sufficient for this use. Timing results show that this is a very fast dense-disparity stereo vision algorithm when compared to other algorithms capable of running on a standard microprocessor." 
}, { "instance_id": "R28099xR28030", "comparison_id": "R28099", "paper_id": "R28030", "text": "Stereo Vision Algorithms for FPGAs In recent years, with the advent of cheap and accurate RGBD (RGB plus Depth) active sensors like the Microsoft Kinect and devices based on time-of-flight (ToF) technology, there has been increasing interest in 3D-based applications. At the same time, several effective improvements to passive stereo vision algorithms have been proposed in the literature. Despite these facts and the frequent deployment of stereo vision for many research activities, it is often perceived as a bulky and expensive technology not well suited to consumer applications. In this paper, we will review a subset of state-of-the-art stereo vision algorithms that have the potential to fit a target computing architecture based on low-cost field-programmable gate arrays (FPGAs), without additional external devices (e.g., FIFOs, DDR memories, etc.). Mapping these algorithms into a similar low-power, low-cost architecture would make RGBD sensors based on stereo vision suitable to a wider class of application scenarios currently not addressed by this technology." }, { "instance_id": "R28099xR28055", "comparison_id": "R28099", "paper_id": "R28055", "text": "Evaluation of stereo correspondence algorithms and their implementation on FPGA The accuracy of stereo vision has been considerably improved in the last decade, but real-time stereo matching is still a challenge for embedded systems where the limited resources do not permit fast operation of sophisticated approaches. This work presents an evaluation of area-based algorithms used for calculating distance in stereoscopic vision systems, their hardware architectures for implementation on FPGA and the cost of their accuracies in terms of FPGA hardware resources. 
The results show the trade-off between the quality of such maps and the hardware resources which each solution demands, so they serve as a guide for implementing stereo correspondence algorithms in real-time processing systems." }, { "instance_id": "R28099xR28021", "comparison_id": "R28099", "paper_id": "R28021", "text": "Real-Time Refinement of Kinect Depth Maps using Multi-Resolution Anisotropic Diffusion In this paper, we present a novel real-time algorithm to refine depth maps generated by low-cost commercial depth sensors like the Microsoft Kinect. The Kinect sensor falls under the category of RGB-D sensors that can generate a high resolution depth map and color image of a scene. They are relatively inexpensive and are commercially available off-the-shelf. However, owing to their low complexity, there are several artifacts that one encounters in the depth map like holes, mis-alignment between the depth map and color image and lack of sharp object boundaries in the depth map. This is a potential problem in applications that require the color image to be projected in 3-D using the depth map. Such applications depend heavily on the depth map and thus the quality of the depth map is of vital importance. In this paper, a novel multi-resolution anisotropic diffusion based algorithm is presented that accepts a Kinect generated depth map and color image and computes a dense depth map in which the holes have been filled and the edges of the objects are sharpened and aligned with the objects in the color image. The proposed algorithm also ensures that regions in the depth map where the depth is properly estimated are not filtered and ensures that the depth values in the final depth map are the same values that existed in the original depth map. Experimental results are provided to demonstrate the improvement in the quality of the depth map and also execution time results are provided to prove that the proposed method can be executed in real-time." 
}, { "instance_id": "R28099xR28041", "comparison_id": "R28099", "paper_id": "R28041", "text": "Real-time high-quality stereo vision system in FPGA Stereo vision is a well-known technique for acquiring depth information. In this paper, we propose a real-time high-quality stereo vision system in field-programmable gate array (FPGA). Using absolute difference-census cost initialization, cross-based cost aggregation, and semiglobal optimization, the system provides high-quality depth results for high-definition images. This is the first complete real-time hardware system that supports both cost aggregation on variable support regions and semiglobal optimization in FPGAs. Furthermore, the system is designed to be scaled with image resolution, disparity range, and parallelism degree for maximum parallel efficiency. We present the depth map quality on the Middlebury benchmark and some real-world scenarios with different image resolutions. The results show that our system performs the best among FPGA-based stereo vision systems and its accuracy is comparable with those of current top-performing software implementations. The first version of the system was demonstrated on an Altera Stratix-IV FPGA board, processing 1024 \u00d7 768 pixel images with 96 disparity levels at 67 frames/s. The system is then scaled up on a new Altera Stratix-V FPGA and the processing ability is enhanced to 1600 \u00d7 1200 pixel images with 128 disparity levels at 42 frames/s." }, { "instance_id": "R28099xR28000", "comparison_id": "R28099", "paper_id": "R28000", "text": "Real-Time Stereo Matching on CUDA Using an Iterative Refinement Method for Adaptive Support-Weight Correspondences High-quality real-time stereo matching has the potential to enable various computer vision applications including semi-automated robotic surgery, teleimmersion, and 3-D video surveillance. 
A novel real-time stereo matching method is presented that uses a two-pass approximation of adaptive support-weight aggregation, and a low-complexity iterative disparity refinement technique. Through an evaluation of computationally efficient approaches to adaptive support-weight cost aggregation, it is shown that the two-pass method produces an accurate approximation of the support weights while greatly reducing the complexity of aggregation. The refinement technique, constructed using a probabilistic framework, incorporates an additive term into matching cost minimization and facilitates iterative processing to improve the accuracy of the disparity map. This method has been implemented on massively parallel high-performance graphics hardware using the Compute Unified Device Architecture computing engine. Results show that the proposed method is the most accurate among all of the real-time stereo matching methods listed on the Middlebury stereo benchmark." }, { "instance_id": "R28099xR27978", "comparison_id": "R28099", "paper_id": "R27978", "text": "Real-time stereo vision: Optimizing Semi-Global Matching Semi-Global Matching (SGM) is arguably one of the most popular algorithms for real-time stereo vision. It is already employed in mass production vehicles today. Thinking of applications in intelligent vehicles (and fully autonomous vehicles in the long term), we aim at further improving SGM regarding its accuracy. In this study, we propose a straight-forward extension of the algorithm's parametrization. We consider individual penalties for different path orientations, weighted integration of paths, and penalties depending on intensity gradients. In order to tune all parameters, we applied evolutionary optimization. For a more efficient offline optimization and evaluation, we implemented SGM on graphics hardware. We describe the implementation using CUDA in detail. 
For our experiments, we consider two publicly available datasets: the popular Middlebury benchmark as well as a synthetic sequence from the .enpeda. project. The proposed extensions significantly improve the performance of SGM. The number of incorrect disparities was reduced by up to 27.5% compared to the original approach, while the runtime was not increased." }, { "instance_id": "R28099xR27843", "comparison_id": "R28099", "paper_id": "R27843", "text": "Segmentation based disparity estimation using color and depth information The well-known cooperative stereo uses a two-dimensional rectangular window for local block matching, and a three-dimensional box-shaped volume for a global optimization procedure. In many cases, appropriate selections of these matching regions can provide satisfactory matching results. This paper presents a new method for iteratively modifying the sizes and shapes of matching regions based on color and depth information. This algorithm computes the aggregated matching costs with two ideas. The first idea is to select matching regions based on object boundaries to avoid projective distortion. This provides reliable matching scores as well as preventing the foreground fattening phenomenon. The second idea is to iteratively modify the segmentation map by merging regions where the disparities are likely to be the same. Experimental results show that the proposed algorithm provides a more accurate disparity map than other algorithms. In particular, the computed disparity map shows the advantage of our algorithm in disparity discontinuity regions." }, { "instance_id": "R28099xR28012", "comparison_id": "R28099", "paper_id": "R28012", "text": "A near real-time color stereo matching method for GPU This paper presents a near real-time stereo matching method with acceptable matching results. This method consists of three important steps: an SAD-ALD cost measure, cost aggregation with an adaptive window over cross-based support regions, and a refinement step.
These three steps are well organized to exploit the GPU\u2019s parallel architecture. The parallelism brought by GPU and CUDA implementations provides significant acceleration in running time. This method is tested on six pairs of images from the Middlebury dataset, each possibly scaled to different sizes. For each pair of images it can generate acceptable matching results in roughly 100 milliseconds or less. The method is also compared with three GPU-based methods and one CPU-based method on image pairs of increasing size." }, { "instance_id": "R28099xR27916", "comparison_id": "R28099", "paper_id": "R27916", "text": "A local iterative refinement method for adaptive support-weight stereo matching A new stereo matching algorithm is introduced that performs iterative refinement on the results of adaptive support-weight stereo matching. During each iteration of disparity refinement, adaptive support-weights are used by the algorithm to penalize disparity differences within local windows. Analytical results show that the addition of iterative refinement to adaptive support-weight stereo matching does not significantly increase complexity. In addition, this new algorithm does not rely on image segmentation or plane fitting, which are used by the majority of the most accurate stereo matching algorithms. As a result, this algorithm has lower complexity, is more suitable for parallel implementation, and does not force locally planar surfaces within the scene. When compared to other algorithms that do not rely on image segmentation or plane fitting, results show that the new stereo matching algorithm is one of the most accurate listed on the Middlebury performance benchmark." }, { "instance_id": "R28099xR28004", "comparison_id": "R28099", "paper_id": "R28004", "text": "Local Disparity Estimation With Three-Moded Cross Census and Advanced Support Weight The classical local disparity methods use a simple and efficient structure to reduce the computation complexity.
To increase the accuracy of the disparity map, new local methods utilize additional processing steps such as iteration, segmentation, calibration and propagation, similar to global methods. In this paper, we present an efficient one-pass local method with no iteration. The proposed method is also extended to video disparity estimation by using motion information as well as imposing spatio-temporal consistency. In local methods, the accuracy of stereo matching depends on a precise similarity measure and a proper support window. For the accuracy of the similarity measure, we propose a novel three-moded cross census transform with a noise buffer, which increases the robustness to image noise in flat areas. The proposed similarity measure can be used in the same form in both stereo images and videos. We further improve the reliability of the aggregation by adopting the advanced support weight and incorporating motion flow to achieve a better depth map near moving edges in video scenes. The experimental results show that the proposed method is the best performing local method on the Middlebury stereo benchmark test and outperforms the other state-of-the-art methods on video disparity evaluation." }, { "instance_id": "R28099xR28037", "comparison_id": "R28099", "paper_id": "R28037", "text": "A performance and energy comparison of convolution on GPUs, FPGAs, and multicore processors Recent architectural trends have focused on increased parallelism via multicore processors and increased heterogeneity via accelerator devices (e.g., graphics-processing units, field-programmable gate arrays). Although these architectures have significant performance and energy potential, application designers face many device-specific challenges when choosing an appropriate accelerator or when customizing an algorithm for an accelerator.
To help address this problem, in this article we thoroughly evaluate convolution, one of the most common operations in digital-signal processing, on multicores, graphics-processing units, and field-programmable gate arrays. Whereas many previous application studies evaluate a specific usage of an application, this article assists designers with design space exploration for numerous use cases by analyzing effects of different input sizes, different algorithms, and different devices, while also determining Pareto-optimal trade-offs between performance and energy." }, { "instance_id": "R28099xR27851", "comparison_id": "R28099", "paper_id": "R27851", "text": "How Far Can We Go with Local Optimization in Real-Time Stereo Matching Applications such as robot navigation and augmented reality require high-accuracy dense disparity maps in real-time and online. Due to time constraint, most realtime stereo applications rely on local winner-take-all optimization in the disparity computation process. These local approaches are generally outperformed by offline global optimization based algorithms. However, recent research shows that, through carefully selecting and aggregating the matching costs of neighboring pixels, the disparity maps produced by a local approach can be more accurate than those generated by many global optimization techniques. We are therefore motivated to investigate whether these cost aggregation approaches can be adopted in real-time stereo applications and, if so, how well they perform under the real-time constraint. The evaluation is conducted on a real-time stereo platform, which utilizes the processing power of programmable graphics hardware. Several recent cost aggregation approaches are also implemented and optimized for graphics hardware so that real-time speed can be achieved. The performances of these aggregation approaches in terms of both processing speed and result quality are reported." 
}, { "instance_id": "R28099xR27975", "comparison_id": "R28099", "paper_id": "R27975", "text": "Effective stereo matching using reliable points based graph cut In this paper, we propose an effective stereo matching algorithm using reliable points and region-based graph cut. Firstly, the initial disparity maps are calculated via a local window-based method. Secondly, the unreliable points are detected according to the DSI (Disparity Space Image) and the estimated disparity values of each unreliable point are obtained by considering its surrounding points. Then, the scheme of reliable points is introduced in a region-based graph cut framework to optimize the initial result. Finally, remaining errors in the disparity results are effectively handled in a multi-step refinement process. Experimental results show that the proposed algorithm achieves a significant reduction in computation cost and guarantees high matching quality." }, { "instance_id": "R28099xR27992", "comparison_id": "R28099", "paper_id": "R27992", "text": "Secrets of adaptive support weight techniques for local stereo matching Highlights: a study of different strategies for computing adaptive support weights in local stereo matching; our study sheds light on potential trade-offs between accuracy and computational efficiency; the experiments are conducted on 35 stereo pairs of Middlebury with ground truth data; our evaluation study is useful for practical applications. In recent years, local stereo matching algorithms have again become very popular in the stereo community. This is mainly due to the introduction of adaptive support weight algorithms that can for the first time produce results that are on par with global stereo methods. The crux in these adaptive support weight methods is to assign an individual weight to each pixel within the support window.
Adaptive support weight algorithms differ mainly in the manner in which this weight computation is carried out. In this paper we present an extensive evaluation study. We evaluate the performance of various methods for computing adaptive support weights including the original bilateral filter-based weights, as well as more recent approaches based on geodesic distances or on the guided filter. To obtain reliable findings, we test these different weight functions on a large set of 35 ground truth disparity pairs. We have implemented all approaches on the GPU, which allows for a fair comparison of run time on modern hardware platforms. Apart from the standard local matching using fronto-parallel windows, we also embed the competing weight functions into the recent PatchMatch Stereo approach, which uses slanted sub-pixel windows and represents a state-of-the-art local algorithm. In the final part of the paper, we aim at shedding light on general points of adaptive support weight matching, which, for example, includes a discussion about symmetric versus asymmetric support weight approaches." }, { "instance_id": "R28099xR27876", "comparison_id": "R28099", "paper_id": "R27876", "text": "Stereo matching with color-weighted correlation, hierarchical belief propagation, and occlusion handling In this paper, we formulate an algorithm for the stereo matching problem with careful handling of disparity, discontinuity and occlusion. The algorithm works with a global matching stereo model based on an energy-minimization framework. The global energy contains two terms, the data term and the smoothness term. The data term is first approximated by a color-weighted correlation, then refined in occluded and low-texture areas in a repeated application of a hierarchical loopy belief propagation algorithm. The experimental results are evaluated on the Middlebury data set, showing that our algorithm is the top performer."
}, { "instance_id": "R28099xR27995", "comparison_id": "R28099", "paper_id": "R27995", "text": "Real-time GPU-based local stereo matching method Stereo matching techniques aim at reconstructing the disparity map from a pair of images. This paper proposes a real-time stereo matching algorithm optimized for the GPU platform. Our method first constructs the matching cost by combining a truncated absolute difference (TAD) cost and a census cost; then we aggregate matching costs using an adaptive weight method and iterate with a square step size in the horizontal and vertical passes. In addition, we implement our algorithm on the GPU platform using the high-level compiler HMPP (Hybrid Multicore Parallel Programming), which greatly reduces development time and makes use of the parallel computing of the GPU device with CUDA (Compute Unified Device Architecture)/OpenCL (Open Computing Language). The GPU-based implementation of our method obtains 20 fps on a typical laptop GPU, satisfying the real-time requirement." }, { "instance_id": "R28099xR27952", "comparison_id": "R28099", "paper_id": "R27952", "text": "A high performance parallel graph cut optimization for depth estimation Graph-cut has been proven to return good quality in the optimization of depth estimation. Leveraging parallel computation has been proposed as a solution to handle the intensive computation of the graph-cut algorithm. This paper proposes two parallelization techniques to reduce the execution time of graph-cut optimization. By executing on an Intel 8-core CPU, the proposed scheme can achieve an average of 4.7 times speedup with only a 0.01% energy increase." }, { "instance_id": "R28099xR28035", "comparison_id": "R28099", "paper_id": "R28035", "text": "A new high resolution depth map estimation system using stereo vision and depth sensing device Depth map estimation is a classical problem in computer vision. Conventional depth estimation relies on stereo/multi-view matching or depth sensing devices alone.
In this paper, we propose a system which addresses high resolution and high quality depth estimation based on joint fusion of stereo and Kinect data. The problem is formulated as a maximum a posteriori probability (MAP) estimation problem and the reliabilities of the two devices are derived. The estimated depth map is further refined by color-image-guided depth matting and 2D local polynomial regression (LPR)-based filtering. Experimental results show that our system can provide a high quality and high resolution depth map, which complements the strengths of stereo vision and the Kinect depth sensor." }, { "instance_id": "R28099xR27880", "comparison_id": "R28099", "paper_id": "R27880", "text": "Real-Time Stereo Vision: Making More Out of Dynamic Programming Dynamic Programming (DP) is a popular and efficient method for calculating disparity maps from stereo images. It allows for meeting real-time constraints even on low-cost hardware. Therefore, it is frequently used in real-world applications, although more accurate algorithms exist. We present a refined DP stereo processing algorithm which is based on a standard implementation. However it is more flexible and shows increased performance. In particular, we introduce the idea of multi-path backtracking to exploit the information gained from DP more effectively. We show how to automatically tune all parameters of our approach offline by an evolutionary algorithm. The performance was assessed on benchmark data. The number of incorrect disparities was reduced by 40% compared to the DP reference implementation while the overall complexity increased only slightly." }, { "instance_id": "R28099xR27984", "comparison_id": "R28099", "paper_id": "R27984", "text": "Information permeability for stereo matching A novel local stereo matching algorithm is introduced to address the fundamental challenge of stereo algorithms: the accuracy versus computational complexity dilemma.
The time-consuming intensity-dependent aggregation procedure of local methods is improved in terms of both speed and precision. Providing connected 2D support regions, the proposed approach exploits a new paradigm, namely separable successive weighted summation (SWS) among horizontal and vertical directions enabling constant operational complexity. The weights are determined by four-neighborhood intensity similarity of pixels and utilized to model the information transfer rate, permeability, towards the corresponding direction. The same procedure is also utilized to diffuse information through overlapped pixels during occlusion handling after detecting unreliable disparity assignments. Successive weighted summation adaptively cumulates the support data based on local characteristics, enabling disparity maps to preserve object boundaries and depth discontinuities. According to the experimental results on the Middlebury stereo benchmark, the proposed method is one of the most effective local stereo algorithms, providing high quality disparity models by unifying constant time filtering and weighted aggregation. Hence, the proposed algorithm provides a competitive alternative to various local methods in terms of achieving precise and consistent disparity maps from stereo video within fast execution time." }, { "instance_id": "R28099xR27987", "comparison_id": "R28099", "paper_id": "R27987", "text": "Window-based approach for fast stereo correspondence In this study, the authors present a new area-based stereo matching algorithm that computes dense disparity maps for a real-time vision system. Although many stereo matching algorithms have been proposed in recent years, correlation-based algorithms still have an edge because of their speed and lower memory requirements. The selection of an appropriate shape and size of the matching window is a difficult problem for correlation-based algorithms.
In the proposed approach, two correlation windows are used to improve the performance of the algorithm while maintaining its real-time suitability. The CPU implementation of the proposed algorithm computes more than 10 frames/s. Unlike other area-based stereo matching algorithms, this method works very well at disparity boundaries as well as in low-textured image areas and computes a dense and sharp disparity map. Evaluations on the benchmark Middlebury stereo datasets have been performed to demonstrate the qualitative and quantitative performance of the proposed algorithm." }, { "instance_id": "R28099xR28027", "comparison_id": "R28099", "paper_id": "R28027", "text": "Depth map enhancement based on color and depth consistency Current low-cost depth sensing techniques, such as Microsoft Kinect, can still achieve only limited precision. The resultant depth maps are often found to be noisy, misaligned with the color images, and even contain many large holes. These limitations make them difficult to adopt in many graphics applications. In this paper, we propose a computational approach to address the problem. By fusing raw depth values with image color, edges and smoothness priors in a Markov random field optimization framework, both misalignment and large holes can be eliminated effectively; our method thus can produce high-quality depth maps that are consistent with the color image. To achieve this, a confidence map is estimated for adaptive weighting of different cues, an image inpainting technique is introduced to handle large holes, and contrasts in the color image are also considered for an accurate alignment. Experimental results demonstrate the effectiveness of our method."
}, { "instance_id": "R28099xR27936", "comparison_id": "R28099", "paper_id": "R27936", "text": "Multiresolution energy minimisation framework for stereo matching Global optimisation algorithms for stereo dense depth map estimation have demonstrated how to outperform other stereo algorithms such as local methods or dynamic programming. The energy minimisation framework, using a Markov random field model and solved using graph cuts or belief propagation, has obtained especially good results. The main drawback of these methods is that, although they achieve accurate reconstruction, they are not suited for real-time applications. Subsampling the input images does not reduce the complexity of the problem because it also reduces the resolution of the output in the disparity space. Nonetheless, some real-time applications such as navigation would tolerate the reduction of the depth map resolutions (width and height) while maintaining the resolution in the disparity space (number of labels). In this study a new multiresolution energy minimisation framework for real-time robotics applications is proposed where a global optimisation algorithm is applied. A reduction by a factor R of the final depth map's resolution is considered and a speed-up of up to 50 times has been achieved. Using high-resolution stereo pair input images guarantees that a high resolution on the disparity dimension is preserved. The proposed framework has shown how to obtain real-time performance while keeping accurate results on the Middlebury test data set." }, { "instance_id": "R28099xR28053", "comparison_id": "R28099", "paper_id": "R28053", "text": "Fast stereo matching using adaptive guided filtering A dense disparity map is required by many 3D applications. In this paper, a novel stereo matching algorithm is presented. The main contributions of this work are three-fold. Firstly, a new cost-volume filtering method is proposed.
A novel concept named 'two-level local adaptation' is introduced to guide the proposed filtering approach. Secondly, a novel post-processing method is proposed to handle both occlusions and textureless regions. Thirdly, a parallel algorithm is proposed to efficiently calculate an integral image on the GPU, and it accelerates the whole cost-volume filtering process. The overall stereo matching algorithm generates state-of-the-art results. At the time of submission, it ranked 10th among about 152 algorithms on the Middlebury stereo evaluation benchmark, and took first place among all local methods. By implementing the entire algorithm on the NVIDIA Tesla C2050 GPU, it can achieve over 30 million disparity estimates per second (MDE/s)." }, { "instance_id": "R28099xR28064", "comparison_id": "R28099", "paper_id": "R28064", "text": "Using the GPU for fast symmetry-based dense stereo matching in high resolution images SymStereo is a new algorithm used for stereo estimation. Instead of measuring photo-similarity, it proposes novel cost functions that measure symmetry for evaluating the likelihood of two pixels being a match. In this work we propose a parallel approach of the LogN matching cost variant of SymStereo capable of processing pairs of images in real-time for depth estimation. The power of the graphics processing units utilized allows exploring more efficiently the bank of log-Gabor wavelets developed to analyze symmetry in the spectral domain. We analyze tradeoffs and propose different parameterizations of the signal processing algorithm to accommodate image size, dimension of the filter bank, number of wavelets and also the number of disparities that controls the spatial density of the estimation, and still process up to 53 frames per second (fps) for images with size 288 \u00d7 384 and up to 3 fps for 768 \u00d7 1024 images."
}, { "instance_id": "R28099xR27887", "comparison_id": "R28099", "paper_id": "R27887", "text": "Stereo vision for robotic applications in the presence of non-ideal lighting conditions Many robotic and machine-vision applications rely on the accurate results of stereo correspondence algorithms. However, difficult environmental conditions, such as differentiations in illumination depending on the viewpoint, heavily affect the stereo algorithms' performance. This work proposes a new illumination-invariant dissimilarity measure in order to substitute the established intensity-based ones. The proposed measure can be adopted by almost any of the existing stereo algorithms, enhancing it with its robust features. The performance of the dissimilarity measure is validated through experimentation with a new adaptive support weight (ASW) stereo correspondence algorithm. Experimental results for a variety of lighting conditions are gathered and compared to those of intensity-based algorithms. The algorithm using the proposed dissimilarity measure outperforms all the other examined algorithms, exhibiting tolerance to illumination differentiations and robust behavior." }, { "instance_id": "R28099xR27932", "comparison_id": "R28099", "paper_id": "R27932", "text": "Efficient hierarchical matching algorithm for processing uncalibrated stereo vision images and its hardware architecture In motion estimation, the sub-pixel matching technique involves the search of sub-sample positions as well as integer-sample positions between the image pairs, choosing the one that gives the best match. Based on this idea, this work proposes an estimation algorithm, which performs a 2-D correspondence search using a hierarchical search pattern. The intermediate results are refined by 3-D cellular automata (CA). The disparity value is then defined using the distance of the matching position. 
Therefore, the proposed algorithm can process uncalibrated and non-rectified stereo image pairs, maintaining the computational load within reasonable levels. Additionally, a hardware architecture of the algorithm is deployed. Its performance has been evaluated on both synthetic and real self-captured image sets. Its attributes make the proposed method suitable for autonomous outdoor robotic applications." }, { "instance_id": "R28099xR28008", "comparison_id": "R28099", "paper_id": "R28008", "text": "Matching Cost Filtering for Dense Stereo Correspondence Dense stereo correspondence enabling reconstruction of depth information in a scene is of great importance in the field of computer vision. Recently, some local solutions based on matching cost filtering with an edge-preserving filter have been proved to be capable of achieving more accuracy than global approaches. Unfortunately, the computational complexity of these algorithms is quadratically related to the window size used to aggregate the matching costs. The recent trend has been to pursue higher accuracy with greater efficiency in execution. Therefore, this paper proposes a new cost-aggregation module to compute the matching responses for all the image pixels at a set of sampling points generated by a hierarchical clustering algorithm. The complexity of this implementation is linear both in the number of image pixels and the number of clusters. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art local methods in terms of both accuracy and speed. Moreover, performance tests indicate that parameters such as the height of the hierarchical binary tree and the spatial and range standard deviations have a significant influence on time consumption and the accuracy of disparity maps."
}, { "instance_id": "R28099xR28078", "comparison_id": "R28099", "paper_id": "R28078", "text": "High-speed segmentation-driven high-resolution matching This paper proposes a segmentation-based approach for matching of high-resolution stereo images in real time. The approach employs direct region matching in a raster scan fashion influenced by scanline approaches, but with pixel decoupling. To enable real-time performance it is implemented as a heterogeneous system of an FPGA and a sequential processor. Additionally, the approach is designed for low resource usage in order to qualify as part of unified image processing in an embedded system." }, { "instance_id": "R28099xR27957", "comparison_id": "R28099", "paper_id": "R27957", "text": "Reducing computation complexity for disparity matching To facilitate the realization of free-viewpoint 3D video systems, disparity matching and view synthesis are two of the most significant operations. However, disparity matching demands high computational complexity, which motivates the development of the proposed techniques. In this paper, we propose a shape-adaptive low-complexity (SALC) technique to remove computation redundancy between stereo image pairs for disparity matching. The novel idea takes advantage of the fact that depth values of pixels inside the same object are either the same or change gracefully, which implies that the operations of depth map generation may be reused, and need not be computed pixel by pixel as in conventional works. Instead, the pixels with the same depth value should be treated as a group which becomes the basic unit in computing disparity matching. Meanwhile, the matching accuracy of stereo matching has been noticeably improved by using searching blocks with shape information.
From the experimental results, the proposed SALC technique accelerates the disparity matching by more than 26 times as well as improving the quality of the resulting depth maps, with a 71.69% reduction of bad pixels compared with the conventional pixel-by-pixel disparity estimation." }, { "instance_id": "R28099xR28097", "comparison_id": "R28099", "paper_id": "R28097", "text": "A fast trilateral filter-based adaptive support weight method for stereo matching The adaptive support weight (ASW) approach represents the state-of-the-art local stereo matching method. Recent extensive evaluation studies on ASW approaches show that the bilateral filter weight function enables outstanding performance on a large dataset in comparison with various weight functions. However, it does not resolve the ambiguity induced by nearby pixels at different disparities but with similar colors. In this paper, we propose a novel trilateral filter-based ASW method which remedies such ambiguities by considering disparity discontinuities through color discontinuity boundaries, i.e., the strength of the boundary between two pixels. The experimental evaluation on the Middlebury benchmark shows that the proposed algorithm ranks 15th out of 150 submissions and is currently the most accurate local stereo matching algorithm." }, { "instance_id": "R28099xR27870", "comparison_id": "R28099", "paper_id": "R27870", "text": "Stereo Processing by Semiglobal Matching and Mutual Information This paper describes the Semi-Global Matching (SGM) stereo method. It uses a pixelwise, Mutual Information based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement and multi-baseline matching.
Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best if subpixel accuracy is considered. The complexity is linear in the number of pixels and disparity range, which results in a runtime of just 1-2s on typical test images. An in-depth evaluation of the Mutual Information based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas work well on practical problems." }, { "instance_id": "R28099xR28024", "comparison_id": "R28099", "paper_id": "R28024", "text": "Stereo matching by adaptive weighting selection based cost aggregation Cost aggregation is the most essential step for dense stereo correspondence searching, which measures the similarity between pixels in the stereo images. In this paper, based on the analysis of the optimal adaptive weight, we propose a novel support aggregation strategy by adaptive weighting selection. The proposed method calculates the aggregation cost by the joint optimization of both left and right matching cost. By assigning more reasonable weighting coefficients, we exclude the occlusion pixels while preserving a sufficient support region for accurate matching. The proposed optimal strategy can be integrated with any other adaptive weighting based cost aggregation method to generate a more reasonable similarity measurement. Experimental results show that, compared with traditional methods, our algorithm can reduce the foreground fattening phenomenon while increasing the accuracy in high-texture regions." 
}, { "instance_id": "R28099xR28049", "comparison_id": "R28099", "paper_id": "R28049", "text": "A robust cost function for stereo matching of road scenes In this paper different matching cost functions used for stereo matching are evaluated in the context of intelligent vehicles applications. Classical costs are considered, like: sum of squared differences, normalised cross correlation or Census Transform that were already evaluated in previous studies, together with some recent functions that try to enhance the discriminative power of Census Transform (CT). These are evaluated with two different stereo matching algorithms: a global method based on graph cuts and a fast local one based on cross aggregation regions. Furthermore we propose a new cost function that combines the CT and alternatively a variant of CT called Cross-Comparison Census (CCC), with the mean sum of relative pixel intensity differences (DIFFCensus). Among all the tested cost functions, under the same constraints, the proposed DIFFCensus produces the lower error rate on the KITTI road scenes dataset with both global and local stereo matching algorithms." }, { "instance_id": "R28140xR28135", "comparison_id": "R28140", "paper_id": "R28135", "text": "Duodenal gangliocytic paraganglioma showing lymph node metastasis: A rare case report Abstract We describe a case of duodenal gangliocytic paraganglioma showing lymph node metastasis. A 61-year-old Japanese man underwent pylorus preserving pancreaticoduodenectomy to remove a tumor at the papilla of Vater. The section of the tumor extending from the mucosa to submucosa of the duodenum was sharply demarcated, solid, and white-yellowish. Neither necrosis nor hemorrhage was present. Histological examination confirmed the immunohistochemical identification of three components comprising epithelioid cells, spindle-shaped cells, and ganglion-like cells. Epithelioid cells showed positive reactivity for synaptophysin, somatostatin, and CD56. 
In contrast, spindle-shaped cells showed positive reactivity for S-100 protein, but not for synaptophysin, somatostatin or CD56. Furthermore, we found lymph node metastasis despite the lack of bcl-2 and p53 expression. In addition to the rarity of the tumor, the present case suggests the malignant potential of the tumor despite the lack of accepted prognostic indicators for neuroendocrine tumors." }, { "instance_id": "R28140xR28108", "comparison_id": "R28140", "paper_id": "R28108", "text": "Duodenal gangliocytic paraganglioma with lymph node metastasis in a 17-year-old boy A case of duodenal gangliocytic paraganglioma (DGP) in a 17\u2010year\u2010old boy is presented. In this case a lymph node in the peripancreatic region was involved by a metastatic tumor. A review of the literature on DGP indicates that this case represents the youngest patient and is the second case of DGP with metastasis. Immunohistochemical staining for neuron\u2010specific enolase (NSE), neurofilament (NF), pancreatic polypeptide, and somatostatin showed positive results for epithelioid and ganglion\u2010like cells, whereas spindle cells showed immunoreactivities for S\u2010100 protein, NSE, and NF. The histogenesis of DGP is discussed." }, { "instance_id": "R28140xR28116", "comparison_id": "R28140", "paper_id": "R28116", "text": "A Case Report of Duodenal Gangliocytic Paraganglioma with Lymph Node Metastasis. 
We report a case of duodenal gangliocytic paraganglioma with metastasis to a lymph node on the anterior surface of the pancreatic head. The patient was a 63-year-old woman whose chief complaint was upper abdominal pain. Abdominal ultrasonography revealed dilation of the common bile duct, and endoscopy and hypotonic duodenography revealed a tumor in the duodenum; the tumor appeared as a submucosal tumor-like lesion with multiple ulcers on its surface. A pylorus-preserving pancreaticoduodenectomy was performed. Based on the histological findings and immunohistochemical staining for NSE, S-100 protein, somatostatin, and other markers, the tumor was diagnosed as a gangliocytic paraganglioma. A small metastatic focus was also found in a peripancreatic lymph node. Six cases of gangliocytic paraganglioma with lymph node metastasis have been reported to date; none recurred and the prognosis was favorable. Extended surgery was therefore considered unnecessary for this tumor." }, { "instance_id": "R28140xR28111", "comparison_id": "R28140", "paper_id": "R28111", "text": "Gangliocytic paraganglioma of the papilla of Vater with regional lymph node metastasis We report a case of duodenal gangliocytic paraganglioma (GP) in a 47-yr-old man. 
The GP arose in the second portion of the duodenum and was shown by histological examination to consist of epithelioid cells, spindle cells, and ganglion-like cells. Although most of the reported cases of GP have been regarded as benign, in the present case we found that a lymph node in the peripancreatic region contained a metastatic tumor. This is believed to be the third documented case of GP showing regional lymph node metastasis. The metastatic tumor consisted of only epithelioid cells, without spindle cells or ganglion-like cells, supporting the hypothesis that only epithelioid cells have malignant potential." }, { "instance_id": "R28140xR28121", "comparison_id": "R28140", "paper_id": "R28121", "text": "Duodenal gangliocytic paraganglioma showing lymph node metastasis: a case report and review of the literature Abstract A case of duodenal gangliocytic paraganglioma (DGP) in a 67-year-old woman is presented. The DGP arose in the second part of the duodenum. Although most of the reported cases of DGP are considered benign, in the present case, we found regional lymph nodes containing metastatic tumor. Previous reports have documented metastases containing only epithelioid cells. The current case demonstrates metastatic tumor in regional lymph nodes containing all 3 of the DGP components (spindle cells, ganglion-like cells, and epithelioid cells)." }, { "instance_id": "R28140xR28119", "comparison_id": "R28140", "paper_id": "R28119", "text": "Recurrent duodenal gangliocytic paraganglioma with lymph node metastases Gangliocytic paraganglioma is a rare tumor that occurs most commonly in the second portion of the duodenum. It is characterized by its triphasic cellular differentiation: epithelioid neuroendocrine cells, spindle cells with Schwann cell differentiation, and ganglion cells. Most gangliocytic paragangliomas are considered benign and are amenable to local excision. 
However, to our knowledge, 23 cases with lymph node metastasis have been reported, 1 case of bone metastasis, and 2 cases of liver metastases. Predictive factors that have been suggested for lymph node metastasis include size (larger than 2 cm), young age, and tumors exceeding the submucosal layer. Our objective was to review the clinical features, the histopathologic characteristics, and the differential diagnosis of gangliocytic paraganglioma and to discuss the value of the predictive factors for lymph node metastasis." }, { "instance_id": "R28140xR28127", "comparison_id": "R28140", "paper_id": "R28127", "text": "Locally advanced duodenal gangliocytic paraganglioma treated with adjuvant radiation therapy: case report and review of the literature Background: Gangliocytic paragangliomas are rare neoplasms that predominantly arise in the periampullary region. Though considered benign, the disease can spread to regional lymphatics. Case presentation: A 49-year-old woman presented with melena and was found to have a periampullary mass. Endoscopic evaluation and biopsy demonstrated a periampullary paraganglioma. The tumor was resected with pylorus-preserving pancreaticoduodenectomy and was found to represent a gangliocytic paraganglioma associated with nodal metastases. In a controversial decision, the patient was treated with adjuvant external beam radiation therapy. She is alive and well one year following resection. The authors have reviewed the current literature pertaining to this entity and have discussed the biologic behavior of the tumor as well as the rationale for the treatment strategies employed. Conclusion: Paraganglioma is a rare tumor that typically resides in the gastrointestinal tract and demonstrates low malignant potential. Due to the rarity of the disease, there is no consensus on adjuvant treatment, even though nearly 5% of the lesions demonstrate malignant potential." 
}, { "instance_id": "R28191xR28169", "comparison_id": "R28191", "paper_id": "R28169", "text": "Empty container repositioning in liner shipping1 The efficient and effective management of empty containers is an important problem in the shipping industry. Not only does it have an economic effect, but it also has an environmental and sustainability impact, since the reduction of empty container movements will reduce fuel consumption and reduce congestion and emissions. The purposes of this paper are: to identify critical factors that affect empty container movements; to quantify the scale of empty container repositioning in major shipping routes; and to evaluate and contrast different strategies that shipping lines, and container operators, could adopt to reduce their empty container repositioning costs. The critical factors that affect empty container repositioning are identified through a review of the literature and observations of industrial practice. Taking three major routes (Trans-Pacific, Trans-Atlantic, Europe\u2013Asia) as examples, with the assumption that trade demands could be balanced among the whole network regardless the identities of individual shipping lines, the most optimistic estimation of empty container movements can be calculated. This quantifies the scale of the empty repositioning problem. Depending on whether shipping lines are coordinating the container flows over different routes and whether they are willing to share container fleets, four strategies for empty container repositioning are presented. Mathematical programming is then applied to evaluate and contrast the performance of these strategies in three major routes. 1A preliminary version was presented in IAME Annual Conference at Dalian, China, 2\u20134 April 2008." 
}, { "instance_id": "R28191xR28163", "comparison_id": "R28191", "paper_id": "R28163", "text": "Multicommodity network flow model for Asia's container ports This paper seeks to develop a multi-commodity network model to analyse the flow of containers within the Asia Pacific context. The model is used to evaluate the impact of container throughput in Asia's port by varying terminal handling charges and turnaround time. The three main regions analysed are north-east Asia, east Asia (Chinese port region) and south east Asia. Using the model, it could be shown that Busan port, which is an important transhipment hub in north-east Asia, could boost the container activities in the north-eastern part of China by improving its service quality. It is also found that the efficiency of the land link between Hong Kong and mainland China plays a crucial role for the future of Hong Kong port. While Singapore port maintains its position as a transhipment hub in south-east Asia, there would be expected competition from neighbouring low costs ports." }, { "instance_id": "R28191xR28189", "comparison_id": "R28191", "paper_id": "R28189", "text": "A revenue management slot allocation model for liner shipping networks The use of revenue management methods is still an up and coming topic in the liner shipping industry. In many liner shipping companies, decisions on container bookings are made by skilled employees without, or with little use of, decision support systems. Also in the literature, only a few publications on the topic of revenue management in the liner shipping industry can be found. Most of the models that have been suggested so far consider only one service and one ship cycle on this service. However, in liner shipping, it is important to consider the possibility of transhipment between services and of different demand situations at different times. 
Moreover, drawing inferences from similar developments in other industries and the literature, it seems promising to create a segmentation that divides container bookings into urgent and non-urgent cargo. This segmentation gives the customers more control over their cargo, and the carrier can gain additional revenue through extra charges. To achieve that aim, the carrier needs to keep some slots available until closing time, so he can offer slots on the next ship to customers with urgent cargo. On the basis of these facts, a new quantitative slot allocation model is developed that takes into account priority service segmentation, the network structure of liner shipping with the possibility of transhipment, and the existence of different ship cycles on the services. In contrast to the existing models, this approach leads to a more realistic representation of the situation in liner shipping. The booking limits resulting from the model can be used to decide whether a booking should be accepted or rejected in favour of a possible later and potentially more beneficial booking. A simulation study is done to test the model for different demand scenarios, which leads to promising results." }, { "instance_id": "R28191xR28173", "comparison_id": "R28191", "paper_id": "R28173", "text": "Flow balancing-based empty container repositioning in typical shipping service routes This article formulates the empty container repositioning problem for general shipping service routes based on container flow balancing. Two types of flow balancing mechanisms are analysed. The first is based on point-to-point balancing, which leads to a point-to-point repositioning policy. The second is based on coordinated balancing in the whole service, which leads to a coordinated repositioning policy. A simple heuristic algorithm is presented to solve the coordinated balancing problem which aims to minimize total empty container repositioning costs. 
The above two repositioning policies are then applied to a range of shipping services, representing typical route structures in existing shipping networks. The relative performances of these two policies and their sensitivity to route structure and trade demand are examined in both deterministic and stochastic situations. Managerial insights are subsequently derived." }, { "instance_id": "R28191xR28159", "comparison_id": "R28191", "paper_id": "R28159", "text": "Routing, ship size, and sailing frequency decision-making for a maritime hub-and-spoke container network This study formulates a two-objective model to determine the optimal liner routing, ship size, and sailing frequency for container carriers by minimizing shipping costs and inventory costs. First, shipping and inventory cost functions are formulated using an analytical method. Then, based on a trade-off between shipping costs and inventory costs, Pareto optimal solutions of the two-objective model are determined. Not only can the optimal ship size and sailing frequency be determined for any route, but also the routing decision on whether to route containers through a hub or directly to their destination can be made in objective value space. Finally, the theoretical findings are applied to a case study, with highly reasonable results. The results show that the optimal routing, ship size, and sailing frequency with respect to each level of inventory costs and shipping costs can be determined using the proposed model. The optimal routing decision tends to be shipping the cargo through a hub as the hub charge is decreased or its efficiency improved. In addition, the proposed model not only provides a tool to analyze the trade-off between shipping costs and inventory costs, but it also provides flexibility on the decision-making for container carriers." 
}, { "instance_id": "R28191xR28177", "comparison_id": "R28191", "paper_id": "R28177", "text": "On cost-efficiency of the global container shipping network This paper presents a simple formulation in the form of a pipe network for modelling the global container-shipping network. The cost-efficiency and movement-patterns of the current container-shipping network have been investigated using heuristic methods. The model is able to reproduce the overall incomes, costs, and container movement patterns for the industry as well as for the individual shipping lines and ports. It was found that the cost of repositioning empties is 27% of the total world fleet running cost and that overcapacity continues to be a problem. The model is computationally efficient. Implemented in the Java language, it takes one minute to run a full-scale network on a Pentium IV computer." }, { "instance_id": "R28191xR28149", "comparison_id": "R28191", "paper_id": "R28149", "text": "Repositioning empty containers in East and North China ports This study proposes a mathematical model for repositioning containers to ports in East and North China. With specially devised links and nodes, the proposed model can consider the strategies that are deployed commonly by liners. Moreover, the formulated problem can be solved rapidly owing to its underlying multi-commodity network structure. This feature increases the practicality of the proposed model because sensitivity analyses can be performed rapidly for the purpose of decision-making. Analytical results based on a global liner prove the rationality of the proposed model. Suggestions for repositioning empty containers are given according to the results of sensitivity analyses." 
}, { "instance_id": "R28191xR28155", "comparison_id": "R28191", "paper_id": "R28155", "text": "Optimal Slot Allocation in Intra-Asia Service for Liner Shipping Companies Liner shipping companies strive for fully loading cargo on vessels and often neglect revenue management opportunities. Shipping agents in different ports typically compete for additional slots on containerships to improve their own revenue. In booming markets, arguments over slot allocation between shipping agencies occur frequently. Intra-Asian service routes are designed to call at many ports to provide frequent sailings, reduced shipping time and direct delivery. Slot allocation in intra-Asia liner shipping is more complex than that for long-haul liner shipping. This study uses revenue management modelling as a decision-support tool to enhance profit and management performance of liner shipping agencies. The model is explained using a case study of Taiwan liner shipping company. Experimental results show the proposed model to have better applicability and performance than conventional slot allocation models." }, { "instance_id": "R28191xR28183", "comparison_id": "R28191", "paper_id": "R28183", "text": "Container routing in liner shipping Container paths play an important role in liner shipping services with container transshipment operations. In the literature, link-based multi-commodity flow formulations are widely used for container routing. However, they have two deficiencies: the level of service in terms of the origin-to-destination transit time is not incorporated and maritime cabotage may be violated. To overcome these deficiencies, we first present an operational network representation of a liner shipping network. Based on the network, an integer linear programming model is formulated to obtain container paths with minimum cost. Finally, we add constraints to the integer linear programming model, excluding those paths already obtained, so as to find all the container paths." 
}, { "instance_id": "R28235xR28218", "comparison_id": "R28235", "paper_id": "R28218", "text": "An Operational Model for Empty Container Management This paper proposes a mathematical programming approach for empty container management. Since directional imbalances in trade activities result in a surplus or shortage of empty containers in ports and depots, their management can be thought of as a min cost flow problem whose arcs represent services routes, inventory links and decisions concerning the time and place to lease containers from external sources. We adopt an hourly time-step in a dynamic network and, although this time-period generates large-size instances, the two implemented algorithms show a good computational efficiency. A possible case study of the Mediterranean basin is proposed and results are presented with a graphical representation, providing a useful support to decision-makers in the field." }, { "instance_id": "R28235xR28211", "comparison_id": "R28235", "paper_id": "R28211", "text": "An approximate dynamic programming approach for the empty container allocation problem The objective of this study is to demonstrate the successful application of an approximate dynamic programming approach in deriving effective operational strategies for the relocation of empty containers in the containerized sea-cargo industry. A dynamic stochastic model for a simple two-ports two-voyages (TPTV) system is proposed first to demonstrate the effectiveness of the approximate optimal solution obtained through a simulation based approach known as the temporal difference (TD) learning for average cost minimization. An exact optimal solution can be obtained for this simple TPTV model. Approximate optimal results from the TPTV model utilizing a linear approximation architecture under the TD framework can then be compared to this exact solution. The results were found comparable and showed promising improvements over an existing commonly used heuristics. 
The modeling and solution approach can be extended to a realistic multiple-ports multiple-voyages (MPMV) system. Some results for the MPMV case are shown." }, { "instance_id": "R28235xR28200", "comparison_id": "R28235", "paper_id": "R28200", "text": "Empty container reposition planning for intra-Asia liner shipping This paper addresses empty container reposition planning by explicitly considering safety stock management and geographical regions. This plan avoids a drawback seen in practice, in which large numbers of empty containers accumulate at a port and are then repositioned all at once. Empty containers occupy slots on the vessel, so the liner shipping company loses the chance to earn freight revenue. The problem is formulated as a two-stage problem. The upper problem estimates the empty container stock at each port, and the lower problem models empty container reposition planning over the shipping service network as a transportation problem solved by linear programming. We looked at case studies of the Taiwan Liner Shipping Company to show the application of the proposed model. The results show the model provides optimization techniques to minimize the cost of empty container repositioning and to provide evidence for adjusting the strategy of restructuring the shipping service network." }, { "instance_id": "R28235xR28193", "comparison_id": "R28235", "paper_id": "R28193", "text": "A Two-Stage Stochastic Network Model and Solution Methods for the Dynamic Empty Container Allocation Problem Containerized liner trades have been growing steadily since the globalization of world economies intensified in the early 1990s. However, these trades are typically imbalanced in terms of the numbers of inbound and outbound containers. As a result, the relocation of empty containers has become one of the major problems faced by liner operators. 
In this paper, we consider the dynamic empty container allocation problem where we need to reposition empty containers and to determine the number of leased containers needed to meet customers' demand over time. We formulate this problem as a two-stage stochastic network: in stage one, the parameters such as supplies, demands, and ship capacities for empty containers are deterministic; whereas in stage two, these parameters are random variables. We need to make decisions in stage one such that the total of the stage one cost and the expected stage two cost is minimized. By taking advantage of the network structure, we show how a stochastic quasi-gradient method and a stochastic hybrid approximation procedure can be applied to solve the problem. In addition, we propose some new variations of these methods that seem to work faster in practice. We conduct numerical tests to evaluate the value of the two-stage stochastic model over a rolling horizon environment and to investigate the behavior of the solution methods with different implementations." }, { "instance_id": "R28235xR28229", "comparison_id": "R28235", "paper_id": "R28229", "text": "Stochastic Optimization Model for Container Shipping of Sea Carriage Abstract Container shipping of sea-carriage optimization is an important way to effectively enhance shipping companies' competitiveness and efficiency. The optimization model of container shipping of sea carriage based on chance-constrained programming is established to maximize the profit of a shipping company, which is the objective function. The variables include the number of heavy and empty containers shipped by lines, the shortage number of heavy containers, and the renting number of empty containers from requiring ports to meet the demand. The constraints include meeting the requirements of heavy and empty containers, weight and volume limits of lines, and the supplying ability for empty containers. 
The number of empty containers supplied is stochastic, as it is affected by several factors. The chance-constrained program is translated into an integer program, and Lingo 9.0 is used to solve the model. Simulation is conducted under varied parameters to show the effectiveness of the model in optimizing the shipping plan. The model can be used to optimize the container shipping plan of sea carriage for a shipping company and increase profit." }, { "instance_id": "R28235xR28214", "comparison_id": "R28235", "paper_id": "R28214", "text": "Liner shipping cargo allocation with repositioning of empty containers Abstract This paper is concerned with the cargo allocation problem considering empty repositioning of containers for a liner shipping company. The aim is to maximize the profit of transported cargo in a network, subject to the cost and availability of empty containers. The formulation is a multi-commodity flow problem with additional inter-balancing constraints to control repositioning of empty containers. In a study of the cost efficiency of the global container-shipping network, Song et al. (2005) estimate that empty repositioning cost constitutes 27% of the total world fleet running cost. An arc-flow formulation is decomposed using the Dantzig-Wolfe principle into a path-flow formulation. A linear relaxation is solved with a delayed column generation algorithm. A feasible integer solution is found by rounding the fractional solution and adjusting flow balance constraints with leased containers. Computational results are reported for seven instances based on real-life shipping networks. Solving the relaxed linear path-flow model with a column generation algorithm outperforms solving the relaxed linear arc-flow model with the CPLEX barrier solver even for very small instances. The proposed algorithm is able to solve instances with 234 ports, 16,278 demands over 9 time periods in 34 min. 
The integer solutions found by rounding down are computed in less than 5 s and the gap is within 0.01% of the upper bound of the linear relaxation. The solved instances are quite large compared to those tested in the reviewed literature." }, { "instance_id": "R28235xR28222", "comparison_id": "R28235", "paper_id": "R28222", "text": "Effectiveness of an empty container repositioning policy with flexible destination ports Empty container repositioning is an important issue in the shipping industry. The majority of previous studies followed the same mechanism as moving laden containers, i.e. destination ports have to be specified before containers are lifted on vessels. An interesting practice that the authors observed from interviewing industrial experts is that empty containers may be lifted on a vessel with no determined destination ports, but will be lifted off the vessel when necessary. This paper aims to formulate a repositioning policy with flexible destination ports. The policy only specifies the direction of the empty flows, whereas ports of destination are not determined in advance and empty containers are unloaded as needed. The effectiveness of this policy is evaluated using a simulation model. Numerical experiments demonstrate that the new policy is more appropriate than a conventional policy in situations with more severely imbalanced trade patterns or with relatively smaller container fleet sizes." }, { "instance_id": "R28333xR28323", "comparison_id": "R28333", "paper_id": "R28323", "text": "Ship scheduling and container shipment planning for liners in short-term operations Good short-term ship scheduling and container shipment planning are very important for liner operations; however, in Taiwan, most such carriers currently utilize a trial-and-error process. In this study, we employ network flow techniques to construct a model for such activities. 
A solution algorithm, based on Lagrangian relaxation, a subgradient method, and a heuristic for the upper-bound solution, is developed to solve the model. To demonstrate and to test how well the model and the solution algorithm apply in the real world, we performed a case study using operating data from a major Taiwanese marine shipping company. The test results show that the model and the solution algorithm could be useful references for ship scheduling and container shipment planning." }, { "instance_id": "R28333xR28294", "comparison_id": "R28333", "paper_id": "R28294", "text": "Dynamic determination of vessel speed and selection of bunkering ports for liner shipping under stochastic environment In this work, we study a liner shipping operational problem which considers how to dynamically determine the vessel speed and refueling decisions, for a single vessel in one service route. Our model is a multi-stage dynamic model, where the stochastic nature of the bunker prices is represented by a scenario tree structure. Also, we explicitly incorporate the uncertainty of bunker consumption rates into our model. As the model is a large-scale mixed integer programming model, we adopt a modified rolling horizon method to tackle the problem. Numerical results show that our framework provides a lower overall cost and more reliable schedule compared with the stationary model of a related work." }, { "instance_id": "R28333xR28276", "comparison_id": "R28333", "paper_id": "R28276", "text": "Short-term liner ship fleet planning with container transhipment and uncertain container shipment demand This paper proposes a short-term liner ship fleet planning problem by taking into account container transshipment and uncertain container shipment demand. 
Given a liner shipping service network comprising a number of ship routes, the problem is to determine the numbers and types of ships required in the fleet and assign each of these ships to a particular ship route to maximize the expected value of the total profit over a short-term planning horizon. These decisions have to be made prior to knowing the exact container shipment demand, which is affected by some unpredictable and uncontrollable factors. This paper thus formulates this realistic short-term planning problem as a two-stage stochastic integer programming model. A solution algorithm, integrating the sample average approximation with a dual decomposition and Lagrangian relaxation approach, is then proposed. Finally, a numerical example is used to evaluate the performance of the proposed model and solution algorithm." }, { "instance_id": "R28333xR28309", "comparison_id": "R28333", "paper_id": "R28309", "text": "Liner ship fleet deployment with container transshipment operations This paper proposes a liner ship fleet deployment (LSFD) problem with container transshipment operations. The proposed problem is formulated as a mixed-integer linear programming model which allows container transshipment operations at any port, any number of times, without explicitly defining the container transshipment variables. Experiments on the Asia\u2013Europe\u2013Oceania shipping network of a global liner shipping company show that more than one third (17\u201322 ports) of the total of 46 ports have transshipment throughputs. Computational studies based on randomly generated large-scale shipping networks demonstrate that the proposed model can be solved efficiently by CPLEX." 
}, { "instance_id": "R28333xR27011", "comparison_id": "R28333", "paper_id": "R27011", "text": "A dynamic model and algorithm for fleet planning By analysing the merits and demerits of the existing linear model for fleet planning, this paper presents an algorithm which combines the linear programming technique with that of dynamic programming to improve the solution to the linear model for fleet planning. This new approach has not only the merits of the linear fleet planning model, but also the merit of saving computing time. The numbers of ships newly added into the fleet every year are always integers in the final optimal solution. The last feature of the solution directly meets the requirements of practical application. Both the mathematical model of the dynamic fleet planning and its algorithm are put forward in this paper. A numerical example is also given." }, { "instance_id": "R28333xR28274", "comparison_id": "R28333", "paper_id": "R28274", "text": "Liner ship fleet deployment with week-dependent container shipment demand This paper addresses a practical liner ship fleet deployment problem with week-dependent container shipment demand and transit time constraint, namely, maximum allowable transit time in container routing between a pair of ports. It first uses the space\u2013time network approach to generate practical container routes subject to the transit time constraints. This paper proceeds to formulate the fleet deployment problem based on the practical container routes generated. In view of the intractability of the formulation, two relaxation models providing lower bounds are built: one requires known container shipment demand at the fleet deployment stage, and the other assumes constant container shipment demand over the planning horizon. An efficient global optimization algorithm is subsequently proposed.
Extensive numerical experiments on the shipping data of a global liner shipping company demonstrate the applicability of the proposed model and algorithm." }, { "instance_id": "R28333xR28304", "comparison_id": "R28333", "paper_id": "R28304", "text": "Liner shipping fleet deployment with cargo transshipment and demand uncertainty This paper addresses a novel liner shipping fleet deployment problem characterized by cargo transshipment, multiple container routing options and uncertain demand, with the objective of maximizing the expected profit. This problem is formulated as a stochastic program and solved by the sample average approximation method. In this technique the objective function of the stochastic program is approximated by a sample average estimate derived from a random sample, and then the resulting deterministic program is solved. This process is repeated with different samples to obtain a good candidate solution along with the statistical estimate of its optimality gap. We apply the proposed model to a case study inspired from real-world problems faced by a major liner shipping company. Results show that the case is efficiently solved to 1% of relative optimality gap at 95% confidence level." }, { "instance_id": "R28333xR28268", "comparison_id": "R28333", "paper_id": "R28268", "text": "A chance constrained programming model for short-term liner ship fleet planning problems This article deals with a short-term Liner Ship Fleet Planning (LSFP) problem with cargo shipment demand uncertainty for a single liner container shipping company. The cargo shipment demand uncertainty enables us to propose a chance constraint for each liner service route, which guarantees that the liner service route can satisfy the customers\u2019 demand at least with a predetermined probability. 
Assuming that cargo shipment demand between any two ports on each liner service route is normally distributed, this article develops an integer linear programming model with chance constraints for the short-term LSFP problem. The proposed integer linear programming model can be efficiently solved by any optimization solver such as CPLEX. Finally, a numerical example is carried out to assess the model and analyze the impact of the chance constraints and cargo shipment demand." }, { "instance_id": "R28333xR28270", "comparison_id": "R28333", "paper_id": "R28270", "text": "Optimal operating strategy for a long-haul liner service route This paper proposes an optimal operating strategy problem arising in the liner shipping industry that aims to determine service frequency, containership fleet deployment plan, and sailing speed for a long-haul liner service route. The problem is formulated as a mixed-integer nonlinear programming model that cannot be solved efficiently by the existing solution algorithms. In view of some unique characteristics of the liner shipping operations, this paper proposes an efficient and exact branch-and-bound based [epsilon]-optimal algorithm. In particular, a mixed-integer nonlinear model is first developed for a given service frequency and ship type; two linearization techniques are subsequently presented to approximate this model with a mixed-integer linear program; and the branch-and-bound approach controls the approximation error below a specified tolerance. This paper further demonstrates that the branch-and-bound based [epsilon]-optimal algorithm obtains a globally optimal solution with the predetermined relative optimality tolerance [epsilon] in a finite number of iterations. The case study based on an existing long-haul liner service route shows the effectiveness and efficiency of the proposed solution method."
}, { "instance_id": "R28333xR28255", "comparison_id": "R28333", "paper_id": "R28255", "text": "A novel modeling approach for the fleet deployment problem within a short-term planning horizon This paper is concerned with model development for a short-term fleet deployment problem of liner shipping operations. We first present a mixed integer nonlinear programming model in which the optimal vessel speeds for different vessel types on different routes are interpreted as their realistic optimal travel times. We then linearize the proposed nonlinear model and obtain a mixed integer linear programming (MILP) model that can be efficiently solved by a standard mixed integer programming solver such as CPLEX. The MILP model determines the optimal route service frequency pattern and takes into account the time window constraints of shipping services. Finally, we report our numerical results and performance of CPLEX on randomly generated instances." }, { "instance_id": "R28333xR28264", "comparison_id": "R28333", "paper_id": "R28264", "text": "Purification and Characterization of Heparin Lyase I from Bacteroides stercoris HJ-15 Heparin lyase I was purified to homogeneity from Bacteroides stercoris HJ-15 isolated from human intestine, by a combination of DEAE-Sepharose, gel-filtration, hydroxyapatite, and CM-Sephadex C-50 column chromatography. This enzyme preferred heparin to heparan sulfate, but was inactive at cleaving acharan sulfate. The apparent molecular mass of heparin lyase I was estimated as 48,000 daltons by SDS-PAGE and its isoelectric point was determined as 9.0 by IEF. The purified enzyme required 500 mM NaCl in the reaction mixture for maximal activity and the optimal activity was obtained at pH 7.0 and 50 degrees C. It was rather stable within the range of 25 to 50 degrees C but lost activity rapidly above 50 degrees C. The enzyme was activated by Co(2+) or EDTA and stabilized by dithiothreitol.
The kinetic constants K(m) and V(max) for heparin were 1.3 \u00d7 10^(-5) M and 8.8 micromol/min\u00b7mg, respectively. The purified heparin lyase I was an eliminase that acted best on porcine intestinal heparin, and to a lesser extent on porcine intestinal mucosa heparan sulfate. It was inactive in the cleavage of N-desulfated heparin and acharan sulfate. In conclusion, heparin lyase I from Bacteroides stercoris was specific to heparin rather than heparan sulfate and its biochemical properties showed a substrate specificity similar to that of Flavobacterial heparin lyase I." }, { "instance_id": "R28333xR28315", "comparison_id": "R28333", "paper_id": "R28315", "text": "Sailing speed optimization for container ships in a liner shipping network This paper first calibrates the bunker consumption \u2013 sailing speed relation for container ships using historical operating data from a global liner shipping company. It proceeds to investigate the optimal sailing speed of container ships on each leg of each ship route in a liner shipping network while considering transshipment and container routing. This problem is formulated as a mixed-integer nonlinear programming model. In view of the convexity, non-negativity, and univariate properties of the bunker consumption function, an efficient outer-approximation method is proposed to obtain an \u03b5-optimal solution with a predetermined optimality tolerance level \u03b5. The proposed model and algorithm are applied to a real case study for a global liner shipping company." }, { "instance_id": "R28333xR28328", "comparison_id": "R28333", "paper_id": "R28328", "text": "A study on bunker fuel management for the shipping liner services In this paper, we consider a bunker fuel management strategy study for a single shipping liner service.
The bunker fuel management strategy includes three components: bunkering ports selection (where to bunker), bunkering amounts determination (how much to bunker) and ship speeds adjustment (how to adjust the ship speeds along the service route). As these three components are interrelated, it is necessary to optimize them jointly in order to obtain an optimal bunker fuel management strategy for a single shipping liner service. As an appropriate model representing the relationship between bunker fuel consumption rate and ship speed is important in the bunker fuel management strategy, we first study in detail this empirical relationship. We find that the relationship can be different for different sizes of containerships and provide an empirical model to express this relationship for different sizes of containerships based on real data obtained from a shipping company. We further highlight the importance of using the appropriate consumption rate model in the bunker fuel management strategy as using a wrong or aggregated model can result in inferior or suboptimal strategies. We then develop a planning level model to determine the optimal bunker fuel management strategy, i.e. optimal bunkering ports, bunkering amounts and ship speeds, so as to minimize total bunker fuel related cost for a single shipping liner service. Based on the optimization model, we study the effects of port arrival time windows, bunker fuel prices, ship bunker fuel capacity and skipping port options on the bunker fuel management strategy of a single shipping liner service. We finally provide some insights obtained from two case studies." 
}, { "instance_id": "R28333xR28287", "comparison_id": "R28333", "paper_id": "R28287", "text": "Fleet deployment optimization for liner shipping: an integer programming model Extending and improving an earlier work of the second author, an Integer Programming (IP) model is developed to minimize the operating and lay-up costs for a fleet of liner ships operating on various routes. The IP model determines the optimal deployment of an existing fleet, given route, service, charter, and compatibility constraints. Two examples are worked with extensive actual data provided by Flota Mercante Grancolombiana (FMG). The optimal deployment is solved for their existing ship and service requirements and results and conclusions are given." }, { "instance_id": "R28333xR28307", "comparison_id": "R28333", "paper_id": "R28307", "text": "Schedule Design and Container Routing in Liner Shipping A liner shipping company seeks to provide liner services with shorter transit time compared with the benchmark of market-level transit time because of the ever-increasing competition. When the itineraries of its liner service routes are determined, the liner shipping company designs the schedules of the liner routes such that the wait time at transshipment ports is minimized. As a result of transshipment, multiple paths are available for delivering containers from the origin port to the destination port. Therefore, the medium-term (3 to 6 months) schedule design problem and the operational-level container-routing problem must be investigated simultaneously. The schedule design and container-routing problems were formulated by minimization of the sum of the total transshipment cost and penalty cost associated with longer transit time than the market-level transit time, minus the bonus for shorter transit time. The formulation is nonlinear, noncontinuous, and nonconvex. A genetic local search approach was developed to find good solutions to the problem. 
The proposed solution method was applied to optimize the Asia\u2013Europe\u2013Oceania liner shipping services of a global liner company." }, { "instance_id": "R28333xR28331", "comparison_id": "R28333", "paper_id": "R28331", "text": "Liner shipping cycle cost modelling, fleet deployment optimization and what-if analysis This article formulates the mathematical model of the liner shipping company cycle cost and attempts to optimize the operational profile of company assets with regard to a specific network of routes, cargo flows and vessel portfolio. In other words, it attempts to give a practical solution to the modern shipping company fleet deployment problem. This is achieved by developing a generic cost model methodology that aims to minimize total operating costs by using Genetic Algorithms in optimizing various predefined attributes such as operational speed. The finalized model could be applicable to liner shipping companies for optimization purposes of liner networks, as well as for simulation and examination of possible scenarios and what-if analysis. In the era of recession, a demand shock is examined, and interesting results are produced. In further research, this model can estimate the impact of environmental legislation intensification. In the what-if analysis, the model can depict how an initial design of a liner system can be optimized by modifying system attributes to dynamically meet new requirements." }, { "instance_id": "R28333xR28311", "comparison_id": "R28333", "paper_id": "R28311", "text": "Robust schedule design for liner shipping services This paper examines the design of liner ship route schedules that can hedge against the uncertainties in port operations, which include the uncertain wait time due to port congestion and uncertain container handling time. The designed schedule is robust in that uncertainties in port operations and schedule recovery by fast steaming are captured endogenously.
This problem is formulated as a mixed-integer nonlinear stochastic programming model. A solution algorithm that incorporates a sample average approximation method, linearization techniques, and a decomposition scheme is proposed. Extensive numerical experiments demonstrate that the algorithm obtains near-optimal solutions with a stochastic optimality gap of less than 1.5% within reasonable time." }, { "instance_id": "R28333xR28247", "comparison_id": "R28333", "paper_id": "R28247", "text": "Liner shipping service optimisation with reefer containers capacity: an application to northern Europe\u2013South America trade Increasing the number of vessels in a container liner service while reducing speeds, known as the slow steaming strategy, has been a short-term response since 2008 to the challenges of over-capacity and the rise in bunker prices faced by shipping lines. This strategy, which reduces the fuel cost per voyage but increases the operating costs as more vessels are added to the service, is difficult to sustain when the transit time significantly affects the transportation demand. This article proposes a model applied to this situation, referred to as a case of optimal speed under semi-elastic demand, for which containerised perishable product transport is sensitive to time, while frozen and dry products are not. It investigates if slow steaming is still optimal when working to maximise the total profit on the cycle. In order to demonstrate the proposed model, a numerical application is carried out for a direct Northern Europe to East Coast of South America container service, a route selected due to the high volume of fresh products. For this application, the speed that maximises the total profit with inelastic and semi-elastic demand is then estimated for several bunker fuel prices."
}, { "instance_id": "R28333xR28290", "comparison_id": "R28333", "paper_id": "R28290", "text": "Minimizing fuel emissions by optimizing vessel schedules in liner shipping with uncertain port times We consider the problem of designing an optimal vessel schedule in the liner shipping route to minimize the total expected fuel consumption (and emissions) considering uncertain port times and frequency requirements on the liner schedule. The general optimal scheduling problem is formulated and tackled by simulation-based stochastic approximation methods. For special cases subject to the constraint of 100% service level, we prove the convexity and continuous differentiability of the objective function. Structural properties of the optimal schedule under certain conditions are obtained with useful managerial insights regarding the impact of port uncertainties. Case studies are given to illustrate the results." }, { "instance_id": "R28333xR28285", "comparison_id": "R28333", "paper_id": "R28285", "text": "Fleet deployment optimization for liner shipping. Part 2. Implementation and results We use linear programming (LP) for solving the problem of the optimal deployment of an existing fleet of multipurpose or fully containerized ships, among a given set of routes, including information for lay-up time, if any, and type and number of extra ships to charter, based on a detailed and realistic model for the calculation of the operating costs of all the ship types in every route and on a suitable LP formulation developed in earlier work of the authors. The optimization model is also applicable to the problem of finding the best fleet composition and deployment, in a given set of trade routes, which may be the case when a shipping company is considering new or modified services, or a renewal of the existing fleet. In addition, two promising mixed linear-integer programming formulations are suggested."
}, { "instance_id": "R28333xR28257", "comparison_id": "R28333", "paper_id": "R28257", "text": "A note on liner ship fleet deployment Liner ship fleet deployment is the assignment of ships to liner service routes for delivering containers in a planning horizon. This paper first reformulates the maximum number of voyages that can be completed in the planning horizon, which is incorrectly defined by the existing studies on liner ship fleet deployment. It proceeds to remedy a constraint in Gelareh and Meng (Transp Res 46E:76\u201389 2010) such that the nonlinear programming model proposed by Gelareh and Meng (Transp Res 46E:76\u201389 2010) is equivalent to its linear formulation. Finally, a reformulation of the fleet deployment model is presented to eliminate the combinatorial behavior of the model proposed by Gelareh and Meng (Transp Res 46E:76\u201389 2010). A computational study demonstrates a significant efficiency improvement of the reformulated model." }, { "instance_id": "R28333xR28302", "comparison_id": "R28333", "paper_id": "R28302", "text": "Reversing port rotation directions in a container liner shipping network Reversing port rotation directions of ship routes is a practical alteration of container liner shipping networks. The port rotation directions of ship routes not only affect the transit time of containers, as has been recognized by the literature, but also the shipping capacity and transshipment cost. This paper aims to obtain the optimal port rotation directions that minimize the generalized network-wide cost including transshipment cost, slot-purchasing cost and inventory cost. A mixed-integer linear programming model is proposed for the optimal port rotation direction optimization problem and it nests a minimum cost multi-commodity network flow model. The proposed model is applied to a liner shipping network operated by a global liner shipping company.
Results demonstrate that real-case instances can be solved efficiently and that significant cost reductions are gained by optimizing port rotation directions." }, { "instance_id": "R28333xR28237", "comparison_id": "R28333", "paper_id": "R28237", "text": "A matheuristic for the liner shipping network design problem We present an integer programming based heuristic, a matheuristic, for the liner shipping network design problem. This problem consists of finding a set of container shipping routes defining a capacitated network for cargo transport. The objective is to maximize the revenue of cargo transport, while minimizing the cost of operating the network. Liner shipping companies publish a set of routes with a time schedule, and it is an industry standard to have a weekly departure at each port call on a route. A weekly frequency is achieved by deploying several vessels to a single route, respecting the available fleet of container vessels. The matheuristic is composed of four main algorithmic components: a construction heuristic, an improvement heuristic, a reinsertion heuristic, and a perturbation heuristic. The improvement heuristic uses an integer program to select a set of improving port insertions and removals on each service. Computational results are reported for the benchmark suite LINER-LIB 2012 following the industry standard of weekly departures on every schedule. The heuristic shows overall good performance and is able to find high quality solutions within competitive execution times. The matheuristic can also be applied as a decision support tool to improve an existing network by optimizing on a designated subset of the routes. A case study is presented for this approach with very promising results."
}, { "instance_id": "R28369xR28340", "comparison_id": "R28369", "paper_id": "R28340", "text": "Cascading effects, network configurations and optimal transshipment volumes in liner shipping As a consequence of the delivery of large container ships and of the drop in demand since 2008, companies are struggling with low freight rates. In addition, newly delivered container ships have been deployed on the main east\u2013west trades, whereas medium-sized vessels have been pushed to smaller sectors through a phenomenon known as the cascading effect. This article investigates how this effect might lead liner companies to modify their services, such as including additional stops at major hubs. This article proposes a model that factors in potential changes in network configuration from direct to indirect services, and then tests the model with an empirical study of northern Europe/South American services that adds in a call at Tangier or Algeciras to the schedule. The results show that the optimal network configuration depends on vessel sizes and the transshipment volumes to be collected at the hub." }, { "instance_id": "R28369xR28359", "comparison_id": "R28369", "paper_id": "R28359", "text": "The container shipping network design problem with empty container repositioning This paper addresses the design of container liner shipping service networks by explicitly taking into account empty container repositioning. Two key and interrelated issues, those of deploying ships and containers, are usually treated separately by most existing studies on shipping network design. In this paper, both issues are considered simultaneously. The problem is formulated as a two-stage problem. A genetic algorithm-based heuristic is developed for the problem. A number of numerical experiments show that the problem with the consideration of empty container repositioning provides a more insightful solution than the one without."
}, { "instance_id": "R28369xR28349", "comparison_id": "R28369", "paper_id": "R28349", "text": "A mixed integer programming model for routing containerships In this paper, we formulate a mixed integer programming model for routing containerships. Our model helps in evaluating the optimal sequence of port calls and the number of containers transported between port pairs given the trip cycle time. Some numerical examples and a real world application of the Trans Pacific route are presented. The computational results show that our model, which solves the mixed integer program optimally, is quite efficient and applicable to real world problems." }, { "instance_id": "R28369xR28356", "comparison_id": "R28369", "paper_id": "R28356", "text": "A model and solution algorithm for optimal routing of a time-chartered containership We formulate a mathematical programming model for optimally routing a chartered container ship. Our model helps in evaluating whether a container ship should be chartered or not. The model calculates the optimal sequence of port calls, the number of containers transported between port pairs, and the number of trips the ship makes in the chartered period. A specialized algorithm is developed to solve the integer network subprograms which determine the sequence of port calls. Our algorithm, which solves an integer program optimally, is quite efficient. Comparisons of computational results with a Lagrangean relaxation method and an embedded dynamic program are also presented." }, { "instance_id": "R28369xR28367", "comparison_id": "R28369", "paper_id": "R28367", "text": "Optimal design of container liner services: Interactions with the transport demand in ports This article introduces an optimization model for container liner services that simultaneously optimizes the shipping route and the container slot allocation on vessels by considering the interactions between the container shipping scheme and the transport demand in the ports.
The model consists of two input factors and two processes. The input factors include the shipping scheme and the transport demand in the ports. The processes relate to the optimization of the shipping scheme for a fixed transport demand and the adjustment of the transport demand for a given shipping schedule. We demonstrate the model in an empirical case study on liner service optimization on the trade route between East Asia and West Europe." }, { "instance_id": "R28369xR28283", "comparison_id": "R28369", "paper_id": "R28283", "text": "Fleet deployment optimization for liner shipping. Part 1: background, problem formulation and solution approaches The background and the literature in liner fleet scheduling is reviewed and the objectives and assumptions of our approach are explained. We develop a detailed and realistic model for the estimation of the operating costs of liner ships on various routes, and present a linear programming formulation for the liner fleet deployment problem. Independent approaches for fixing both the service frequencies in the different routes and the speeds of the ships, are presented." }, { "instance_id": "R28369xR28336", "comparison_id": "R28369", "paper_id": "R28336", "text": "Two Approaches to Scheduling Container Ships with an Application to the North Atlantic Route The development of an interactive computer program and a heuristic optimising model for scheduling container ships on the North Atlantic is described. The constraints and multiple objective criteria governing these schedules are discussed and sample results of both approaches given." }, { "instance_id": "R28407xR28375", "comparison_id": "R28407", "paper_id": "R28375", "text": "Network Design and Allocation Mechanisms for Carrier Alliances in Liner Shipping Many real-world systems operate in a decentralized manner, where individual operators interact with varying degrees of cooperation and self motive. 
In this paper, we study transportation networks that operate as an alliance among different carriers. In particular, we study alliance formation among carriers in liner shipping. We address tactical problems such as the design of large-scale networks (that result from integrating the service networks of different carriers in an alliance) and operational problems such as the allocation of limited capacity on a transportation network among the carriers in the alliance. We utilize concepts from mathematical programming and game theory and design a mechanism to guide the carriers in an alliance to pursue an optimal collaborative strategy. The mechanism provides side payments to the carriers, as an added incentive, to motivate them to act in the best interest of the alliance while maximizing their own profits. Our computational results suggest that the mechanism can be used to help carriers form sustainable alliances." }, { "instance_id": "R28407xR28398", "comparison_id": "R28407", "paper_id": "R28398", "text": "A branch and cut algorithm for the container shipping network design problem The network design problem in liner shipping is of increasing importance in a strongly competitive market where potential cost reductions can influence market share and profits significantly. In this paper the network design and fleet assignment problems are combined into a mixed integer linear programming model minimizing the overall cost. To better reflect the real-life situation we take into account the cost of transhipment, a heterogeneous fleet, route dependent capacities, and butterfly routes. To the best of our knowledge it is the first time an exact solution method to the problem considers transhipment cost. The problem is solved with branch-and-cut using clover and transhipment inequalities. Computational results are reported for instances with up to 15 ports." 
}, { "instance_id": "R28407xR28377", "comparison_id": "R28407", "paper_id": "R28377", "text": "Joint Routing and Deployment of a Fleet of Container Vessels Liner companies face a complex problem in determining the optimal routing and deployment of a fleet of container vessels. This paper presents a model and an algorithm to address the two problems jointly. The model captures the revenues and operating expenses of a global liner company, and allows for the representation of vessel types with different cost and operating properties, transhipment hubs and associated costs, port delays, regional trade imbalances and the possibility of rejecting transportation demand selectively. Benchmark tests demonstrate that the proposed algorithm achieves good solutions quickly. The proposed algorithm is applied in a case study with 120 ports of call distributed throughout the globe. The case study explores the sensitivity of optimal fleet deployment and routing to varying bunker costs." }, { "instance_id": "R28407xR28372", "comparison_id": "R28407", "paper_id": "R28372", "text": "Ship Scheduling and Network Design for Cargo Routing in Liner Shipping A common problem faced by carriers in liner shipping is the design of their service network. Given a set of demands to be transported and a set of ports, a carrier wants to design service routes for its ships as efficiently as possible, using the underlying facilities. Furthermore, the profitability of the service routes designed depends on the paths chosen to ship the cargo. We present an integrated model, a mixed-integer linear program, to solve the ship-scheduling and the cargo-routing problems simultaneously. The proposed model incorporates relevant constraints, such as the weekly frequency constraint on the operated routes, and emerging trends, such as the transshipment of cargo between two or more service routes. To solve the mixed-integer program, we propose algorithms that exploit the separability of the problem.
More specifically, a greedy heuristic, a column generation-based algorithm, and a two-phase Benders decomposition-based algorithm are developed, and their computational efficiency in terms of the solution quality and the computational time taken is discussed. An efficient iterative search algorithm is proposed to generate schedules for ships. Computational experiments are performed on randomly generated instances simulating real life with up to 20 ports and 100 ships. Our results indicate high percentage utilization of ships' capacities and a significant number of transshipments in the final solution." }, { "instance_id": "R28407xR26992", "comparison_id": "R28407", "paper_id": "R26992", "text": "Optimal liner fleet routeing strategies The objective of this paper is to suggest practical optimization models for routing strategies for liner fleets. Many useful routing and scheduling problems have been studied in the transportation literature. As for ship scheduling or routing problems, relatively less effort has been devoted, in spite of the fact that sea transportation involves large capital and operating costs. This paper suggests two optimization models that can be useful to liner shipping companies. One is a linear programming model of profit maximization, which provides an optimal routing mix for each ship available and optimal service frequencies for each candidate route. The other model is a mixed integer programming model with binary variables which not only provides optimal routing mixes and service frequencies but also best capital investment alternatives to expand fleet capacity. This model is a cost minimization model." }, { "instance_id": "R28407xR28388", "comparison_id": "R28407", "paper_id": "R28388", "text": "A path based model for a green liner shipping network design problem Abstract\u2014Liner shipping networks are the backbone of international trade providing low transportation cost, which is a major driver of globalization. 
These networks are under constant pressure to deliver capacity, cost effectiveness and environmentally conscious transport solutions. This article proposes a new path based MIP model for the Liner Shipping Network Design Problem minimizing the cost of vessels and their fuel consumption, facilitating a green network. The proposed model reduces problem size using a novel aggregation of demands. A decomposition method enabling delayed column generation is presented. The subproblems have similar structure to Vehicle Routing Problems, which can be solved using dynamic programming. Index Terms\u2014liner shipping, network design, mathematical programming, column generation, green logistics. I. INTRODUCTION Global liner shipping companies provide port to port transport of containers, on a network which represents a billion dollar investment in assets and operational costs. The liner shipping network can be viewed as a transportation system for general cargo not unlike an urban mass transit system for commuters, where each route (service) provides transportation links between ports and the ports allow for transhipment in between routes (services). The liner shipping industry is distinct from other maritime transportation modes primarily due to a fixed public schedule with weekly frequency of port calls as an industry standard (Stopford 1997). The network consists of a set of services. A service connects a sequence of ports in a cycle at a given frequency, usually weekly. In Figure 1 a service connecting Montreal-Halifax and Europe is illustrated. The weekly frequency means that several vessels are committed to the service as illustrated by Figure 1, where four vessels cover a round trip of 28 days placed with one week in between vessels. This roundtrip for the vessel is referred to as a rotation. Note that the Montreal service carries cargo to the Mediterranean and Asia. This illustrates that transhipments to other connecting services are at the core of liner shipping. 
Therefore, the design of a service is complex, as the set of rotations and their interaction through transhipment is a transportation system extending the supply chains of a multitude of businesses. Figure 2 illustrates two services interacting in transporting goods between Montreal-Halifax and the Mediterranean, while individually" }, { "instance_id": "R28407xR27007", "comparison_id": "R28407", "paper_id": "R27007", "text": "Optimal fleet design in a ship routing problem Abstract The problem of deciding an optimal fleet (the type of ships and the number of each type) in a real liner shipping problem is considered. The liner shipping problem is a multi-trip vehicle routing problem, and consists of deciding weekly routes for the selected ships. A solution method consisting of three phases is presented. In phase 1, all feasible single routes are generated for the largest ship available. Some of these routes will use only a small portion of the ship\u2019s capacity and can be performed by smaller ships at less cost. This fact is used when calculating the cost of each route. In phase 2, the single routes generated in phase 1 are combined into multiple routes. By solving a set partitioning problem (phase 3), where the columns are the routes generated in phases 1 and 2, we find both the optimal fleet and the coherent routes for the fleet." }, { "instance_id": "R28407xR28302", "comparison_id": "R28407", "paper_id": "R28302", "text": "Reversing port rotation directions in a container liner shipping network Reversing port rotation directions of ship routes is a practical alteration of container liner shipping networks. The port rotation directions of ship routes not only affect the transit time of containers, as has been recognized by the literature, but also the shipping capacity and transshipment cost. 
This paper aims to obtain the optimal port rotation directions that minimize the generalized network-wide cost including transshipment cost, slot-purchasing cost and inventory cost. A mixed-integer linear programming model is proposed for the optimal port rotation direction optimization problem and it nests a minimum cost multi-commodity network flow model. The proposed model is applied to a liner shipping network operated by a global liner shipping company. Results demonstrate that real-case instances could be efficiently solved and significant cost reductions are gained by optimization of port rotation directions." }, { "instance_id": "R28407xR28391", "comparison_id": "R28407", "paper_id": "R28391", "text": "International Container Transportation Network Analysis Considering Post-Panamax Class Container Ships In accordance with the growing market of international container transport, carriers are employing global strategies such as routing changes, introducing huge container vessels, and forming alliances, while shippers are making global logistic strategies by SCM. On the other hand, the port administrators are also making various efforts to invite cargo and carriers. However, no research can be seen to answer how effective their strategies are, and what the results of the market will be. The present paper proposes a tool to answer these questions by introducing network equilibrium analysis of the international container transport market. In the analytical model, the port administrator's strategies and O.D. cargo volume are assumed to be given a priori, and domestic shippers' strategy is formulated so as to minimize their total transport cost by choosing the export and import ports and assigning their cargo to each port. The carriers' strategy is assumed to minimize the ship operation cost including the port and the cargo handling charge by choosing routes and vessel size for each route and controlling the tariff of each route. 
The analytical result is obtained by solving for the equilibrium state of the market under a given scenario of each port administrator. The present paper gives some numerical examples focusing on East Asian main ports, and particularly on Japanese shippers' behavior." }, { "instance_id": "R28446xR28424", "comparison_id": "R28446", "paper_id": "R28424", "text": "Modeling containerized shipping for developing countries Abstract Containerized transportation which is well established in industrialized countries is now being extended to trade with developing countries. Port authorities in these areas must make decisions regarding the location of container terminals, the size of terminal facilities, and the water depth required. This paper examines the economics of alternative route structure, a factor which affects design vessel size and other design parameters of the terminals. A five destination port system, with the trans-shipment terminal centrally located, and retaining 50% of the cargo, is compared to direct service with nine ports of call on a round trip of about 12,000 nm. Trans-shipment is the optimal scenario with costs 21\u201328% lower than direct service costs. Other trans-shipment and direct service scenarios are compared indicating conditions that are less favorable to trans-shipment. The results of this study can be utilized to make an initial assessment of the viability of trans-shipment in a specific case. The methodology can be extended to an in-depth study which should be based on data specifically tailored for the models." }, { "instance_id": "R28446xR28433", "comparison_id": "R28446", "paper_id": "R28433", "text": "Essential elements in tactical planning models for container liner shipping Tactical planning models for liner shipping problems such as network design and fleet deployment usually minimize the total cost or maximize the total profit subject to constraints including ship availability, service frequency, ship capacity, and transshipment. 
Most models in the literature do not consider slot-purchasing, multi-type containers, empty container repositioning, or ship repositioning, and they formulate the numbers of containers to transport as continuous variables. This paper develops a mixed-integer linear programming model that captures all these elements. It further examines from the theoretical point of view the additional computational burden introduced by incorporating these elements in the planning model. Extensive numerical experiments are conducted to evaluate the effects of the elements on tactical planning decisions. Results demonstrate that slot-purchasing and empty container repositioning have the largest impact on tactical planning decisions and relaxing the numbers of containers as continuous variables has little impact on the decisions." }, { "instance_id": "R28446xR28283", "comparison_id": "R28446", "paper_id": "R28283", "text": "Fleet deployment optimization for liner shipping. Part 1: background, problem formulation and solution approaches The background and the literature in liner fleet scheduling is reviewed and the objectives and assumptions of our approach are explained. We develop a detailed and realistic model for the estimation of the operating costs of liner ships on various routes, and present a linear programming formulation for the liner fleet deployment problem. Independent approaches for fixing both the service frequencies in the different routes and the speeds of the ships, are presented." }, { "instance_id": "R28446xR28411", "comparison_id": "R28446", "paper_id": "R28411", "text": "A Mixed Integer Programming Model on the Location of a Hub Port in the East Coast of South America The paper introduces a mixed integer programming model on the selection of a hub port in the East Coast of South America, among a set of 11 ports that are servicing the regional demand for container transportation. 
Ports in Brazil, Argentina and Uruguay are considered, together with several origin/destination ports in the world. The model minimises total system costs, taking into account both port costs (dues and terminal handling charges) and shipping costs (feedering and mainline). In total, the model consists of 3,883 decision variables and 4,225 constraints. It turns up the port of Santos (Brazil) as the optimal single-hub solution, with the port of Buenos Aires (Argentina) as a close runner up. In addition, the model provides tentative estimates of improvements in demand and costs necessary to bring a certain port up to hub status. Despite some bold assumptions and limitations \u2013 mainly due to data availability \u2013 the model offers a straightforward decision tool to all ports in the world aspiring to achieve hub status and all that comes with it." }, { "instance_id": "R28487xR27013", "comparison_id": "R28487", "paper_id": "R27013", "text": "A Scheduling Model for a High Speed Containership Service: A Hub and Spoke Short-Sea Application Advances in ship technology must be demonstrably beneficial and profitable before shipowners invest. Since the capital costs are large, investment in new technology will tend to be incremental rather than radical and will be affected by the financial viability of the service in which the ship is employed. While operating costs depend on the technology used for a given freight task, revenue from operations depends on transit time, frequency of service, freight rates, and volume of containers carried. Although high speed vessels (40 knots+) carry small payloads over short distances, this disadvantage can be offset by the greater number of round voyages achievable over a given period. After examining factors influencing the demand for fast cargo services, a high speed cargo ship design is described along with appropriate cargo handling and terminal operations. 
Using a mixed integer programming approach, an optimisation model is used to determine the profitability of a short-haul hub and spoke feeder operation based on Singapore. The model is used to calculate the optimum number of ships required to meet the given distribution task, the most profitable deployment of the fleet and the profitability over the planning horizon." }, { "instance_id": "R28487xR28467", "comparison_id": "R28487", "paper_id": "R28467", "text": "The economic viability of container mega ships In this study, we analyze the container mega-ship viability by considering competitive circumstances. We adopt a non-zero sum two-person game with two specific strategies based on different service network configurations for different ship sizes: hub-and-spoke for mega-ship and multi-port calling for conventional ship size. A shipping characteristic for each route is approximately optimized to set up pay-off (or profit) matrixes for both players. Throughout model applications for Asia-Europe and Asia-North America trades, the mega-ship is competitive in all scenarios for Asia-Europe, while it is viable for Asia-North America only when the freight rate and feeder costs are low." }, { "instance_id": "R28487xR28451", "comparison_id": "R28487", "paper_id": "R28451", "text": "Fleet deployment, network design and hub location of liner shipping companies A mixed integer linear programming formulation is proposed for the simultaneous design of network and fleet deployment of a deep-sea liner service provider. The underlying network design problem is based on a 4-index (5-index by considering capacity type) formulation of the hub location problem which are known for their tightness. The demand is elastic in the sense that the service provider can accept any fraction of the origin\u2013destination demand. We then propose a primal decomposition method to solve instances of the problem to optimality. 
Numerical results confirm superiority of our approach in comparison with a general-purpose mixed integer programming solver." }, { "instance_id": "R28487xR28458", "comparison_id": "R28487", "paper_id": "R28458", "text": "Hub-and-spoke network design and fleet deployment for string planning of liner shipping Abstract All shipping liner companies divide their service regions into several rotations (strings) in order to operate their container vessels. A string is the ordered set of ports at which a container vessel will call. Each port is usually called at no more than twice along one string, although a single port may be called at several times on different strings. The size of string dictates the number of vessels required to offer a given frequency of service. In order to better use their shipping capacity, groups of Liner Service Providers sometimes make a short term agreement to merge some of their service routes (in a certain region) into one main ocean going rotation and p feeder rotations. In order to minimize the weighted sum of transit time, and fixed deployment costs, this paper proposes a mixed integer linear programming model of the network design, and an allocation of proper capacity size and frequency setting for every rotation. Given that none of the existing general-purpose MIP solvers is able to solve even very small problem instances in a reasonable time, we propose a Lagrangian decomposition approach which uses a heuristic procedure and is capable of obtaining practical and high quality solutions in reasonable times. The model will be applied on a real example, and we shall present some of the results obtained by our model which show how it facilitates a better use of assets and a significant reduction in the use of fuel, therefore allowing a more environmentally friendly service." 
}, { "instance_id": "R28487xR28485", "comparison_id": "R28487", "paper_id": "R28485", "text": "Optimization of shipping network of trunk and feeder lines for inter-regional and intra-regional container transport This paper firstly analyzes the structure of the existing container shipping network, which covers several areas located respectively in two counties, and develops a new kind of shipping network that consists of trunk and feeder lines. Secondly, the paper constructs a bi-level programming model that can be used to optimize the container shipping network with the aim to minimize the generalized transport costs. Then the model is tested with container O-D data between the ports in the Surrounding Bohai area in China and two ports in the West of the USA. Through the test calculation, a feasible optimized shipping network consisting of trunk and feeder lines is founded for the case study area." }, { "instance_id": "R28487xR28456", "comparison_id": "R28487", "paper_id": "R28456", "text": "Liner shipping hub network design in a competitive environment A mixed integer programming formulation is proposed for hub-and-spoke network design in a competitive environment. It addresses the competition between a newcomer liner service provider and an existing dominating operator, both operating on hub-and-spoke networks. The newcomer company maximizes its market share--which depends on the service time and transportation cost--by locating a predefined number of hubs at candidate ports and designing its network. While general-purpose solvers do not solve instances of even small size, an accelerated Lagrangian method combined with a primal heuristic obtains promising bounds. Our computational experiments on real instances of practical size indicate superiority of our approach." 
}, { "instance_id": "R28487xR28478", "comparison_id": "R28487", "paper_id": "R28478", "text": "Estimation of CO2 reduction for Japanese domestic container transportation based on mathematical models This research discusses domestic feeder container transportation connected with international trades in Japan. Optimal round trip courses of container ship fleet from the perspective of CO2 emission reduction are calculated and analyzed to obtain basic knowledge about CO2 emission reduction in the container feeder transportation system. Specifically, based on the weekly origin\u2013destination (OD) data at a hub port (Kobe) and other related transportation data, the ship routes are designed by employing a mathematical modeling approach. First, a mixed integer programming model is formulated and solved by using an optimization software that employs branch and bound algorithm. The objective function of the model is to minimize the CO2 emission subject to necessary (and partially simplified) constraints. The model is then tested on various types of ships with different speed and capacity. Moreover, it is also tested on various waiting times at hub port to investigate the effect in CO2 emission of the designated fleet. Both the assessment method of container feeder transportation and the transportation\u2019s basic insights in view of CO2 emission are shown through the analysis." }, { "instance_id": "R28487xR28460", "comparison_id": "R28487", "paper_id": "R28460", "text": "The marine single assignment nonstrict Hub location problem: formulations and experimental examples Marine hub-and-spoke networks have been applied to routing containerships for over two decades, but few papers have devoted their attention to these networks. The marine network problems are known as single assignment nonstrict hub location problems (SNHLPs), which deal with the optimal location of hubs and allocation of spokes to hubs in a network, allowing direct routes between some spokes. 
In this paper we present a satisfactory approach for solving SNHLPs. The quadratic integer profit programming approach consists of a two-stage computational algorithm: a hub location model and a spoke allocation model. We apply a heuristic scheme based on the shortest distance rule, and an experimental case based on the Trans-Pacific Routes is presented to illustrate the model\u2019s formulation and solution methods. The results indicate that the model is a concave function, exploiting the economies of scale for total profit with respect to the number of hubs. The spoke allocation may change an optimal choice of hub" }, { "instance_id": "R28487xR28483", "comparison_id": "R28487", "paper_id": "R28483", "text": "A genetic algorithm for the hub-and-spoke problem applied to containerized cargo transport A genetic algorithm for the hub-and-spoke problem (GAHP) is proposed in this work. The GAHP configures a hub-and-spoke network with shuttle services for containerized cargo transport. For a fixed number of hubs, it determines the best network configuration of hub locations and spoke allocations that minimizes the total costs of the system. The GAHP has a simple individual structure with integer number representation, where spokes, their allocations, and hub locations are easily recognized. Due to the characteristics of the problem, which has a fixed number of hubs, rearrangements should be performed after every process. The GAHP rearrangement process includes improvements of individual structures, resulting in an improved population. Before applying the GAHP to the container transport network problem, the algorithm is validated using the Civil Aeronautics Board data set, which is extensively used in the literature to benchmark heuristics of hub location problems. To illustrate an example of a hub-and-spoke network with shuttle services, a case study with 18 ports is analyzed." 
}, { "instance_id": "R28614xR28587", "comparison_id": "R28614", "paper_id": "R28587", "text": "Hepatic undifferentiated (embryonal) sarcoma in an adult Undifferentiated (embryonal) sarcoma of the liver (USL) is a rare malignant tumour with a poor prognosis. The absence of specific symptoms, the rapid tumour growth, the normality of the common tumour markers, and the consequential delay in the diagnosis often result in significant enlargement of the" }, { "instance_id": "R28614xR28529", "comparison_id": "R28614", "paper_id": "R28529", "text": "Malignant Mesenchymoma and Birth Defects Malignant mesenchymoma developed in an 18-year-old patient with phenytoin-associated cleft lip and palate. Although these conditions may be related by chance, the possibility of transplacental carcinogenesis by phenytoin should be considered, especially since neuroblastoma was reported recently in two children with phenytoin-induced malformations. Following combination chemotherapy for metastases, the patient experienced a 7-year disease-free interval, which is consistent with recent improvement in the treatment of soft-tissue sarcomas." }, { "instance_id": "R28614xR28544", "comparison_id": "R28614", "paper_id": "R28544", "text": "Primary sarcoma of the liver in the adult Primary undifferentiated saroma of the liver is a rare tumor, being documented primarily in the pediatric age group. This report describes the occurrence of such a tumor in a 55\u2010year\u2010old white woman with Meyenburg\u2010s complexes of the liver and the CRST syndrome. The clinicopathologic features of the tumor in the adult are characterized and the literature is reviewed." }, { "instance_id": "R28614xR28569", "comparison_id": "R28614", "paper_id": "R28569", "text": "Undifferentiated (embryonal) sarcoma of the liver: Pathologic findings and long-term survival after complete surgical resection Undifferentiated (embryonal) sarcoma of liver is a rare tumor with a reputed poor prognosis. 
Four patients with this tumor are reported, of whom three were alive without recurrence 1.5, 2.5, and 12 years after initial complete surgical resection, and two of whom received no adjuvant therapy. The fourth patient, in whom complete surgical resection of tumor was not achieved, died with recurrent tumor at 13 months. The latter tumor differed histologically and consisted mainly of closely packed smaller undifferentiated cells with a higher mitotic and apoptotic rate. Eosinophilic globules, characteristic of embryonal sarcoma, were found in some cases to contain condensed nuclear chromatin, evidence of origin from tumor cells dying by apoptosis. One tumor mainly contained large cysts lined by biliary\u2010type epithelium; this suggested an origin from a multipotent precursor cell able to differentiate along both stromal and epithelial lines." }, { "instance_id": "R28614xR28558", "comparison_id": "R28614", "paper_id": "R28558", "text": "Undifferentiated Sarcoma of the Liver in a 21-year-old Woman: Case Report A successful surgical case of malignant undifferentiated (embryonal) sarcoma of the liver (USL), a rare tumor normally found in children, is reported. The patient was a 21-year-old woman, complaining of epigastric pain and abdominal fullness. Chemical analyses of the blood and urine and complete blood counts revealed no significant changes, and serum alpha-fetoprotein levels were within normal limits. A physical examination demonstrated a firm, slightly tender lesion at the liver's edge palpable 10 cm below the xiphoid process. CT scan and ultrasonography showed an oval mass, confined to the left lobe of the liver, which proved to be hypovascular on angiography. At laparotomy, a large, 18 x 15 x 13 cm tumor found in the left hepatic lobe was resected. The lesion was dark red in color, encapsulated, smooth surfaced and of an elastic firm consistency. No metastasis was apparent. 
Histological examination resulted in a diagnosis of undifferentiated sarcoma of the liver. Three courses of adjuvant chemotherapy, including adriamycin, cis-diaminodichloroplatinum, vincristine and dacarbazine were administered following the surgery with no serious adverse effects. The patient remains well with no evidence of recurrence 12 months after her operation." }, { "instance_id": "R28614xR28572", "comparison_id": "R28614", "paper_id": "R28572", "text": "Primary Gastrointestinal Sarcomas \u2013 A Report of 21 Cases Twenty-one patients with primary gastrointestinal sarcomas underwent surgery at the University Clinics of Hamburg from 1970 to 1990. Main symptoms were gastrointestinal bleeding and abdominal pain. Al" }, { "instance_id": "R28614xR28567", "comparison_id": "R28614", "paper_id": "R28567", "text": "Embryonal sarcoma of the liver in an adult treated with preoperative chemotherapy, radiation therapy, and hepatic lobectomy A rare case of embryonal sarcoma of the liver in a 28\u2010year\u2010old man is reported. The patient was treated preoperatively with a combination of chemotherapy and radiation therapy. Complete surgical resection, 4.5 months after diagnosis, consisted of a left hepatic lobectomy. No viable tumor was found in the operative specimen. The patient was disease\u2010free 20 months postoperatively." }, { "instance_id": "R28614xR28601", "comparison_id": "R28614", "paper_id": "R28601", "text": "Mutation of TP53 gene is involved in carcinogenesis of hepatic undifferentiated (embryonal) sarcoma of the adult, in contrast with Wnt or telomerase pathways: an immunohistochemical study of three cases with genomic relation in two cases BACKGROUND/AIMS Hepatic undifferentiated (embryonal) sarcoma (HUS) is an exceptional hepatic malignant tumor in adults. Genetic studies were never reported in adult cases. METHODS In this study concerning three cases of HUS occurring in adult, we studied the three classical ways of carcinogenesis i.e. 
the TP53 (p53), Wnt (CTNNB1/beta-catenin and AXIN1) and telomerase (hTERT) pathways. We studied the expression of p53, beta-catenin and telomerase catalytic subunit hTERT by immunohistochemistry in the three cases; we determined TP53 gene mutation in two cases and the genome-wide allelotype, AXIN1, and CTNNB1/beta-catenin gene mutation in one case. RESULTS Immunohistochemistry showed an overexpression of p53 in more than 80% of tumoral cells; furthermore, mutations of TP53 were observed in two cases, involving the sequence-specific DNA binding domain. In contrast, no mutation was found in CTNNB1/beta-catenin and AXIN1 genes. Tumoral cells did not show hTERT staining or nuclear expression of beta-catenin. In addition, allelotype analysis in one case showed loss of heterozygosity of chromosome 7p, 11p, 17p, 22q, and allelic imbalance of 1p, 8p, 20q. CONCLUSIONS In this report of HUS in three adult patients, we emphasize the role of the TP53 pathway in carcinogenesis of this rare tumor. This point could be of interest for therapeutic strategies." }, { "instance_id": "R28614xR28599", "comparison_id": "R28614", "paper_id": "R28599", "text": "Undifferentiated (embryonal) sarcoma of liver in adult: a case report We report a case of undifferentiated (embryonal) sarcoma of the liver (UESL), which showed cystic formation in a 20-year-old man with no prior history of any hepatitis or liver cirrhosis. He was admitted with abdominal pain and a palpable epigastric mass. The physical examination findings were unremarkable except for a tender mass and the results of routine laboratory studies were all within normal limits. Abdominal ultrasound and computed tomography (CT) both showed a cystic mass in the left hepatic lobe. Subsequently, the patient underwent a tumor excision and two further hepatectomies because of tumor recurrence. 
Immunohistochemical study results showed that the tumor cells were positive for vimentin, alpha-1-antichymotrypsin (AACT) and desmin staining, and negative for alpha-fetoprotein (AFP), and eosinophilic hyaline globules in the cytoplasm of some giant cells were strongly positive for periodic acid-Schiff (PAS) staining. The pathological diagnosis was UESL. The patient is still alive with no tumor recurrence for four months." }, { "instance_id": "R28614xR28560", "comparison_id": "R28614", "paper_id": "R28560", "text": "Undifferentiated (Embryonal) Sarcoma of the Liver Undifferentiated (embryonal) sarcoma of the liver is a primitive mesenchymal neoplasm with predilection for individuals in the first 2 decades of life. In this study (10 boys, 6 girls), children in the age range of 6\u201310 years were most commonly affected (63%). Clinical features most frequently noted on presentation were abdominal pain or a palpable mass. In two cases there was cardiac involvement caused by invasion of the inferior vena cava with extension into the right atrium and ventricle; both children died of progressive dyspnea from tumor embolization to the lungs. One patient was a member of a kindred with the cancer family syndrome (Li-Fraumeni syndrome). There were 13 tumor-related deaths (86% mortality); one child was alive with recurrent tumor in the upper abdomen. Complete surgical resection was attempted in 10 of 15 children who underwent exploratory laparotomy; 2 were alive and well 1 and 5 years later, whereas 1 patient had a recurrence in the upper abdomen 3 years after diagnosis. Ultrastructural study (five cases) and immunohistochemistry (11 cases) supported a mesenchymal origin for the tumor, but failed to identify any diagnostic immunophenotype or specific line of differentiation. Coexpression of vimentin and cytokeratin was seen in three cases. 
Prompt detection of this aggressive tumor with complete surgical resection is the key to a successful outcome, but this is very difficult to achieve. Recent experience suggests that aggressive adjuvant chemotherapy may improve survival in some cases." }, { "instance_id": "R28614xR28596", "comparison_id": "R28614", "paper_id": "R28596", "text": "Clinical outcomes of surgical resections for primary liver sarcoma in adults: results from a single centre BACKGROUND Primary hepatic sarcoma is a rare tumour with a poor prognosis. METHODS From 1997 to 2002 eight patients had liver resection for primary sarcoma of the liver at our institution. The clinical characteristics, imaging findings, surgical procedures, adjuvant therapy and outcome were retrospectively reviewed. There were two patients each with angiosarcoma (AS), leiomyosarcoma (LMS), and undifferentiated embryonal sarcoma (UES), one patient with epithelioid hemangioendothelioma (EHE) and one patient with malignant peripheral nerve sheath sarcoma (PNSS). RESULTS The most common presenting symptoms were right upper quadrant pain and fever. Typical imaging findings were a heterogenous mass with poorly defined margins, pseudocapsule and aberrant vasculature. Preoperative diagnosis of a primary liver sarcoma was made in 7/8 cases, either by fine needle aspiration (n = 5) or angiography (n = 2). Five right hepatectomies and three trisegmentectomies were performed. An R (0) resection was possible in three cases. Two patients developed complications and there was one death. Adjuvant chemoradiotherapy was administered to 5/7 patients. Systemic chemotherapy led to tumour regression in both patients with UES which enabled a second hepatic resection. CONCLUSIONS The majority of patients with primary liver sarcoma present with right upper quadrant pain, fever and a liver mass. Differentiating the rare primary liver sarcoma from the much more common hepatocellular carcinoma (HCC) may aid in planning therapy. 
Patients with resectable tumours should be referred for surgery. Liver resection combined with adjuvant chemotherapy are the mainstays of treatment for UES in the adult." }, { "instance_id": "R28614xR28556", "comparison_id": "R28614", "paper_id": "R28556", "text": "Undifferentiated (embryonal) sarcoma of the liver. Epithelial features as shown by immunohistochemical analysis and electron microscopic examination The cell differentiation properties of two undifferentiated (embryonal) sarcomas of the liver (USL), one in a 9\u2010year\u2010old boy and one in a 23\u2010year\u2010old man, were studied by immunohistochemistry and electron microscopic examination. Both tumors showed a part pleomorphic pattern and a part myxoid spindle cell sarcomatous pattern. An electron microscopic examination showed some tonofilament\u2010like bundles of intermediate filaments and cell junctions in one case, suggesting the presence of epithelial differentiation in that tumor. An immunohistochemical analysis showed a large number of cytokeratin\u2010positive neoplastic cells in both cases as studied with two different monoclonal antibodies, and most cells were positive for vimentin. No cells showed desmin, glial fibrillary acidic protein, or epithelial membrane antigen (EMA). Due to the presence of cytokeratin immunoreactivity, the possibility was considered that these tumors would represent anaplastic sarcomatoid variants of hepatocellular carcinoma. The tumor cells showed cytoplasmic alpha\u20101\u2010antitrypsin (AAT) positivity, and were negative for alpha\u2010fetoprotein. Because the immunoreactivity of AAT is widespread in different types of tumors, it is not possible to conclude that the AAT positivity would indicate the hepatoma nature of USL; however, this remains a possibility, especially when considering that in vitro transformed hepatocytes have been shown to be capable of forming sarcomatous tumors." 
}, { "instance_id": "R28614xR28605", "comparison_id": "R28614", "paper_id": "R28605", "text": "Pediatric and Adult Hepatic Embryonal Sarcoma: A Comparative Ultrastructural Study with Morphologic Correlations Hepatic embryonal (undifferentiated) sarcoma (ES) is a rare pediatric tumor occurring predominantly in the first decade of life, but a few examples of adult ES have also been described. Isolated ultrastructural reports describe contradictory lines of differentiation in these tumors. Four pediatric and 3 adult ES cases were studied ultrastructurally and features were correlated with morphology. Morphologically, tumors were composed of mixture of plump spindle cells and bizarre giant cells, showing abundant cytoplasmic eosinophilic globules. Ultrastructurally, the hallmark features in all cases included dilated RERs and secondary lysosomes with dense precipitates. Dilated mitochondria and mitochondrial\u2013RER complexes were often seen. Other features included intracytoplasmic fat droplets, scant actin microfilaments, and focal glycogen pools. In summary, pediatric and adult ES show similar morphologic and ultrastructural features. Ultrastructurally, hepatic ES have distinctive findings, including dilated RER and electron-dense lysosomal precipitates, which correlate with the eosinophilic hyaline bodies seen microscopically. These findings suggest that ES are composed of fibroblastic, fibrohistiocytic, and undifferentiated cells. Other lines of differentiation were not identified." }, { "instance_id": "R28614xR28532", "comparison_id": "R28614", "paper_id": "R28532", "text": "A Case of Primary Undifferentiated Sarcoma of the Liver: Diagnosed by Peritoneoscopy and Guided Biopsy The authors report a case of primary undifferentiated sarcoma of the liver, observed in a 36-year-old man. Diagnosis was established at peritoneoscopy and guided biopsy, and confirmed by autopsy two months later." 
}, { "instance_id": "R28614xR28546", "comparison_id": "R28614", "paper_id": "R28546", "text": "Primary malignant mesenchymal tumour of the liver in an elderly female A case of primary malignant mesenchymal tumour of the liver occurring in an 86\u2010year\u2010old woman is described. This very uncommon tumour has previously only been described in children and young adults, the previous oldest being 28 years of age. The tumour was large, rapidly growing though well circumscribed and extensively necrotic. Microscopically it was mostly composed of spindle cell sarcoma without differentiating features. Epithelial lined ductules were seen throughout the tumour and degenerate hepatocytes were enveloped in the tumour peripherally. Intracytoplasmic and extracellular PAS\u2010positive, diastase\u2010resistant bodies were present, some showing positive staining for alpha\u20101\u2010antitrypsin. The tumour is compared with previous reports and its differential diagnosis and nomenclature discussed." }, { "instance_id": "R28614xR28576", "comparison_id": "R28614", "paper_id": "R28576", "text": "Undifferentiated embryonal sarcoma of the liver IMAGING FINDINGS Case 1: Initial abdominal ultrasound scan demonstrated a large heterogeneous, echogenic mass within the liver displaying poor blood flow (Figure 1). A contrast-enhanced CT scan of the chest, abdomen and pelvis was then performed, revealing a well-defined, hypodense mass in the right lobe of the liver (Figure 2) measuring approximately 11.3 cm AP x 9.8 cm transverse x 9.2 cm in the sagittal plane. An arterial phase CT scan showed a hypodense mass with a hyperdense rim (Figure 3A) and a delayed venous phase scan showed the low-density mass with areas of increased density displaying the solid nature of the lesion (Figure 3A). These findings combined with biopsy confirmed undifferentiated embryonal sarcoma (UES). 
Case 2: An abdominal ultrasound scan initially revealed a large heterogeneous lesion in the center of the liver with a small amount of blood flow (Figure 4). Inconclusive ultrasound results warranted a CT scan of the chest, abdomen and pelvis with contrast, which showed a heterogeneous low-density lesion within the right lobe of the liver that extended to the left lobe (Figure 5). The mass measured approximately 12.3 AP x 12.3 transverse x 10.7 in the sagittal plane. Arterial-phase CT showed a well-defined hypodense mass with vessels coursing throughout (Figure 6A). Delayed venous phase demonstrated the solid consistency of the mass by showing continued filling in of the mass (Figure 6B). A PET scan was done to evaluate the extent of the disease. FDG-avid tissue was documented in the large lobulated hepatic mass (Figure 7A,7B)." }, { "instance_id": "R28614xR28549", "comparison_id": "R28614", "paper_id": "R28549", "text": "Hepatic sarcomas in adults: a review of 25 cases. Twenty-five patients with an apparently primary sarcoma of the liver are reviewed. Presenting complaints were non-specific, but hepatomegaly and abnormal liver function tests were usual. Use of the contraceptive pill (four of 11 women) was identified as a possible risk factor; one patient had previously been exposed to vinyl chloride monomer. Detailed investigation showed that the primary tumour was extrahepatic in nine of the 25 patients. Distinguishing features of the 15 patients with confirmed primary hepatic sarcoma included a lower incidence of multiple hepatic lesions and a shorter time from first symptoms to diagnosis, but the most valuable discriminator was histology. Angiosarcomas and undifferentiated tumours were all of hepatic origin, epithelioid haemangioendotheliomas (EHAE) occurred as primary and secondary lesions and all other differentiated tumours arose outside the liver. 
The retroperitoneum was the most common site of an occult primary tumour and its careful examination therefore crucial: computed tomography scanning was found least fallible in this respect in the present series. Where resection (or transplantation), the best treatment, was not possible, results of therapy were disappointing, prognosis being considerably worse for patients with primary hepatic tumours. Patients with EHAE had a better overall prognosis regardless of primary site." }, { "instance_id": "R28889xR28795", "comparison_id": "R28889", "paper_id": "R28795", "text": "User-centered, Evolutionary Search in Conceptual Software Design Although much evidence exists to suggest that conceptual software engineering design is a difficult task for software engineers to perform, current computationally intelligent tool support for software engineers is limited. While search-based approaches involving module clustering and refactoring have been proposed and show promise, such approaches are downstream in terms of the software development lifecycle - the designer must manually produce a design before search-based clustering and refactoring can take place. Interactive, user-centered search-based approaches, on the other hand, support the designer at the beginning of, and during, conceptual software design, and are investigated in this paper by means of a case study. Results show that interactive evolutionary search, supported by software agents, appears highly promising. As an open system, search is steered jointly by designer preferences and software agents. Directly traceable to the design problem domain, a mass of useful and interesting conceptual class designs are arrived at which may be visualized by the designer with quantitative measures of structural integrity such as design coupling and class cohesion. 
The conceptual class designs are found to be of equivalent or better coupling and cohesion when compared to a manual conceptual design of the case study, and by exploiting concurrent execution, the performance of the software agents is highly favorable." }, { "instance_id": "R28889xR28788", "comparison_id": "R28889", "paper_id": "R28788", "text": "Multi-objective Improvement of Software Using Co-evolution and Smart Seeding Optimising non-functional properties of software is an important part of the implementation process. One such property is execution time, and compilers target a reduction in execution time using a variety of optimisation techniques. Compiler optimisation is not always able to produce semantically equivalent alternatives that improve execution times, even if such alternatives are known to exist. Often, this is due to the local nature of such optimisations. In this paper we present a novel framework for optimising existing software using a hybrid of evolutionary optimisation techniques. Given as input the implementation of a program or function, we use Genetic Programming to evolve a new semantically equivalent version, optimised to reduce execution time subject to a given probability distribution of inputs. We employ a co-evolved population of test cases to encourage the preservation of the program's semantics, and exploit the original program through seeding of the population in order to focus the search. We carry out experiments to identify the important factors in maximising efficiency gains. Although in this work we have optimised execution time, other non-functional criteria could be optimised in a similar manner." }, { "instance_id": "R28889xR28793", "comparison_id": "R28889", "paper_id": "R28793", "text": "Multiobjective Optimization of SLA-Aware Service Composition In service oriented architecture, each application is often designed as a set of abstract services, which defines its functions. 
A concrete service(s) is selected at runtime for each abstract service to fulfill its function. Since different concrete services may operate at different quality of service measures, application developers are required to select an appropriate set of concrete services that satisfies a given service level agreement when a number of concrete services are available for each abstract service. This problem, the QoS-aware service composition problem, is known to be NP-hard; it takes a significant amount of time and cost to find optimal solutions (optimal combinations of concrete services) from a huge number of possible solutions. This paper proposes an optimization framework, called E3, to address the issue. By leveraging a multiobjective genetic algorithm, E3 heuristically solves the QoS-aware service composition problem in a reasonably short time. The algorithm E3 proposes can consider multiple SLAs simultaneously and produce a set of Pareto solutions, which have the equivalent quality to satisfy multiple SLAs." }, { "instance_id": "R28889xR28826", "comparison_id": "R28889", "paper_id": "R28826", "text": "A Multi-objective Approach to Testing Resource Allocation in Modular Software Systems Nowadays, as the software systems become increasingly large and complex, the problem of allocating the limited testing-resource during the testing phase has become more and more difficult. In this paper, we propose to solve the testing-resource allocation problem (TRAP) using multi-objective evolutionary algorithms. Specifically, we formulate TRAP as two multi-objective problems. First, we consider the reliability of the system and the testing cost as two objectives. In the second formulation, the total testing-resource consumed is also taken into account as the third goal. Two multi-objective evolutionary algorithms, non-dominated sorting genetic algorithm II (NSGA2) and multi-objective differential evolution algorithms (MODE), are applied to solve the TRAP in the two scenarios. 
This is the first time that the TRAP is explicitly formulated and solved by multi-objective evolutionary approaches. Advantages of our approaches over the state-of-the-art single-objective approaches are demonstrated on two parallel-series modular software models." }, { "instance_id": "R28889xR28838", "comparison_id": "R28889", "paper_id": "R28838", "text": "Using Hybrid Algorithm For Pareto Effcient Multi-Objective Test Suite Minimisation Test suite minimisation techniques seek to reduce the effort required for regression testing by selecting a subset of test suites. In previous work, the problem has been considered as a single-objective optimisation problem. However, real world regression testing can be a complex process in which multiple testing criteria and constraints are involved. This paper presents the concept of Pareto efficiency for the test suite minimisation problem. The Pareto-efficient approach is inherently capable of dealing with multiple objectives, providing the decision maker with a group of solutions that are not dominated by each other. The paper illustrates the benefits of Pareto efficient multi-objective test suite minimisation with empirical studies of two and three objective formulations, in which multiple objectives such as coverage and past fault-detection history are considered. The paper utilises a hybrid, multi-objective genetic algorithm that combines the efficient approximation of the greedy approach with the capability of population based genetic algorithm to produce higher-quality Pareto fronts." }, { "instance_id": "R28889xR28845", "comparison_id": "R28889", "paper_id": "R28845", "text": "A Pareto Ant Colony Algorithm applied to the Class Integration and Test Order Problem In the context of Object-Oriented software, many works have investigated the Class Integration and Test Order (CITO) problem, proposing solutions to determine test orders for the integration test of the program classes. 
The existing approaches based on graphs can generate solutions that are sub-optimal, and do not consider the different factors and measures that can affect the stubbing process. To overcome this limitation, solutions based on Genetic Algorithms (GA) have presented promising results. However, the determination of a cost function, which is able to generate the best solutions, is not always a trivial task, mainly for complex systems with a great number of measures. Therefore, we introduce, in this paper, a multi-objective optimization approach to better represent the CITO problem. The approach generates a set of good solutions that achieve a balanced compromise between the different measures (objectives). It was implemented by a Pareto Ant Colony (P-ACO) algorithm, which is described in detail. The algorithm was used in a set of real programs and the obtained results are compared to the GA results. The results allow discussing the difference between single and multi-objective approaches especially for complex systems with a greater number of dependencies among the classes." }, { "instance_id": "R28889xR28817", "comparison_id": "R28889", "paper_id": "R28817", "text": "Search-based Genetic Optimization for Deployment and Reconfiguration of Software in the Cloud Migrating existing enterprise software to cloud platforms involves the comparison of competing cloud deployment options (CDOs). A CDO comprises a combination of a specific cloud environment, deployment architecture, and runtime reconfiguration rules for dynamic resource scaling. Our simulator CDOSim can evaluate CDOs, e.g., regarding response times and costs. However, the design space to be searched for well-suited solutions is extremely huge. In this paper, we approach this optimization problem with the novel genetic algorithm CDOXplorer. It uses techniques of the search-based software engineering field and CDOSim to assess the fitness of CDOs. 
An experimental evaluation that employs, among others, the cloud environments Amazon EC2 and Microsoft Windows Azure, shows that CDOXplorer can find solutions that surpass those of other state-of-the-art techniques by up to 60%. Our experiment code and data and an implementation of CDOXplorer are available as open source software." }, { "instance_id": "R28889xR28652", "comparison_id": "R28889", "paper_id": "R28652", "text": "On the value of user preferences in search-based software engineering: A case study in software product lines Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature maps) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread) but it also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces." }, { "instance_id": "R28889xR28842", "comparison_id": "R28889", "paper_id": "R28842", "text": "Multi-Objective Approaches to Optimal Testing Resource Allocation in Modular Software Systems Software testing is an important issue in software engineering. As software systems become increasingly large and complex, the problem of how to optimally allocate the limited testing resource during the testing phase has become more important, and difficult. 
Traditional Optimal Testing Resource Allocation Problems (OTRAPs) involve seeking an optimal allocation of a limited amount of testing resource to a number of activities with respect to some objectives (e.g., reliability, or cost). We suggest solving OTRAPs with Multi-Objective Evolutionary Algorithms (MOEAs). Specifically, we formulate OTRAPs as two types of multi-objective problems. First, we consider the reliability of the system and the testing cost as two objectives. Second, the total testing resource consumed is also taken into account as the third objective. The advantages of MOEAs over state-of-the-art single objective approaches to OTRAPs will be shown through empirical studies. Our study has revealed that a well-known MOEA, namely Nondominated Sorting Genetic Algorithm II (NSGA-II), performs well on the first problem formulation, but fails on the second one. Hence, a Harmonic Distance Based Multi-Objective Evolutionary Algorithm (HaD-MOEA) is proposed and evaluated in this paper. Comprehensive experimental studies on both parallel-series, and star-structure modular software systems have shown the superiority of HaD-MOEA over NSGA-II for OTRAPs." }, { "instance_id": "R28889xR28831", "comparison_id": "R28889", "paper_id": "R28831", "text": "Generating Feasible Test Paths from an Executable Model Using a Multi-Objective Approach Search-based testing techniques using meta-heuristics, like evolutionary algorithms, has been largely used for test data generation, but most approaches were proposed for white-box testing. In this paper we present an evolutionary approach for test sequence generation from a behavior model, in particular, Extended Finite State Machine. An open problem is the production of infeasible paths, as these should be detected and discarded manually. To circumvent this problem, we use an executable model to obtain feasible paths dynamically. 
An evolutionary algorithm is used to search for solutions that cover a given test purpose, which is a transition of interest. The target transition is used as a criterion to get slicing information, in this way, helping to identify the parts of the model that affect the test purpose. We also present a multi-objective search: the test purpose coverage and the sequence size minimization, as longer sequences require more effort to be executed." }, { "instance_id": "R28889xR28808", "comparison_id": "R28889", "paper_id": "R28808", "text": "An Analysis of the Effects of Composite Objectives in Multiobjective Software Module Clustering The application of multiobjective optimization to address Software Engineering problems is a growing trend. Multiobjective algorithms provide a balance between the ability of the computer to search a large solution space for valuable solutions and the capacity of the human decision-maker to select an alternative when two or more incomparable objectives are presented. However, when more than a single objective is available, the set of objectives to be considered by the search becomes part of the decision. In this paper, we address the efficiency and effectiveness of using two composite objectives while searching solutions for the software clustering problem. We designed an experimental study which shows that a multiobjective genetic algorithm can find a set of solutions with increased quality and using less processing time if these composite objectives are suppressed from the formulation for the software clustering problem." }, { "instance_id": "R28889xR28801", "comparison_id": "R28889", "paper_id": "R28801", "text": "Generating Software Architecture Spectrum with Multi-Objective Genetic Algorithms A possible approach to partly automated software architecture design is the application of heuristic search methods like genetic algorithms. 
However, traditional genetic algorithms use a single fitness function with weighted terms for different quality attributes. This is inadequate for software architecture design that has to satisfy multiple incomparable quality requirements simultaneously. To overcome this problem, the use of Pareto optimality is proposed. This technique is studied in the presence of two central quality attributes of software architectures, modifiability and efficiency. The technique produces a spectrum of architecture proposals, ranging from highly modifiable (and less efficient) to highly efficient (and less modifiable). The technique has been implemented and evaluated using an example system. The results demonstrate that Pareto optimality has potential for producing a sensible set of architectures in the efficiency-modifiability space." }, { "instance_id": "R28889xR28799", "comparison_id": "R28889", "paper_id": "R28799", "text": "Solving the Class Responsibility Assignment Problem in Object-Oriented Analysis with Multi-Objective Genetic Algorithms In the context of object-oriented analysis and design (OOAD), class responsibility assignment is not an easy skill to acquire. Though there are many methodologies for assigning responsibilities to classes, they all rely on human judgment and decision making. Our objective is to provide decision-making support to reassign methods and attributes to classes in a class diagram. Our solution is based on a multi-objective genetic algorithm (MOGA) and uses class coupling and cohesion measurement for defining fitness functions. Our MOGA takes as input a class diagram to be optimized and suggests possible improvements to it. The choice of a MOGA stems from the fact that there are typically many evaluation criteria that cannot be easily combined into one objective, and several alternative solutions are acceptable for a given OO domain model. 
Using a carefully selected case study, this paper investigates the application of our proposed MOGA to the class responsibility assignment problem, in the context of object-oriented analysis and domain class models. Our results suggest that the MOGA can help correct suboptimal class responsibility assignment decisions and perform far better than simpler alternative heuristics such as hill climbing and a single-objective GA." }, { "instance_id": "R28889xR28833", "comparison_id": "R28889", "paper_id": "R28833", "text": "A Multi-Objective Genetic Algorithm to Test Data Generation Evolutionary testing has successfully applied search based optimization algorithms to the test data generation problem. The existing works use different techniques and fitness functions. However, the used functions consider only one objective, which is, in general, related to the coverage of a testing criterion. But, in practice, there are many factors that can influence the generation of test data, such as memory consumption, execution time, revealed faults, and etc. Considering this fact, this work explores a ultiobjective optimization approach for test data generation. A framework that implements a multi-objective genetic algorithm is described. Two different representations for the population are used, which allows the test of procedural and object-oriented code. Combinations of three objectives are experimentally evaluated: coverage of structural test criteria, ability to reveal faults, and execution time." }, { "instance_id": "R28889xR28848", "comparison_id": "R28889", "paper_id": "R28848", "text": "Generating Integration Test Orders for Aspect-Oriented Software with Multi-objective Algorithms The problem known as CAITO refers to the determination of an order to integrate and test classes and aspects that minimizes stubbing costs. Such problem is NP-hard and to solve it efficiently, search based algorithms have been used, mainly evolutionary ones. 
However, the problem is very complex since it involves different factors that may influence the stubbing process, such as complexity measures, contractual issues and so on. These factors are usually in conflict and different possible solutions for the problem exist. To deal properly with this problem, this work explores the use of multi-objective optimization algorithms. The paper presents results from the application of two evolutionary algorithms - NSGA-II and SPEA2 - to the CAITO problem in four real systems, implemented in AspectJ. Both multi-objective algorithms are evaluated and compared with the traditional Tarjan's algorithm and with a mono-objective genetic algorithm. Moreover, it is shown how the tester can use the found solutions, according to the test goals." }, { "instance_id": "R28889xR28810", "comparison_id": "R28889", "paper_id": "R28810", "text": "Applying Search Based Optimization to Software Product Line Architectures: Lessons Learned The Product-Line Architecture (PLA) is a fundamental SPL artifact. However, PLA design is a people-intensive and non-trivial task, and to find the best architecture can be formulated as an optimization problem with many objectives. We found several approaches that address search-based design of software architectures by using multi-objective evolutionary algorithms. However, such approaches have not been applied to PLAs. Considering such fact, in this work, we explore the use of these approaches to optimize PLAs. An extension of existing approaches is investigated, which uses specific metrics to evaluate the PLA characteristics. Then, we performed a case study involving one SPL. From the experience acquired during this study, we can relate some lessons learned, which are discussed in this work. Furthermore, the results point out that, in the case of PLAs, it is necessary to use SPL specific measures and evolutionary operators more sensitive to the SPL context." 
}, { "instance_id": "R28889xR28664", "comparison_id": "R28889", "paper_id": "R28664", "text": "Pareto Optimal Search Based Refactoring at the Design Level Refactoring aims to improve the quality of a software system's structure, which tends to degrade as the system evolves. While manually determining useful refactorings can be challenging, search based techniques can automatically discover useful refactorings. Current search based refactoring approaches require metrics to be combined in a complex fashion, and produce a single sequence of refactorings. In this paper we show how Pareto optimality can improve search based refactoring, making the combination of metrics easier, and aiding the presentation of multiple sequences of optimal refactorings to users." }, { "instance_id": "R28889xR28828", "comparison_id": "R28889", "paper_id": "R28828", "text": "A Multi-Objective Approach for the Regression Test Case Selection Problems When software is modified, some functionality that had been working can be affected. The reliable way to guarantee that the software is working correctly after those changes is to test the whole system again, but generally there is not sufficient time. Then, it is necessary to select significant test cases to be executed, in order to guarantee that the system is working as it should be. Although there are already works regarding the regression test case selection problem, some important features which can influence the test case selection are not considered in them. In this work, we state a new and more complete multi-objective formulation for this problem. The work also shows the results of the solution for the problem using a multi-objective genetic algorithm, comparing it with a random algorithm." 
}, { "instance_id": "R28889xR28884", "comparison_id": "R28889", "paper_id": "R28884", "text": "Not Going to Take this Anymore: Multi-Objective Overtime Planning for Software Engineering Projects Software Engineering and development is well-known to suffer from unplanned overtime, which causes stress and illness in engineers and can lead to poor quality software with higher defects. In this paper, we introduce a multi-objective decision support approach to help balance project risks and duration against overtime, so that software engineers can better plan overtime. We evaluate our approach on 6 real world software projects, drawn from 3 organisations using 3 standard evaluation measures and 3 different approaches to risk assessment. Our results show that our approach was significantly better (p < 0.05) than standard multi-objective search in 76% of experiments (with high Cohen effect size in 85% of these) and was significantly better than currently used overtime planning strategies in 100% of experiments (with high effect size in all). We also show how our approach provides actionable overtime planning results and investigate the impact of the three different forms of risk assessment." }, { "instance_id": "R28889xR28823", "comparison_id": "R28889", "paper_id": "R28823", "text": "A Multi-Objective Approach to Search-based Test Data Generation There has been a considerable body of work on search-based test data generation for branch coverage. However, hitherto, there has been no work on multi-objective branch coverage. In many scenarios a single-objective formulation is unrealistic; testers will want to find test sets that meet several objectives simultaneously in order to maximize the value obtained from the inherently expensive process of running the test cases and examining the output they produce. 
This paper introduces multi-objective branch coverage. The paper presents results from a case study of the twin objectives of branch coverage and dynamic memory consumption for both real and synthetic programs. Several multi-objective evolutionary algorithms are applied. The results show that multi-objective evolutionary algorithms are suitable for this problem, and illustrate the way in which a Pareto optimal search can yield insights into the trade-offs between the two simultaneous objectives." }, { "instance_id": "R28889xR28866", "comparison_id": "R28889", "paper_id": "R28866", "text": "A Multi-objective approach to Redundancy Allocation Problem in Parallel-series systems The Redundancy Allocation Problem (RAP) is a kind of reliability optimization problem. It involves the selection of components with appropriate levels of redundancy or reliability to maximize the system reliability under some predefined constraints. We can formulate the RAP as a combinatorial problem when just considering the redundancy level, while as a continuous problem when considering the reliability level. The RAP employed in this paper is that kind of combinatorial optimization problems. During the past thirty years, there have already been a number of investigations on RAP. However, these investigations often treat RAP as a single objective problem with the only goal to maximize the system reliability (or minimize the designing cost). In this paper, we regard RAP as a multi-objective optimization problem: the reliability of the system and the corresponding designing cost are considered as two different objectives. Consequently, we can utilize a classical Multi-objective Evolutionary Algorithm (MOEA), named Non-dominated Sorting Genetic Algorithm II (NSGA-II), to cope with this multi-objective redundancy allocation problem (MORAP) under a number of constraints. 
The experimental results demonstrate that the multi-objective evolutionary approach can provide more promising solutions in comparison with two widely used single-objective approaches on two parallel-series systems which are frequently studied in the field of reliability optimization." }, { "instance_id": "R28889xR28863", "comparison_id": "R28889", "paper_id": "R28863", "text": "Software Project Planning for Robustness and Completion Time in the Presence of Uncertainty using Multi Objective Search Based Software Engineering All large-scale projects contain a degree of risk and uncertainty. Software projects are particularly vulnerable to overruns, due to this uncertainty and the inherent difficulty of software project cost estimation. In this paper we introduce a search based approach to software project robustness. The approach is to formulate this problem as a multi objective Search Based Software Engineering problem, in which robustness and completion time are treated as two competing objectives. The paper presents the results of the application of this new approach to four large real-world software projects, using two different models of uncertainty." }, { "instance_id": "R28889xR28877", "comparison_id": "R28889", "paper_id": "R28877", "text": "A Hybrid Approach to Solve the Agile Team Allocation Problem The success of the team allocation in an agile software development project is essential. The agile team allocation is an NP-hard problem, since it comprises the allocation of self-organizing and cross-functional teams. Many researchers have driven efforts to apply Computational Intelligence techniques to solve this problem. This work presents a hybrid approach based on the NSGA-II multi-objective metaheuristic and Mamdani Fuzzy Inference Systems to solve the agile team allocation problem, together with an initial evaluation of its use in a real environment." 
}, { "instance_id": "R28889xR28631", "comparison_id": "R28889", "paper_id": "R28631", "text": "A Study of the Bi-Objective Next Release Problem One important issue addressed by software companies is to determine which features should be included in the next release of their products, in such a way that the highest possible number of customers get satisfied while entailing the minimum cost for the company. This problem is known as the Next Release Problem (NRP). Since minimizing the total cost of including new features into a software package and maximizing the total satisfaction of customers are contradictory objectives, the problem has a multi-objective nature. In this work, we apply three state-of-the-art multi-objective metaheuristics (two genetic algorithms, NSGA-II and MOCell, and one evolutionary strategy, PAES) for solving NRP. Our goal is twofold: on the one hand, we are interested in analyzing the results obtained by these metaheuristics over a benchmark composed of six academic problems plus a real world data set provided by Motorola; on the other hand, we want to provide insight about the solution to the problem. The obtained results show three different kinds of conclusions: NSGA-II is the technique computing the highest number of optimal solutions, MOCell provides the product manager with the widest range of different solutions, and PAES is the fastest technique (but with the least accurate results). Furthermore, we have observed that the best solutions found so far are composed of a high percentage of low-cost requirements and of those requirements that produce the largest satisfaction on the customers as well." 
}, { "instance_id": "R28889xR28626", "comparison_id": "R28889", "paper_id": "R28626", "text": "A Study of the Multi-Objective Next Release Problem One of the first issues which has to be taken into account by software companies is to determine what should be included in the next release of their products, in such a way that the highest possible number of customers get satisfied while this entails a minimum cost for the company. This problem is known as the Next Release Problem (NRP). Since minimizing the total cost of including new features into a software package and maximizing the total satisfaction of customers are contradictory objectives, the problem has a multi-objective nature. In this work we study the NRP problem from the multi-objective point of view, paying attention to the quality of the obtained solutions, the number of solutions, the range of solutions covered by these fronts, and the number of optimal solutions obtained. Also, we evaluate the performance of two state-of-the-art multi-objective metaheuristics for solving NRP: NSGA-II and MOCell. The obtained results show that MOCell outperforms NSGA-II in terms of the range of solutions covered, while the latter is able to obtain better solutions than MOCell in large instances. Furthermore, we have observed that the optimal solutions found are composed of a high percentage of low-cost requirements and, also, the requirements that produce most satisfaction on the customers." }, { "instance_id": "R28889xR28621", "comparison_id": "R28889", "paper_id": "R28621", "text": "\u201cFairness Analysis\u201d in Requirements Assignments Requirements engineering for multiple customers, each of whom have competing and often conflicting priorities, raises issues of negotiation, mediation and conflict resolution. This paper uses a multi-objective optimisation approach to support investigation of the trade-offs in various notions of fairness between multiple customers. 
Results are presented to validate the approach using two real-world data sets and also using data sets created specifically to stress test the approach. Simple graphical techniques are used to visualize the solution space." }, { "instance_id": "R28889xR28851", "comparison_id": "R28889", "paper_id": "R28851", "text": "Establishing Integration Test Orders of Classes with Several Coupling Measures During the inter-class test, a common problem, named Class Integration and Test Order (CITO) problem, involves the determination of a test class order that minimizes stub creation effort, and consequently test costs. The approach based on Multi-Objective Evolutionary Algorithms (MOEAs) has achieved promising results because it allows the use of different factors and measures that can affect the stubbing process. Many times these factors are in conflict and usually there is no single solution to the problem. Existing works on MOEAs present some limitations. The approach was evaluated with only two coupling measures, based on the number of attributes and methods of the stubs to be created. Other MOEAs can be explored and also other coupling measures. Considering this fact, this paper investigates the performance of two evolutionary algorithms: NSGA-II and SPEA2, for the CITO problem with four coupling measures (objectives) related to: attributes, methods, number of distinct return types and distinct parameter types. An experimental study was performed with four real systems developed in Java. The obtained results point out that the MOEAs can be efficiently used to solve this problem with several objectives, achieving solutions with balanced compromise between the measures, and of minimal effort to test." 
}, { "instance_id": "R28889xR28887", "comparison_id": "R28889", "paper_id": "R28887", "text": "The human competitiveness of search based software engineering This paper reports a comprehensive experimental study regarding the human competitiveness of search based software engineering (SBSE). The experiments were performed over four well-known SBSE problem formulations: next release problem, multi-objective next release problem, workgroup formation problem and the multi-objective test case selection problem. For each of these problems, two instances, with increasing sizes, were synthetically generated and solved by both metaheuristics and human subjects. A total of 63 professional software engineers participated in the experiment by solving some or all problem instances, producing together 128 responses. The comparison analysis strongly suggests that the results generated by search based software engineering can be said to be human competitive." }, { "instance_id": "R28889xR28835", "comparison_id": "R28889", "paper_id": "R28835", "text": "Efficient Multi-Objective Higher Order Mutation Testing with Genetic Programming In academic empirical studies, mutation testing has been demonstrated to be a powerful technique for fault finding. However, it remains very expensive and the few valuable traditional mutants that resemble real faults are mixed in with many others that denote unrealistic faults. These twin problems of expense and realism have been a significant barrier to industrial uptake of mutation testing. Genetic programming is used to search the space of complex faults (higher order mutants). The space is much larger than the traditional first order mutation space of simple faults. However, the use of a search based approach makes this scalable, seeking only those mutants that challenge the tester, while the consideration of complex faults addresses the problem of fault realism; it is known that 90% of real faults are complex (i.e. 
higher order). We show that we are able to find examples that pose challenges to testing in the higher order space that cannot be represented in the first order space." }, { "instance_id": "R28889xR28820", "comparison_id": "R28889", "paper_id": "R28820", "text": "Pareto efficient multi-objective test case selection Previous work has treated test case selection as a single objective optimisation problem. This paper introduces the concept of Pareto efficiency to test case selection. The Pareto efficient approach takes multiple objectives such as code coverage, past fault-detection history and execution cost, and constructs a group of non-dominating, equivalently optimal test case subsets. The paper describes the potential benefits of Pareto efficient multi-objective test case selection, illustrating with empirical studies of two and three objective formulations." }, { "instance_id": "R28889xR28873", "comparison_id": "R28889", "paper_id": "R28873", "text": "Using Multi-objective Metaheuristics to Solve the Software Project Scheduling Problem The Software Project Scheduling (SPS) problem relates to the decision of who does what during a software project lifetime. This problem has a capital importance for software companies. In the SPS problem, the total budget and human resources involved in software development must be optimally managed in order to end up with a successful project. Companies are mainly concerned with reducing both the duration and the cost of the projects, and these two goals are in conflict with each other. A multi-objective approach is therefore the natural way of facing the SPS problem. In this paper, a number of multi-objective metaheuristics have been used to address this problem. They have been thoroughly compared over a set of 36 publicly available instances that cover a wide range of different scenarios. The resulting project schedulings of the algorithms have been analyzed in order to show their relevant features. 
The algorithms used in this paper and the analysis performed may assist project managers in the difficult task of deciding who does what in a software project." }, { "instance_id": "R28889xR28855", "comparison_id": "R28889", "paper_id": "R28855", "text": "Highly Scalable Multi-Objective Test Suite Minimisation Using Graphics Cards Despite claims of \u201cembarrassing parallelism\u201d for many optimisation algorithms, there has been very little work on exploiting parallelism as a route for SBSE scalability. This is an important oversight because scalability is so often a critical success factor for Software Engineering work. This paper shows how relatively inexpensive General Purpose computing on Graphical Processing Units (GPGPU) can be used to run suitably adapted optimisation algorithms, opening up the possibility of cheap scalability. The paper develops a search based optimisation approach for multi objective regression test optimisation, evaluating it on benchmark problems as well as larger real world problems. The results indicate that speed-ups of over 25x are possible using widely available standard GPUs. It is also encouraging that the results reveal a statistically strong correlation between larger problem instances and the degree of speed up achieved. This is the first time that GPGPU has been used for SBSE scalability." }, { "instance_id": "R28889xR28641", "comparison_id": "R28889", "paper_id": "R28641", "text": "Understanding Clusters of Optimal Solutions in Multi-Objective Decision Problems Multi-objective decision problems are ubiquitous in requirements engineering. A common approach to solve them is to apply search-based techniques to generate a set of non-dominated solutions, formally known as the Pareto front, that characterizes all solutions for which no other solution performs better on all objectives simultaneously. 
Analysing the shape of the Pareto front helps decision makers understand the solution space and possible tradeoffs among the conflicting objectives. Interpreting the optimal solutions, however, remains a significant challenge. It is in particular difficult to identify whether solutions that have similar levels of goal attainment correspond to minor variants within the same design or to very different designs involving completely different sets of decisions. Our goal is to help decision makers identify groups of strongly related solutions in a Pareto front so that they can understand more easily the range of design choices, identify areas where strongly different solutions achieve similar levels of objectives, and decide first between major groups of solutions before deciding for a particular variant within the chosen group. The benefits of the approach are illustrated on a small example and validated on a larger independently-produced example representative of industrial problems." }, { "instance_id": "R28889xR28812", "comparison_id": "R28889", "paper_id": "R28812", "text": "Multi-objective Coevolutionary Automated Software Correction For a given program, testing, locating the errors identified, and correcting those errors is a critical, yet expensive process. The field of Search Based Software Engineering (SBSE) addresses these phases by formulating them as search problems. The Coevolutionary Automated Software Correction (CASC) system targets the correction phase by coevolving test cases and programs at the source code level. This paper presents the latest version of the CASC system featuring multi-objective optimization and an enhanced representation language. Results are presented demonstrating CASC's ability to successfully correct five seeded bugs in two non-trivial programs from the Siemens test suite. Additionally, evidence is provided substantiating the hypothesis that multi-objective optimization is beneficial to SBSE." 
}, { "instance_id": "R28889xR28661", "comparison_id": "R28889", "paper_id": "R28661", "text": "Solving Multi-objective and Fuzzy Multi-attributive Integrated Technique for QoS-Aware Web Service Selection The paper focuses on developing a new multiple criteria decision-making (MCDM) methodology for global web services selection based on QoS criteria, which integrates the multi-objective optimization with a fuzzy multi-attributive group decision-making (FMAGDM) technique. The study concentrates on the task of finding and then evaluating (or ranking) the finite number of Pareto-optimal design alternatives (PODAs). A genetic algorithm based multi-objective optimization technique is employed for optimization purpose in terms of experts' opinions. Subjective attribute based aggregation technique for homogeneous and heterogeneous groups of experts is employed and used for dealing with the fuzzy opinion aggregation. Finally, we will discuss the integrated technique for Web services selection on global QoS optimization." }, { "instance_id": "R29012xR28994", "comparison_id": "R29012", "paper_id": "R28994", "text": "Overview of the Face Recognition Grand Challenge Over the last couple of years, face recognition researchers have been developing new techniques. These developments are being fueled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. Such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over Face Recognition Vendor Test (FRVT) 2002 results. The face recognition grand challenge (FRGC) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with data corpus of 50,000 images. The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. 
This paper describes the challenge problem, data corpus, and presents baseline performance and preliminary results on natural statistics of facial imagery." }, { "instance_id": "R29012xR28996", "comparison_id": "R29012", "paper_id": "R28996", "text": "A high-resolution 3D dynamic facial expression database Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain." }, { "instance_id": "R29012xR29010", "comparison_id": "R29012", "paper_id": "R29010", "text": "Robust Face Landmark Estimation under Occlusion Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). 
Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with an 80/40% precision/recall." }, { "instance_id": "R29012xR29006", "comparison_id": "R29012", "paper_id": "R29006", "text": "A Semi-automatic Methodology for Facial Landmark Annotation Developing powerful deformable face models requires massive, annotated face databases on which techniques can be trained, validated and tested. Manual annotation of each facial image in terms of landmarks requires a trained expert and the workload is usually enormous. Fatigue is one of the reasons that in some cases annotations are inaccurate. This is why the majority of existing facial databases provide annotations for a relatively small subset of the training images. Furthermore, there is hardly any correspondence between the annotated landmarks across different databases. These problems make cross-database experiments almost infeasible. To overcome these difficulties, we propose a semi-automatic annotation methodology for annotating massive face datasets. This is the first attempt to create a tool suitable for annotating massive facial databases. We employed our tool for creating annotations for MultiPIE, XM2VTS, AR, and FRGC Ver. 2 databases. The annotations will be made publicly available from http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/. 
Finally, we present experiments which verify the accuracy of produced annotations." }, { "instance_id": "R29012xR29002", "comparison_id": "R29012", "paper_id": "R29002", "text": "Interactive Facial Feature Localization We address the problem of interactive facial feature localization from a single image. Our goal is to obtain an accurate segmentation of facial features on high-resolution images under a variety of pose, expression, and lighting conditions. Although there has been significant work in facial feature localization, we are addressing a new application area, namely to facilitate intelligent high-quality editing of portraits, that brings requirements not met by existing methods. We propose an improvement to the Active Shape Model that allows for greater independence among the facial components and improves on the appearance fitting step by introducing a Viterbi optimization process that operates along the facial contours. Despite the improvements, we do not expect perfect results in all cases. We therefore introduce an interaction model whereby a user can efficiently guide the algorithm towards a precise solution. We introduce the Helen Facial Feature Dataset consisting of annotated portrait images gathered from Flickr that are more diverse and challenging than currently existing datasets. We present experiments that compare our automatic method to published results, and also a quantitative evaluation of the effectiveness of our interactive method." }, { "instance_id": "R29012xR29000", "comparison_id": "R29012", "paper_id": "R29000", "text": "Localizing Parts of Faces Using a Consensus of Exemplars We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. 
By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset." }, { "instance_id": "R29012xR29008", "comparison_id": "R29012", "paper_id": "R29008", "text": "The First Facial Landmark Tracking in-the-Wild Challenge: Benchmark and Results Detection and tracking of faces in image sequences is among the most well studied problems in the intersection of statistical machine learning and computer vision. Often, tracking and detection methodologies use a rigid representation to describe the facial region, hence they can neither capture nor exploit the non-rigid facial deformations, which are crucial for countless applications (e.g., facial expression analysis, facial motion capture, high-performance face recognition etc.). Usually, the non-rigid deformations are captured by locating and tracking the position of a set of fiducial facial landmarks (e.g., eyes, nose, mouth etc.). Recently, we witnessed a burst of research in automatic facial landmark localisation in static imagery. This is partly attributed to the availability of large amounts of annotated data, many of which have been provided by the first facial landmark localisation challenge (also known as 300-W challenge). Even though now well established benchmarks exist for facial landmark localisation in static imagery, to the best of our knowledge, there is no established benchmark for assessing the performance of facial landmark tracking methodologies, containing an adequate number of annotated face videos. 
In conjunction with ICCV'2015 we ran the first competition/challenge on facial landmark tracking in long-term videos. In this paper, we present the first benchmark for long-term facial landmark tracking, containing currently over 110 annotated videos, and we summarise the results of the competition." }, { "instance_id": "R29012xR28998", "comparison_id": "R29012", "paper_id": "R28998", "text": "Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has been shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework." 
}, { "instance_id": "R29012xR29004", "comparison_id": "R29012", "paper_id": "R29004", "text": "Face detection, pose estimation, and landmark localization in the wild We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixture of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new \u201cin the wild\u201d annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com)." }, { "instance_id": "R29034xR29030", "comparison_id": "R29034", "paper_id": "R29030", "text": "One millisecond face alignment with an ensemble of regression trees This paper addresses the problem of Face Alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high quality predictions. We present a general framework based on gradient boosting for learning an ensemble of regression trees that optimizes the sum of square error loss and naturally handles missing or partially labelled data. We show how using appropriate priors exploiting the structure of image data helps with efficient feature selection. Different regularization strategies and their importance to combat overfitting are also investigated. 
In addition, we analyse the effect of the quantity of training data on the accuracy of the predictions and explore the effect of data augmentation using synthesized data." }, { "instance_id": "R29034xR29025", "comparison_id": "R29034", "paper_id": "R29025", "text": "Regressing a 3D Face Shape from a Single Image In this work we present a method to estimate a 3D face shape from a single image. Our method is based on a cascade regression framework that directly estimates facial landmark locations in 3D. We include the knowledge that a face is a 3D object into the learning pipeline and show how this information decreases localization errors while keeping the computational time low. We predict the actual positions of the landmarks even if they are occluded due to face rotation. To support the ability of our method to reliably reconstruct 3D shapes, we introduce a simple method for head pose estimation using a single image that reaches higher accuracy than the state of the art. Comparison of 3D facial landmark localization with the available state of the art further supports the feasibility of single-step face shape estimation. The code, trained models and our 3D annotations will be made available to the research community." }, { "instance_id": "R29034xR28977", "comparison_id": "R29034", "paper_id": "R28977", "text": "Face Alignment Across Large Poses: A 3D Solution Face alignment, which fits a face model to an image and extracts the semantic meanings of facial pixels, has been an important topic in the CV community. However, most algorithms are designed for faces in small to medium poses (below 45\u00b0), lacking the ability to align faces in large poses up to 90\u00b0. The challenges are three-fold: Firstly, the commonly used landmark-based face model assumes that all the landmarks are visible and is therefore not suitable for profile views. Secondly, the face appearance varies more dramatically across large poses, ranging from frontal view to profile view.
Thirdly, labelling landmarks in large poses is extremely challenging since the invisible landmarks have to be guessed. In this paper, we propose a solution to the three problems in a new alignment framework, called 3D Dense Face Alignment (3DDFA), in which a dense 3D face model is fitted to the image via a convolutional neural network (CNN). We also propose a method to synthesize large-scale training samples in profile views to solve the third problem of data labelling. Experiments on the challenging AFLW database show that our approach achieves significant improvements over state-of-the-art methods." }, { "instance_id": "R29034xR29010", "comparison_id": "R29034", "paper_id": "R29010", "text": "Robust Face Landmark Estimation under Occlusion Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR), which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with 80/40% precision/recall."
}, { "instance_id": "R29034xR29023", "comparison_id": "R29034", "paper_id": "R29023", "text": "Supervised descent method and its applications to face alignment Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian or the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs.cmu.edu/intraface." }, { "instance_id": "R29034xR28971", "comparison_id": "R29034", "paper_id": "R28971", "text": "Facial Landmark Detection by Deep Multi-task Learning Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference.
This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on a cascaded deep model [21]." }, { "instance_id": "R29034xR29027", "comparison_id": "R29034", "paper_id": "R29027", "text": "Face Alignment by Explicit Shape Regression We present a very efficient, highly accurate, \u201cExplicit Shape Regression\u201d approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during the test, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape-indexed features and a correlation-based feature selection method. This combination enables us to learn accurate models from large training data in a short time (20 min for 2,000 training images), and run regression extremely fast in test (15 ms for an 87-landmark shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency."
}, { "instance_id": "R29080xR29056", "comparison_id": "R29080", "paper_id": "R29056", "text": "Robust Discriminative Response Map Fitting with Constrained Local Models We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameter updates. The experiments, conducted on the Multi-PIE, XM2VTS and LFPW databases, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes." }, { "instance_id": "R29080xR29042", "comparison_id": "R29080", "paper_id": "R29042", "text": "Optimization Problems for Fast AAM Fitting in-the-Wild We describe a very simple framework for deriving the most well-known optimization problems in Active Appearance Models (AAMs), and most importantly for providing efficient solutions. Our formulation results in two optimization problems for fast and exact AAM fitting, and one new algorithm which has the important advantage of being applicable to 3D. We show that the dominant cost for both forward and inverse algorithms is a few times mN, which is the cost of projecting an image onto the appearance subspace.
This makes both algorithms not only computationally realizable but also very attractive speed-wise for most current systems. Because exact AAM fitting is no longer computationally prohibitive, we trained AAMs in-the-wild with the goal of investigating whether AAMs benefit from such a training process. Our results show that although we did not use sophisticated shape priors, robust features or robust norms for improving performance, AAMs perform notably well and in some cases comparably with current state-of-the-art methods. We provide Matlab source code for training, fitting and reproducing the results presented in this paper at http://ibug.doc.ic.ac.uk/resources." }, { "instance_id": "R29080xR28969", "comparison_id": "R29080", "paper_id": "R28969", "text": "Deep Convolutional Network Cascade for Facial Point Detection We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate key points with high accuracy. There are two advantages to this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. The method therefore can avoid local minima caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lighting. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions.
Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability." }, { "instance_id": "R29080xR29010", "comparison_id": "R29080", "paper_id": "R29010", "text": "Robust Face Landmark Estimation under Occlusion Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR), which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with 80/40% precision/recall." }, { "instance_id": "R29080xR29032", "comparison_id": "R29080", "paper_id": "R29032", "text": "Face Alignment at 3000 FPS via Regressing Local Binary Features This paper presents a highly efficient, very accurate regression approach for face alignment. Our approach has two novel components: a set of local binary features, and a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output.
Our approach achieves state-of-the-art results when tested on the current most challenging benchmarks. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3,000 fps on a desktop or 300 fps on a mobile phone for locating a few dozen landmarks." }, { "instance_id": "R29080xR29047", "comparison_id": "R29080", "paper_id": "R29047", "text": "Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model This paper addresses the problem of facial landmark localization and tracking from a single camera. We present a two-stage cascaded deformable shape model to effectively and efficiently localize facial landmarks with large head pose variations. For face detection, we propose a group sparse learning method to automatically select the most salient facial landmarks. By introducing a 3D face shape model, we use Procrustes analysis to achieve pose-free facial landmark initialization. For deformation, the first step uses mean-shift local search with a constrained local model to rapidly approach the global optimum. The second step uses component-wise active contours to discriminatively refine the subtle shape variation. Our framework can simultaneously handle face detection, pose-free landmark localization and tracking in real time. Extensive experiments are conducted on both laboratory environmental face databases and face-in-the-wild databases. All results demonstrate that our approach has certain advantages over state-of-the-art methods in handling pose variations." }, { "instance_id": "R29080xR29039", "comparison_id": "R29080", "paper_id": "R29039", "text": "Generic active appearance models revisited The proposed Active Orientation Models (AOMs) are generative models of facial shape and appearance.
Their main differences with the well-known paradigm of Active Appearance Models (AAMs) are that (i) they use a different statistical model of appearance, (ii) they are accompanied by a robust algorithm for model fitting and parameter estimation and (iii) most importantly, they generalize well to unseen faces and variations. Their main similarity is computational complexity. The project-out version of AOMs is as computationally efficient as the standard project-out inverse compositional algorithm, which is admittedly the fastest algorithm for fitting AAMs. We show that not only does the AOM generalize well to unseen identities, but it also outperforms state-of-the-art algorithms for the same task by a large margin. Finally, we prove our claims by providing Matlab code for reproducing our experiments ( http://ibug.doc.ic.ac.uk/resources )." }, { "instance_id": "R29080xR29069", "comparison_id": "R29080", "paper_id": "R29069", "text": "Incremental Face Alignment in the Wild The development of facial databases with an abundance of annotated facial data captured under unconstrained 'in-the-wild' conditions has made discriminative facial deformable models the de facto choice for generic facial landmark localization. Even though very good performance for facial landmark localization has been shown by many recently proposed discriminative techniques, when it comes to applications that require excellent accuracy, such as facial behaviour analysis and facial motion capture, semi-automatic person-specific or even tedious manual tracking is still the preferred choice. One way to construct a person-specific model automatically is through incremental updating of the generic model. This paper deals with the problem of updating a discriminative facial deformable model, a problem that has not been thoroughly studied in the literature.
In particular, we study for the first time, to the best of our knowledge, the strategies to update a discriminative model that is trained by a cascade of regressors. We propose very efficient strategies to update the model and we show that it is possible to automatically construct robust discriminative person- and imaging-condition-specific models 'in-the-wild' that outperform state-of-the-art generic face alignment strategies." }, { "instance_id": "R29080xR29053", "comparison_id": "R29080", "paper_id": "R29053", "text": "Facial point detection using boosted regression and graph models Finding fiducial facial points in any frame of a video showing rich naturalistic facial behaviour is an unsolved problem. Yet this is a crucial step for geometric-feature-based facial expression analysis, and for methods that use appearance-based features extracted at fiducial facial point locations. In this paper we present a method based on a combination of Support Vector Regression and Markov Random Fields to drastically reduce the time needed to search for a point's location and increase the accuracy and robustness of the algorithm. Using Markov Random Fields allows us to constrain the search space by exploiting the constellations that facial points can form. The regressors, on the other hand, learn a mapping between the appearance of the area surrounding a point and the positions of these points, which makes detection of the points very fast and can make the algorithm robust to variations of appearance due to facial expression and moderate changes in head pose. The proposed point detection algorithm was tested on 1855 images, the results of which showed we outperform current state-of-the-art point detectors."
}, { "instance_id": "R29080xR29072", "comparison_id": "R29080", "paper_id": "R29072", "text": "Real-time facial feature detection using conditional regression forests Although facial feature detection from 2D images is a well-studied field, there is a lack of real-time methods that estimate feature points even on low quality images. Here we propose conditional regression forests for this task. While regression forests learn the relations between facial image patches and the location of feature points from the entire set of faces, conditional regression forests learn the relations conditional on global face properties. In our experiments, we use the head pose as a global property and demonstrate that conditional regression forests outperform regression forests for facial feature detection. We have evaluated the method on the challenging Labeled Faces in the Wild [20] database, where close-to-human accuracy is achieved while processing images in real-time." }, { "instance_id": "R29080xR29075", "comparison_id": "R29080", "paper_id": "R29075", "text": "3D constrained local model for rigid and non-rigid facial tracking We present the 3D Constrained Local Model (CLM-Z) for robust facial feature tracking under varying pose. Our approach integrates both depth and intensity information in a common framework. We show the benefit of our CLM-Z method in both accuracy and convergence rates over the regular CLM formulation through experiments on publicly available datasets. Additionally, we demonstrate a way to combine a rigid head pose tracker with CLM-Z that benefits rigid head tracking. We show better performance than the current state-of-the-art approaches in head pose tracking with our extension of the generalised adaptive view-based appearance model (GAVAM)."
}, { "instance_id": "R29153xR34965", "comparison_id": "R29153", "paper_id": "R34965", "text": "Enterprise resource planning: An integrative review Enterprise resource planning (ERP) system solutions are currently in high demand by both manufacturing and service organisations because they provide a tightly integrated solution to an organisation's information system needs. During the last decade, ERP systems have received a significant amount of attention from researchers and practitioners from a variety of functional disciplines. In this paper, a comprehensive review of the research literature (1990\u20102003) concerning ERP systems is presented. The literature is further classified and the major outcomes of each study are addressed and analysed. Following a comprehensive review of the literature, proposals for future research are formulated to identify topics where fruitful opportunities exist." }, { "instance_id": "R29153xR29146", "comparison_id": "R29153", "paper_id": "R29146", "text": "A review of literature on Enterprise Resource Planning systems Enterprise resource planning (ERP) systems are currently involved in every aspect of organizations, as they provide a highly integrated solution to meet information system needs. ERP systems have attracted a great deal of attention from researchers and practitioners and have been the subject of a wide variety of investigation and study. In this paper, we have selected a certain number of papers concerning ERP systems between 1998 and 2006, and this is by no means a comprehensive review. The literature is further classified by its topic, and the major outcomes and research methods of each study are addressed. Finally, implications for future research are provided.
}, { "instance_id": "R29153xR29149", "comparison_id": "R29153", "paper_id": "R29149", "text": "A comprehensive literature review of the ERP research field over a Decade Purpose \u2013 The purpose of this paper is first, to develop a methodological framework for conducting a comprehensive literature review on an empirical phenomenon based on a vast amount of papers published. Second, to use this framework to gain an understanding of the current state of the enterprise resource planning (ERP) research field, and third, based on the literature review, to develop a conceptual framework identifying areas of concern with regard to ERP systems.Design/methodology/approach \u2013 Abstracts from 885 peer\u2010reviewed journal publications from 2000 to 2009 have been analysed according to journal, authors and year of publication, and further categorised into research discipline, research topic and methods used, using the structured methodological framework.Findings \u2013 The body of academic knowledge about ERP systems has reached a certain maturity and several different research disciplines have contributed to the field from different points of view using different methods, showing that the ERP rese..." }, { "instance_id": "R29153xR29123", "comparison_id": "R29153", "paper_id": "R29123", "text": "The emergence of enterprise systems management: a challenge to the IS curriculum This paper proposes four cornerstones of a future Information Systems (IS) curriculum. It analyses the challenges of the IS curriculum based on the development of enterprise systems, and further argues that the practice and the research into enterprise systems have progressed to a new stage resulting in the emergence of Enterprise Systems Management (ESM). ESM calls for new competences and consequently represents new challenges to the IS curriculum. The paper outlines potential teaching issues and discusses the impact on the IS curriculum. Finally the paper suggests ways of approaching the challenges." 
}, { "instance_id": "R29153xR29119", "comparison_id": "R29153", "paper_id": "R29119", "text": "The Iceberg on the sea: what do you see? Although organizations began to adopt enterprise systems (ES) in the 1980s (Hayman, 2000), academic interest has only recently begun. This literature review of ES articles appearing in academic information systems (IS) journals indicates that while previous ES studies have provided some interesting findings, only limited aspects of enterprise systems have been explored. We have essentially focused on the iceberg above the sea, ignoring what is going on under the water. This paper contributes to ES research by identifying strengths and weaknesses of ES research to date, and suggesting future research opportunities." }, { "instance_id": "R29153xR29143", "comparison_id": "R29153", "paper_id": "R29143", "text": "Enterprise Resource Planning (ERP): a review of the literature This article is a review of work published in various journals on the topic of Enterprise Resource Planning (ERP) between January 2000 and May 2006. A total of 313 articles from 79 journals are reviewed. The article intends to serve three goals. First, it will be useful to researchers who are interested in understanding what kinds of questions have been addressed in the area of ERP. Second, the article will be a useful resource for searching for research topics. Third, it will serve as a comprehensive bibliography of the articles published during the period. The literature is analysed under six major themes and nine sub-themes."
}, { "instance_id": "R29153xR29151", "comparison_id": "R29153", "paper_id": "R29151", "text": "Sustaining the Momentum: Archival Analysis of Enterprise Resource Planning Systems (2006\u20132012)" }, { "instance_id": "R29153xR29137", "comparison_id": "R29153", "paper_id": "R29137", "text": "Work, organisation and Enterprise Resource Planning systems: an alternative research agenda This paper reviews literature that examines the design, implementation and use of Enterprise Resource Planning systems (ERPs). It finds that most of this literature is managerialist in orientation, and concerned with the impact of ERPs in terms of efficiency, effectiveness and business performance. The paper seeks to provide an alternative research agenda, one that emphasises work- and organisation-based approaches to the study of the implementation and use of ERPs." }, { "instance_id": "R29184xR29166", "comparison_id": "R29184", "paper_id": "R29166", "text": "A Comprehensive Review of the Enterprise Systems Research Enterprise systems (ES) can be considered a novel phenomenon for information system research and other academic fields (e.g. operations and supply chain), one that has opened up immense potential and opportunities for research. Although scholars' interest in ES is recent, the number of publications has been growing continuously since 2000. The aim of this paper is to review a sample of the important contributions among the ES works published to date. To do this, the selected works have been classified into four key topics: business implications, technical issues, managerial issues, and implementation issues." }, { "instance_id": "R29184xR29156", "comparison_id": "R29184", "paper_id": "R29156", "text": "Process orientation through enterprise resource planning (ERP): a review of critical issues The significant development in global information technologies and the ever-intensifying competitive market climate have both pushed many companies to transform their businesses.
Enterprise resource planning (ERP) is seen as one of the most recently emerging process-orientation tools that can enable such a transformation. Its development has presented both researchers and practitioners with new challenges and opportunities. This paper provides a comprehensive review of the state of research in the ERP field relating to process management, organizational change and knowledge management. It surveys current practices, research and development, and suggests several directions for future investigation. Copyright \u00a9 2001 John Wiley & Sons, Ltd." }, { "instance_id": "R29184xR29170", "comparison_id": "R29184", "paper_id": "R29170", "text": "Organizational adoption of enterprise resource planning systems: A conceptual framework Abstract Although Enterprise Resource Planning (ERP) systems are being used widely all around the world, they bring along many problems as well as benefits. Most of these implementations are failures, and inadequate adoption is just one of the failure factors. This study provides an extensive review of the literature, resulting in a taxonomy that may be used by other researchers in the field. The study also defines a framework for organizational adoption of ERP systems. The model consists of core Technology Acceptance Model (TAM) variables (perceived ease of use of the ERP system and perceived usefulness), satisfaction and the common actors of an ERP project: technology, user, organization and project management." }, { "instance_id": "R29184xR29164", "comparison_id": "R29184", "paper_id": "R29164", "text": "Enterprise resource planning: Developments and directions for operations management research Abstract Enterprise resource planning (ERP) has come to mean many things over the last several decades.
Divergent applications by practitioners and academics, as well as by researchers in alternative fields of study, have allowed for considerable proliferation of information on the topic and for a considerable amount of confusion regarding the meaning of the term. In reviewing ERP research, two distinct research streams emerge. The first focuses on the fundamental corporate capabilities driving ERP as a strategic concept. A second stream focuses on the details associated with implementing information systems and their relative success and cost. This paper briefly discusses these research streams and suggests some ideas for related future research." }, { "instance_id": "R29184xR29176", "comparison_id": "R29184", "paper_id": "R29176", "text": "A Review of ERP Research: A Future Agenda for Accounting Information Systems ABSTRACT: ERP systems are typically the largest, most complex, and most demanding information systems implemented by firms, representing a major departure from the individual and departmental information systems prevalent in the past. Firms and individuals are extensively impacted, and many problematic issues remain to be researched. ERP and related integrated technologies are a transformative force on the accounting profession. As the nature of business evolves, accounting expertise is being called on to make broader contributions such as reporting on nonfinancial measures, auditing information systems, implementing management controls within information systems, and providing management consulting services. This review of ERP research is drawn from an extensive examination of the breadth of ERP-related literature without constraints as to a narrow timeframe or limited journal list, although particular attention is directed to the leading journals in information systems and accounting information systems. Early research consisted of descriptive studies of firms implementing ERP systems.
Then researchers started to address other research questions about the factors that lead to successful implementations: the need for change management and expanded forms of user education, whether the financial benefit outweighed the cost, and whether the issues are different depending on organizational type and cultural factors. This research encouraged the development of several major ERP research areas: (1) critical success factors, (2) the organizational impact, and (3) the economic impact of ERP systems. We use this taxonomy to establish (1) what we know, (2) what we need, and (3) where we are going in ERP research. The objective of this review is to synthesize the extant ERP research reported without regard to publication domain and make this readily available to accounting researchers. We organize key ERP research by topics of interest in accounting, and map ERP topics onto existing accounting information systems research areas. An emphasis is placed on topics important to accounting, including (but not limited to) the risk management and auditing of ERP systems, regulatory issues, the internal and external economic impacts of ERP systems, extensions needed in ERP systems for XBRL, for interorganizational support, and for the design of management control systems. See Supplemental Material." }, { "instance_id": "R29184xR29174", "comparison_id": "R29184", "paper_id": "R29174", "text": "Research on ERP Application from an Integrative Review An enterprise resource planning (ERP) system is an enterprise management system, currently in high demand by both manufacturing and service organizations. Recently, ERP systems have drawn a significant amount of attention from researchers and top managers. This paper summarizes the previous research literature on ERP application in an integrative review, and further research issues are introduced to guide future research directions."
}, { "instance_id": "R29240xR29212", "comparison_id": "R29240", "paper_id": "R29212", "text": "Challenges and influential factors in ERP adoption and implementation The adoption and implementation of Enterprise Resource Planning (ERP) systems is a challenging and expensive task that not only requires rigorous efforts but also demands a detailed analysis of the factors that are critical to the adoption or implementation of ERP systems. Many efforts have been made to identify such influential factors for ERP; however, they are not filtered comprehensively in terms of the different perspectives. This paper focuses on the ERP critical success factors from five different perspectives: stakeholders; process; technology; organisation; and project. Results from the literature review are presented and 19 such factors are identified that are imperative for a successful ERP implementation, which are listed in order of their importance. Considering these factors can realize several benefits such as reducing costs and saving time or extra effort." }, { "instance_id": "R29240xR29231", "comparison_id": "R29240", "paper_id": "R29231", "text": "Evaluation of Key Success Factors Influencing ERP Implementation Success Enterprise Resource Planning (ERP) application is often viewed as a strategic investment that can provide significant competitive advantage with positive return thus contributing to the firms' revenue and growth. Despite such strategic importance given to ERP, the implementation success to achieve the desired goal has been viewed as disappointing. There have been numerous industry stories about failures of ERP initiatives. There have also been stories reporting on the significant benefits achieved from successful ERP initiatives. This study reviews the industry and academic literature on ERP results and identifies possible trends or factors which may help future ERP initiatives achieve greater success and less failure.
The purpose of this study is to review the industry and academic literature on ERP results, identify and discuss critical success factors which may help future ERP initiatives achieve greater success and less failure." }, { "instance_id": "R29240xR29186", "comparison_id": "R29240", "paper_id": "R29186", "text": "Towards the unification of critical success factors for ERP implementations Abstract Despite the benefits that can be achieved from a successful ERP system implementation, there is already evidence of high failure risks in ERP implementation projects. Too often, project managers focus mainly on the technical and financial aspects of the implementation project, while neglecting or putting less effort on the nontechnical issues. Therefore, one of the major research issues in ERP systems today is the study of ERP implementation success. Some authors have shown that ERP implementation success definition and measurement depends on the points of view of the involved stakeholders. A typical approach used to define and measure ERP implementation success has been the critical success factors approach." }, { "instance_id": "R29240xR29229", "comparison_id": "R29240", "paper_id": "R29229", "text": "Strategic success factors in ERP system implementation Different ways of approaching ERP implementation give different results. In order to successfully implement an ERP system it is necessary to properly balance critical success factors. By researching what the critical success factors in ERP implementation are, why they are critical, and to what extent they are relevant to users, consultants and suppliers, this paper seeks to identify strategic critical success factors in ERP implementation and to understand the impact of each factor on the success of ERP system introduction. This paper lists strategic critical success factors (CSF), which influence the long-term goals.
Key-Words: ERP implementation, measuring success, cost, critical success factors, management, IT project" }, { "instance_id": "R29240xR29208", "comparison_id": "R29240", "paper_id": "R29208", "text": "A Review of Critical Success Factors for ERP-Projects ERP projects are complex purposes which influence main internal and external operations of companies. The success of the project directly influences the performance and the survival of the organisation. Recent research has methodically collected plausible data in the field of critical success factors (CSFs) within ERP projects. This article describes how the collected publications were used to identify the main CSFs and how they can be ranked according to the importance of success or failure through a literature review. Because of the influence of CSFs on ERP-projects in general, the term \"ERP project\" is used in the further parts of this paper. The second part of this paper proposes how CSFs can be integrated into classical ERP project phases. Past research has hardly investigated how CSFs mentioned in different publications can influence the ERP-project phases. At the end of the paper the trend of CSFs in relation to the publication year and the origin of the authors is shown." }, { "instance_id": "R29240xR29238", "comparison_id": "R29240", "paper_id": "R29238", "text": "Critical success factors in enterprise resource planning systems Organizations perceive ERP as a vital tool for organizational competition as it integrates dispersed organizational systems and enables flawless transactions and production. This review examines studies investigating Critical Success Factors (CSFs) in implementing Enterprise Resource Planning (ERP) systems.
Keywords relating to the theme of this study were defined and used to search known Web engines and journal databases for studies on both implementing ERP systems per se and integrating ERP systems with other well-known systems (e.g., SCM, CRM) whose importance to business organizations and academia is acknowledged to work in a complementary fashion. A total of 341 articles were reviewed to address three main goals. This study structures previous research by presenting a comprehensive taxonomy of CSFs in the area of ERP. Second, it maps studies, identified through an exhaustive and comprehensive literature review, to different dimensions and facets of ERP system implementation. Third, it presents studies investigating CSFs in terms of a specific ERP lifecycle phase and across the entire ERP life cycle. This study not only reviews articles in which an ERP system is the sole or primary field of research, but also articles that refer to an integration of ERP systems and other popular systems (e.g., SCM, CRM). Finally, it provides a comprehensive bibliography of the articles published during this period that can serve as a guide for future research." }, { "instance_id": "R29240xR29206", "comparison_id": "R29240", "paper_id": "R29206", "text": "Examining the critical success factors in the adoption of enterprise resource planning This paper presents a literature review of the critical success factors (CSFs) in the implementation of enterprise resource planning (ERP) across 10 different countries/regions. The review covers journals, conference proceedings, doctoral dissertations, and textbooks from these 10 different countries/regions. Through a review of the literature, 18 CSFs were identified, with more than 80 sub-factors, for the successful implementation of ERP.
The findings of our study reveal that 'appropriate business and IT legacy systems', 'business plan/vision/goals/justification', 'business process reengineering', 'change management culture and programme', 'communication', 'ERP teamwork and composition', 'monitoring and evaluation of performance', 'project champion', 'project management', 'software/system development, testing and troubleshooting', 'top management support', 'data management', 'ERP strategy and implementation methodology', 'ERP vendor', 'organizational characteristics', 'fit between ERP and business/process', 'national culture' and 'country-related functional requirement' were the commonly extracted factors across these 10 countries/regions. In these 18 CSFs, 'top management support' and 'training and education' were the most frequently cited as the critical factors to the successful implementation of ERP systems." }, { "instance_id": "R29240xR29233", "comparison_id": "R29240", "paper_id": "R29233", "text": "A new framework of effective external and internal factors on the success of enterprise resource planning (ERP) True understanding of the managers of the organizations that implement the system of the success factors and conditions and their fulfillment is very helpful. Many researches are done regarding the identification of key factors of this system but most of them had one-dimensional view to the subject or they only studied Internal Organizational Factors so, the lack of a Multi-purpose and coherent framework is evident. The researcher by a deep review on review of literature separated the success factors of ERP in 7 factors of country environment, ERP vendors\u2019 environment, software package environment, leadership and strategic criterions, organizational environment variables, organization users environment, IT environment in the organization in the form of two external and internal set and finally attempted to present a coherent framework." 
}, { "instance_id": "R29240xR29217", "comparison_id": "R29240", "paper_id": "R29217", "text": "The Core Critical Success Factors in Implementation of Enterprise Resource Planning Systems The implementation of Enterprise Resource Planning (ERP) systems requires huge investments while ineffective implementations of such projects are commonly observed. A considerable number of these projects have been reported to fail or take longer than initially planned, while previous studies show that the aim of rapid implementation of such projects has not been successful and the failure of the fundamental goals in these projects has imposed huge costs on investors. Some of the major consequences are the reduction in demand for such products and the introduction of further skepticism among the managers and investors of ERP systems. In this regard, it is important to understand the factors determining success or failure of ERP implementation. The aim of this paper is to study the critical success factors (CSFs) in implementing ERP systems and to develop a conceptual model which can serve as a basis for ERP project managers. These critical success factors that are called \u201ccore critical success factors\u201d are extracted from 62 published papers using the content analysis and the entropy method. The proposed conceptual model has been verified in the context of five multinational companies." }, { "instance_id": "R29240xR29191", "comparison_id": "R29240", "paper_id": "R29191", "text": "Critical factors for successful implementation of enterprise systems Enterprise resource planning (ERP) systems have emerged as the core of successful information management and the enterprise backbone of organizations. The difficulties of ERP implementations have been widely cited in the literature but research on the critical factors for initial and ongoing ERP implementation success is rare and fragmented.
Through a comprehensive review of the literature, 11 factors were found to be critical to ERP implementation success \u2013 ERP teamwork and composition; change management program and culture; top management support; business plan and vision; business process reengineering with minimum customization; project management; monitoring and evaluation of performance; effective communication; software development, testing and troubleshooting; project champion; appropriate business and IT legacy systems. The classification of these factors into the respective phases (chartering, project, shakedown, onward and upward) in Markus and Tanis\u2019 ERP life cycle model is presented and the importance of each factor is discussed." }, { "instance_id": "R29240xR29196", "comparison_id": "R29240", "paper_id": "R29196", "text": "Critical success factors for ERP projects Over the past decade, Enterprise Resource Planning systems (ERP) have become one of the most important developments in the corporate use of information technology. ERP implementations are usually large, complex projects, involving large groups of people and other resources, working together under considerable time pressure and facing many unforeseen developments. In order for an organization to compete in this rapidly expanding and integrated marketplace, ERP systems must be employed to ensure access to an efficient, effective, and highly reliable information infrastructure. Despite the benefits that can be achieved from a successful ERP system implementation, there is evidence of high failure in ERP implementation projects. Too frequently key development practices are ignored and early warning signs that lead to project failure are not understood. Identifying project success and failure factors and their consequences as early as possible can provide valuable clues to help project managers improve their chances of success.
It is the long-range goal of our research to shed light on these factors and to provide a tool that project managers can use to help better manage their software development projects. This paper will present a review of the general background to our work and the results from the current research, and conclude with a discussion of the findings thus far. The findings will include a list of 23 unique Critical Success Factors identified throughout the literature, which we believe to be essential for Project Managers. The implications of these results will be discussed along with the lessons learnt." }, { "instance_id": "R29240xR29221", "comparison_id": "R29240", "paper_id": "R29221", "text": "A Comparative Study of Critical Success Factors (CSFs) in Implementation of ERP in Developed and Developing Countries The main goal of this research is to understand whether there is any difference between ERP implementation CSFs in developed and developing countries. Understanding this subject can help us to implement ERP systems properly in developing nations. This research showed that in both developed and developing countries \"Change Management\" was the most important factor, while in developed countries \"Country-related functional requirements\" was the least important factor and \"Fit between ERP and business/process\" was the least cited factor among developing nations. Finally, it concluded that the national culture of developing countries has an impressive effect on ERP implementation in these countries. On the other hand, companies in developing countries depend more on ERP vendors than companies in developed countries. In addition, it seems that developing countries underestimate business process reengineering (BPR) and fit between ERP and business/process factors in comparison with developed countries."
}, { "instance_id": "R29351xR29312", "comparison_id": "R29351", "paper_id": "R29312", "text": "Extended-enterprise systems\u2019 impact on enterprise risk management\u201d Purpose \u2013 This article aims to focus on raising awareness of the limitations of traditional \u201centerprise\u2010centric\u201d views of enterprise risk management that ignore the risks that are inherited from key business and supply chain partners. In essence, enterprise systems implementations have allowed organizations to couple their operations more tightly with other business partners, particularly in the area of supply chain management, and in the process enterprise systems applications are redefining the boundaries of the entity in terms of risk management concerns and the scope of financial audits. Design/methodology/approach \u2013 The prior literature that has begun to explore aspects of assessing key risk components in these relationships is reviewed with an eye to highlighting the limitations of what is understood about risk in interorganizational relationships. This analysis of the prior research establishes the basis for the logical formation of a framework for future enterprise risk management research in the area of e\u2010commerce relationships. Findings \u2013 Conclusions focus on the overall framework of risks that should be considered when interorganizational relationships are critical to an enterprise's operations and advocate an \u201cextended\u2010enterprise\u201d view of enterprise risk management. Research limitations/implications \u2013 The framework introduced in this paper provides guidance for future research in the area of interorganizational systems control and risk assessment. Practical implications \u2013 The framework further highlights areas of risk that auditors and corporate risk managers should consider in assessing the risk inherited through interorganizational relationships. 
Originality/value \u2013 The paper highlights the need to shift from an enterprise\u2010centric view of risk management to an extended\u2010enterprise risk management view." }, { "instance_id": "R29351xR29308", "comparison_id": "R29351", "paper_id": "R29308", "text": "Training for ERP: does the is training literature have value? This paper examines end-user training (EUT) in enterprise resource planning (ERP) systems, with the aim of identifying whether current EUT research is applicable to ERP systems. An extensive review and analysis of EUT research in mainstream IS journals was undertaken. The findings of this analysis were compared to views expressed by a leading ERP trainer in a large Australian company. The principles outlined in the EUT literature were used to construct the Training, Education and Learning Strategy model for an ERP environment. Our analysis found very few high-quality empirical studies involving EUT training in such an environment. Moreover, we argue that while the extensive EUT literature provides a rich source of ideas about ERP training, the findings of many studies cannot be transferred to ERP systems, as these systems are inherently more complex than the office-based, non-mandatory applications upon which most IS EUT research is based." }, { "instance_id": "R29351xR29349", "comparison_id": "R29351", "paper_id": "R29349", "text": "Factors for the acceptance of enterprise resource planning (ERP) systems and financial performance\u201d theory provides further insight into identifying the acceptance factors of ERP." 
}, { "instance_id": "R29351xR29302", "comparison_id": "R29351", "paper_id": "R29302", "text": "Benefit realisation through ERP: the re-emergence of data warehousing The need for an integrated enterprise-wide set of management information pronounced Data Warehousing the \u2018hot topic\u2019 of the early-to-mid 1990s; however, it became unfashionable through the mid-to-late 1990s, with the approach of Y2K and with it the widespread implementation of ERP systems. However, in recent times, the re-emergence of Data Warehousing, to address the limitations and unrealised benefits of ERP systems implementation, provides researchers with a new challenge in understanding the \u2018double learning curve\u2019 for an organisation, undertaking in quick succession both an ERP systems project and a Data Warehousing project, in an attempt to finally achieve the benefits expected but never realised." }, { "instance_id": "R29351xR29338", "comparison_id": "R29351", "paper_id": "R29338", "text": "Barriers of ERP while implementing ERP: a literature review Purpose The main purpose of the paper is to do a literature survey of ERP papers (from refereed and international journals like Elsevier, InderScience, ASME, Springer and ACM Digital Library) to find out the barriers of ERP when implementing it. Thus, the objective of the paper is to study the literature review papers and find out the barriers of ERP. Research findings of the paper: While implementing ERP in an enterprise, it is found that there are obviously some barriers which need to be addressed. Out of 200 or so literature papers on ERP, 51 papers were reviewed for barriers and studied in depth. These barriers are mentioned in the form of a table in the literature survey.
While implementing ERP, the barriers which are commonly observed are: huge capital incurred for software, poor planning or poor management, lack of perfection, lack of training and predetermined corporate goals, lack of good vendors, lack of risk assessment, lack of approach, lack of data models (support), lack of ERP Systems\u2019 benefits, lack of system performance, lack of hierarchical attribute structure and lack of management support etc. Outline of the paper: The tool or methodology applied to overcome these barriers is AHP. It analyses the barriers (of ERP) and can help to solve the issues of ERP for its implementation. The results after overcoming the barriers and implementing it are excellent, found to be more productive for the enterprises." }, { "instance_id": "R29351xR29342", "comparison_id": "R29351", "paper_id": "R29342", "text": "ERP measure success model: a new perspective This paper addresses the problem of defining and evaluating the success of ERP throughout the life cycle of the information system. In order to solve this problem, many of the theoretical and empirical contributions on the success of the information system are analysed and discussed. This approach allows the development of a new model; especially in Delone & Mclean supported research. This work will try to establish a different perspective on the success of the ERP and can be an encouragement to some organizations or the many researchers that will be engaging in these areas, in order to help achieve more clearly the expected performance in the acquisition phase of ERPs. Many times that performance does not always happen [1]."
}, { "instance_id": "R29351xR29306", "comparison_id": "R29351", "paper_id": "R29306", "text": "Developing a cultural perspective on ERP Purpose \u2013 To develop an analytical framework through which the organizational cultural dimension of enterprise resource planning (ERP) implementations can be analyzed.Design/methodology/approach \u2013 This paper is primarily based on a review of the literature.Findings \u2013 ERP is an enterprise system that offers, to a certain extent, standard business solutions. This standardization is reinforced by two processes: ERP systems are generally implemented by intermediary IT organizations, mediating between the development of ERP\u2010standard software packages and specific business domains of application; and ERP systems integrate complex networks of production divisions, suppliers and customers.Originality/value \u2013 In this paper, ERP itself is presented as problematic, laying heavy burdens on organizations \u2013 ERP is a demanding technology. While in some cases recognizing the mutual shaping of technology and organization, research into ERP mainly addresses the economic\u2010technological rationality of ERP (i.e. matters of eff..." }, { "instance_id": "R29351xR29340", "comparison_id": "R29351", "paper_id": "R29340", "text": "Taxonomy of cost of quality (COQ) across the enterprise resource planning (ERP) implementation phases\u201d Companies declare that quality or customer satisfaction is their top priority in order to keep and attract more business in an increasingly competitive marketplace. The cost of quality (COQ) is a tool which can help determine the optimal level of quality investment. COQ analysis enables organizations to identify measure and control the consequences of poor quality. This study attempts to identify the COQ elements across the enterprise resource planning (ERP) implementation phases for the ERP implementation services of consultancy companies. 
The findings provide guidance to project managers on how best to utilize their limited resources. In summary, we suggest that project teams should focus on \u201cvalue-added\u201d activities and minimize the cost of \u201cnon-value-added\u201d activities at each phase of the ERP implementation project. Key words: Services, ERP implementation services, quality standard, service quality standard, cost of quality, project management, project quality management, project financial management." }, { "instance_id": "R29351xR29291", "comparison_id": "R29351", "paper_id": "R29291", "text": "Organisational readiness for ERP implementation An ERP implementation is a significant intervention in organisational life. As such, it affects and is affected by many variables including the organisation's culture, decision-making strategies, risk taking orientation, leadership strategies and perceptions of the value of Information Technology. For organisations to achieve business benefit in their ERP implementation, the implementation must be short, raise appropriate issues for business to make decisions on, and effectively implement those decisions.. This paper describes the research program being undertaken to identify the variables that inhibit an ERP implementation." }, { "instance_id": "R29351xR29323", "comparison_id": "R29351", "paper_id": "R29323", "text": "Organizational culture and leadership in ERP implementation This paper theorizes how leadership affects ERP implementation by fostering the desired organizational culture. We contend that ERP implementation success is positively related with organizational culture along the dimensions of learning and development, participative decision making, power sharing, support and collaboration, and tolerance for risk and conflicts. In addition, we identify the strategic and tactical actions that the top management can take to influence organizational culture and foster a culture conducive to ERP implementation. 
The theoretical contributions and managerial implications of this study are discussed." }, { "instance_id": "R29351xR29295", "comparison_id": "R29351", "paper_id": "R29295", "text": "Limits to using ERP systems The paper examines limitations that restrict the potential benefits from the use of Enterprise Resource Planning (ERP) systems in business firms. In the first part we discuss a limitation that arises from the strategic decision of top managers for mergers, acquisitions and divestitures as well as outsourcing. Managers tend to treat their companies like component-based business units, which are to be arranged and re-arranged to yet higher market values. Outsourcing of in-house activities to suppliers means disintegrating processes and information. Such consequences of strategic business decisions impose severe restrictions on what business organizations can benefit from ERP systems. The second part of the paper reflects upon the possibility of imbedding best practice business processes in ERP systems. We critically review the process of capturing and transferring best practices with a particular focus on context-dependence and nature of IT innovations." }, { "instance_id": "R29351xR29304", "comparison_id": "R29351", "paper_id": "R29304", "text": "Deconstructing information packages: organizational and behavioural implications of ERP systems Argues that the organizational involvement of large scale information technology packages, such as those known as enterprise resource planning (ERP), has important implications that go far beyond the acknowledged effects of keeping the organizational operations accountable and integrated across functions and production sites. Claims that ERP packages are predicated on an understanding of human agency as a procedural affair and of organizations as an extended series of functional or cross\u2010functional transactions. 
Accordingly, the massive introduction of ERP packages to organizations is bound to have serious implications that precisely recount the procedural forms by which such packages instrument organizational operations and fashion organizational roles. The conception of human agency and organizational operations in procedural terms may seem reasonable yet it recounts a very specific and, in a sense, limited understanding of humans and organizations. The distinctive status of framing human agency and organizations in procedural terms becomes evident in its juxtaposition with other forms of human action like improvisation, exploration or playing. These latter forms of human involvement stand out against the serial fragmentation underlying procedural action. They imply acting on the world on loose premises that trade off a variety of forms of knowledge and courses of action in attempts to explore and discover alternative ways of coping with reality." }, { "instance_id": "R29351xR29328", "comparison_id": "R29351", "paper_id": "R29328", "text": "A comparison of ERP-success measurement approaches\u201d ERP projects are complex purposes which influence main internal and external operations of companies. There are different research approaches which try to develop models for IS / ERP success measurement or IT-success measurement in general. Each model has its own area of application and sometimes a specific measurement approach based, for instance, on different systems or different stakeholders involved. This research paper shows some of the most important models developed in the literature and an overview of the different approaches of the models. An analysis which shows the strengths, weaknesses and the cases in which the specific model could be used is made." 
}, { "instance_id": "R29351xR29334", "comparison_id": "R29351", "paper_id": "R29334", "text": "The role and impact of project management in ERP project implementation life cycle Recent advancement of Information Technology in business management processes has helped ERP flourish as one of the most widely implemented business software systems in a variety of industries and organizations. This paper presents a review of the impact of project management in the ERP project life cycle by studying various project management methodologies. The role and critical activities of the project manager, the project team, and hence project management are also explored in ERP project implementations in organizations of different sizes and cultures." }, { "instance_id": "R29351xR29336", "comparison_id": "R29351", "paper_id": "R29336", "text": "Justifying ERP investment: the role and impacts of business case a literature survey ERP systems are booming these days, but they suffer from high rates of failure across different industries. Consequently, clear vision, objectives and compelling justification are needed to increase the rates of success. There are different approaches to justify IT investment in general and ERP investment in particular. This paper focuses on the Business Case approach. A comprehensive model based on best practices for the Business Case is proposed." }, { "instance_id": "R30476xR30045", "comparison_id": "R30476", "paper_id": "R30045", "text": "Investigation of the environmental Kuznets curve for carbon emissions in Malaysia: Do foreign direct investment and trade matter? Environmental degradation has become a central issue of discussion among the economists and environmentalists. In view of Malaysia's position as one of the main contributors to CO2 emissions in Asia and its status as a fast growing economy, it is vital, therefore, to conduct a study to identify the relationship between economic growth and CO2 emissions for Malaysia.
This study attempts to examine empirically the environmental Kuznets curve hypothesis for Malaysia in the presence of foreign direct investment and trade openness both in the short- and long-run for the period 1970 to 2008. The bounds testing approach and Granger causality methodology are applied to test the interrelationships of the variables. The results of our study indicate that the inverted-U shaped relationship does exist between economic growth and CO2 emissions in both the short- and long-run for Malaysia after controlling for two additional explanatory variables, namely FDI and trade. Importantly, the results of the study also provide some crucial policy recommendations to the policy makers." }, { "instance_id": "R30476xR29903", "comparison_id": "R30476", "paper_id": "R29903", "text": "CO2 emissions, energy consumption, income and foreign trade: a South African perspective The effect of trade liberalisation on environmental conditions has yielded significant debate in the energy economics literature. Although research on the relationship between energy consumption, emissions and economic growth is not new in South Africa, no study specifically addresses the role that South Africa's foreign trade plays in this context. This is a surprising fact, given that trade is one of the most important factors that can explain the environmental Kuznets curve. This study employs recent South African trade and energy data and modern econometric techniques to investigate this. The main finding of interest in this paper is the existence of a long run relationship between environmental quality, levels of per capita energy use and foreign trade in South Africa. As anticipated, per capita energy use has a significant long run effect in raising the country's CO2 emission levels, yet surprisingly higher levels of trade for the country act to reduce these emissions.
Granger causality tests confirm the existence of a positive bidirectional relationship between per capita energy use and CO2 emissions. Whilst the study also finds positive bidirectional causality between trade and income per capita and between trade and per capita energy use, it appears, however, that trade liberalisation in South Africa has not contributed to long-run growth in pollution-intensive activities or to higher emission levels." }, { "instance_id": "R30476xR30384", "comparison_id": "R30476", "paper_id": "R30384", "text": "The environmental Kuznets curve in Indonesia: Exploring the potential of renewable energy There is an increasing interest in investigating the environmental Kuznets curve (EKC) hypothesis because it suggests the existence of a turning point in the economy that will lead to a sustainable development path. Although many studies have focused on the EKC, only a few empirical studies have focused on analyzing the EKC with specific reference to Indonesia, and none of them have examined the potential of renewable energy sources within the EKC framework. This study attempts to estimate the EKC in the case of Indonesia for the period of 1971\u20132010 by considering the role of renewable energy in electricity production, using the autoregressive distributed lag (ARDL) approach to cointegration as the estimation method. We found an inverted U-shaped EKC relationship between economic growth and CO2 emissions in the long run. The estimated turning point was found to be 7729 USD per capita, which lies outside of our sample period. The beneficial impacts of renewable energy on CO2 emission reduction are observable both in the short run and in the long run. Our work has important implications both for policymakers and for the future development of renewable energy in Indonesia." 
}, { "instance_id": "R30476xR30448", "comparison_id": "R30476", "paper_id": "R30448", "text": "Foreign direct investment, income, and environmental pollution in developing countries: Panel data analysis of Latin America Effects of foreign direct investment (FDI) and income on pollution emissions are examined using time series data from 1980 to 2010 for 14 Latin American countries. Specifically, we test the validity of the Pollution Haven Hypothesis (PHH) and the Environmental Kuznets Curve (EKC) hypothesis for this region. Results from panel fixed and random effects models that controlled for the effects of physical capital, energy, human capital, population density, and unemployment rate indicate the validity of both the PHH and the EKC hypothesis. Estimating two separate models for high and low-income countries does not alter the findings for the PHH; however, the impacts of human capital on pollution emissions are found to be different for the two groups of countries. Policies that focus on attracting clean and energy efficient industries through FDI have the potential to improve environmental health while enhancing economic growth in Latin America." }, { "instance_id": "R30476xR29587", "comparison_id": "R30476", "paper_id": "R29587", "text": "Does One Size Fit All? A Reexamination of the Environmental Kuznets Curve Using the Dynamic Panel Data Approach This article applies the dynamic panel generalized method of moments technique to reexamine the environmental Kuznets curve (EKC) hypothesis for carbon dioxide (CO_2) emissions and asks two critical questions: \"Does the global data set fit the EKC hypothesis?\" and \"Do different income levels or regions influence the results of the EKC?\" We find evidence of the EKC hypothesis for CO_2 emissions in a global data set, middle-income, and American and European countries, but not in other income levels and regions. 
Thus, the hypothesis that one size fits all cannot be supported for the EKC, and, even more importantly, further results, robustness checks, and implications emerge. Copyright 2009 Agricultural and Applied Economics Association" }, { "instance_id": "R30476xR29863", "comparison_id": "R30476", "paper_id": "R29863", "text": "Indicators for sustainable energy development: A multivariate cointegration and causality analysis from Tunisian road transport sector This paper studies the causal mechanisms between indicators for sustainable energy development related to energy consumption in the Tunisian road transport sector. The investigation is made using the Johansen cointegration technique and the environmental Kuznets curve (EKC) approach. It examines the nexus between transport value added, road transport-related energy consumption, road infrastructure, fuel price and CO2 emissions from the Tunisian transport sector during the period of 1980\u20132010." }, { "instance_id": "R30476xR30205", "comparison_id": "R30476", "paper_id": "R30205", "text": "Investigating the validity of the environmental Kuznets curve hypothesis in Cambodia This study investigates whether better governance and corruption control help to form the inverted U-shaped relationship between income and pollution in Cambodia for the period of 1996\u20132012. The outcome from the Generalized Method of Moments and the Two-stage Least Squares revealed that GDP, urbanization, energy consumption, and trade openness increase CO2 emissions while the control of corruption and governance can reduce CO2 emissions. It is fundamental to note that the environmental Kuznets curve hypothesis was not confirmed in Cambodia. Based on the retrieved results, we recommend that urban planners adopt policies that will allow them to improve urban planning by controlling sewage, industrial waste, and solid waste, which are some of the major causes of environmental deterioration in Cambodia's major cities. 
It is also crucial to implement pollution and trade-related actions and strategies to strengthen environmental protection related to trade. Additionally, it is important for Cambodia to strengthen corruption control, as this step will reinforce the environmental regulations and thereby reduce pollution. Finally, better governance is also important to improve the quality of the environment." }, { "instance_id": "R30476xR30139", "comparison_id": "R30476", "paper_id": "R30139", "text": "Environmental Kuznets curve for CO2 emissions: The case of Arctic countries The main new contribution of this paper is to examine the Environmental Kuznets curve (EKC) hypothesis using time series data at individual country levels. The empirical focus is on assessing the effect of income per capita on CO2 emissions in the Arctic countries by taking into account the role of energy consumption. An autoregressive distributed lag (ARDL) modeling approach to cointegration is applied to annual data for the period 1960\u20132010. The results provide little evidence of the existence of the EKC hypothesis for the Arctic countries. We also find that economic growth has a beneficial effect on the environment only in some Arctic countries. Finally, energy consumption is found to have a detrimental effect on the environment in most countries." }, { "instance_id": "R30476xR29783", "comparison_id": "R30476", "paper_id": "R29783", "text": "Environmental Kuznets Curve and Pakistan: An Empirical Analysis In this study, the Environmental Kuznets Curve (EKC) is hypothesized to investigate the relationship between CO2 emissions, economic growth, energy consumption, trade liberalization and population density in Pakistan with yearly data from 1971 to 2008. The cointegration analysis using the Auto Regressive Distributed Lag (ARDL) bounds testing approach is incorporated. The results support the hypothesis in both the short run and the long run, and an inverted U-shaped relationship is found between CO2 emissions and growth. 
Interestingly, we found that trade supports the environment while population contributes to environmental degradation in Pakistan. Energy consumption and growth are the major explanatory variables which contribute to environmental pollution in Pakistan. Moreover, time series data analysis is used and the stability of the variables in the estimated model is also assessed." }, { "instance_id": "R30476xR30117", "comparison_id": "R30476", "paper_id": "R30117", "text": "Environmental Kuznets Curve

The environmental Kuznets curve (EKC) is a hypothesized relationship between environmental degradation and GDP per capita. In the early stages of economic growth, pollution emissions and other human impacts on the environment increase, but beyond some level of GDP per capita (which varies for different indicators), the trend reverses, so that at high income levels, economic growth leads to environmental improvement. This implies that environmental impacts or emissions per capita are an inverted U-shaped function of GDP per capita. The EKC has been the dominant approach among economists to modeling ambient pollution concentrations and aggregate emissions since Grossman and Krueger introduced it in 1991 and is even found in introductory economics textbooks. Despite this, the EKC was criticized almost from the start on statistical and policy grounds, and debate continues. While concentrations and also emissions of some local pollutants, such as sulfur dioxide, have clearly declined in developed countries in recent decades, evidence for other pollutants, such as carbon dioxide, is much weaker. Initially, many understood the EKC to imply that environmental problems might be due to a lack of sufficient economic development, rather than the reverse, as was conventionally thought. This alarmed others because a simplistic policy prescription based on this idea, while perhaps addressing some issues like deforestation or local air pollution, could exacerbate environmental problems like climate change. Additionally, many of the econometric studies that supported the EKC were found to be statistically fragile. Some more recent research integrates the EKC with alternative approaches and finds that the relation between environmental impacts and development is subtler than the simple picture painted by the EKC. This research shows that usually, growth in the scale of the economy increases environmental impacts, all else held constant. 
However, the impact of growth might decline as countries get richer, and richer countries are likely to make more rapid progress in reducing environmental impacts. Finally, there is often convergence among countries, so that countries that have relatively high levels of impacts reduce them more quickly or increase them more slowly, all else held constant.

" }, { "instance_id": "R30476xR30228", "comparison_id": "R30476", "paper_id": "R30228", "text": "Does energy intensity contribute to CO2 emissions? A trivariate analysis in selected African countries The present study investigates the dynamic relationship between energy intensity and CO2 emissions by incorporating economic growth into the CO2 emissions function, using data for Sub-Saharan African countries. For this purpose, we applied panel cointegration to examine the long run relationship between the series. We employed VECM Granger causality to test the direction of causality among the variables. At the panel level, our results validate the existence of cointegration among the series. The long run panel results show that energy intensity has a positive and statistically significant impact on CO2 emissions. The linear and non-linear (squared) terms of real GDP per capita are linked positively and negatively, respectively, with CO2 emissions, supporting the presence of the environmental Kuznets curve (EKC). The causality analysis reveals bidirectional causality between economic growth and CO2 emissions, while energy intensity Granger causes economic growth and hence CO2 emissions; across the individual countries, the results differ. This paper opens up new insights for policy makers to design comprehensive economic, energy and environmental policy for sustainable long run economic growth." }, { "instance_id": "R30476xR29881", "comparison_id": "R30476", "paper_id": "R29881", "text": "Environmental Kuznets curve: evidences from developed and developing economies Previous studies show that environmental quality and economic growth can be represented by an inverted U curve called the Environmental Kuznets Curve (EKC). In this study, we conduct empirical analyses on detecting the existence of the EKC using emissions of five common pollutants (i.e. CO2, SO2, BOD, SPM10, and GHG) as proxies for environmental quality. The data span the years 1961 to 2009 and cover 40 countries. 
We seek to investigate if the EKC hypothesis holds in two groups of economies, i.e. developed versus developing economies. Applying a panel data approach, our results show that the EKC does not hold in all countries. We also detect a U shape and an increasing trend in other cases. The results reveal that CO2 and SPM10 are good proxies for environmental pollution and they can be explained well by GDP. Also, it is observed that the developed countries have higher turning points than the developing countries. Higher economic growth may lead to different impacts on environmental quality in different economies." }, { "instance_id": "R30476xR30093", "comparison_id": "R30476", "paper_id": "R30093", "text": "Economic growth, electricity consumption, urbanization and environmental degradation relationship in United Arab Emirates The present study explores the relationship between economic growth, electricity consumption, urbanization and environmental degradation in the case of the United Arab Emirates (UAE). The study covers quarterly data over the period 1975\u20132011. We have applied the ARDL bounds testing approach to examine the long run relationship between the variables in the presence of structural breaks. The VECM Granger causality is applied to investigate the direction of the causal relationship between the variables. Our empirical exercise reported the existence of cointegration among the series. Further, we found an inverted U-shaped relationship between economic growth and CO2 emissions, i.e. economic growth raises emissions initially and lowers them after a threshold point of income per capita (the EKC exists). Electricity consumption reduces CO2 emissions. The relationship between urbanization and CO2 emissions is positive. Exports seem to improve environmental quality by lowering CO2 emissions. The causality analysis validates the feedback effect between CO2 emissions and electricity consumption. 
Economic growth and urbanization Granger cause CO2 emissions." }, { "instance_id": "R30476xR29507", "comparison_id": "R30476", "paper_id": "R29507", "text": "Reassessing the environmental Kuznets curve for CO2 emissions: a robustness exercise The number of studies seeking to empirically characterize the reduced-form relationship between a country's economic growth and the quantity of pollutants produced in the process has recently increased significantly. In several cases, researchers have found evidence pointing to an inverted-U 'environmental Kuznets' curve. In the case of CO2, however, the evidence is at best mixed. In this paper, we reconsider that evidence by assessing how robust it is when the analysis is conducted in a different parametric setup and when using alternative emissions data, from the International Energy Agency, relative to the literature. Our contribution can be viewed as a robustness exercise in these two respects. The econometric results lead to two conclusions. Firstly, published evidence on the EKC does not appear to depend upon the source of the data, at least as far as carbon dioxide is concerned. Secondly, when an alternative functional form is employed, there is evidence of an inverted-U pattern for the group of OECD countries, with a reasonable turning point, regardless of the data set employed. Not so for non-OECD countries, as the EKC is basically increasing (slowly concave) according to the IEA data and more bell-shaped in the case of CDIAC data. \u00a9 2005 Elsevier B.V. All rights reserved." }, { "instance_id": "R30476xR30408", "comparison_id": "R30476", "paper_id": "R30408", "text": "Are there Environmental Kuznets Curves for US state-level CO2 emissions? 
The Environmental Kuznets Curve (EKC) hypothesis argues that the relationship between the pollutant and output is inverted U-shaped, implying that environmental degradation increases with output during the early stages of economic growth, but declines with output after reaching a specified threshold. For the first time in the literature on the EKC hypothesis, this paper assesses the validity of the hypothesis across 48 US States, using the Common Correlated Effects (CCE) estimation procedure by Pesaran (2006), which allows us to obtain results in the presence of cointegration in the relationship between carbon emissions and a measure of output, and its squared value \u2013 which captures the inverted U-shaped relationship postulated by the EKC hypothesis. The panel data approach allows the study of individual members of the panel, also resulting in efficiency gains that would not be associated with time series approaches based on the small sample size of 51 observations (1960\u20132010). The findings suggest that the EKC hypothesis holds in only 10 States; the remaining 38 States should reform their environmental regulatory policies to prevent environmental degradation coming only at the expense of production and economic growth. As for the other 10 states, given that a threshold has been achieved, higher growth would be accompanied by lower emissions, and hence, no additional environmental policies are required." }, { "instance_id": "R30476xR30016", "comparison_id": "R30476", "paper_id": "R30016", "text": "An Environment Kuznets Curve for GHG Emissions: A Panel Cointegration Analysis In this article, we attempt to use panel unit root and panel cointegration tests as well as the fully-modified ordinary least squares (OLS) approach to examine the relationships among carbon dioxide emissions, energy use and gross domestic product for 22 Organization for Economic Cooperation and Development (OECD) countries (Annex II Parties) over the 1971\u20132000 period. 
Furthermore, in order to investigate these results for other direct greenhouse gases (GHGs), we have estimated the Environmental Kuznets Curve (EKC) hypothesis by using total GHG, methane, and nitrous oxide. The empirical results confirm that energy use still plays an important role in explaining the GHG emissions of OECD countries. In terms of the EKC hypothesis, the results showed that a quadratic relationship was found to exist in the long run. Thus, other countries could learn from developed countries in this regard and try to smooth the EKC curve at relatively lower cost." }, { "instance_id": "R30476xR30422", "comparison_id": "R30476", "paper_id": "R30422", "text": "The econometric consequences of an energy consumption variable in a model of CO2 emissions Many studies that model the determinants of CO2 emissions treat energy consumption as one of its determinants. Itkonen (2012) argues that this causes underestimation of both the responsiveness of CO2 emissions to income growth and the turning point of the carbon Kuznets curve. We first demonstrate that Itkonen's (2012) conclusions are sensitive to the assumed form of the relationship between energy consumption and income. We then argue that the presence of an energy consumption variable in a model of CO2 emissions can lead to systematic volatility in its coefficients, which has the potential to change their magnitude and sign. We also argue that misleading cointegration test results can be generated by such a model. The potential nature and severity of these effects are illustrated with data for seven countries." }, { "instance_id": "R30476xR30269", "comparison_id": "R30476", "paper_id": "R30269", "text": "The investigation of environmental Kuznets curve hypothesis in the advanced economies: The role of energy prices The aim of this research is to examine the effect of energy prices on pollution and investigate the existence of the environmental Kuznets curve (EKC) hypothesis in 27 advanced economies. 
Panel non-stationarity techniques were used to examine the selected economies over the period 1990\u20132012. The panel Kao and Fisher cointegration results showed that CO2 emissions (CO2), gross domestic product (GDP), renewable energy consumption (RE), non-renewable energy consumption (NR), trade openness (TD), urbanization (UR), and energy prices (PC) are cointegrated. Moreover, the panel fully modified ordinary least squares and the vector error correction Granger causality results revealed that GDP, NR, and UR increase CO2 emissions while RE, TD, and PC reduce them. Furthermore, the inverted U-shaped relationship between GDP and CO2 emissions was confirmed, which signifies the presence of the EKC hypothesis. From the obtained results, multiple policy implications were provided for the investigated countries to help them control and reduce air pollution without harming their economic growth and development." }, { "instance_id": "R30476xR29987", "comparison_id": "R30476", "paper_id": "R29987", "text": "The impact of financial development, income, energy and trade on carbon emissions: Evidence from the Indian economy This paper examines the long-run equilibrium and the existence and direction of a causal relationship between carbon emissions, financial development, economic growth, energy consumption and trade openness for India. Our main contribution to the literature on Indian studies lies in the investigation of the causes of carbon emissions by taking into account the role of financial development and using single country data. The results suggest that there is evidence of long-run and causal relationships between carbon emissions, financial development, income, energy use and trade openness. Financial development has a long-run positive impact on carbon emissions, implying that financial development worsens environmental degradation. 
Moreover, the Granger causality test indicates a long-run unidirectional causality running from financial development to carbon emissions and energy use. The evidence suggests that the financial system should take environmental aspects into account in its current operations. The results of this study may be of great importance for policy and decision-makers in order to develop energy policies for India that contribute to the curbing of carbon emissions while preserving economic growth." }, { "instance_id": "R30476xR30241", "comparison_id": "R30476", "paper_id": "R30241", "text": "Factors affecting carbon dioxide (CO2) emissions in China's transport sector: a dynamic nonparametric additive regression model With the recent surge in vehicle population, particularly private vehicles, the transport sector has significantly contributed to the increase in energy consumption and carbon dioxide (CO2) emissions in China. Most existing research utilized linear models to investigate the driving forces of the transport sector's CO2 emissions, but little attention has been paid to the large number of nonlinear relationships embodied in economic variables. This paper adopts provincial panel data from 2000 to 2012 and nonparametric additive regression models to examine the key influencing factors of CO2 emissions in the transport sector in China. The estimation results show that the nonlinear effect of economic growth on CO2 emissions is consistent with the Environmental Kuznets Curve (EKC) hypothesis. The nonlinear impact of urbanization exhibits an inverted \u201cU-shaped\u201d pattern on account of large-scale population migrations in the early stages and expanding use of non-polluting urban rail public transportation and hybrid fuel vehicles at the later stage. The private vehicle population follows an inverted \u201cU-shaped\u201d relationship with CO2 emissions owing to an early surge in private car ownership and a later increase in the use of electric and hybrid cars. 
The inverted \u201cU-shaped\u201d effect of cargo turnover is due to different modes of freight transport at different stages. Energy efficiency improvement, however, follows a positive \u201cU-shaped\u201d pattern in relation to CO2 emissions because of the different scale of transportation ownership and the speed of technological progress at different times. Hence, the differential dynamic effects of the driving forces at different times should be taken into consideration in reducing CO2 emissions in China's transport sector." }, { "instance_id": "R30476xR29873", "comparison_id": "R30476", "paper_id": "R29873", "text": "Investigating the energy-environmental Kuznets curve: evidence from Egypt This study examines to what extent recent empirical evidence can substantiate the claim that annual emission constraints have a modest effect on long run economic growth rates. The paper specifically studies the contribution of carbon dioxide emissions to growth in the Egyptian economy during the period 1961-2008. Results indicate that there is a negative relationship between GDP per capita and carbon dioxide emissions. Results suggest that institutions play an important role in making progress on setting effective policies and regulations to decrease pollutant levels arising from industry and in rationalising energy consumption. Developing countries, and especially Egypt, need to adopt a set of effective policies to address vulnerable growth and environmental degradation. Based on these results, we assert that environmental policy should consider the different characteristics of each country and type of pollutant." }, { "instance_id": "R30476xR29838", "comparison_id": "R30476", "paper_id": "R29838", "text": "Dynamic misspecification in the environmental Kuznets curve: evidence from CO2 and SO2 emissions in the United Kingdom This study looks at the behaviour of emissions when in disequilibrium with respect to the environmental Kuznets curve (EKC) relationship. 
We use the non-linear threshold cointegration and error correction methodology and a long dataset beginning in 1830, in an application to the United Kingdom. There is significant evidence that not only does the 'inverse-U' shape hold between per capita CO2 and SO2 emissions and GDP per capita, but also that temporary disequilibrium from the long-run EKC is corrected in an asymmetric fashion. This may be due to the historical pressure of environmental regulation in the UK to reduce emissions that are higher than permitted. However, further analysis suggests that technological change can partially account for the asymmetric adjustment." }, { "instance_id": "R30476xR30038", "comparison_id": "R30476", "paper_id": "R30038", "text": "The environmental Kuznets curve and sustainability: a panel data analysis In recent years, sustainability has represented one of the most important policy goals explored in the environmental Kuznets curve (EKC) literature. But related hypotheses, performance measures and results continue to present a challenge. The present paper contributes to this ongoing literature by studying two different EKC specifications for 10 Middle East and North African (MENA) countries over the period 1990\u20132010 using panel data methods. For the first specification, namely the EKC, we show that there is an inverted U-shape relationship between environmental degradation and income; while for the second specification, namely the modified EKC (MEKC), we show that there is an inverted U-shape relationship between sustainability and human development (HD). The relationships are shaped by other factors such as energy, trade, manufacturing value added and the rule of law. More interestingly, findings from the estimation show that the EKC hypothesis, HD and sustainability are crucial for building effective environmental policies." 
}, { "instance_id": "R30476xR29841", "comparison_id": "R30476", "paper_id": "R29841", "text": "An Econometric Analysis for CO2 Emissions, Energy Consumption, Economic Growth, Foreign Trade and Urbanization of Japan This paper examines the dynamic causal relationship between carbon dioxide emissions, energy consumption, economic growth, foreign trade and urbanization using time series data for the period of 1960-2009. Short-run unidirectional causalities are found from energy consumption and trade openness to carbon dioxide emissions, from trade openness to energy consumption, from carbon dioxide emissions to economic growth, and from economic growth to trade openness. The test results also support the existence of a long-run relationship among the variables in the form of Equation (1), which also confirms the results of the bounds and Johansen cointegration tests. It is found that over time higher energy consumption in Japan gives rise to more carbon dioxide emissions, and as a result the environment becomes more polluted. With respect to economic growth, trade openness and urbanization, however, environmental quality is found to be a normal good in the long run." }, { "instance_id": "R30476xR30410", "comparison_id": "R30476", "paper_id": "R30410", "text": "A test of environmental Kuznets curve (EKC) for carbon emission and potential of renewable energy to reduce green house gases (GHG) in Malaysia This study investigates the presence of the environmental Kuznets curve (EKC) for green house gases (GHG), measured by CO2 emissions, in Malaysia for the period 1970 to 2011. The study also examines the potential of renewable energy sources to contain GHG. The long-run significant positive coefficient of GDP indicates that GHG emissions are increasing with economic growth, while the insignificant coefficient on squared GDP rejects the EKC transition. These results indicate a high GDP level for the EKC turning point for Malaysia. 
Therefore, it can be stated that economic growth alone cannot reverse the environmental degradation in Malaysia. The government will have to come up with policy measures to achieve the CO2 emission reduction targets that Malaysia pledged at the Paris Summit (2015). Renewable energy production is found to have a significant negative effect on CO2 emissions. The government should therefore focus on renewable energy sources and frame a special policy for renewable energy production." }, { "instance_id": "R30476xR29553", "comparison_id": "R30476", "paper_id": "R29553", "text": "The relationship between income and environment in Turkey: Is there an environmental Kuznets curve? In this study, we investigate the relationship between income and environmental quality for Turkey at two levels. First, the relationship between CO2 emissions and per capita income is examined with the help of a time series model using cointegration techniques. In the second stage, the relationship between income and air pollution is investigated by using PM10 and SO2 measurements in Turkish provinces. In this part of the study panel data estimation techniques are utilized. The time series model covers 1968-2003, and the panel data model covers 1992-2001 including observations from 58 provinces. A monotonically increasing relationship between CO2 and income is found in the long-run according to the time series analysis. On the other hand, the panel data analysis indicates an N-shaped relationship for SO2 and PM10 emissions. Therefore, the results of our time series and panel data analyses do not support the Environmental Kuznets Curve hypothesis, which assumes an inverted U-shaped relationship between environmental degradation and income." 
}, { "instance_id": "R30476xR27556", "comparison_id": "R30476", "paper_id": "R27556", "text": "CO2 emissions, energy consumption, and output in France This paper examines the dynamic causal relationships between pollutant emissions, energy consumption, and output for France using cointegration and vector error-correction modelling techniques. We argue that these variables are strongly inter-related and therefore their relationship must be examined using an integrated framework. The results provide evidence for the existence of a fairly robust long-run relationship between these variables for the period 1960\u20132000. The causality results support the argument that economic growth exerts a causal influence on growth of energy use and growth of pollution in the long run. The results also point to a uni-directional causality running from growth of energy use to output growth in the short run." }, { "instance_id": "R30476xR27604", "comparison_id": "R30476", "paper_id": "R27604", "text": "CO2 emissions, energy consumption and economic growth in China: a panel data analysis This paper examines the causal relationships between carbon dioxide emissions, energy consumption and real economic output using panel cointegration and panel vector error correction modeling techniques based on panel data for 28 provinces in China over the period 1995\u20132007. Our empirical results show that CO2 emissions, energy consumption and economic growth appear to be cointegrated. Moreover, there exists bidirectional causality between CO2 emissions and energy consumption, and also between energy consumption and economic growth. It has also been found that energy consumption and economic growth are the long-run causes of CO2 emissions, and that CO2 emissions and economic growth are the long-run causes of energy consumption. 
The results indicate that China's CO2 emissions will not decrease in a long period of time and reducing CO2 emissions may handicap China's economic growth to some degree. Some policy implications of the empirical results have finally been proposed." }, { "instance_id": "R30476xR29487", "comparison_id": "R30476", "paper_id": "R29487", "text": "An Environmental Kuznets Curve Analysis of U.S. State-Level Carbon Dioxide Emissions Most environmental Kuznets curve (EKC) theories do not apply to carbon dioxide (CO 2 )\u2014an unregulated, invisible, odorless gas with no direct human health effects. This analysis addresses the hypothesis that the income-CO 2 relationship reflects changes in the composition of an economy as it develops and the associated role of trade in an emissions-intensive good (e.g., electricity). To test this hypothesis, I use a novel data set of 1960 to 1999 state-level CO 2 emissions to estimate pretrade (production-based) CO 2 EKCs and posttrade (consumption-based) CO 2 EKCs. Based on the first EKC analysis of CO 2 emissions in the United States, I find that consumption-based EKCs peak at significantly higher incomes than production-based EKCs, suggesting that emissions-intensive trade drives, at least in part, the income-emissions relationship. I have also investigated the robustness of the estimated income-CO 2 relationship through a variety of specifications. Estimated EKCs appear to vary by state, and the estimated income-emissions relationships could be spurious for some states with nonstationary income and emissions data. Finally, I find that cold winters, warm summers, and historic coal endowments are positively associated with states\u2019 CO 2 emissions." 
}, { "instance_id": "R30476xR29931", "comparison_id": "R30476", "paper_id": "R29931", "text": "The long-run and causal analysis of energy, growth, openness and financial development on carbon emissions in Turkey The aim of this paper is to examine the causal relationship between financial development, trade, economic growth, energy consumption and carbon emissions in Turkey for the 1960\u20132007 period. The bounds F\u2010test for cointegration yields evidence of a long-run relationship between per capita carbon emissions, per capita energy consumption, per capita real income, the square of per capita real income, openness and financial development. The results show that an increase in the foreign trade to GDP ratio results in an increase in per capita carbon emissions, and the financial development variable has no significant effect on per capita carbon emissions in the long run. These results also support the validity of the EKC hypothesis in the Turkish economy. It means that the level of CO2 emissions initially increases with income until it reaches its stabilization point, then it declines in Turkey. In addition, the paper explores the causal relationship between the variables by using error-correction based Granger causality models." }, { "instance_id": "R30476xR29422", "comparison_id": "R30476", "paper_id": "R29422", "text": "Growth and the Environment in Canada: An Empirical Analysis Standard reduced form models are estimated for Canada to examine the relationships between real per capita GDP and four measures of environmental degradation. Of the four chosen measures of environmental degradation, only concentrations of carbon monoxide appear to decline in the long run with increases in real per capita income. The data used in the reduced form models are also tested for the presence of unit roots and for the existence of cointegration between each of the measures of environmental degradation and per capita income.
Unit root tests indicate nonstationarity in the logs of the measures of environmental degradation and per capita income. The Engle-Granger test and the maximum eigenvalue test suggest that per capita income and the measures of environmental degradation are not cointegrated, or that a long-term relationship between the variables does not exist. Causality tests also indicate a bi-directional causality, rather than a uni-directional causality, from income to the environment. The results suggest that Canada does not have the luxury of being able to grow out of its environmental problems. The implication is that to prevent further environmental degradation, Canada requires concerted policies and incentives to reduce pollution intensity per unit of output across sectors, to shift from more to less pollution-producing outputs and to lower the environmental damage associated with aggregate consumption." }, { "instance_id": "R30476xR30236", "comparison_id": "R30476", "paper_id": "R30236", "text": "Environmental Kuznets Curve time series application for Turkey: Why controversial results exist for similar models? This paper investigates the Environmental Kuznets Curve (EKC) hypothesis using a 40-year time series for the Turkish case. CO2 emission series representing environmental pressure and GDP per capita values representing economic development are used for the period 1968\u20132007. Cointegration has been determined among the nonstationary series. The first phases of an inverted-U EKC relationship have been determined for Turkey from the econometric estimations. This result conflicts with those of similar models for the Turkish case. On the other hand, this conflict reflects important arguments in the literature and constitutes the main points of the paper. Sensitivity critiques (for example, Ahking et al.) of cointegration tests (Johansen and Engle\u2013Granger tests) have been supported in our study.
Moreover, we detected important divergence in results according to drift and trend assumptions, both in the CI vector and in the EKC model specifications. We conclude that building an EKC model according to cointegration (CI) equation restrictions can be an important source of divergence when sensitivity exists in estimations and cointegration tests; therefore, EKC estimations should be conducted in a non-restrictive way. Additional structural reasons have also been discussed for developing-country EKC cases. The most important one is that the narrow income sample of developing countries makes it possible for them to be described by similar but different paths; therefore, policy implications to be drawn from those analyses should not ignore this feature of developing country analyses." }, { "instance_id": "R30476xR29621", "comparison_id": "R30476", "paper_id": "R29621", "text": "On the relationship between energy consumption, CO2 emissions and economic growth in Europe This study examines the causal relationship between carbon dioxide emissions, energy consumption, and economic growth by using the autoregressive distributed lag (ARDL) bounds testing approach of cointegration for nineteen European countries. The bounds F-test for cointegration yields evidence of a long-run relationship between carbon emissions per capita, energy consumption per capita, real gross domestic product (GDP) per capita and the square of per capita real GDP only for Denmark, Germany, Greece, Iceland, Italy, Portugal and Switzerland. The cumulative sum and cumulative sum of squares tests also show that the estimated parameters are stable for the sample period." }, { "instance_id": "R30476xR29380", "comparison_id": "R30476", "paper_id": "R29380", "text": "The environmental Kuznets curve: an empirical analysis This paper examines the relationship between per capita income and a wide range of environmental indicators using cross-country panel sets.
The manner in which this has been done overcomes several of the weaknesses associated with the estimation of environmental Kuznets curves (EKCs) outlined by Stern et al. (1996). Results suggest that meaningful EKCs exist only for local air pollutants, whilst indicators with a more global, or indirect, impact either increase monotonically with income or else have predicted turning points at high per capita income levels with large standard errors \u2013 unless they have been subjected to a multilateral policy initiative. Two other findings are also made: that concentrations of local pollutants in urban areas peak at a lower per capita income level than total emissions per capita; and that transport-generated local air pollutants peak at a higher per capita income level than total emissions per capita. Given these findings, suggestions are made regarding the necessary future direction of environmental policy." }, { "instance_id": "R30476xR29370", "comparison_id": "R30476", "paper_id": "R29370", "text": "Economic Development and Environmental Quality: An Econometric Analysis The relationship between economic development and environmental quality is analyzed econometrically for a large sample of countries over time. The results indicate that some indicators improve with rising incomes (like water and sanitation), others worsen and then improve (particulates and sulfur oxides), and others worsen steadily (dissolved oxygen in rivers, municipal solid wastes, and carbon emissions). Growth tends to be associated with environmental improvements where there are generalized local costs and substantial benefits. But where the costs of environmental degradation are borne by others (by the poor or by other countries), there are few incentives to alter damaging behavior. Copyright 1994 by Royal Economic Society."
}, { "instance_id": "R30476xR29943", "comparison_id": "R30476", "paper_id": "R29943", "text": "CO2 emissions, energy consumption and economic growth in Association of Southeast Asian Nations (ASEAN) countries: a cointegration approach This study examines the cointegration and causal relationship between economic growth, carbon dioxide (CO2) emissions and energy consumption in selected Association of Southeast Asian Nations (ASEAN) countries for the period 1971\u20132009. The recently developed Autoregressive Distributed Lag (ARDL) methodology and a Granger causality test based on the Vector Error-Correction Model (VECM) were used to conduct the analysis. There was a cointegration relationship between the variables in all the countries under study, with a statistically significant positive relationship between carbon emissions and energy consumption in both the short and long run. The long-run elasticities of energy consumption with respect to carbon emissions are higher than the short-run elasticities. This implies that the carbon emissions level is found to increase with respect to energy consumption over time in the selected ASEAN countries. A significant non-linear relationship between carbon emissions and economic growth was supported in Singapore and Thailand for the long run, which supports the Environmental Kuznets Curve (EKC) hypothesis. The Granger causality results suggested a bi-directional Granger causality between energy consumption and CO2 emissions in all five ASEAN countries. This implies that carbon emissions and energy consumption are highly interrelated. All the variables are found to be stable, suggesting that all the estimated models are stable over the study period."
}, { "instance_id": "R30476xR29768", "comparison_id": "R30476", "paper_id": "R29768", "text": "Environmental Kuznets Curve for carbon emissions in Pakistan: An empirical investigation This study investigates the relationship between carbon emissions, income, energy consumption, and foreign trade in Pakistan for the period 1972-2008. By employing the Johansen method of cointegration, the study finds that there is a quadratic long-run relationship between carbon emissions and income, confirming the existence of Environmental Kuznets Curve for Pakistan. Moreover, both energy consumption and foreign trade are found to have positive effects on emissions. The short-run results have, however, denied the existence of the Environmental Kuznets Curve. The short-run results are unique to the existing literature in the sense that none of the long-run determinants of emissions is significant. The contradictory results of short- and long-run give policy makers the opportunity to formulate different types of growth policies for the two terms taking environmental issues into consideration. In addition, the uni-directional causality from growth to energy consumption suggests that the policy makers should not only focus on forecasting future demand for energy with different growth scenarios but also on obtaining the least cost energy. Furthermore, the absence of causality from emissions to growth suggests that Pakistan can curb its carbon emissions without disturbing its economic growth." }, { "instance_id": "R30476xR29928", "comparison_id": "R30476", "paper_id": "R29928", "text": "The nexus between carbon emissions, energy consumption and economic growth in Middle East countries: a panel data analysis The environmental Kuznets curve (EKC) hypothesis assumes that there is an inverted U-shaped relationship between environmental degradation and income per capita. In other words, as a country grows, it is assumed that its environmental quality improves. 
In this study, we aim to test the EKC hypothesis for 12 Middle East countries during the period 1990\u20132008 by employing recently developed panel data methods. Our results provide evidence contrary to the EKC hypothesis. We found evidence favorable to a U-shaped EKC for 5 Middle East countries, whereas an inverted U-shaped curve was identified for only 3 Middle East countries. Furthermore, there appear to be no causal links between income and CO2 emissions for the other 4 countries. Regarding the direction of causality, there appears to be a unidirectional causality from economic growth to energy consumption in the short run; in the long run, however, the unidirectional causality chain runs from energy consumption and economic growth to CO2 emissions. We also suggest some crucial policy implications based on these results." }, { "instance_id": "R30476xR29981", "comparison_id": "R30476", "paper_id": "R29981", "text": "Dynamic linkages among transport energy consumption, income and CO2 emission in Malaysia This paper examines the dynamic relationship between income, energy use and carbon dioxide (CO2) emissions in Malaysia using time-series data from 1975 to 2011. This study also attempts to validate the environmental Kuznets curve (EKC) hypothesis. Applying a multivariate model of income, energy consumption in the transportation sector, carbon emissions, structural change in the economy and renewable energy use, the empirical evidence confirmed that there is a long-run relationship between the variables, as shown by the result of the co-integration analysis. The results indicate that the inverted U-shape EKC hypothesis does not fully agree with the theory. The coefficient of squared GDP is not statistically different from zero. The time duration and the annual data used for the present study do not seem to strongly validate the existence of the EKC hypothesis in the case of Malaysia. The causality test shows that the relationship between GDP and CO2 is unidirectional.
The Granger causality test results reveal that emissions Granger-cause income, energy consumption and renewable energy use. Moreover, we find that income Granger-causes energy consumption and renewable energy use, and both structural change and renewable energy use Granger-cause energy consumption in road transportation." }, { "instance_id": "R30476xR29637", "comparison_id": "R30476", "paper_id": "R29637", "text": "Environmental Kuznets curve for CO2 in Canada The environmental Kuznets curve hypothesis is a theory by which the relationship between per capita GDP and per capita pollutant emissions has an inverted U shape. This implies that, past a certain point, economic growth may actually be profitable for environmental quality. Most studies on this subject are based on estimating fully parametric quadratic or cubic regression models. While this is not technically wrong, such an approach somewhat lacks flexibility since it may fail to detect the true shape of the relationship if it happens not to be of the specified form. We use semiparametric and flexible nonlinear parametric modelling methods in an attempt to provide more robust inferences. We find little evidence in favour of the environmental Kuznets curve hypothesis. Our main results could be interpreted as indicating that the oil shock of the 1970s has had an important impact on progress towards less polluting technology and production." }, { "instance_id": "R30476xR29962", "comparison_id": "R30476", "paper_id": "R29962", "text": "Environmental Kuznets curve in Romania and the role of energy consumption The aim of present study is to probe the dynamic relationship between economic growth, energy consumption and CO2 emissions for period of 1980-2010 in case of Romania. In doing so, ARDL bounds testing approach is applied to investigate the long run cointegration between these variables. Our results confirm long run relationship between economic growth, energy consumption and energy pollutants. 
The empirical evidence reveals that the Environmental Kuznets curve (EKC) holds in both the long and short run in Romania. Further, energy consumption is a major contributor to energy pollutants. The democratic regime makes a significant contribution to declining CO2 emissions through the effective implementation of economic policies, and financial development improves the environment, i.e. reduces CO2 emissions, by redirecting resources to environment-friendly projects." }, { "instance_id": "R30476xR30373", "comparison_id": "R30476", "paper_id": "R30373", "text": "CO2 emissions in Australia: economic and non-economic drivers in the long-run Australia has sustained a relatively high economic growth rate since the 1980s compared to other developed countries. Per capita CO2 emissions tend to be highest amongst OECD countries, creating new challenges to cut back emissions towards international standards. This research explores the long-run dynamics of CO2 emissions, economic and population growth, along with the effects of globalization tested as contributing factors. We find economic growth is not emission-intensive in Australia, while energy consumption is emissions-intensive. Second, in an environment of increasing population, our findings suggest Australia needs to be energy efficient at the household level, creating appropriate infrastructure for sustainable population growth. High population growth and an open migration policy can be detrimental to reducing CO2 emissions. Finally, we establish that a globalized environment has been conducive to combating emissions. In this respect, we establish the beneficial effect of economic globalization, compared to the social and political dimensions of globalization, in curbing emissions."
}, { "instance_id": "R30476xR29570", "comparison_id": "R30476", "paper_id": "R29570", "text": "Governance, institutions and the environment-income relationship: a cross-country study This paper examines the environment-income relationship in the context of the Environmental Kuznets Curve (EKC), and explores the possible role that factors like governance, political institutions, socioeconomic conditions, and education play in influencing this relationship. The results suggest that the EKC exists for carbon dioxide emissions for cross-country data over the period 1984\u20132002. However, there is nothing automatic about this relationship; policies designed to protect the environment may be responsible for this phenomenon. Two other significant findings are: one, countries with better quality of governance, stronger political institutions, better socioeconomic conditions and greater investment in education have lower emissions; and two, only around 15% of the countries in the dataset have reached income levels high enough to be associated with an unambiguous decline in emissions. The implications of these results are discussed within the context of the international environmental policy arena and the Kyoto Protocol. One of the main objectives of this paper is to bridge the gap between studies conducted on the EKC and developments in the international environmental policy arena. As a final note this paper emphasizes that one needs to connect the body of knowledge on the EKC hypothesis to the international environmental policy arena, despite the apparent difficulty of doing so. One hopes that future studies will further build on this line of thought." }, { "instance_id": "R30476xR29557", "comparison_id": "R30476", "paper_id": "R29557", "text": "CO2 emissions, energy usage, and output in Central America This study extends the recent work of Ang (2007) [Ang, J.B., 2007. CO2 emissions, energy consumption, and output in France. 
Energy Policy 35, 4772-4778] in examining the causal relationship between carbon dioxide emissions, energy consumption, and output within a panel vector error correction model for six Central American countries over the period 1971-2004. In long-run equilibrium energy consumption has a positive and statistically significant impact on emissions while real output exhibits the inverted U-shape pattern associated with the Environmental Kuznets Curve (EKC) hypothesis. The short-run dynamics indicate unidirectional causality from energy consumption and real output, respectively, to emissions along with bidirectional causality between energy consumption and real output. In the long-run there appears to be bidirectional causality between energy consumption and emissions." }, { "instance_id": "R30476xR30230", "comparison_id": "R30476", "paper_id": "R30230", "text": "The impact of energy consumption, income and foreign direct investment on carbon dioxide emissions in Vietnam The aim of this study is to understand the relationship between CO2 (carbon dioxide) emissions, energy consumption, FDI (foreign direct investment) and economic growth in Vietnam over the period from 1976 to 2009. The techniques of cointegration and Granger causality are adopted to examine the relationship between the variables. The results confirm the existence of long-run equilibrium among the variables of interest. Meanwhile, energy consumption and income positively influence CO2 emissions, but square of income has negative impact on CO2 emissions in Vietnam. These outcomes support the EKC (Environmental Kuznets Curve) hypothesis which assumes an inverted U-shaped relationship between CO2 emissions and economic growth in Vietnam. The results of this study also reveal that there are two-way causalities between CO2 emissions and income, and between FDI and CO2 emissions in Vietnam. In addition, energy consumption is found to Granger-cause CO2 emissions in the short- and long-run. 
Energy consumption, FDI and income are the key determinants of CO2 emissions in Vietnam. Therefore, adoption of clean technologies by foreign investment is important in curtailing CO2 emissions in the country, and sustaining economic development at the same time." }, { "instance_id": "R30476xR29473", "comparison_id": "R30476", "paper_id": "R29473", "text": "Pooled mean group estimation of an environmental Kuznets curve for CO2 Abstract We apply the Pooled Mean Group Estimator to test for the existence of an environmental Kuznets curve for CO2 in 22 OECD countries. This approach allows for more flexible assumptions in a panel data framework. The period goes from 1975 to 1998." }, { "instance_id": "R30476xR29519", "comparison_id": "R30476", "paper_id": "R29519", "text": "Is there a turning point in the relationship between income and energy use and/or carbon emissions? Abstract We analyze the effect of fuel mix, model specification, and the level of development on the presence and size of a turning point in the relationship between income and energy use and/or carbon emissions. The results indicate that fuel mix, the specification for income, and the level of economic development affect conclusions about whether there is a turning point in the relationship between economic activity and energy use and carbon emissions. Including fuel shares generally reduces the size of a turning point that is estimated from a panel that includes observations from both OECD and Non-OECD nations. But this result varies according to the level of development. For OECD nations, there is limited support for a turning point in the relationship between income and per capita energy use and/or carbon emissions. For non-OECD nations, there is no turning point in the relationship between income and either energy use or carbon emissions. Instead, the relationship is positive. 
Together, the results indicate that forecasters and policy makers should not depend on a turning point in the relationship between income and energy use or carbon emissions to reduce either." }, { "instance_id": "R30476xR30088", "comparison_id": "R30476", "paper_id": "R30088", "text": "Environmental Kuznets curve in an open economy: a bounds testing and causality analysis for Tunisia The environmental Kuznets curve hypothesis posits that in the early stages of economic growth environmental degradation and pollution increase. However, as a nation reaches a certain level of income, measured in per capita terms, the trend reverses. The postulated relationship thus produces an inverted U-shaped curve. The topic has drawn much academic interest in the context of developed and emerging nations." }, { "instance_id": "R30476xR29578", "comparison_id": "R30476", "paper_id": "R29578", "text": "Exploring the existence of Kuznets curve in countries' environmental efficiency using DEA window analysis This paper, using data envelopment (DEA) window analysis and generalized method of moments (GMM) estimators, examines the existence of a Kuznets type relationship between countries' environmental efficiency and national income. Specifically, it measures the environmental efficiency of 17 OECD countries by constructing environmental efficiency ratios for the time period 1980-2002. The analysis with the application of dynamic panel data reveals that there isn't a Kuznets type relationship between environmental efficiency and income. Allowing for dynamic effects we find that the adjustment to the target ratio is instantaneous. We also find that increased economic activity does not always ensure environmental protection and thus the path of growth is important in addition to the growth itself." }, { "instance_id": "R30476xR30470", "comparison_id": "R30476", "paper_id": "R30470", "text": "Does trade openness affect CO2 emissions: evidence from ten newly industrialized countries? 
This paper examines whether the hypothesized environmental Kuznets curve (EKC) exists or not and investigates how trade openness affects CO2 emissions, together with real GDP and total primary energy consumption. The study sample comprises ten newly industrialized countries (NICs-10) from 1971 to 2013. The results support the existence of the hypothesized EKC and indicate that trade openness negatively and significantly affects emissions, while real GDP and energy have positive effects on emissions. Moreover, the empirical results of short-run causalities indicate a feedback linkage between real GDP and trade, and unidirectional linkages from energy to emissions and from trade to energy. The error correction terms (ECTs) reveal, in the long run, feedback linkages of emissions, real GDP, and trade openness, while energy Granger-causes emissions, real GDP, and trade, respectively. The study recommends that policymakers should encourage and expand trade openness in these countries, not only to restrain CO2 emissions but also to boost their growth." }, { "instance_id": "R30476xR29851", "comparison_id": "R30476", "paper_id": "R29851", "text": "Economic growth and CO2 emissions in Malaysia: a cointegration analysis of the environmental Kuznets curve This paper attempts to establish a long-run as well as causal relationship between economic growth and carbon dioxide (CO2) emissions for Malaysia. Using data for the years from 1980 to 2009, the Environmental Kuznets Curve (EKC) hypothesis was tested utilizing the Auto Regressive Distributed Lag (ARDL) methodology. The empirical results suggest the existence of a long-run relationship between per capita CO2 emissions and real per capita Gross Domestic Product (GDP) when the CO2 emissions level is the dependent variable. We found an inverted-U shape relationship between CO2 emissions and GDP in both the short and long run, thus supporting the EKC hypothesis.
The Granger Causality test based on the Vector Error Correction Model (VECM) presents an absence of causality between CO2 emissions and economic growth in the short-run while demonstrating uni-directional causality from economic growth to CO2 emissions in the long-run." }, { "instance_id": "R30476xR29543", "comparison_id": "R30476", "paper_id": "R29543", "text": "Beyond the Environmental Kuznets Curve: a comparative study of SO2 and CO2 emissions between Japan and China This study is the first systematic attempt to test statistically the contrasting hypotheses on the emission of SO 2 and CO 2 , and energy consumption in Japan and China for the last few decades. We postulate the hypotheses that local governments have incentives to internalize the local external diseconomies caused by SO 2 emissions, but not the global external diseconomies caused by CO 2 emissions. To substantiate our hypotheses, we decompose emissions of SO 2 and CO 2 into two factors: the emission factor (i.e. emission per energy use) and energy consumption. The results show that the prefectures where past energy consumption was high tend to reduce the emission factor of SO 2 significantly in Japan, while we do not find such a tendency in China. There is also evidence that neither per capita income nor past energy consumption affects the CO 2 emission factor and energy consumption significantly in both Japan and China, implying that an individual country has few incentives to reduce CO 2 emissions." }, { "instance_id": "R30476xR29755", "comparison_id": "R30476", "paper_id": "R29755", "text": "A note on the environmental Kuznets curve for CO2: A pooled mean group approach This paper investigates whether the environmental Kuznets curve (EKC) hypothesis for CO2 emissions is satisfied using the panel data of 28 countries by taking nuclear energy into account. 
Using the pooled mean group (PMG) estimation method, our main results indicate that (1) the impacts of nuclear energy on CO2 emissions are significantly negative, (2) CO2 emissions actually increase monotonically within the sample period in all cases: the full sample, OECD countries, and non-OECD countries, and (3) the growth rate in CO2 emissions with income is decreasing in OECD countries and increasing in non-OECD countries." }, { "instance_id": "R30476xR29407", "comparison_id": "R30476", "paper_id": "R29407", "text": "The Environmental Kuznets Curve: development path or policy result? A rich literature on the Environmental Kuznets Curve (EKC) suggests that there may be other factors besides per capita income that determine the emergence of a downward sloping segment in the EKC. Empirical studies have referred to this issue as an omitted variable problem. This paper questions the idea that there exists a development path that necessarily links increasing environmental quality with economic growth. After illustrating the empirical evidence on the EKC, I focus on the determinants of pollution abatement policies to argue that the relationship between environmental care and economic growth may depend on other moments of the income distribution function besides its mean. If the median voter theorem applies, income distribution parameters determine the level of pollution abatement by impacting upon the willingness to pay for protecting the environment." }, { "instance_id": "R30476xR30379", "comparison_id": "R30476", "paper_id": "R30379", "text": "Time-varying analysis of CO2 emissions, energy consumption, and economic growth nexus: Statistical experience in next 11 countries This paper detects the direction of causality among carbon dioxide (CO2) emissions, energy consumption, and economic growth in Next 11 countries for the period 1972\u20132013.
Changes in economic, energy, and environmental policies as well as regulatory and technological advancement over time cause changes in the relationship among the variables. We use a novel approach, i.e. time-varying Granger causality, and find that economic growth is the cause of CO2 emissions in Bangladesh and Egypt. Economic growth causes energy consumption in the Philippines, Turkey, and Vietnam but the feedback effect exists between energy consumption and economic growth in South Korea. In the cases of Indonesia and Turkey, we find unidirectional time-varying Granger causality running from economic growth to CO2 emissions, thus validating the existence of the Environmental Kuznets Curve hypothesis, which indicates that economic growth is achievable at minimal cost to the environment. The paper gives new insights for policy makers to attain sustainable economic growth while maintaining long-run environmental quality." }, { "instance_id": "R30476xR29598", "comparison_id": "R30476", "paper_id": "R29598", "text": "Does higher economic and financial development lead to environmental degradation: Evidence from BRIC countries A vast number of studies have addressed environmental degradation and economic development but not financial development. Moreover, as argued by Stern [2004. The rise and fall of the environmental Kuznets curve. World Development 32, 1419-1439], they present important econometric weaknesses. Using the standard reduced-form modeling approach and controlling for country-specific unobserved heterogeneity, we investigate the linkage between not only economic development and environmental quality but also financial development. Panel data over the period 1992-2004 are used. We find that both economic and financial development are determinants of environmental quality in BRIC economies. We show that a higher degree of economic and financial development decreases environmental degradation.
Our analysis suggests that financial liberalization and openness are essential factors for CO2 reduction. The adoption of policies directed to financial openness and liberalization to attract higher levels of R&D-related foreign direct investment might reduce environmental degradation in the countries under consideration. In addition, the robustness check through the inclusion of the US and Japan does not alter our main findings." }, { "instance_id": "R30476xR29809", "comparison_id": "R30476", "paper_id": "R29809", "text": "Energy consumption, economic growth and CO2 emissions in Middle East and North African countries This article extends the recent findings of Liu (2005), Ang (2007), Apergis et al. (2009) and Payne (2010) by implementing recent bootstrap panel unit root tests and cointegration techniques to investigate the relationship between carbon dioxide emissions, energy consumption, and real GDP for 12 Middle East and North African Countries (MENA) over the period 1981\u20132005. Our results show that in the long-run energy consumption has a positive significant impact on CO2 emissions. More interestingly, we show that real GDP exhibits a quadratic relationship with CO2 emissions for the region as a whole. However, although the estimated long-run coefficients of income and its square satisfy the EKC hypothesis in most studied countries, the turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. CO2 emission reductions per capita have been achieved in the MENA region, even while the region exhibited economic growth over the period 1981\u20132005. The econometric relationships derived in this paper suggest that future reductions in CO2 emissions per capita might be achieved at the same time as GDP per capita in the MENA region continues to grow."
}, { "instance_id": "R30476xR30387", "comparison_id": "R30476", "paper_id": "R30387", "text": "Reducing CO2 emissions in China's manufacturing industry: Evidence from nonparametric additive regression models Identifying the drivers of carbon dioxide emissions in the manufacturing industry is vital for developing effective environmental policies. This study adopts provincial panel data from 2000 to 2013 and uses nonparametric additive regression models to analyze the drivers of CO2 emissions in the industry. The results show that the nonlinear effect of economic growth on CO2 emissions supports the Environmental Kuznets Curve (EKC) hypothesis. Energy structure has an inverted \u201cU-shape\u201d effect owing to massive coal consumption in the early stages and the optimization of energy structure in the later stage. The inverted \u201cU-shaped\u201d impact of industrialization may be due to the priority development of heavy industry in the early stages and the optimization of industrial structure in the later stages. The impact of urbanization also exhibits an inverted \u201cU-shaped\u201d pattern because of mass consumption of steel and cement products in the early stages and the advancement in clean energy technologies at the later stages. However, specific energy consumption has a positive \u201cU-shaped\u201d impact because of the difference in the speed of technological progress at different times. Thus, the differential effects of these indicators at different times should be taken into consideration when discussing reduction of CO2 emissions in China's manufacturing industry." }, { "instance_id": "R30476xR29857", "comparison_id": "R30476", "paper_id": "R29857", "text": "Environmental Kuznets Curve hypothesis in Pakistan: Cointegration and Granger causality The paper is an effort to fill the gap in the energy literature with a comprehensive country study of Pakistan. 
We investigate the relationship between CO2 emissions, energy consumption, economic growth and trade openness in Pakistan over the period of 1971\u20132009. The bounds test for cointegration and the Granger causality approach are employed for the empirical analysis. The result suggests that there exists a long-run relationship among the variables and the Environmental Kuznets Curve (EKC) hypothesis is supported. The significant existence of the EKC shows the country's effort to condense CO2 emissions and indicates a certain achievement in controlling environmental degradation in Pakistan. Furthermore, we find a one-way causal relationship running from economic growth to CO2 emissions. Energy consumption increases CO2 emissions both in the short and long runs. Trade openness reduces CO2 emissions in the long run but is insignificant in the short run. In addition, the deviation of CO2 emissions from the long-run equilibrium is corrected by about 10% yearly." }, { "instance_id": "R30476xR29830", "comparison_id": "R30476", "paper_id": "R29830", "text": "Threshold cointegration and nonlinear adjustment between CO2 and income: the environmental Kuznets curve in Spain In this paper we model the long-run relationship between per capita CO2 and per capita income for the Spanish economy over the period 1857\u20132007. According to the Environmental Kuznets Curve (EKC) the relationship between the two variables has an inverted-U shape. However, previous studies for the Spanish economy only considered the existence of linear relationships. Such an approach may lack the flexibility to detect the true shape of the relationship. Our empirical methodology accounts for a possible non-linear relationship through the use of threshold cointegration techniques. Our results confirm the non-linearity of the link between the two above-mentioned variables, pointing to the existence of an Environmental Kuznets Curve for the Spanish case."
}, { "instance_id": "R30476xR29502", "comparison_id": "R30476", "paper_id": "R29502", "text": "Environmental Kuznets Curves for CO2: Heterogeneity versus Homogeneity We explore the emissions-income relationship for CO2 in OECD countries using various modelling strategies. Even for this relatively homogeneous sample, we find that the inverted-U-shaped curve is quite sensitive to the degree of heterogeneity included in the panel estimations. This finding is robust, not only across different model specifications but also across estimation techniques, including the more flexible non-parametric approach. Differences in restrictions applied in panel estimations are therefore responsible for the widely divergent findings for an inverted-U shape for CO2. Our findings suggest that allowing for enough heterogeneity is essential to prevent spurious correlation from reduced-form panel estimations. Moreover, this inverted U for CO2 is likely to exist for many, but not for all, countries." }, { "instance_id": "R30476xR29821", "comparison_id": "R30476", "paper_id": "R29821", "text": "Economic development and carbon dioxide emissions in China: provincial panel data analysis This paper investigates the driving forces, emission trends and reduction potential of China's carbon dioxide (CO2) emissions based on a provincial panel data set covering the years 1995 to 2009. A series of static and dynamic panel data models are estimated, and then an optimal forecasting model selected by out-of-sample criteria is used to forecast the emission trend and reduction potential up to 2020. The estimation results show that economic development, technology progress and industry structure are the most important factors affecting China's CO2 emissions, while the impacts of energy consumption structure, trade openness and urbanization level are negligible. The inverted U-shaped relationship between per capita CO2 emissions and economic development level is not strongly supported by the estimation results.
The impact of capital adjustment speed is significant. Scenario simulations further show that per capita and aggregate CO2 emissions of China will increase continuously up to 2020 under any of the three scenarios developed in this study, but the reduction potential is large." }, { "instance_id": "R30476xR29537", "comparison_id": "R30476", "paper_id": "R29537", "text": "Corruption, trade openness, and environmental quality: a panel data analysis of selected South Asian countries The second half of the twentieth century emerged with two important concepts of the economic world. In the start of the second half, economists, developmentalists, etc., introduced the idea of \u201cdevelopment\u201d, while later it was replaced by the more meaningful and attractive term \u201csustainable development\u201d. Sustainable development is defined as \u201cbalancing the fulfillment of human needs with the protection of the natural environment so that these needs can be met not only in the present, but also in the indefinite future\u201d [Wikipedia (2007)]. Or \u201cSustainable development means that pattern of development that permits future generations to live at least as well as the current generation\u201d [Todaro and Smith (2005), eighth edition]. The field of sustainable development can be conceptually broken into four constituent parts: environmental sustainability, economic sustainability, social sustainability and political sustainability. Although the concept of sustainable development is very vast and deep, the main emphasis of our study will be on environmental sustainability."
}, { "instance_id": "R30476xR30343", "comparison_id": "R30476", "paper_id": "R30343", "text": "The impact of trade openness on global carbon dioxide emissions: evidence from the top ten emitters among developing countries Abstract This study aims to analyze the relationship between carbon dioxide (CO2) emissions, trade openness, real income and energy consumption in the top ten CO2 emitters among the developing countries; namely China, India, South Korea, Brazil, Mexico, Indonesia, South Africa, Turkey, Thailand and Malaysia over the period of 1971\u20132011. In addition, the possible presence of the EKC hypothesis is investigated for the analyzed countries. The Zivot\u2013Andrews unit root test with structural break, the bounds testing for cointegration in the presence of structural break and the VECM Granger causality method are employed. The empirical results indicate that (i) the analyzed variables are co-integrated for Thailand, Turkey, India, Brazil, China, Indonesia and Korea, (ii) real income, energy consumption and trade openness are the main determinants of carbon emissions in the long run, (iii) there exists a number of causal relations between the analyzed variables, (iv) the EKC hypothesis is validated for Turkey, India, China and Korea. Robust policy implications can be derived from this study since the estimated models pass several diagnostic and stability tests." }, { "instance_id": "R30476xR29573", "comparison_id": "R30476", "paper_id": "R29573", "text": "An econometric study of CO2 emissions, energy consumption, income and foreign trade in Turkey This study attempts to examine empirically dynamic causal relationships between carbon emissions, energy consumption, income, and foreign trade in the case of Turkey using the time series data for the period 1960-2005. This research tests the interrelationship between the variables using the bounds testing to cointegration procedure.
The bounds test results indicate that there exist two forms of long-run relationships between the variables. In the first long-run relationship, carbon emissions are determined by energy consumption, income and foreign trade. In the second long-run relationship, income is determined by carbon emissions, energy consumption and foreign trade. An augmented form of Granger causality analysis is conducted amongst the variables. The long-run relationship of the CO2 emissions, energy consumption, income and foreign trade equation is also checked for parameter stability. The empirical results suggest that income is the most significant variable in explaining carbon emissions in Turkey, followed by energy consumption and foreign trade. Moreover, there exists a stable carbon emissions function. The results also provide important policy recommendations." }, { "instance_id": "R30476xR29765", "comparison_id": "R30476", "paper_id": "R29765", "text": "Environmental Kuznets Curve for carbon dioxide emissions: lack of robustness to heterogeneity? (working paper) This paper focuses solely on the energy consumption, carbon dioxide (CO2) emissions and economic growth nexus, applying the iterative Bayesian shrinkage procedure. The environmental Kuznets curve (EKC) hypothesis is tested using this method for the first time in this literature and the results obtained suggest that: first, the EKC hypothesis is rejected for 49 out of the 51 countries considered when heterogeneity in countries' energy efficiencies and cross-country differences in the CO2 emissions trajectories are accounted for; second, a classification of the results with respect to countries' development levels reveals that an overall inverted U-shape curve is due to the fact that an increase in gross domestic product (GDP) in the high-income countries decreases emissions, while in the low-income countries it increases emissions."
}, { "instance_id": "R30476xR30267", "comparison_id": "R30476", "paper_id": "R30267", "text": "Carbon emissions, energy consumption and economic growth: An aggregate and disaggregate analysis of the Indian economy This study investigates the long and short run relationships among carbon emissions, energy consumption and economic growth in India at the aggregated and disaggregated levels during 1971\u20132014. The autoregressive distributed lag model is employed for the cointegration analyses and the vector error correction model is applied to determine the direction of causality between variables. Results show that a long run cointegration relationship exists and that the environmental Kuznets curve is validated at the aggregated and disaggregated levels. Furthermore, energy (total energy, gas, oil, electricity and coal) consumption has a positive relationship with carbon emissions and a feedback effect exists between economic growth and carbon emissions. Thus, energy-efficient technologies should be used in domestic production to mitigate carbon emissions at the aggregated and disaggregated levels. The present study provides policy makers with new directions in drafting comprehensive policies with lasting impacts on the economy, energy consumption and environment towards sustainable development." }, { "instance_id": "R30476xR29393", "comparison_id": "R30476", "paper_id": "R29393", "text": "A dynamic approach to the Environmental Kuznets Curve hypothesis Abstract The Environmental Kuznets Curve (EKC) hypothesis states that pollution levels increase as a country develops, but begin to decrease as rising incomes pass beyond a turning point. In EKC analyses, the relationship between environmental degradation and income is usually expressed as a quadratic function with the turning point occurring at a maximum pollution level. 
Other explanatory variables have been included in these models, but income regularly has had the most significant effect on indicators of environmental quality. One variable consistently omitted in these relationships is the price of energy. This paper analyzes previous models to illustrate the importance of prices in these models and then includes prices in an econometric EKC framework testing energy/income and CO2/income relationships. These long-run price/income models find that income is no longer the most relevant indicator of environmental quality or energy demand. Indeed, we find no significant evidence for the existence of an EKC within the range of current incomes for energy in the presence of price and trade variables." }, { "instance_id": "R30476xR30276", "comparison_id": "R30476", "paper_id": "R30276", "text": "The dynamic impact of renewable energy consumption on CO2 emissions: A revisited Environmental Kuznets Curve approach This paper considers a revisited Environmental Kuznets Curve (EKC) hypothesis with the potential impact of renewable energy consumption on environmental quality. To this end, the paper aims at investigating the validity of the EKC hypothesis employing the dependent variable of CO2 emissions and regressors of GDP, quadratic GDP and renewable energy consumption. This paper, hence, analyzes this revisited EKC hypothesis to observe whether (i) there exists an inverted-U shaped relationship between environmental quality (in terms of CO2 emissions), per capita income and per capita income squared and (ii) there exists a negative causality from renewables to CO2 emissions within the EKC model. The paper employs a panel data set of 17 OECD countries over the period 1977\u20132010 and performs panel FMOLS and panel DOLS estimations.
The findings support the EKC hypothesis for the panel and indicate that GDP per capita and GDP per capita squared affect CO2 emissions positively and negatively, respectively, and that renewable energy consumption has a negative impact on CO2 emissions. Another finding of this paper is that the validity of the EKC does not depend on the income level of the individual panel countries in which the EKC hypothesis holds. Finally, the paper argues that if countries carry out (i) policies for fair and easy access to electricity from renewable sources and (ii) policies to increase renewables supply through, e.g., improved renewable energy technologies, they will be able to contribute to combating the global warming problem as they increase their GDPs." }, { "instance_id": "R30476xR29885", "comparison_id": "R30476", "paper_id": "R29885", "text": "Is economic growth good or bad for the environment? Empirical evidence from Korea The effects of economic growth on the environment in Korea, for a given level of energy consumption, and fossil fuels and nuclear energy in electricity production, are examined in a dynamic cointegration framework. To that end, the autoregressive distributed lag (ARDL) approach is used. We find empirical evidence supporting the existence of the environmental Kuznets curve (EKC) hypothesis for Korea; that is, economic growth indeed plays a favorable role in influencing environmental outcomes. It is also found that, in both the short- and long-run, nuclear energy has a beneficial effect on environmental quality, whereas fossil fuels in electricity production and energy consumption have a detrimental effect on the environment."
}, { "instance_id": "R30476xR30397", "comparison_id": "R30476", "paper_id": "R30397", "text": "Energy Innovations-GHG Emissions Nexus: Fresh Empirical Evidence from OECD Countries This study explores the impact of improvements in energy research development (ERD) on greenhouse gas (GHG) emissions using the environmental Kuznets curve hypothesis for 28 OECD countries over the period of 1990\u20132014. In doing so, we have employed panel data where the public budget in energy research development and demonstration (ERD&D) is transformed into a finite inverted V-lag distribution model developed by De Leeuw (1962). This model considers that energy innovation accumulates over time, and the paper presents empirical evidence of how energy innovation contributes to reducing energy intensity and environmental pollution." }, { "instance_id": "R30476xR29723", "comparison_id": "R30476", "paper_id": "R29723", "text": "CO2 emissions, energy consumption and economic growth in BRIC countries This paper examines dynamic causal relationships between pollutant emissions, energy consumption and output for a panel of BRIC countries over the period 1971-2005, except for Russia (1990-2005). In long-run equilibrium, energy consumption has a positive and statistically significant impact on emissions, while real output exhibits the inverted U-shape pattern associated with the Environmental Kuznets Curve (EKC) hypothesis with a threshold income of 5.393 (in logarithms). In the short term, changes in emissions are driven mostly by the error correction term and short-term energy consumption shocks, as opposed to short-term output shocks for each country. Short-term deviations from the long-term equilibrium take from 0.770 years (Russia) to 5.848 years (Brazil) to correct.
The panel causality results indicate bidirectional strong causality between energy consumption and emissions and bidirectional long-run causality between energy consumption and output, along with unidirectional strong and short-run causality running from emissions and energy consumption, respectively, to output. Overall, in order to reduce emissions without adversely affecting economic growth, energy-dependent BRIC countries can increase both energy supply investment and energy efficiency, and step up energy conservation policies to reduce unnecessary wastage of energy." }, { "instance_id": "R30476xR30443", "comparison_id": "R30476", "paper_id": "R30443", "text": "Energy consumption to environmental degradation, the growth appetite in SAARC nations This study aims to investigate the role of energy consumption in environmental degradation under a multivariate framework for emerging and frontier Asian markets. We have included CO2 emissions, GDP and population growth with energy consumption as additional determinants of environmental degradation. The application of panel unit root tests suggests non-stationary properties of the included variables; however, we report significant co-integration among these variables. Furthermore, the presence of an Environmental Kuznets Curve (EKC) was detected using the application of fully modified OLS and dynamic OLS. The level of energy consumption tends to increase environmental degradation, thereby confirming the Pollution Haven Hypothesis (PHH). Bidirectional causality was also observed between CO2 emissions and economic growth. Results of this study vary across countries and present sound economic policy implications for the improvement of environmental standards."
}, { "instance_id": "R30476xR30197", "comparison_id": "R30476", "paper_id": "R30197", "text": "Assessing the impact of population, income and technology on energy consumption and industrial pollutant emissions in China Elucidating the complex mechanisms by which demographic changes, economic growth, and technological advances impact energy consumption and pollutant emissions is fundamentally necessary to inform effective strategies on energy saving and emission reduction in China. Here, based on a balanced provincial panel dataset in China over the period 1990\u20132012, we used an extended STIRPAT model to investigate the effects of human activity on energy consumption and three types of industrial pollutant emissions (exhaust gases, waste water and solid waste) at the national and regional levels and tested the environmental Kuznets curve (EKC) hypothesis. Empirical results show that a higher population density would result in a decrease in energy consumption in China as a whole and in its eastern, central and western regions, but the extent of its effect on the environment depends on the type of pollutants. Higher population density increased wastewater discharge but decreased solid waste production in China and its three regions. The effect of economic development on the environment was heterogeneous across the regions. The proportion of industrial output had a significant and positive influence on energy consumption and pollutant emissions in China and its three regions. Higher industrial energy intensity resulted in higher levels of pollutant emissions. No strong evidence supporting the EKC hypothesis for the three industrial wastes in China was found. Our findings further demonstrated that the impact of population, income and technology on the environment varies at different levels of development.
Because of the regional disparities in anthropogenic impact on the environment, formulating specific region-oriented energy saving and emission reduction strategies may provide a more practical and effective approach to achieving sustainable development in China." }, { "instance_id": "R30476xR30466", "comparison_id": "R30476", "paper_id": "R30466", "text": "A disaggregated analysis of the environmental Kuznets curve for industrial CO2 emissions in China The present study concentrates on a Chinese context and attempts to explicitly examine the impacts of economic growth and urbanization on various industrial carbon emissions through investigation of the existence of an environmental Kuznets curve. Within the Stochastic Impacts by Regression on Population, Affluence and Technology framework, this is the first attempt to simultaneously explore the income/urbanization and disaggregated industrial carbon dioxide emissions nexus, using panel data together with semi-parametric panel fixed effects regression. Our dataset refers to a provincial panel of China spanning the period 2000\u20132013. With this information, we find evidence in support of an inverted U-shaped curve relationship between economic growth and carbon dioxide emissions in the electricity and heat production sector, but a similar inference only for urbanization and those emissions in the manufacturing sector. The heterogeneity in the EKC relationship across industry sectors implies that there is an urgent need to design more specific policies related to carbon emissions reduction for various industry sectors. Also, these findings contribute to advancing the emerging literature on the development-pollution nexus."
}, { "instance_id": "R30476xR29954", "comparison_id": "R30476", "paper_id": "R29954", "text": "Environmental degradation, economic growth and energy consumption: evidence of the environmental Kuznets curve in Malaysia\u201d This paper tests for the short and long-run relationship between economic growth, carbon dioxide (CO2) emissions and energy consumption, using the Environmental Kuznets Curve (EKC) by employing both the aggregated and disaggregated energy consumption data in Malaysia for the period 1980\u20132009. The Autoregressive Distributed Lag (ARDL) methodology and Johansen\u2013Juselius maximum likelihood approach were used to test the cointegration relationship; and the Granger causality test, based on the vector error correction model (VECM), to test for causality. The study does not support an inverted U-shaped relationship (EKC) when aggregated energy consumption data was used. When data was disaggregated based on different energy sources such as oil, coal, gas and electricity, the study does show evidences of the EKC hypothesis. The long-run Granger causality test shows that there is bi-directional causality between economic growth and CO2 emissions, with coal, gas, electricity and oil consumption. This suggests that decreasing energy consumption such as coal, gas, electricity and oil appears to be an effective way to control CO2 emissions but simultaneously will hinder economic growth. Thus suitable policies related to the efficient consumption of energy resources and consumption of renewable sources are required." }, { "instance_id": "R30476xR29733", "comparison_id": "R30476", "paper_id": "R29733", "text": "Do economic, financial and institutional developments matter for environmental degradation? Evidence from transitional economies Several studies have examined the relationship between environmental degradation and economic growth. However, most of them did not take into account financial developments and institutional quality. 
Moreover, Stern (2004) noted that there are important econometric weaknesses in the earlier studies, such as endogeneity, heteroscedasticity, omitted variables, etc. The purpose of this paper is to fill this gap in the literature by investigating the linkage between not only economic development and environmental quality but also financial development and institutional quality. We employ the standard reduced-form modelling approach to control for country-specific unobserved heterogeneity and GMM estimation to control for endogeneity. Our study considers 24 transition economies and panel data for 1993-2004. Our results support the EKC hypothesis while confirming the importance of both institutional quality and financial development for environmental performance. We also found that financial liberalization may be harmful for environmental quality if it is not accomplished in a strong institutional framework." }, { "instance_id": "R30476xR30393", "comparison_id": "R30476", "paper_id": "R30393", "text": "Modelling the CO2 emissions and economic growth in Croatia: Is there any environmental Kuznets curve? This paper investigates the existence of an environmental Kuznets curve in Croatia for the period 1992Q1-2011Q1. To fulfil the goals of the paper, the Autoregressive Distributed Lag (ARDL) and VECM methods have been applied. Results show the existence of an inverted U-shaped relation between CO2 emissions and economic growth in the long run, that is, the validity of the EKC. Granger causality based on the VECM approach shows bi-directional causality between CO2 emissions and economic growth in the short run and uni-directional causality from economic growth to CO2 emissions in the long run. DOLS and FMOLS results confirm the robustness of the long-run results. Variance decomposition and impulse response also show similar results. The beauty of the paper is the consistency of results from different techniques."
}, { "instance_id": "R30476xR30371", "comparison_id": "R30476", "paper_id": "R30371", "text": "Compelling evidence of an environmental Kuznets curve in the United Kingdom The objective of this paper is to investigate the relationship between per capita emissions (CO2 and SO2) and economic growth (per capita GDP) in the UK using a long span of data. This paper examines the existence of a non-linear relationship between emissions and economic growth using methods that do not restrict the relationship to be any particular shape. The methodology employs instrumental variables in the place of per capita GDP to deal with potential concerns about errors in variables and endogeneity. The empirical results provide strong support for the environmental Kuznets curve, with estimated turning points in 1966 and 1967 for CO2 and SO2, respectively. These turning points correspond roughly with the introduction of the Clean Air Act in the UK as well as the reduction in the use of coal as an energy source; and together, they provide a snapshot of the forces driving the turning points. The paper continues by further investigating the temporal behavior of the inverted U-shaped relationship. The findings indicate that if emissions and per capita GDP deviate from their long-run relationship, emissions do the \u201cheavy lifting\u201d to restore the system to equilibrium. This result is intuitively pleasing because mitigation is directly affected by legislation as opposed to declining economic growth."
}, { "instance_id": "R30476xR30260", "comparison_id": "R30476", "paper_id": "R30260", "text": "The environmental Kuznets curve at different levels of economic development: a counterfactual quantile regression analysis for CO2emissions This paper applies the quantile fixed effects technique in exploring the CO2 environmental Kuznets curve within two groups of economic development (OECD and non-OECD countries) and six geographical regions \u2013 West, East Europe, Latin America, East Asia, West Asia and Africa. A comparison of the findings resulting from the use of this technique with those of conventional fixed effects method reveals that the latter may depict a flawed summary of the prevailing income\u2013emissions nexus depending on the conditional quantile examined. The paper also extends the Machado and Mata decomposition method to the Kuznets curve framework to explore the most important explanations for CO2 emissions gap between OECD and non-OECD countries. We find a statistically significant OECD--non-OECD emissions gap and the decomposition reveals that there are non-income related factors working against the non-OECD group's greening. We tentatively conclude that deliberate and systematic mitigation of current CO2 emissions in the non-OECD group is required." }, { "instance_id": "R30476xR29509", "comparison_id": "R30476", "paper_id": "R29509", "text": "Assessing income, population, and technology impacts on CO2 emissions in Canada: Where's the EKC? Abstract This study investigates the macroeconomic forces underlying carbon dioxide (CO2) emissions from fossil fuel use in Canada. In keeping with the relevant literature on environmental degradation, three forces are expected to influence CO2 emissions: gross domestic product per capita (GDP/capita), population and technological change. 
While previous work has employed reduced-form models that allow for non-linear relationships between CO2 and GDP/capita, it has been common practice to assume linear relationships between CO2 and the latter two variables. This study tests a more flexible model using a five-region panel data set in Canada over the period 1970\u20132000. Findings indicate that GDP/capita is unrelated to CO2, that an inverted U-shaped relationship exists with population, and that a U-shaped relationship exists with technology. Thus, technological and population changes are supported over the commonly hypothesized environmental Kuznets curve (an inverted U-shaped relationship between GDP/capita and environmental degradation) for affecting CO2 emissions from fossil fuel use in Canada." }, { "instance_id": "R30476xR30175", "comparison_id": "R30476", "paper_id": "R30175", "text": "Emissions and trade in Southeast and East Asian countries: a panel co-integration analysis Purpose \u2013 The purpose of this paper is to analyse the implication of trade on carbon emissions in a panel of eight highly trading Southeast and East Asian countries, namely, China, Indonesia, South Korea, Malaysia, Hong Kong, The Philippines, Singapore and Thailand. Design/methodology/approach \u2013 The analysis relies on the standard quadratic environmental Kuznets curve (EKC) extended to include energy consumption and international trade. A battery of panel unit root and co-integration tests is applied to establish the variables\u2019 stochastic properties and their long-run relations. Then, the specified EKC is estimated using the panel dynamic ordinary least square (OLS) estimation technique. Findings \u2013 The panel co-integration statistics verifies the validity of the extended EKC for the countries under study. Estimation of the long-run EKC via the dynamic OLS estimation method reveals the environmentally degrading effects of trade in these countries, especially in ASEAN and plus South Korea and Hong Kong. 
Practical implications \u2013 These countries are heavily dependent on trade for their development processes, and as such, their impacts on CO2 emissions would be highly relevant for assessing their trade policies, along the line of the gain-from-trade hypothesis, the race-to-the-bottom hypothesis and the pollution-safe-haven hypothesis. Originality/value \u2013 The analysis adds to existing literature by focusing on the highly trading nations of Southeast and East Asian countries. The results suggest that reassessment of trade policies in these countries is much needed and it must go beyond the sole pursuit of economic development via trade." }, { "instance_id": "R30476xR30355", "comparison_id": "R30476", "paper_id": "R30355", "text": "Environmental Kuznets curve in China: new evidence from dynamic panel analysis This paper applies a panel of 28 provinces of China from 1996 to 2012 to study the impacts of economic development, energy consumption, trade openness, and urbanization on the carbon dioxide, waste water, and waste solid emissions. By estimating a dynamic panel model with the system Generalized Method of Moments (GMM) estimator and an autoregressive distributed lag (ARDL) model with alternative panel estimators, respectively, we find that the Environmental Kuznets Curve (EKC) hypothesis is well supported for all three major pollutant emissions in China across different models and estimation methods. Our study also confirms positive effects of energy consumption on various pollutant emissions. In addition, we find some evidence that trade and urbanization may deteriorate environmental quality in the long run, albeit not in the short run.
From a policy perspective, our estimation results bode well for the Chinese government's goal of capping greenhouse emissions by 2030 as outlined in the recent China-US climate accord, while containing energy consumption and the harmful effects of expanding trade and urbanization remain environmental challenges that China faces." }, { "instance_id": "R30476xR30156", "comparison_id": "R30476", "paper_id": "R30156", "text": "The renewable energy, growth and environmental Kuznets curve in Turkey: An ARDL approach This study examines the potential of renewable energy sources in reducing the impact of GHG emissions in Turkey. Using the Autoregressive Distributed Lag (ARDL) approach, the relationship between CO2 emissions, electricity generated using renewables and GDP in Turkey has been investigated during 1961\u20132010. Moreover, the validity of the Environmental Kuznets Curve (EKC) hypothesis has been tested. Model results show that the coefficient of electricity production from renewable sources (hydro power excluded) with respect to CO2 emissions is negative and significant in the long run. Although this effect is positive and statistically significant in the short run, since the ECM term is \u22120.82, it becomes negative around 1 year. This means that renewable electricity production will contribute to environmental enhancement with a one year lag. Our results also suggest a U-shaped (EKC) relationship between per capita GHGs and income. Estimations from a long-run regression show that, although the peak point of GDP per capita has been calculated to be 9920 US Dollars, this turning point was outside of the observed sample period. Therefore, GHG emissions start to decrease with an increase in per capita GDP in the following years. Model research implies the potential and the importance of renewable energy sources in controlling emissions in Turkey."
}, { "instance_id": "R30476xR30390", "comparison_id": "R30476", "paper_id": "R30390", "text": "Relationship between economic growth and environmental degradation: is there evidence of an environmental Kuznets curve for Brazil? This study investigates the relationship between CO2 emissions, economic growth, energy use and electricity production by hydroelectric sources in Brazil. To verify the environmental Kuznets curve (EKC) hypothesis we use time-series data for the period 1971-2011. The autoregressive distributed lag methodology was used to test for cointegration in the long run. Additionally, the vector error correction model Granger causality test was applied to verify the predictive value of independent variables. Empirical results find that there is a quadratic long run relationship between CO2emissions and economic growth, confirming the existence of an EKC for Brazil. Furthermore, energy use shows increasing effects on emissions, while electricity production by hydropower sources has an inverse relationship with environmental degradation. The short run model does not provide evidence for the EKC theory. The differences between the results in the long and short run models can be considered for establishing environmental policies. This suggests that special attention to both variables-energy use and the electricity production by hydroelectric sources- could be an effective way to mitigate CO2 emissions in Brazil" }, { "instance_id": "R30476xR29642", "comparison_id": "R30476", "paper_id": "R29642", "text": "Empirical study on the environmental Kuznets curve for CO2 in France: The role of nuclear energy This paper attempts to estimate the environmental Kuznets curve (EKC) in the case of France by taking the role of nuclear energy in electricity production into account. We adopt the autoregressive distributed lag (ARDL) approach to cointegration as the estimation method. 
Additionally, we examine the stability of the estimated models and investigate the Granger causality relationships between the variables in the system. The results from our estimation provide evidence supporting the EKC hypothesis, and the estimated models are shown to be stable over the sample period. The uni-directional causality running from the other variables to CO2 emissions is confirmed by the causality tests. Specifically, the uni-directional causality relationship running from nuclear energy to CO2 emissions statistically provides evidence on the important role of nuclear energy in reducing CO2 emissions." }, { "instance_id": "R30476xR29859", "comparison_id": "R30476", "paper_id": "R29859", "text": "Modelling the nonlinear relationship between CO2 emissions from oil and economic growth The purpose of this paper is to examine the relationship between carbon dioxide (CO2) emissions from oil and GDP, using panel data from 1971 to 2007 of 98 countries. Previous studies have discussed the environmental Kuznets curve (EKC) hypothesis, but little attention has been paid to the existence of a nonlinear relationship between these two variables. We argue that there exists a threshold effect between the two variables: different levels of economic growth bear different impacts on oil CO2 emissions. Our empirical results do not support the EKC hypothesis. Additionally, the results of short-term analyses of static and dynamic panel threshold estimations suggest the efficacy of a double-threshold (three-regime) model. In the low economic growth regime, economic growth negatively affects oil CO2 emissions growth; in the medium economic growth regime, however, economic growth positively impacts oil CO2 emissions growth; and in the high economic growth regime, the impact of economic growth is insignificant.
}, { "instance_id": "R30476xR29816", "comparison_id": "R30476", "paper_id": "R29816", "text": "Environmental Kuznets curve and growth source in Iran Recent empirical research has examined the relationship between certain indicators of environmental degradation and income, concluding that in some cases an inverted U-shaped relationship, which has been called an environmental Kuznets curve (EKC), exists between these variables. The source of growth explanation is important for two reasons. First, it demonstrates how the pollution consequences of growth depend on the source of growth. Therefore, the analogy drawn by some in the environmental community between the damaging effects of economic development and those of liberalized trade is, at best, incomplete. Second, the source of growth explanation demonstrates that a strong policy response to income gains is not necessary for pollution to fall with growth. The aim of this paper investigates the role of differences source of growth in environmental quality of Iran. The results show the two growth resources in Iran cause, in the early stages, CO2 emission decreases until turning point but beyond this level of income per capita, economic growth leads to environmental degradation. I find a U relationship between environmental degradation (CO2 emission) and economic growth in Iran." }, { "instance_id": "R30476xR29979", "comparison_id": "R30476", "paper_id": "R29979", "text": "Environmental Kuznets curve in Thailand: cointegration and causality analysis\u201d The study is aim to explore the existence of environmental Kuznets curve (EKC) in case of Thailand over the period of 1971-2010. 
The EKC relationship posits that as economy grows, measured by per capita income, at the initial stage energy pollu" }, { "instance_id": "R30476xR29431", "comparison_id": "R30476", "paper_id": "R29431", "text": "Determinants of CO2 emissions in a small open economy Abstract The aim of the paper is to explore the relationship between economic development and carbon dioxide (CO2) emissions for a small open and industrialized country, Austria. We test whether an Environmental Kuznets Curve relationship also holds for a single country rather than concentrating on panel or cross-section data for a set of countries. A cubic (i.e. N-shaped) relationship between GDP and CO2 emissions is found to fit the data most appropriately for the period 1960\u20131999, and a structural break is identified in the mid-seventies due to the oil price shock. Furthermore, two variables are additionally significant: import shares reflecting the well-known pollution haven hypothesis, and the share of the tertiary (service) sector of total production (GDP) accounting for structural changes in the economy. Emission projections derived from this single country specification support the widely held opinion that significant policy changes are asked for when implementing the Kyoto Protocol in order to bring about a downturn in future carbon emissions." }, { "instance_id": "R30476xR30439", "comparison_id": "R30476", "paper_id": "R30439", "text": "Environmental Kuznets Curve with Adjusted Net Savings as a Trade-Off Between Environment and Development The Environmental Kuznets Curve (EKC) hypothesises that emissions first increase at low stages of development then decrease once a certain threshold has been reached. The EKC concept is usually used with per capita Gross Domestic Product as the explanatory variable. As others, we find mixed evidence, at best, of such a pattern for CO2 emissions with respect to per capita GDP. 
We also show that the share of manufacture in GDP and governance/institutions play a significant role in the CO2 emissions\u2013income relationship. As GDP presents shortcomings in representing income, development in a broad perspective or human well-being, it is then replaced by the World Bank's Adjusted Net Savings (ANS, also known as Genuine Savings). Using the ANS as an explanatory variable, we show that the EKC is generally empirically supported for CO2 emissions. We also show that human capital and natural capital are the main drivers of the downward sloping part of the EKC." }, { "instance_id": "R30476xR29652", "comparison_id": "R30476", "paper_id": "R29652", "text": "CO2 emissions, electricity consumption and output in ASEAN This study examines the causal relationship between carbon dioxide emissions, electricity consumption and economic growth within a panel vector error correction model for five ASEAN countries over the period 1980 to 2006. The long-run estimates indicate that there is a statistically significant positive association between electricity consumption and emissions and a non-linear relationship between emissions and real output, consistent with the Environmental Kuznets Curve. The long-run estimates, however, do not indicate the direction of causality between the variables. The results from the Granger causality tests suggest that in the long-run there is unidirectional Granger causality running from electricity consumption and emissions to economic growth. The results also point to unidirectional Granger causality running from emissions to electricity consumption in the short-run." }, { "instance_id": "R30476xR30022", "comparison_id": "R30476", "paper_id": "R30022", "text": "What role of renewable and non-renewable electricity consumption and output is needed to initially mitigate CO 2 emissions in MENA region? 
This study attempts to explore the causal relationship between renewable and non-renewable electricity consumption, output and carbon dioxide (CO2) emissions for 10 Middle East and North Africa (MENA) countries over the period of 1980\u20132009. The results from panel Fully Modified Ordinary Least Squares (FMOLS) and Dynamic Ordinary Least Squares (DOLS) show that renewable and non-renewable electricity consumption add to CO2 emissions, while output (real gross domestic product (GDP) per capita) exhibits an inverted U-shaped relationship with CO2 emissions, i.e. the environmental Kuznets curve (EKC) hypothesis is validated. The short-run dynamics indicate the unidirectional causality running from renewable and non-renewable electricity consumption and output to CO2 emissions. In the long-run, there appears to be a bidirectional causality between electricity consumption (renewable and non-renewable) and CO2 emissions. The findings suggest that future reductions in CO2 emissions might be achieved at the cost of economic growth." }, { "instance_id": "R30476xR29368", "comparison_id": "R30476", "paper_id": "R29368", "text": "Economic Growth and Environmental Quality: Time-Series and Cross-Country Evidence The authors explore the relationship between economic growth and environmental quality by analyzing patterns of environmental transformation for countries at different income levels. They look at how eight indicators of environmental quality evolve in response to economic growth and policies across a large number of countries and across time.
Several conclusions are drawn: (1) income has the most consistently significant effect on all indicators of environmental quality; (2) many indicators tend to improve as countries approach middle-income levels; (3) technology seems to work in favor of improved environmental quality; (4) the econometric evidence suggests that trade, debt, and other macroeconomic policy variables seem to have little effect on the environment, although some policies can be linked to specific environmental problems; (5) the evidence shows that it is possible to \"grow out of\" some environmental problems, but there is nothing automatic about doing so \u2013 policies and investments to reduce degradation are necessary; and (6) action tends to be taken where there are generalized local costs and substantial private and social benefits." }, { "instance_id": "R30476xR29967", "comparison_id": "R30476", "paper_id": "R29967", "text": "The effects of financial development, economic growth, coal consumption and trade openness on CO2 emissions in South Africa This paper explores the effects of financial development, economic growth, coal consumption and trade openness on environmental performance using time series data over the period 1965\u20132008 in the case of South Africa. The ARDL bounds testing approach to cointegration has been used to test the long run relationship among the variables, while short run dynamics have been investigated by applying the error correction method (ECM). The unit root properties of the variables are examined by applying the Saikkonen and Lutkepohl (2002. Econometric Theory 18, 313\u2013348) structural break unit root test. Our findings confirmed a long run relationship among the variables. Results showed that a rise in economic growth increases energy emissions, while financial development reduces them. Coal consumption makes a significant contribution to deteriorating the environment in the South African economy.
Trade openness improves environmental quality by reducing the growth of energy pollutants. Our empirical results also verified the existence of the environmental Kuznets curve. This paper opens up new insights for the South African economy to sustain economic growth while protecting the environment from degradation through the efficient use of energy." }, { "instance_id": "R30476xR30436", "comparison_id": "R30476", "paper_id": "R30436", "text": "Financial stability, energy consumption and environmental quality: Evidence from South Asian economies Few studies are found on the relationship between financial instability, energy consumption and environmental quality in the energy economics literature. The current study is an endeavor to fill this gap by investigating the relationship between financial stability, economic growth, energy consumption and carbon dioxide (CO2) emissions in South Asian countries over the period 1980\u20132012 using a multivariate framework. The bounds test for cointegration and the Granger causality approach are employed for the empirical analysis. Estimated results suggest that all variables are non-stationary and cointegrated. The results show that financial stability improves environmental quality, while increases in economic growth, energy consumption and population density are detrimental to environmental quality in the long-run. The results also support the environmental Kuznets curve (EKC) hypothesis, which assumes an inverted U-shaped path between income and environmental quality. Moreover, the study found evidence of unidirectional causality running from financial stability to CO2 emissions in two countries, i.e. Pakistan and Sri Lanka. The findings of this study open up new insight for policy makers to design comprehensive financial, economic and energy supply policies to minimize the detrimental impact of environmental pollution."
}, { "instance_id": "R30476xR30179", "comparison_id": "R30476", "paper_id": "R30179", "text": "The environmental Kuznets curve, economic growth, renewable and non-renewable energy, and trade in Tunisia We use the autoregressive distributed lag (ARDL) bounds testing approach for cointegration with structural breaks and the vector error correction model (VECM) Granger causality approach in order to investigate relationships between per capita CO2 emissions, GDP, renewable and non-renewable energy consumption and international trade (exports or imports) for Tunisia during the period 1980\u20132009. We show the existence of a short-run unidirectional causality running from trade, GDP, CO2 emission and non-renewable energy to renewable energy. Our long-run estimates show that non-renewable energy and trade have a positive impact on CO2 emissions, whereas renewable energy impacts weakly and negatively CO2 emission when using the model with exports and this impact is statistically insignificant when using the model with imports. The inverted U-shaped environmental Kuznets curve (EKC) hypothesis is not supported graphically and analytically in the long-run. This means that Tunisia has not yet reached the required level of per capita GDP to get an inverted U-shaped EKC. Our main policy recommendations for Tunisia are the following: (i) to radically reform the subsidies system granted by the Tunisian government for fossil fuels consumption; (ii) to encourage the use of renewable energy and energy efficiency by reinforcing actual projects and regulatory framework; (iii) to locate ports near exporting industrial zones (or vice versa) to reduce emission of pollution caused by the transport of merchandise; (iv) to elaborate a strategy for maximizing its benefit from renewable energy technology transfer occurring when importing capital goods; (v) to encourage the creation of renewable energy projects for export to the EU with a proportion of production for national consumption." 
}, { "instance_id": "R30476xR29970", "comparison_id": "R30476", "paper_id": "R29970", "text": "The potential of renewable energy: using the environmental Kuznets curve model This study examines the potential of Renewable Energy Sources (RES) in reducing the impact of carbon emission in Malaysia and the Greenhouse Gas (GHG) emissions, which leads to global warming. Using the Environmental Kuznets Curve (EKC) hypothesis, this study analyses the impact of electricity generated using RES on the environment and trade openness for the period 1980-2009. Using the Autoregressive Distributed Lag (ARDL) approach the results show that the elasticities of electricity production from renewable sources with respect to CO2 emissions are negative and significant in both the short and long-run. This implies the potential of renewable energy in controlling CO2 emissions in both short and long-run in Malaysia. Renewable energy can ensure sustainability of electricity supply and at the same time can reduce CO2 emissions. Trade openness has a significant negative effect on CO2 emissions in the long-run. The Granger causality test based on Vector Error Correction Mode (VECM) indicates that there is an evidence of positive bi-directional Granger causality relationship between economic growth and CO2 emissions in the short and long-run suggesting that carbon emissions and economic growth are interrelated to each other. Furthermore, there is a negative long-run bi-directional Granger causality relationship between electricity production from renewable sources and CO2 emissions. The short-run Granger causality shows a negative uni-directional causality for electricity production from renewable sources to CO2 emissions. This result suggests that there is an inverted U-shaped relationship between CO2 emissions and economic growth." 
}, { "instance_id": "R30476xR29564", "comparison_id": "R30476", "paper_id": "R29564", "text": "Carbon emissions in Central and Eastern Europe: environmental Kuznets curve and implications for sustainable development This study examines the impact of various factors such as gross domestic product (GDP) per capita, energy use per capita and trade openness on carbon dioxide (CO 2 ) emission per capita in the Central and Eastern European Countries. The extended environmental Kuznets curve (EKC) was employed, utilizing the available panel data from 1980 to 2002 for Bulgaria, Hungary, Romania and Turkey. The results confirm the existence of an EKC for the region such that CO 2 emission per capita decreases over time as the per capita GDP increases. Energy use per capita is a significant factor that causes pollution in the region, indicating that the region produces environmentally unclean energy. The trade openness variable implies that globalization has not facilitated the emission level in the region. The results imply that the region needs environmentally cleaner technologies in energy production to achieve sustainable development. Copyright \u00a9 2008 John Wiley & Sons, Ltd and ERP Environment." }, { "instance_id": "R30476xR29741", "comparison_id": "R30476", "paper_id": "R29741", "text": "Economic Development and Environmental Quality in Nigeria: Is There an Environmental Kuznets Curve? This study utilizes standard- and nested-EKC models to investigate the income-environment relation for Nigeria, between 1960 and 2008. The results from the standard-EKC model provides weak evidence of an inverted-U shaped relationship with turning point (T.P) around $280.84, while the nested model presents strong evidence of an N-shaped relationship between income and emissions in Nigeria, with a T.P around $237.23. 
Tests for structural breaks caused by the 1973 oil price shocks and the 1986 Structural Adjustment are not rejected, implying that these factors have not significantly affected the income-environment relationship in Nigeria. Further, results from the rolling interdecadal analysis show that the observed relationship is stable and insensitive to the sample interval chosen. Overall, our findings imply that economic development is compatible with environmental improvements in Nigeria. However, tighter and more concentrated environmental policy regimes will be required to ensure that the relationship is maintained around the first two strands of the N-shape." }, { "instance_id": "R30476xR29374", "comparison_id": "R30476", "paper_id": "R29374", "text": "Stoking the fires? CO2 emissions and economic growth Over the past decade, concern over potential global warming has focused attention on the emission of greenhouse gases into the atmosphere, and there is an active debate concerning the desirability of reducing emissions. At the heart of this debate is the future path of both greenhouse gas emissions and economic development among the nations. We use global panel data to estimate the relationship between per capita income and carbon dioxide emissions, and then use the estimated trajectories to forecast global emissions of CO2. The analysis yields four major results. First, the evidence suggests a diminishing marginal propensity to emit (MPE) CO2 as economies develop; a result masked in analyses that rely on cross-section data alone. Second, despite the diminishing MPE, our forecasts indicate that global emissions of CO2 will continue to grow at an annual rate of 1.8 percent. Third, continued growth stems from the fact that economic and population growth will be most rapid in the lower-income nations that have the highest MPE. For this reason, there will be an inevitable tension between policies to control greenhouse gas emissions and those toward the global distribution of income.
Finally, our sensitivity analyses suggest that the pace of economic development does not dramatically alter the future annual or cumulative flow of CO2 emissions." }, { "instance_id": "R30476xR30025", "comparison_id": "R30476", "paper_id": "R30025", "text": "CO2 emissions, output, energy consumption, and trade in Tunisia\u201d This article contributes to the literature by investigating the dynamic relationship between carbon dioxide (CO2) emissions, output (GDP), energy consumption, and trade using the bounds testing approach to cointegration and the ARDL methodology for Tunisia over the period 1971\u20132008. The empirical results reveal the existence of two causal long-run relationships between the variables. In the short-run, there are three unidirectional Granger causality relationships, which run from GDP, squared GDP and energy consumption to CO2 emissions. To check the stability in the parameter of the selected model, CUSUM and CUSUMSQ were used. The results also provide important policy implications." }, { "instance_id": "R30476xR30070", "comparison_id": "R30476", "paper_id": "R30070", "text": "Bounds testing approach to analysis of the environment Kuznets curve hypothesis This paper examines the long-run and the dynamic temporal relationships between economic growth, energy consumption, population density, trade openness, and carbon dioxide (CO2) emissions in Brazil, China, Egypt, Japan, Mexico, Nigeria, South Korea, and South Africa based on the environment Kuznets curve (EKC) hypothesis. We employ the ARDL Bounds test to cointegration and CUSUM and CUSUMSQ tests to ensure cointegration and parameter stability. The estimated results show that the inverted U-shaped EKC hypothesis holds in Japan and South Korea. In the other six countries, the long-run relationship between economic growth and CO2 emissions follows an N-shaped trajectory and the estimated turning points are much higher than the sample mean. 
In addition, the results indicate that energy consumption Granger-causes both CO2 emissions and economic growth in all the countries. Our results are consistent with previous studies that show that there is no unique relationship between energy consumption, population density, economic growth, trade openness, and the environment across countries." }, { "instance_id": "R30476xR30090", "comparison_id": "R30476", "paper_id": "R30090", "text": "The long-run and causal analysis of energy, growth, openness and financial development on carbon emissions in Turkey The aim of this paper is to examine the causal relationship between financial development, trade, economic growth, energy consumption and carbon emissions in Turkey for the 1960\u20132007 period. The bounds F\u2010test for cointegration test yields evidence of a long-run relationship between per capita carbon emissions, per capita energy consumption, per capita real income, the square of per capita real income, openness and financial development. The results show that an increase in foreign trade to GDP ratio results an increase in per capita carbon emissions and financial development variable has no significant effect on per capita carbon emissions in the long- run. These results also support the validity of EKC hypothesis in the Turkish economy. It means that the level of CO2 emissions initially increases with income, until it reaches its stabilization point, then it declines in Turkey. In addition, the paper explores causal relationship between the variables by using error-correction based Granger causality models." 
}, { "instance_id": "R30476xR30463", "comparison_id": "R30476", "paper_id": "R30463", "text": "Exploring the relationship between energy usage segregation and environmental degradation in N-11 countries Numerous studies regarding the economic growth-environmental pollution link have struggled to determine the effects of various forms of energy consumption on environmental degradation, particularly in the context of emerging economies. This study examines the environmental Kuznets curve (EKC) for CO2 emissions in N-11 countries during 1990-2014 by segregating three forms of energy consumption (renewable, biomass and non-renewable). Urbanization and trade openness are additional explanatory variables that are used in the empirical framework. Using the Generalized Moments Method (GMM), the empirical evidence confirms the presence of an N-shaped relationship between economic growth and environmental degradation for N-11 countries. This study analyzed the interaction effects among trade openness, biomass consumption and economic growth; these interactions had a negative impact on CO2 emissions levels of N-11 countries. Suitable policy recommendations have been provided based on the detailed results." }, { "instance_id": "R30476xR30159", "comparison_id": "R30476", "paper_id": "R30159", "text": "Investigating the impacts of energy consumption, real GDP, tourism and trade on CO 2 emissions by accounting for cross-sectional dependence: a panel study of OECD countries The objective of this study is to analyse the long-run dynamic relationship of carbon dioxide emissions, real gross domestic product (GDP), the square of real GDP, energy consumption, trade and tourism under an Environmental Kuznets Curve (EKC) model for the Organization for Economic Co-operation and Development (OECD) member countries. 
Since we find the presence of cross-sectional dependence within the panel time-series data, we apply second-generation unit root, cointegration and causality tests that can deal with cross-sectional dependence problems. The cross-sectionally augmented Dickey-Fuller (CADF) and the cross-sectionally augmented Im-Pesaran-Shin (CIPS) unit root tests indicate that the analysed variables become stationary at their first differences. The Lagrange multiplier bootstrap panel cointegration test shows the existence of a long-run relationship between the analysed variables. The dynamic ordinary least squares (DOLS) estimation technique indicates that energy consumption and tourism contribute to the levels of gas emissions, while increases in trade lead to environmental improvements. In addition, the EKC hypothesis cannot be supported as the signs of the coefficients on GDP and GDP2 are negative and positive, respectively. Moreover, the Dumitrescu\u2013Hurlin causality tests reveal a variety of causal relationships between the analysed variables. The OECD countries are advised to invest in improving energy efficiency, to regulate the necessary environmental protection policies for the tourism sector in particular, and to promote trading activities through several types of incentives." }, { "instance_id": "R30476xR29774", "comparison_id": "R30476", "paper_id": "R29774", "text": "Multivariate Granger causality between CO2 emissions, energy consumption, FDI (foreign direct investment) and GDP (gross domestic product): Evidence from a panel of BRIC (Brazil, Russian Federation, India, and China) countries This paper addresses the impact of both economic growth and financial development on environmental degradation using a panel cointegration technique for the period between 1980 and 2007, except for Russia (1992\u20132007).
In long-run equilibrium, CO2 emissions appear to be energy consumption elastic and FDI inelastic, and the results seem to support the Environmental Kuznets Curve (EKC) hypothesis. The causality results indicate that there exists strong bidirectional causality between emissions and FDI and unidirectional strong causality running from output to FDI. The evidence seems to support the pollution haven and both the halo and scale effects. Therefore, in attracting FDI, developing countries should strictly examine the qualifications for foreign investment or promote environmental protection through coordinated know-how and technology transfer with foreign companies to avoid environmental damage. Additionally, there exists strong output-emissions and output-energy consumption bidirectional causality, while there is unidirectional strong causality running from energy consumption to emissions. Overall, the method of managing both energy demand and FDI and increasing both investment in the energy supply and energy efficiency to reduce CO2 emissions without compromising the country\u2019s competitiveness can be adopted by energy-dependent BRIC countries." }, { "instance_id": "R30476xR29761", "comparison_id": "R30476", "paper_id": "R29761", "text": "The impact of growth, energy and financial development on the environment in China: A cointegration analysis This article aims to investigate the impact of financial development, economic growth and energy consumption on environmental pollution in China from 1953 to 2006 using the Autoregressive Distributed Lag (ARDL) bounds testing procedure. The main objective is to examine the long run equilibrium relationship between financial development and environmental pollution. The results of the analysis reveal a negative sign for the coefficient of financial development, suggesting that financial development in China has not taken place at the expense of environmental pollution.
On the contrary, it is found that financial development has led to a decrease in environmental pollution. It is concluded that carbon emissions are mainly determined by income, energy consumption and trade openness in the long run. Moreover, the findings confirm the existence of an Environmental Kuznets Curve in the case of China." }, { "instance_id": "R30476xR29825", "comparison_id": "R30476", "paper_id": "R29825", "text": "Is there an environmental Kuznets curve for Spain? Fresh evidence from old data The information content of the environmental Kuznets curve (EKC) is subject to change over time and all the empirical modeling work that does not take into account the possible variations and instabilities may fail to explain the variations in the per-capita CO2 and per-capita income relationship. In this paper we consider the possibility that a linear cointegrated regression model with multiple structural changes would provide a better empirical description of the Spanish EKC during the period 1857\u20132007. Our methodology is based on instability tests recently proposed in Kejriwal and Perron (2008, 2010) as well as on cointegration tests developed in Arai and Kurozumi (2007) and Kejriwal (2008). Overall, the results of Kejriwal\u2013Perron tests suggest a model with two breaks estimated at 1941 and 1967 and three regimes. The coefficient estimated between per-capita CO2 and per-capita income (or long-run elasticity) in a two-break model shows a tendency to decrease over time. Therefore, even if per-capita CO2 consumption is monotonically rising in income, the \u201cincome elasticity\u201d is less than one. This implies that even if the shape of the EKC does not follow an inverted U, it shows a decreasing growth path pointing to a prospective turning point." 
}, { "instance_id": "R30476xR29549", "comparison_id": "R30476", "paper_id": "R29549", "text": "Demographic trends and energy consumption in European Union Nations, 1960\u20132025 Abstract We analyze data for fourteen foundational European Union Nations covering the period 1960\u20132000 to estimate the effects of demographic and economic factors on energy consumption. We find that population size and age structure have clear effects on energy consumption. Economic development and urbanization also contribute substantially to changes in energy consumption. We use the resultant model to project energy consumption for the year 2025 based on demographic and economic projections to assess the implications of various demographic scenarios. The projections suggest that the expected decline of population growth in Europe will help curtail expansion in energy consumption." }, { "instance_id": "R30476xR29751", "comparison_id": "R30476", "paper_id": "R29751", "text": "An Empirical Study on the Environmental Kuznets Curve for China\u2019s Carbon Emissions: Based on Provincial Panel Data Abstract Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990\u20132007 and adopt panel unit root and co-integration testing method to study whether there is Environmental Kuznets Curve for China\u2019s carbon emissions. The research results show that: carbon emissions per capita of the eastern region and the central region of China fit into Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path." 
}, { "instance_id": "R30476xR29848", "comparison_id": "R30476", "paper_id": "R29848", "text": "CO2 emissions, energy consumption, trade and income: A comparative analysis of China and India In order to prevent the destabilisation of the Earth's biosphere, CO2 emissions must be reduced quickly and significantly. The causes of CO2 emissions by individual countries need to be apprehended in order to understand the processes required for reducing emissions around the globe. China and India are the two largest transitional countries and growing economies, but are in two entirely different categories in terms of structural changes in growth, trade and energy use. CO2 emissions from the burning of fossil fuels have significantly increased in the recent past. This paper compares China and India using the bounds testing approach to cointegration and the ARDL methodology to test the long- and short-run relationships between growth, trade, energy use and endogenously determined structural breaks. The CO2 emissions in China were influenced by per capita income, structural changes and energy consumption. A similar causal connection cannot be established for India with regard to structural changes and CO2 emissions, because India's informal economy is much larger than China's. India possesses an extraordinarily large number of micro-enterprises that are low energy consumers and not competitive enough to reach international markets. Understanding these contrasting scenarios is prerequisite to reaching an international agreement on climate change affecting these two countries." 
}, { "instance_id": "R30476xR30474", "comparison_id": "R30476", "paper_id": "R30474", "text": "CO2 emissions, renewable energy and the Environmental Kuznets Curve, a panel cointegration approach Abstract This study combines a panel cointegration analysis with a set of robustness tests to assess the short and long-run impacts of renewable energy on CO2 emissions, as well as the Kuznets Environmental Curve hypothesis for 25 selected african countries, over the period 1980-2012. The results provide no evidence of a total validation of EKC predictions. However, CO2 emissions are found to increase with income per capita. The overall estimations strongly reveal that renewable energy, with a negative effect on CO2 emissions, coupled with an increasing long-run effect, remains an efficient substitute for the conventional fossil-fuelled energy. Nonetheless, the impact of renewable energy is outweighed by primary energy consumption in both the short and long run, entailing more global synergy for outpacing the environmental challenges." }, { "instance_id": "R30476xR30292", "comparison_id": "R30476", "paper_id": "R30292", "text": "The influence of real output, renewable and non-renewable energy, trade and financial development on carbon emissions in the top renewable energy countries Due to tremendous increase in the level of carbon dioxide (CO2) emissions in the last several decades, a number of studies in the energy-growth-environment literature have attempted to identify the determinants of CO2 emissions. A major criticism related to the existing studies, we realize, is the selection of panel estimation techniques. Almost all studies use panel methods that ignore the issue of cross-sectional dependence even though countries in the panel are most likely heterogeneous and cross-sectionally dependent. In addition, the majority of existing studies use aggregate energy consumption, and thus fail to identify the impacts of energy consumption by sources on the environment. 
In order to fill the mentioned gaps in the literature, this empirical study analyzes the influence of real income, renewable energy consumption, non-renewable energy consumption, trade openness and financial development on CO2 emissions in the EKC model for the top countries listed in the Renewable Energy Country Attractiveness Index by employing heterogeneous panel estimation techniques with cross-section dependence. We find that the analyzed variables become stationary at their first-differences by using the CADF and the CIPS unit root tests, and the analyzed variables are cointegrated by employing the LM bootstrap cointegration test. By using the FMOLS and the DOLS, we also find that increases in renewable energy consumption, trade openness and financial development decrease carbon emissions while increases in non-renewable energy consumption contribute to the level of emissions, and the EKC hypothesis is supported for the top renewable energy countries." }, { "instance_id": "R30476xR29854", "comparison_id": "R30476", "paper_id": "R29854", "text": "An Empirical Analysis of the Environmental Kuznets Curve for CO2 Emissions in Indonesia: The Role of Energy Consumption and Foreign Trade This study examines the dynamic relationship among carbon dioxide (CO2) emissions, economic growth, energy consumption and foreign trade based on the environmental Kuznets curve (EKC) hypothesis in Indonesia for the period 1971\u20132007, using the Auto Regressive Distributed Lag (ARDL) methodology. The results do not support the EKC hypothesis, which assumes an inverted U-shaped relationship between income and environmental degradation. The long-run results indicate that foreign trade is the most significant variable in explaining CO2 emissions in Indonesia followed by energy consumption and economic growth. The stability of the variables in the estimated model is also examined. The result suggests that the estimated model is stable over the study period."
}, { "instance_id": "R30512xR30482", "comparison_id": "R30512", "paper_id": "R30482", "text": "BeWell+ Smartphone sensing and persuasive feedback design is enabling a new generation of wellbeing applications capable of automatically monitoring multiple aspects of physical and mental health. In this paper, we present BeWell+ the next generation of the BeWell smartphone health app, which continuously monitors user behavior along three distinct health dimensions, namely sleep, physical activity, and social interaction. BeWell promotes improved behavioral patterns via feedback rendered as an ambient display on the smartphone's wallpaper. With BeWell+, we introduce new wellbeing mechanisms to address challenges identified during the initial deployment of the BeWell app; specifically, (i) community adaptive wellbeing feedback, which automatically generalize to diverse user communities (e.g., elderly, young adults, children) by balancing the need to promote better behavior yet remains realistic to the user's goals; and, (ii) wellbeing adaptive energy allocation, which prioritizes monitoring fidelity and feedback responsiveness on specific health dimensions of wellbeing (e.g., social interaction) where the user needs most help. We evaluate the performance of these mechanisms as part of an initial deployment and user study that includes 27 people using BeWell+ over a 19 day field trial. Our findings show that not only can BeWell+ operate successfully on consumer-grade smartphones, but users understand feedback and respond by taking positive steps towards leading healthier lifestyles." }, { "instance_id": "R30512xR30500", "comparison_id": "R30512", "paper_id": "R30500", "text": "MOPET: A context-aware and user-adaptive wearable system for fitness training OBJECTIVE Cardiovascular disease, obesity, and lack of physical fitness are increasingly common and negatively affect people's health, requiring medical assistance and decreasing people's wellness and productivity. 
In the last years, researchers as well as companies have been increasingly investigating wearable devices for fitness applications with the aim of improving user's health, in terms of cardiovascular benefits, loss of weight or muscle strength. Dedicated GPS devices, accelerometers, step counters and heart rate monitors are already commercially available, but they are usually very limited in terms of user interaction and artificial intelligence capabilities. This significantly limits the training and motivation support provided by current systems, making them poorly suited for untrained people who are more interested in fitness for health rather than competitive purposes. To better train and motivate users, we propose the mobile personal trainer (MOPET) system. METHODS AND MATERIAL MOPET is a wearable system that supervises a physical fitness activity based on alternating jogging and fitness exercises in outdoor environments. By exploiting real-time data coming from sensors, knowledge elicited from a sport physiologist and a professional trainer, and a user model that is built and periodically updated through a guided autotest, MOPET can provide motivation as well as safety and health advice, adapted to the user and the context. To better interact with the user, MOPET also displays a 3D embodied agent that speaks, suggests stretching or strengthening exercises according to user's current condition, and demonstrates how to correctly perform exercises with interactive 3D animations. RESULTS AND CONCLUSION By describing MOPET, we show how context-aware and user-adaptive techniques can be applied to the fitness domain. In particular, we describe how such techniques can be exploited to train, motivate, and supervise users in a wearable personal training system for outdoor fitness activity." 
}, { "instance_id": "R30512xR30498", "comparison_id": "R30512", "paper_id": "R30498", "text": "Bringing mobile guides and fitness activities together Sports and fitness are increasingly attracting the interest of computer science researchers as well as companies. In particular, recent mobile devices with hardware graphics acceleration offer new, still unexplored possibilities. This paper investigates the use of mobile guides in fitness activities, proposing the Mobile Personal Trainer (MOPET) application. MOPET uses a GPS device to monitor user's position during her physical activity in an outdoor fitness trail. It provides navigation assistance by using a fitness trail map and giving speech directions. Moreover, MOPET provides motivation support and exercise demonstrations by using an embodied virtual trainer, called Evita. Evita shows how to correctly perform the exercises along the trail with 3D animations and incites the user. To the best of our knowledge, our project is the first to employ a mobile guide for fitness activities. The effects of MOPET on motivation, as well as its navigational and training support, have been experimentally evaluated with 12 users. Evaluation results encourage the use of mobile guides and embodied virtual trainers in outdoor fitness applications." }, { "instance_id": "R30512xR30488", "comparison_id": "R30512", "paper_id": "R30488", "text": "Harnessing Different Motivational Frames via Mobile Phones to Promote Daily Physical Activity and Reduce Sedentary Behavior in Aging Adults Mobile devices are a promising channel for delivering just-in-time guidance and support for improving key daily health behaviors. Despite an explosion of mobile phone applications aimed at physical activity and other health behaviors, few have been based on theoretically derived constructs and empirical evidence. 
Eighty adults ages 45 years and older who were insufficiently physically active, engaged in prolonged daily sitting, and were new to smartphone technology, participated in iterative design development and feasibility testing of three daily activity smartphone applications based on motivational frames drawn from behavioral science theory and evidence. An \u201canalytically\u201d framed custom application focused on personalized goal setting, self-monitoring, and active problem solving around barriers to behavior change. A \u201csocially\u201d framed custom application focused on social comparisons, norms, and support. An \u201caffectively\u201d framed custom application focused on operant conditioning principles of reinforcement scheduling and emotional transference to an avatar, whose movements and behaviors reflected the physical activity and sedentary levels of the user. To explore the applications' initial efficacy in changing regular physical activity and leisure-time sitting, behavioral changes were assessed across eight weeks in 68 participants using the CHAMPS physical activity questionnaire and the Australian sedentary behavior questionnaire. User acceptability of and satisfaction with the applications was explored via a post-intervention user survey. The results indicated that the three applications were sufficiently robust to significantly improve regular moderate-to-vigorous intensity physical activity and decrease leisure-time sitting during the 8-week behavioral adoption period. Acceptability of the applications was confirmed in the post-intervention surveys for this sample of midlife and older adults new to smartphone technology. Preliminary data exploring sustained use of the applications across a longer time period yielded promising results. The results support further systematic investigation of the efficacy of the applications for changing these key health-promoting behaviors." 
}, { "instance_id": "R30512xR30490", "comparison_id": "R30512", "paper_id": "R30490", "text": "Activity sensing in the wild Recent advances in small inexpensive sensors, low-power processing, and activity modeling have enabled applications that use on-body sensing and machine learning to infer people's activities throughout everyday life. To address the growing rate of sedentary lifestyles, we have developed a system, UbiFit Garden, which uses these technologies and a personal, mobile display to encourage physical activity. We conducted a 3-week field trial in which 12 participants used the system and report findings focusing on their experiences with the sensing and activity inference. We discuss key implications for systems that use on-body sensing and activity inference to encourage physical activity." }, { "instance_id": "R30512xR30506", "comparison_id": "R30512", "paper_id": "R30506", "text": "MPTrain We present MPTrain, a mobile phone based system that takes advantage of the influence of music in exercise performance, enabling users to more easily achieve their exercise goals. MPTrain is designed as a mobile and personal system (hardware and software) that users wear while exercising (walking, jogging or running). MPTrain's hardware includes a set of physiological sensors wirelessly connected to a mobile phone carried by the user. MPTrain's software allows the user to enter a desired exercise pattern (in terms of desired heart-rate over time) and assists the user in achieving his/her exercising goals by: (1) constantly monitoring the user's physiology (heart-rate in number of beats per minute) and movement (speed in number of steps per minute); and (2) selecting and playing music with specific features that will encourage the user to speed up, slow down or keep the pace to be on track with his/her exercise goals.We describe the hardware and software components of the MPTrain system, and present some preliminary results when using MPTrain while jogging." 
}, { "instance_id": "R30512xR30478", "comparison_id": "R30512", "paper_id": "R30478", "text": "Context awareness in a handheld exercise agent Work towards the development of a handheld health counseling agent designed to promote physical activity is described. Previous work on automated health counselors is discussed, along with the affordances of mobility and context awareness for health behavior interventions. We present a general-purpose software architecture for the rapid design and deployment of mobile health counseling agents. We also describe the results of an initial field trial in which such a mobile agent plays the role of an exercise coach designed to motivate users to walk more. Results were mixed. We found that the context awareness mechanism that was implemented for detecting walking led to greater user-agent social bonding, but less walking in study participants." }, { "instance_id": "R30512xR30480", "comparison_id": "R30512", "paper_id": "R30480", "text": "Move2Play Throughout the last decade, there has been an alarming decrease in daily physical activity among both children and adults. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight and improving health. Yet so many people have difficulty increasing and maintaining physical activity in everyday life. We have created a solution called Move2Play, which encourages a healthier lifestyle and motivates to participate in regular physical activity. We have integrated four essential parts that form the basis for long-term progress and sustainability - Activity Recommendation, Tracking, Evaluation and Motivation. In order to recognize and assess physical activity, we have developed a system for mobile phones that collects data from various sensors, such as accelerometer, GPS and GSM. We provide personalization, i.e. 
we have proposed and realized activity recommendation for an individual user to ensure that the users engage in regular exercise, as opposed to occasional outbursts of activity, which are unhealthy and even harmful. We discuss our proposed mechanisms of activity recommendation and the concept of motivation, which represent a key element for any system that fights the sedentary lifestyle of the modern generation." }, { "instance_id": "R30512xR30508", "comparison_id": "R30512", "paper_id": "R30508", "text": "Mobile system to motivate teenagers' physical activity This paper reports a mobile persuasive application to motivate teenagers to start and continue being physically active. Being physically active can lead to reduced risks of having weight and cardiovascular problems; however, efforts in this direction have had variable success. Designing technology that will be engaging and motivating for teenagers requires an understanding of the factors that contribute to behavior adoption in teenagers. To understand these, we approach the design from several theoretical models: Theory of Planned Behavior, Theory of Meaning Behavior, and Personality Theory. We found that 1) Personality traits affect perceptions of physical activities and the usefulness of devices that motivate them; 2) Favored motivational phrases are universal across traits; 3) Those who tried our prototype were generally positive and stated that they would use it on their own; 4) The characteristics of games that are desired are: social or competitive, outdoor, simple to learn and with large variations." }, { "instance_id": "R30512xR30504", "comparison_id": "R30512", "paper_id": "R30504", "text": "TripleBeat We present TripleBeat, a mobile phone based system that assists runners in achieving predefined exercise goals via musical feedback and two persuasive techniques: a glanceable interface for increased personal awareness and a virtual competition. TripleBeat is based on a previous system named MPTrain.
First, we describe TripleBeat's hardware and software, emphasizing how it differs from its predecessor MPTrain. Then, we present the results of a runner study with 10 runners. The study compared the runners' efficacy and enjoyment in achieving predefined workout goals when running with MPTrain and TripleBeat. The conclusions from the study include: (1) significantly higher efficacy and enjoyment with TripleBeat, and (2) a unanimous preference for TripleBeat over MPTrain. The glanceable interface and the virtual competition are the two main reasons for the improvements in the running experience. We believe that systems like TripleBeat will play an important role in enhancing the exercise experience and in assisting users towards more active lifestyles." }, { "instance_id": "R30579xR30553", "comparison_id": "R30579", "paper_id": "R30553", "text": "Detection and localization of sybil nodes in VANETs Sybil attacks have been regarded as a serious security threat to ad hoc networks and sensor networks. They may also impair the potential applications of VANETs (Vehicular Ad hoc Networks) by creating an illusion of traffic congestion. In this paper, we present a lightweight security scheme for detecting and localizing Sybil nodes in VANETs, based on statistical analysis of signal strength distribution. Our scheme is a distributed and localized approach, in which each vehicle on a road can perform the detection of potential Sybil vehicles nearby by verifying their claimed positions. We first introduce a basic signal-strength-based position verification scheme. However, the basic scheme proves to be inaccurate and vulnerable to spoof attacks. In order to compensate for the weaknesses of the basic scheme, we propose a technique to prevent Sybil nodes from covering up for each other. In this technique, traffic patterns and support from roadside base stations are used to our advantage. We then propose two statistical algorithms to enhance the accuracy of position verification.
The algorithms can detect potential Sybil attacks by observing the signal strength distribution of a suspect node over a period of time. The statistical nature of our algorithms significantly reduces the verification error rate. Finally, we conduct simulations to explore the feasibility of our scheme." }, { "instance_id": "R30579xR30567", "comparison_id": "R30579", "paper_id": "R30567", "text": "A distributed key management framework with cooperative message authentication in VANETs In this paper, we propose a distributed key management framework based on group signature to provision privacy in vehicular ad hoc networks (VANETs). Distributed key management is expected to facilitate the revocation of malicious vehicles, maintenance of the system, and heterogeneous security policies, compared with the centralized key management assumed by the existing group signature schemes. In our framework, each road side unit (RSU) acts as the key distributor for the group, where a new issue is that the semi-trusted RSUs may be compromised. Thus, we develop security protocols for the scheme which are able to detect compromised RSUs and their colluding malicious vehicles. Moreover, we address the issue of large computation overhead due to the group signature implementation. A practical cooperative message authentication protocol is thus proposed to alleviate the verification burden, where each vehicle just needs to verify a small number of messages. Details of possible attacks and the corresponding solutions are discussed. We further develop a medium access control (MAC) layer analytical model and carry out NS2 simulations to examine the key distribution delay and missed detection ratio of malicious messages, with the proposed key management framework being implemented over 802.11 based VANETs."
}, { "instance_id": "R30579xR30577", "comparison_id": "R30579", "paper_id": "R30577", "text": "Security Challenges for Emerging VANETs Vehicle ad-hoc networks (VANETs) are a prominent form of mobile ad-hoc networks. This paper outlines the architecture of VANETs and discusses the security and privacy challenges that need to be overcome to make such networks practically viable. It compares the various security schemes that were suggested for VANETs. It then proposes a new implementation of an identity based cryptosystem that is robust and computationally efficient." }, { "instance_id": "R30579xR30564", "comparison_id": "R30579", "paper_id": "R30564", "text": "A novel defense mechanism against sybil attacks in VANET Security is an important concern for many Vehicular Ad hoc Network (VANET) applications. One particular serious attack, known as Sybil attack, against ad hoc networks involves an attacker illegitimately claiming multiple identities. In this paper, we present a simple security scheme, based on the difference in movement patterns of Sybil nodes and normal nodes, for detecting Sybil nodes in VANET. Our approach is distributed in nature because all nodes contribute for detection of Sybil nodes in VANET and it scales well in an expanding network. In this approach, each Road Side Unit (RSU) calculates and stores different parameter values (Received Signal Strength, distance, angle) after receiving the beacon packets from nearby vehicles. The reason for choosing the angle as one of the parameters is that it will always be different for two vehicles (not moving side-by-side), even if they have same values for distance and received signal strength (RSS) with reference to a RSU. The combination of the parameters makes our detection approach highly accurate. After a significant observation period, these RSUs exchange their records and calculate the difference of the parameters. 
If some nodes have the same values for the parameters during this observation period, these nodes are classified as Sybil nodes. Our preliminary simulation results show 99% accuracy and an approximately 0.5% error rate, lower than existing techniques." }, { "instance_id": "R30579xR30570", "comparison_id": "R30579", "paper_id": "R30570", "text": "A Group Signature Based Secure and Privacy-Preserving Vehicular Communication Framework We propose a novel group signature based security framework for vehicular communications. Compared to the traditional digital signature scheme, the new scheme achieves authenticity, data integrity, anonymity, and accountability at the same time. Furthermore, we describe a scalable role-based access control approach for vehicular networks. Finally, we present a probabilistic signature verification scheme that can efficiently detect tampered messages or messages from an unauthorized node." }, { "instance_id": "R30579xR30558", "comparison_id": "R30579", "paper_id": "R30558", "text": "Privacy-Preserving Detection of Sybil Attacks in Vehicular Ad Hoc Networks Vehicular ad hoc networks (VANETs) are being advocated for traffic control, accident avoidance, and a variety of other applications. Security is an important concern in VANETs because a malicious user may deliberately mislead other vehicles and vehicular agencies. One type of malicious behavior is called a Sybil attack, wherein a malicious vehicle pretends to be multiple other vehicles. Reported data from a Sybil attacker will appear to arrive from a large number of distinct vehicles, and hence will be credible. This paper proposes a lightweight and scalable framework to detect Sybil attacks. Importantly, the proposed scheme does not require any vehicle in the network to disclose its identity; hence privacy is preserved at all times. Simulation results demonstrate the efficacy of our protocol." 
}, { "instance_id": "R30579xR30551", "comparison_id": "R30579", "paper_id": "R30551", "text": "Outlier detection in ad hoc networks using dempster-shafer theory Mobile Ad-hoc NETworks (MANETs) are known to be vulnerable to a variety of attacks due to lack of central authority or fixed network infrastructure. Many security schemes have been proposed to identify misbehaving nodes. Most of these security schemes rely on either a predefined threshold, or a set of well-defined training data to build up the detection mechanism before effectively identifying the malicious peers. However, it is generally difficult to set appropriate thresholds, and collecting training datasets representative of an attack ahead of time is also problematic. We observe that the malicious peers generally demonstrate behavioral patterns different from all the other normal peers, and argue that outlier detection techniques can be used to detect malicious peers in ad hoc networks. A problem with this approach is combining evidence from potentially untrustworthy peers to detect the outliers. In this paper, an outlier detection algorithm is proposed that applies the Dempster-Shafer theory to combine observation results from multiple nodes because it can appropriately reflect uncertainty as well as unreliability of the observations. The simulation results show that the proposed scheme is highly resilient to attackers and it can converge stably to a common outlier view amongst distributed nodes with a limited communication overhead." }, { "instance_id": "R30579xR30556", "comparison_id": "R30579", "paper_id": "R30556", "text": "P2DAP-sybil attacks detection in vehicular ad hoc networks Vehicular ad hoc networks (VANETs) are being increasingly advocated for traffic control, accident avoidance, and management of parking lots and public areas. Security and privacy are two major concerns in VANETs. 
Unfortunately, in VANETs, most privacy-preserving schemes are vulnerable to Sybil attacks, whereby a malicious user can pretend to be multiple (other) vehicles. In this paper, we present a lightweight and scalable protocol to detect Sybil attacks. In this protocol, a malicious user pretending to be multiple (other) vehicles can be detected in a distributed manner through passive overhearing by a set of fixed nodes called road-side boxes (RSBs). The detection of Sybil attacks in this manner does not require any vehicle in the network to disclose its identity; hence privacy is preserved at all times. Simulation results are presented for a realistic test case to highlight the overhead for a centralized authority such as the DMV, the false alarm rate, and the detection latency. The results also quantify the inherent trade-off between security, i.e., the detection of Sybil attacks and detection latency, and the privacy provided to the vehicles in the network. From the results, we see that our scheme is able to detect Sybil attacks at low overhead and delay, while preserving the privacy of vehicles." }, { "instance_id": "R30646xR30632", "comparison_id": "R30646", "paper_id": "R30632", "text": "Automatic eye detection using intensity filtering and K-means clustering This paper proposes a novel eye detection method, which can locate the accurate positions of the eyes from frontal face images. The proposed method is robust to pose changes, different facial expressions and illumination variations. Initially, it utilizes image enhancement, Gabor transformation and cluster analysis to extract eye windows. It then localizes the pupil centers by applying two neighborhood operators within the eye windows. Experiments with the color FERET and the LFW (Labeled Face in the Wild) datasets (including a total of 3587 images) are used to evaluate this method. The experimental results demonstrate the consistent robustness and efficiency of the proposed method." 
}, { "instance_id": "R30646xR30604", "comparison_id": "R30646", "paper_id": "R30604", "text": "Robust Precise Eye Location by Adaboost and SVM Techniques This paper presents a novel approach for eye detection using a hierarchical cascade classifier based on the Adaboost statistical learning method combined with an SVM (Support Vector Machine) post-classifier. In the first stage, a face detector is used to locate the face in the whole image. After finding the face, an eye detector is used to detect the possible eye candidates within the face areas. Finally, the precise eye positions are decided by the eye-pair SVM classifiers, which use geometrical and relative position information of the eye pair and the face. Experimental results show that this method can effectively cope with various image conditions and achieve better location performance on diverse test sets than some newly proposed methods." }, { "instance_id": "R30646xR30594", "comparison_id": "R30646", "paper_id": "R30594", "text": "Eye Localization based on Multi-Scale Gabor Feature Vector Model Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported thus far still need to be improved in precision and computational time for successful applications. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first tries to locate eyes in the downscaled face image by utilizing Gabor Jet similarity between the Gabor feature vector at initial eye coordinates and the eye model bunch of the corresponding scale. The proposed method finally locates eyes in the original input face image by processing each scaled face image recursively in the same way, using the eye coordinates localized in the downscaled image as the initial eye coordinates. 
Experiments verify that our proposed method improves the precision rate without causing much computational overhead compared with other eye localization methods reported in previous research." }, { "instance_id": "R30646xR30629", "comparison_id": "R30646", "paper_id": "R30629", "text": "Enhanced Pictorial Structures for precise eye localization under uncontrolled conditions In this paper, we present an enhanced pictorial structure (PS) model for precise eye localization, a fundamental problem involved in many face processing tasks. PS is a computationally efficient framework for part-based object modelling. For face images taken under uncontrolled conditions, however, the traditional PS model is not flexible enough to handle the complicated appearance and structural variations. To extend PS, we 1) propose a discriminative PS model for more accurate part localization when appearance changes seriously, 2) introduce a series of global constraints to improve the robustness against scale, rotation and translation, and 3) adopt a heuristic prediction method to address the difficulty of eye localization with partial occlusion. Experimental results on the challenging LFW (Labeled Face in the Wild) database show that our model can locate eyes accurately and efficiently under a broad range of uncontrolled variations involving poses, expressions, lightings, camera qualities, occlusions, etc." }, { "instance_id": "R30646xR30634", "comparison_id": "R30646", "paper_id": "R30634", "text": "For your eyes only In this paper, we take a look at an enhanced approach for eye detection under difficult acquisition circumstances such as low light, distance, pose variation, and blur. We present a novel correlation filter based eye detection pipeline that is specifically designed to reduce face alignment errors, thereby increasing eye localization accuracy and ultimately face recognition accuracy. 
The accuracy of our eye detector is validated using data derived from the Labeled Faces in the Wild (LFW) and the Face Detection on Hard Datasets Competition 2011 (FDHD) sets. The results on the LFW dataset also show that the proposed algorithm exhibits enhanced performance, compared to another correlation filter based detector, and that a considerable increase in face recognition accuracy may be achieved by focusing more effort on the eye localization stage of the face recognition process. Our results on the FDHD dataset show that our eye detector exhibits superior performance, compared to 11 different state-of-the-art algorithms, on the entire set of difficult data without any per set modifications to our detection or preprocessing algorithms. The immediate application of eye detection is automatic face recognition, though many good applications exist in other areas, including medical research, training simulators, communication systems for the disabled, and automotive engineering." }, { "instance_id": "R30646xR30644", "comparison_id": "R30646", "paper_id": "R30644", "text": "Automatic eye detection and its validation The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions." 
}, { "instance_id": "R30646xR30596", "comparison_id": "R30646", "paper_id": "R30596", "text": "Combining Face and Eye Detectors in a High-Performance Face-Detection System A combined face and eye detector system based on multiresolution local ternary patterns and local phase quantization descriptors can achieve noticeable performance improvements by extracting features locally." }, { "instance_id": "R30646xR30611", "comparison_id": "R30646", "paper_id": "R30611", "text": "Robust Facial Features Localization on Rotation Arbitrary Multi-View face in Complex Background Focused on facial feature localization on multi-view faces arbitrarily rotated in plane, a novel detection algorithm based on an improved SVM is proposed. First, the face is located by the rotation invariant multi-view (RIMV) face detector and its pose in plane is corrected by rotation. After the search ranges of the facial features are determined, a crossing detection method, which uses the brow-eye and nose-mouth features and improved SVM detectors trained on large-scale multi-view facial feature examples, is adopted to find the candidate eye, nose and mouth regions. Based on the fact that a window region with a higher value of the SVM discriminant function is relatively closer to the object, and that the same object tends to be repeatedly detected by nearby windows, the candidate eye, nose and mouth regions are filtered and merged to refine their location on the multi-view face. Experiments show that the algorithm has very good accuracy and robustness for facial feature localization with expression and arbitrary face pose in complex backgrounds." }, { "instance_id": "R30646xR30588", "comparison_id": "R30646", "paper_id": "R30588", "text": "Eye localization in low and standard definition content with application to face matching In this paper we address the problem of eye localization for the purpose of face matching in low and standard definition image and video content. 
In addition to an explorative study that aimed at discovering the effect of eye localization accuracy on face matching performance, we also present a probabilistic eye localization method based on well-known multi-scale local binary patterns (LBPs). These patterns provide a simple but powerful spatial description of texture, and are robust to the noise typical to low and standard definition content. The extensive evaluation involving multiple eye localizers and face matchers showed that the shape of the eye localizer error distribution has a big impact on face matching performance. Conditioned by the error distribution shape and the minimum required eye localization accuracy, eye localization can boost the performance of naive face matchers and allow for more efficient face matching without degrading its performance. The evaluation also showed that our proposed method has superior accuracy with respect to the state-of-the-art on eye localization, and that it fulfills the criteria for improving the face matching performance and efficiency mentioned above." }, { "instance_id": "R30646xR30620", "comparison_id": "R30646", "paper_id": "R30620", "text": "Eye localization through multiscale sparse dictionaries This paper presents a new eye localization method via Multiscale Sparse Dictionaries (MSD). We built a pyramid of dictionaries that models context information at multiple scales. Eye locations are estimated at each scale by fitting the image through sparse coefficients of the dictionary. By using context information, our method is robust to various eye appearances. The method also works efficiently since it avoids sliding a search window in the image during localization. The experiments in BioID database prove the effectiveness of our method." 
}, { "instance_id": "R30646xR30617", "comparison_id": "R30646", "paper_id": "R30617", "text": "Accurate eye center location and tracking using isophote curvature The ubiquitous application of eye tracking is precluded by the requirement of dedicated and expensive hardware, such as infrared high definition cameras. Therefore, systems based solely on appearance (i.e. not involving active infrared illumination) are being proposed in the literature. However, although these systems are able to successfully locate eyes, their accuracy is significantly lower than that of commercial eye tracking devices. Our aim is to perform very accurate eye center location and tracking, using a simple Web cam. By means of a novel relevance mechanism, the proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve rotational invariance and to keep computational costs low. In this paper we test our approach for accurate eye location and robustness to changes in illumination and pose, using the BioID and the Yale Face B databases, respectively. We demonstrate that our system can achieve a considerable improvement in accuracy over state-of-the-art techniques." }, { "instance_id": "R30646xR30592", "comparison_id": "R30646", "paper_id": "R30592", "text": "Regression and Classification Approaches to Eye Localization in Face Images We address the task of accurately localizing the eyes in face images extracted by a face detector, an important problem to be solved because of the negative effect of poor localization on face recognition accuracy. We investigate three approaches to the task: a regression approach aiming to directly minimize errors in the predicted eye positions, a simple Bayesian model of eye and non-eye appearance, and a discriminative eye detector trained using AdaBoost. By using identical training and test data for each method we are able to perform an unbiased comparison. 
We show that, perhaps surprisingly, the simple Bayesian approach performs best on databases including challenging images, and its performance is comparable to more complex state-of-the-art methods." }, { "instance_id": "R30698xR30669", "comparison_id": "R30698", "paper_id": "R30669", "text": "The effect of socio-economic status and ethnicity on the comparative oral health of Asian and White Caucasian 12-year-old children OBJECTIVE To investigate the oral health of 12-year-old children of different deprivation but similar fluoridation status from South Asian and White Caucasian ethnic groups. DESIGN An epidemiological survey of 12-year-old children using BASCD criteria, with additional tooth erosion, ethnic classification and postcode data. CLINICAL SETTING Examinations were completed in schools in Leicestershire and Rutland, England, UK. PARTICIPANTS A random sample of 1,753 12-year-old children from all schools in the study area. MAIN OUTCOME MEASURES Caries experience was measured using the DMFT index diagnosed at the caries-into-dentine (D3) threshold, and tooth erosion using the index employed in the Children's Dental Health UK study reported in 1993. RESULTS The overall prevalence of caries was greater in White than Asian children, but varied at different levels of deprivation and amongst different Asian religious groups. There was a significant positive association between caries and deprivation for White children, but the reverse was true for non-Muslim Asians. White Low Deprivation children had significantly less tooth erosion, but erosion experience increased with decreasing deprivation in non-Muslim Asians. CONCLUSIONS Oral health is associated with ethnicity and linked to deprivation on an ethnic basis. The intra-Asian dental health disadvantage found in the primary dentition of Muslim children is perpetuated into the permanent dentition." 
}, { "instance_id": "R30698xR30658", "comparison_id": "R30698", "paper_id": "R30658", "text": "Dental erosion in 12-year-old schoolchildren: A cross-sectional study in Southern Brazil OBJECTIVE The aim of this study was to assess the prevalence and severity of dental erosion among 12-year-old schoolchildren in Joa\u00e7aba, southern Brazil, and to compare prevalence between boys and girls, and between public and private school students. METHODS A cross-sectional study was carried out involving all of the municipality's 499 12-year-old schoolchildren. The dental erosion index proposed by O'Sullivan was used for the four maxillary incisors. Data analysis included descriptive statistics, location, distribution, and extension of affected area and severity of dental erosion. RESULTS The prevalence of dental erosion was 13.0% (95% confidence interval = 9.0-17.0). There was no statistically significant difference in prevalence between boys and girls, but prevalence was higher in private schools (21.1%) than in public schools (9.7%) (P < 0.001). Labial surfaces were less often affected than palatal surfaces. Enamel loss was the most prevalent type of dental erosion (4.86 of 100 incisors). Sixty-three per cent of affected teeth showed more than half of their surface affected. CONCLUSION The prevalence of dental erosion in 12-year-old schoolchildren living in a small city in southern Brazil appears to be lower than that seen in most epidemiological studies carried out in different parts of the world. Further longitudinal studies should be conducted in Brazil in order to measure the incidence of dental erosion and its impact on children's quality of life." 
}, { "instance_id": "R30698xR30673", "comparison_id": "R30698", "paper_id": "R30673", "text": "Smile aesthetics and malocclusion in UK teenage magazines assessed using the Index of Orthodontic Treatment Need (IOTN) Objective There is a significant demand for orthodontic treatment within the UK from adolescent girls, a group known to be influenced by the media portrayal of body form and body image, which may extend to the presentation of malocclusions. This study examined the portrayal of malocclusion in a media type that targets teenage girls under 16 years of age. Materials and methods A representative selection of 1 month's magazines targeting this group were investigated, and the frequency and severity of malocclusions displayed were assessed. Two calibrated examiners viewed all the smiles (on two occasions) using a modification of Index of Orthodontic Treatment Need (IOTN) and assigned an Aesthetic Component Score to each smile. Results It was found that the aesthetic score is low (less than 7) for the majority of models (92.8%) indicating no need or a borderline need for treatment. Only 7.2% of models exhibited a definite need for treatment. Conclusion It appears that the portrayal of malocclusion in teenage magazines does not reflect the general treatment need of the adolescent population." }, { "instance_id": "R30698xR30693", "comparison_id": "R30698", "paper_id": "R30693", "text": "The oral health of children with clefts of the lip, palate, or both Objective: The purpose of this study was to assess the prevalence of dental caries, developmental defects of enamel, and related factors in children with clefts. Design: This cross-sectional prevalence study used standard dental indices for assessment. Setting: Children underwent a dental examination under standard conditions of seating and lighting in the outpatient department of a dental hospital as part of an ongoing audit to monitor clinical outcomes. 
Participants: Ninety-one children aged 4, 8, and 12 years were included in the study. Outcome Measurements: Dental caries were assessed by use of the decayed, missing, and filled index for primary teeth (dmft) and the Decayed, Missing, and Filled index for permanent teeth (DMFT), according to the criteria used in the national survey of children's dental health in the United Kingdom (O'Brien, 1994). Developmental defects were assessed using the modified Developmental Defects of Enamel Index (Clarkson and O'Mullane, 1989). Dental erosion was assessed using the criteria derived for the national survey of children's dental health (O'Brien, 1994). Results: Caries prevalence increased with age; 63% of patients at 4 years and 34% at 12 years were caries free. The mean dmft for the 4-year-olds was 1.3 with a mean DMFT for the 12-year-olds of 1.8. All the 4-year-olds had evidence of erosion of enamel in the primary teeth (incisors and first molars) and 56% of the 12-year-olds had erosion of permanent teeth (incisors and first permanent molars). Developmental defects of enamel became more prevalent with age, with at least one opacity in 56% of 4-year-olds and 100% of 12-year-olds. Hypoplasia was not found in the primary dentition but affected permanent teeth in 38% of 8-year-olds and 23% of the 12-year-olds. Conclusion: This study has shown that dental disease is prevalent in these patients. These assessments not only provide a baseline on oral health parameters in young people with clefts but underline the need for a more aggressive approach to prevention of oral disease to optimize clinical outcome." 
}, { "instance_id": "R30698xR30651", "comparison_id": "R30698", "paper_id": "R30651", "text": "Prevalence of erosive tooth wear and associated risk factors in 2-7-year-old German kindergarten children OBJECTIVES The aims of this study were to (1) investigate prevalence and severity of erosive tooth wear among kindergarten children and (2) determine the relationship between dental erosion and dietary intake, oral hygiene behaviour, systemic diseases and salivary concentration of calcium and phosphate. MATERIALS AND METHODS A sample of 463 children (2-7 years old) from 21 kindergartens were examined under standardized conditions by a calibrated examiner. Dental erosion of primary and permanent teeth was recorded using a scoring system based on O'Sullivan Index [Eur J Paediatr Dent 2 (2000) 69]. Data on the rate and frequency of dietary intake, systemic diseases and oral hygiene behaviour were obtained from a questionnaire completed by the parents. Unstimulated saliva samples of 355 children were analysed for calcium and phosphate concentration by colorimetric assessment. Descriptive statistics and multiple regression analysis were applied to the data. RESULTS Prevalence of erosion amounted to 32% and increased with increasing age of the children. Dentine erosion affecting at least one tooth could be observed in 13.2% of the children. The most affected teeth were the primary maxillary first and second incisors (15.5-25%) followed by the canines (10.5-12%) and molars (1-5%). Erosions on primary mandibular teeth were as follows: incisors: 1.5-3%, canines: 5.5-6% and molars: 3.5-5%. Erosions of the primary first and second molars were mostly seen on the occlusal surfaces (75.9%) involving enamel or enamel-dentine but not the pulp. In primary first and second incisors and canines, erosive lesions were often located incisally (51.2%) or affected multiple surfaces (28.9%). None of the permanent incisors (n = 93) or first molars (n=139) showed signs of erosion. 
Dietary factors, oral hygiene behaviour, systemic diseases and salivary calcium and phosphate concentration were not associated with the presence of erosion. CONCLUSIONS Erosive tooth wear of primary teeth was frequently seen in primary dentition. As several children showed progressive erosion into dentine or exhibited severe erosion affecting many teeth, preventive and therapeutic measures are recommended." }, { "instance_id": "R30698xR30684", "comparison_id": "R30698", "paper_id": "R30684", "text": "Is there a relationship between asthma and dental erosion? A case control study OBJECTIVES The aims of this study were firstly to assess and compare the prevalence of dental erosion and dietary intake between three groups of children; children with asthma, those with significant tooth erosion but with no history of asthma, and children with no history of asthma or other medical problems. Secondly, to discover whether there was a relationship between medical history and dietary practises of these children and the levels of dental erosion. Thirdly, to measure and compare their salivary flow rates, pH and buffering capacity. METHODS The study consisted of 3 groups of children aged 11-18 years attending Birmingham Dental Hospital: 20 children with asthma requiring long-term medication, 20 children referred with dental erosion, and 20 children in the age and sex matched control group. Tooth wear was recorded using a modification of the tooth wear index (TWI) of Smith and Knight. Data on the medical and dietary history were obtained from a self-reported questionnaire supplemented by a structured interview. The salivary samples were collected under standard methods for measurements. RESULTS Fifty percent of the children in the control group had low erosion and 50% moderate erosion. However, high levels were recorded in 35% of children in the asthma group and 65% in the erosion group. There appeared to be no overall differences in diet between the groups. 
There was an association between dental erosion and the consumption of soft drinks, carbonated beverages and fresh fruits in all the three groups. More variables related to erosion were found in the erosion and asthma groups. A comparison between the three groups showed no significant differences in unstimulated and stimulated salivary flow rates, or pH and buffering capacity. CONCLUSION There were significant differences in the prevalence of erosion between the three groups, children with asthma having a higher prevalence than the control group. Although there was a relationship between the levels of erosion and some medical history and acidic dietary components, these did not explain the higher levels in asthmatic children. Further investigation is required into the factors affecting the increased prevalence of erosion in children with asthma." }, { "instance_id": "R30698xR30689", "comparison_id": "R30698", "paper_id": "R30689", "text": "Erosion, caries and rampant caries in preschool children in Jeddah, Saudi Arabia OBJECTIVES The objective of this study was to determine the prevalence of dental erosion in preschool children in Jeddah, Saudi Arabia, and to relate this to caries and rampant caries in the same children. METHODS A sample of 987 children (2-5 years) was drawn from 17 kindergartens. Clinical examinations were carried out under standardised conditions by a trained and calibrated examiner (M.Al-M.). Measurement of erosion was confined to primary maxillary incisors and used a scoring system and criteria based on those used in the UK National Survey of Child Dental Health. Caries was diagnosed using BASCD criteria. Rampant caries was defined as caries affecting the smooth surfaces of two or more maxillary incisors. RESULTS Of the 987 children, 309 (31%) had evidence of erosion. For 186 children this was confined to enamel but for 123 it involved dentine and/or pulp. Caries were diagnosed in 720 (73%) of the children and rampant caries in 336 (34%). 
The mean dmft for the 987 children was 4.80 (+/-4.87). Of the 384 children who had caries but not rampant caries, 141 (37%) had erosion, a significantly higher proportion than the 72 (27%) out of 267 who were clinically caries free (SND=2.61, P<0.01). Of the 336 with rampant caries, 96 (29%) also had evidence of erosion. CONCLUSIONS The level of erosion was similar to that seen in children of an equivalent age in the UK. Caries was a risk factor for erosion in this group of children." }, { "instance_id": "R30698xR30686", "comparison_id": "R30698", "paper_id": "R30686", "text": "Gastro-esophageal reflux disease and dental erosion in children Recurrent exposure to gastric acid as in children with bulimia and gastroesophageal reflux disease (GERD) may contribute to dental erosion. We performed a prospective study to evaluate the presence of GERD and dental erosions in children with primary and permanent dentition. Children undergoing elective endoscopy for possible GERD (n = 37) underwent evaluation of their teeth for the presence, severity, and pattern of erosion and stage of dentition: 24 patients had GERD. Dental erosions were identified in 20; all had GERD. Erosion patterns showed more involvement of the posterior teeth. Many affected patients had primary dentition." }, { "instance_id": "R30698xR30680", "comparison_id": "R30698", "paper_id": "R30680", "text": "Oral health of children with gastro-esophageal reflux disease: a controlled study BACKGROUND The aim of this study was to compare the dental health of children with gastro-esophageal reflux disease (GERD) with a healthy control group. METHODS Dental examinations were conducted for 52 children (31 boys and 21 girls) with a definitive history of GERD. For every subject enrolled in the study, a healthy control sibling without the condition was recruited. Medical histories were obtained from medical records, and dental and dietary histories were obtained from parents. 
The teeth were examined for erosion, dental caries, and enamel hypoplasia, and sampled for Streptococcus mutans. RESULTS The prevalence of erosion by teeth was found to be statistically significant between GERD patients (14 per cent) and controls (10 per cent) (p<0.05). GERD patients had erosion in more permanent teeth compared to controls (4 per cent vs 0.8 per cent, p<0.05), and more severe erosion (p<0.05). Caries experience was also higher in GERD patients compared to controls (p<0.05). Although there were more subjects with Streptococcus mutans in the GERD group compared to the control group (42 per cent vs 25 per cent), the difference was not statistically significant. CONCLUSIONS Children with GERD have more erosion and dental caries compared to healthy controls and should be targeted for increased preventive and restorative care." }, { "instance_id": "R30739xR30715", "comparison_id": "R30739", "paper_id": "R30715", "text": "Dental erosion, gastro-oesophageal reflux disease and saliva: how are they related? AIMS The purpose of this study was to assess the prevalence of tooth wear, symptoms of reflux and salivary parameters in a group of patients referred for investigation of gastro-oesophageal reflux disease (GORD) compared with a group of control subjects. MATERIALS AND METHODS Tooth wear, stimulated salivary flow rate and buffering capacity and symptoms of GORD were assessed in patients attending an Oesophageal Laboratory. Patients had manometry and 24-h pH tests, which are the gold standard for the diagnosis of GORD. Tooth wear was assessed using a modification of the Smith and Knight tooth wear index. The results were compared to those obtained from a group of controls with no symptoms of GORD. RESULTS Patients with symptoms of GORD and those subsequently diagnosed with GORD had higher total and palatal tooth wear (p<0.05). The buffering capacity of the stimulated saliva from the control subjects was greater than patients with symptoms of GORD (p<0.001). 
Patients with hoarseness had a lower salivary flow rate compared with those with no hoarseness. CONCLUSIONS Tooth wear involving dentine was more prevalent in patients complaining of symptoms of GORD and those diagnosed as having GORD following 24-h pH monitoring than controls. Patients had poorer salivary buffering capacity than control subjects. Patients complaining of hoarseness had lower salivary flow rate than controls." }, { "instance_id": "R30739xR30737", "comparison_id": "R30739", "paper_id": "R30737", "text": "Evaluation of dental erosion in patients with gastroesophageal reflux disease STATEMENT OF PROBLEM The cause of dental erosion may be difficult to establish because of its many presentations. Determination of the cause is an important aspect of diagnosis before extensive prosthodontic rehabilitation. PURPOSE This cross-sectional study evaluated the association between loss of tooth structure as a result of dental erosion and gastroesophageal reflux disease. MATERIAL AND METHODS Twenty consecutive adult dentate subjects referred to the Division of Gastroenterology for investigation of gastroesophageal tract disease were also evaluated for signs of dental erosion. All subjects underwent a dental evaluation that included a patient history to determine potential etiologic factors responsible for dental erosion. Subjects were examined clinically to quantify loss of tooth structure using a Tooth Wear Index (TWI). Endoscopic examination and 24-hour pH manometry were carried out to determine which subjects met the criteria for gastroesophageal reflux disease (GERD). Scores for maxillary versus mandibular dentition and anterior versus posterior dentition were also compared. Data were analyzed with the Kruskal-Wallis test (P =.004). RESULTS Ten subjects were diagnosed with GERD and 10 subjects had manometry scores below the level indicating GERD. 
Overall, subjects diagnosed with GERD had significantly higher TWI scores compared with control subjects (mean difference = 0.6554; P =.004). GERD subjects had higher TWI scores in all quadrants, except in the mandibular anterior region where there was no difference. CONCLUSION The results indicated that a relationship exists between loss of tooth structure, as measured by the TWI index, and the occurrence of GERD in this group of subjects." }, { "instance_id": "R30739xR30713", "comparison_id": "R30739", "paper_id": "R30713", "text": "Oral and dental health among inpatients in treatment for alcohol use disorders: a pilot study UNLABELLED Individuals undergoing treatment for alcohol use disorders exhibit increased risk for impaired oral health. We conducted a study to assess oral health and demographic characteristics of inpatients under treatment for alcohol use disorders. MATERIALS AND METHODS Thirty-four inpatients, 24 male and 10 female, with diverse ethnicity, were recruited in a rehabilitation center for alcohol use disorders in Buffalo, NY. Before undergoing oral examination, subjects completed a questionnaire on dental hygiene, associated behaviors, and demographic characteristics. Information regarding patients' oral health was collected using plaque, gingival, and decayed, missing or filled teeth (DMF) indices, and by examining soft tissue and evaluating signs of abrasion, erosion, and attrition. Statistical analysis determined prevalence and descriptive characteristics. RESULTS Alcohol intake for the population was, on average, 45.7 drinks/week, and 61.8% had smoked cigarettes within the past month. Patients were missing 15.1% of their teeth. Of teeth examined, 13.5% had dental caries. Prevalence of soft tissue abnormalities was 35.3%, prevalence of tooth erosion was 47.1%, and prevalence of moderate/severe gingival inflammation was 82.3%. 
Although study participants reported brushing at least once a day, 70.6% of subjects presented with heavy dental plaque accumulation. Most participants (85.3%) described the condition of their mouth and teeth as fair or poor. Finally, we observed a satisfactory participation rate among those who qualified for the study. CONCLUSION Oral examination showed significant levels of dental caries, gingival inflammation, soft tissue abnormalities, and tooth erosion. In addition, this study indicates that patients undergoing treatment for alcohol use disorders evidence poor oral health, and are at heightened risk for the development of periodontal disease." }, { "instance_id": "R30739xR30700", "comparison_id": "R30739", "paper_id": "R30700", "text": "Tooth surface loss in adult subjects attending a university dental clinic in Trinidad OBJECTIVES To determine the prevalence of tooth surface loss (TSL) in a sample of subjects attending a university dental clinic in Trinidad and to investigate the relationship to tooth brushing, medical history, parafunction and dietary habits. DESIGN Tooth surface loss was measured clinically by the index used in the 1998 UK, Adult Dental Health Survey. SETTING Trinidad, West Indies. PARTICIPANTS Convenience sample of adult subjects attending The University of the West Indies Dental School Polyclinic, Mount Hope. METHODS A questionnaire was administered and tooth surface loss measured clinically. MAIN OUTCOME MEASURES mild, moderate and severe tooth surface loss. RESULTS 155 subjects were examined (mean age 40.6 years) of whom 72% had some degree of TSL with the majority (52%), exhibiting mild, 16% with moderate and 4% with severe TSL. There were associations found between TSL and age (OR=3.14), reflux (OR=1.37), parafunction (OR=1.06), weekly consumption of citrus fruits (OR=1.31) and soft drinks (OR=1.78), daily consumption of alcohol (OR=1.40) and a vegetarian diet (OR=2.79). 
CONCLUSIONS Tooth surface loss in this Trinidadian population group appears to be common. Data supports an association between TSL and age, reflux, parafunction and certain dietary patterns." }, { "instance_id": "R30739xR30719", "comparison_id": "R30739", "paper_id": "R30719", "text": "Dental and periodontal lesions in patients with gastro-oesophageal reflux disease OBJECTIVE Dental erosion has been considered an extraesophageal manifestation of gastro-oesophageal reflux disease, but few reports have studied the relationship between this disease and other periodontal or dental lesions. The aim of this study was to investigate the prevalence of dental and periodontal lesions in patients with gastro-oesophageal reflux disease. PATIENTS AND METHODS A total of 253 subjects were prospectively studied between April 1998 and May 2000. Two study groups were established: 181 patients with gastro-oesophageal reflux disease and 72 healthy volunteers. Clinical assessment, including body mass index and consumption of tobacco and alcohol, was performed in all subjects, as well as a dental and periodontal examination performed by a dentist physician, blind as to the diagnosis of subjects. Parameters evaluated were: (a) presence and number of dental erosion, location and severity, according to the Eccles and Jenkins index [Prosthet Dent 1979;42:649-53], modified by Hattab [Int J Prosthes 2000;13:101-7]; (b) assessment of dental condition by means of the CAO index; and (c) periodontal status analysed by the plaque index, the haemorrhage index, and gingival recessions. RESULTS Clinical parameters were similar in both groups (p > 0.05). Age was statistically associated with the CAO index, presence of dental erosion, and gingival recession (p < 0.001, Student's t-test). Compared with the control group, the percentage of dental erosion was significantly higher in the gastro-oesophageal reflux disease group (12.5 vs.
47.5%, p < 0.001, chi2-test), as was the number and severity of dental erosions (p < 0.001, Student's t-test). Location of dental erosion was significantly different between groups. Age was not statistically related to either the amount or severity of dental erosion. CAO and periodontal indices were similarly distributed between groups. CONCLUSIONS Dental erosion may even be considered as an extraesophageal manifestation of gastro-oesophageal reflux disease. The fact that the prevalence of caries and periodontal lesions is similar in patients with gastro-oesophageal reflux disease and in healthy volunteers suggests a lack of relationship with gastro-oesophageal reflux disease." }, { "instance_id": "R30739xR30708", "comparison_id": "R30739", "paper_id": "R30708", "text": "Patterns of tooth surface loss among winemakers There are a few documented case studies on the adverse effect of wine on both dental hard and soft tissues. Professional wine tasting could present some degree of increased risk to dental erosion. Alcoholic beverages with a low pH may cause erosion, particularly if the attack is of long duration, and repeated over time. The purpose of this study was to compare the prevalence and severity of tooth surface loss between winemakers (exposed) and their spouses (non-exposed). Utilising a cross-sectional, comparative study design, a clinical examination was conducted to assess caries status; the presence and severity of tooth surface loss; staining (presence or absence); fluorosis and prosthetic status. The salivary flow rate, buffering capacity and pH were also measured. Thirty-six persons, twenty-one winemakers and fifteen of their spouses participated in the study. It was possible to show that there was a difference in terms of the prevalence and severity of tooth surface loss between the teeth of winemakers and those who are not winemakers. 
The occurrence of tooth surface loss amongst winemakers was highly likely due to frequent exposure of their teeth to wine. Frequent exposure of the teeth to wine, as occurs among wine tasters, is deleterious to enamel, and constitutes an occupational hazard. Erosion is an occupational risk for wine tasters." }, { "instance_id": "R30739xR30723", "comparison_id": "R30739", "paper_id": "R30723", "text": "Associated factors of tooth wear in southern Thailand The purpose of this study was to evaluate the possible risk factors connected with tooth wear. Using the Tooth Wear Index (TWI) and the charting of pre-disposing factors tooth surface loss was recorded in 506 patients, of the Dental Hospital, Prince of Songkla University. We found that age, sex, number of tooth loss, frequency of alcohol, sour fruit and carbonate intake were significant risk factors. Regarding the tooth position, the first molar showed the greatest degree of wear, while the canine and premolar showed the least, respectively. The occlusal surface showed the greatest wear and the cervical, lingual and buccal surfaces showed the least, respectively." }, { "instance_id": "R30739xR30676", "comparison_id": "R30739", "paper_id": "R30676", "text": "Comparison of factors potentially related to the occurrence of dental erosion in high- and low-erosion groups Soft drink intake, method of drinking, pH variations, plaque topography, and various salivary, microbial and clinical factors were compared in Saudi men with high (n = 10, mean = 20.5 yr) and low (n = 9, mean = 20.3 yr) dental erosion. pH-measurements were carried out with a microtouch electrode at six different intraoral locations after the subjects had consumed 330 ml of regular cola-type drink in their customary manner. The results showed that higher intake of cola-type drinks was more common in the high- (253 l yr(-1)) than in the low-erosion group (140 l yr(-1)). 
High erosion was associated with a method of drinking whereby the drink was kept in the mouth for a longer period (71 s vs. 40 s). pH after drinking did not differ between the groups for any of the six measuring sites. Plaque accumulation on the palatal surfaces of maxillary anterior teeth and urea concentration in unstimulated saliva were lower in high-erosion subjects. Aside from these, there were no differences in salivary and microbial factors between the groups. First molar cuppings, buccal cervical defects, and mouth breathing were more common in the high- than in the low-erosion group. In summary, consumption of cola-type drink, method of drinking, amount of palatal plaque on anterior teeth, and salivary urea concentration are factors associated with dental erosion." }, { "instance_id": "R30739xR30728", "comparison_id": "R30739", "paper_id": "R30728", "text": "The prevalence, aetiology and clinical appearance of tooth wear: the Nigerian experience OBJECTIVE To establish the prevalence and severity of tooth wear among Nigerians and to compare the pattern and aetiology with findings of earlier studies in Western populations. DESIGN Clinical examinations for tooth wear using the tooth wear index (TWI). SETTING The Federal Republic of Nigeria. PARTICIPANTS Patients attending the Dental Hospital, Obafemi Awolowo University Teaching Hospital's Complex Ile-Ife. OUTCOME MEASURES Attrition, abrasion and erosion. RESULTS Of the 126 patients with tooth wear 81 had attrition, 20 had abrasion, 9 had erosion and 16 had attrition and abrasion combined. A total of 15,480 tooth surfaces were examined. 2,229 (14.4%) surfaces had tooth wear out of which 1,007 (6.5%) were pathologically worn down. The frequency of tooth wear increased with the age of patients. Most of the pathologically worn surfaces were just one point above maximum acceptable value. 
CONCLUSIONS The aetiological factors associated with tooth wear are not different from those encountered in Western cultures but the pattern of wear differs. Pathological tooth wear presents as an age related phenomenon and is probably more severe in Nigerians." }, { "instance_id": "R30739xR30735", "comparison_id": "R30739", "paper_id": "R30735", "text": "Patterns of tooth wear associated with methamphetamine use BACKGROUND Methamphetamine (MAP) abuse is a significant worldwide problem. This prospective study was conducted to determine if MAP users had distinct patterns of tooth wear. METHODS Methamphetamine users were identified and interviewed about their duration and preferred route of MAP use. Study participants were interviewed in the emergency department of a large urban university hospital serving a geographic area with a high rate of illicit MAP production and consumption. Tooth wear was documented for each study participant and scored using a previously validated index and demographic information was obtained using a questionnaire. RESULTS Forty-three MAP patients were interviewed. Preferred route of administration was injection (37%) followed by snorting (33%). Patients who preferentially snorted MAP had significantly higher tooth wear in the anterior maxillary teeth than patients who injected, smoked, or ingested MAP (P = 0.005). CONCLUSION Patients who use MAP have distinct patterns of wear based on route of administration. This difference may be explained anatomically." }, { "instance_id": "R30739xR30702", "comparison_id": "R30739", "paper_id": "R30702", "text": "Tooth wear among psychiatric patients: prevalence, distribution, and associated factors PURPOSE The purpose of this study was to evaluate the prevalence, distribution, and associated factors of tooth wear among psychiatric patients. MATERIALS AND METHODS Tooth wear was evaluated using the tooth wear index with scores ranging from 0 to 4.
The presence of predisposing factors was recorded in 143 psychiatric patients attending the outpatient clinic at the Prince Rashed Hospital in northern Jordan. RESULTS The prevalence of a tooth wear score of 3 in at least one tooth was 90.9%. Patients in the age group 16 to 25 had the lowest prevalence (78.6%) of tooth wear. Increasing age was found to be a significant risk factor for the prevalence of tooth wear (P < .005). The occlusal/incisal surfaces were the most affected by wear, with mandibular teeth being more affected than maxillary teeth, followed by the palatal surface of the maxillary anterior teeth and then the buccal/labial surface of the mandibular teeth. The factors found to be associated with tooth wear were age, retirement and unemployment, masseter muscle pain, depression, and anxiety. CONCLUSION Patients' psychiatric condition and prescribed medication may be considered factors that influence tooth wear." }, { "instance_id": "R30739xR30733", "comparison_id": "R30739", "paper_id": "R30733", "text": "Oral health status of workers exposed to acid fumes in phosphate and battery industries in Jordan OBJECTIVES To investigate the prevalence and nature of oral health problems among workers exposed to acid fumes in two industries in Jordan. SETTING Jordan's Phosphate Mining Company and a main private battery factory. DESIGN Comparison of general and oral health conditions between workers exposed to acid fumes and control group from the same workplace. SUBJECTS AND METHODS The sample consisted of 68 subjects from the phosphate industry (37 acid workers and 31 controls) drawn as a sample of convenience and 39 subjects from a battery factory (24 acid workers and 15 controls). Structured questionnaires on medical and dental histories were completed by interview. Clinical examinations were carried out to assess dental erosion, oral hygiene, and gingival health using the appropriate indices. 
Data were statistically analysed using Wilcoxon rank-sum test to assess the significance of differences between results attained by acid workers and control groups for the investigated parameters. RESULTS Differences in the erosion scores between acid workers in both industries and their controls were highly significant (P<0.05). In both industries, acid workers showed significantly higher oral hygiene scores, obtained by adding the debris and calculus scores, and gingival index scores than their controls (P<0.05). The single most common complaint was tooth hypersensitivity (80%) followed by dry mouth (77%) on average. CONCLUSION Exposure to acid fumes in the work place was significantly associated with dental erosion and deteriorated oral health status. Such exposure was also detrimental to general health. Findings pointed to the need of establishing appropriate educational, preventive and treatment measures coupled with efficient surveillance and environmental monitoring for detection of acid fumes in the workplace atmosphere." }, { "instance_id": "R30817xR30811", "comparison_id": "R30817", "paper_id": "R30811", "text": "Two-Stage Multiobjective Optimization for Emergency Supplies Allocation Problem under Integrated Uncertainty This paper proposes a new two-stage optimization method for emergency supplies allocation problem with multisupplier, multiaffected area, multirelief, and multivehicle. The triplet of supply, demand, and the availability of path is unknown prior to the extraordinary event and is descriptive with fuzzy random variable. Considering the fairness, timeliness, and economical efficiency, a multiobjective expected value model is built for facility location, vehicle routing, and supply allocation decisions. The goals of proposed model aim to minimize the proportion of demand nonsatisfied and response time of emergency reliefs and the total cost of the whole process. 
When the demand and the availability of path are discrete, the expected values in the objective functions are converted into their equivalent forms. When the supply amount is continuous, the equilibrium chance in the constraint is transformed to its equivalent one. To overcome the computational difficulty caused by multiple objectives, a goal programming model is formulated to obtain a compromise solution. Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method." }, { "instance_id": "R30817xR30798", "comparison_id": "R30817", "paper_id": "R30798", "text": "A humanitarian logistics model for disaster relief operation considering network failure and standard relief time: A case study on San Francisco district We propose a multi-depot location-routing model considering network failure, multiple uses of vehicles, and standard relief time. The model determines the locations of local depots and routing for last mile distribution after an earthquake. The model is extended to a two-stage stochastic program with random travel time to ascertain the locations of distribution centers. Small instances have been solved to optimality in GAMS. A variable neighborhood search algorithm is devised to solve the deterministic model. Computational results of our case study show that the unsatisfied demands can be significantly reduced at the cost of higher number of local depots and vehicles." 
}, { "instance_id": "R30817xR30767", "comparison_id": "R30817", "paper_id": "R30767", "text": "A two\u2010stage procurement model for humanitarian relief supply chains Purpose \u2013 The purpose of this paper is to discuss and to help address the need for quantitative models to support and improve procurement in the context of humanitarian relief efforts.Design/methodology/approach \u2013 This research presents a two\u2010stage stochastic decision model with recourse for procurement in humanitarian relief supply chains, and compares its effectiveness on an illustrative example with respect to a standard solution approach.Findings \u2013 Results show the ability of the new model to capture and model both the procurement process and the uncertainty inherent in a disaster relief situation, in support of more efficient and effective procurement plans.Research limitations/implications \u2013 The research focus is on sudden onset disasters and it does not differentiate between local and international suppliers. A number of extensions of the base model could be implemented, however, so as to address the specific needs of a given organization and their procurement process.Practical implications \u2013 Despi..." }, { "instance_id": "R30817xR30761", "comparison_id": "R30817", "paper_id": "R30761", "text": "Pre-positioning planning for emergency response with service quality constraints Pre-positioning of emergency supplies is a means for increasing preparedness for natural disasters. Key decisions in pre-positioning are the locations and capacities of emergency distribution centers, as well as allocations of inventories of multiple relief commodities to those distribution locations. The location and allocation decisions are complicated by uncertainty about if, or where, a natural disaster will occur. 
An earlier paper (Rawls and Turnquist 44:521\u2013534, 2010) describes a stochastic mixed integer programming formulation to minimize expected costs (including penalties for unmet demand) in such a situation. This paper extends that model with additional service quality constraints. The added constraints ensure that the probability of meeting all demand is at least \u03b1, and that the demand is met with supplies whose average shipment distance is no greater than a specific limit. A case study using hurricane threats is used to illustrate the model and how the additional constraints modify the pre-positioning strategy." }, { "instance_id": "R30817xR30759", "comparison_id": "R30817", "paper_id": "R30759", "text": "Pre-disaster investment decisions for strengthening a highway network We address a pre-disaster planning problem that seeks to strengthen a highway network whose links are subject to random failures due to a disaster. Each link may be either operational or non-functional after the disaster. The link failure probabilities are assumed to be known a priori, and investment decreases the likelihood of failure. The planning problem seeks connectivity for first responders between various origin-destination (O-D) pairs and hence focuses on uncapacitated road conditions. The decision-maker's goal is to select the links to invest in under a limited budget with the objective of maximizing the post-disaster connectivity and minimizing traversal costs between the origin and destination nodes. The problem is modeled as a two-stage stochastic program in which the investment decisions in the first stage alter the survival probabilities of the corresponding links. We restructure the objective function into a monotonic non-increasing multilinear function and show that using the first order terms of this function leads to a knapsack problem whose solution is a local optimum to the original problem. 
Numerical experiments on real-world data related to strengthening Istanbul's urban highway system against earthquake risk illustrate the tractability of the method and provide practical insights for decision-makers." }, { "instance_id": "R30817xR30800", "comparison_id": "R30817", "paper_id": "R30800", "text": "Stochastic network design for disaster preparedness This article introduces a risk-averse stochastic modeling approach for a pre-disaster relief network design problem under uncertain demand and transportation capacities. The sizes and locations of the response facilities and the inventory levels of relief supplies at each facility are determined while guaranteeing a certain level of network reliability. A probabilistic constraint on the existence of a feasible flow is introduced to ensure that the demand for relief supplies across the network is satisfied with a specified high probability. Responsiveness is also accounted for by defining multiple regions in the network and introducing local probabilistic constraints on satisfying demand within each region. These local constraints ensure that each region is self-sufficient in terms of providing for its own needs with a large probability. In particular, the Gale\u2013Hoffman inequalities are used to represent the conditions on the existence of a feasible network flow. The solution method rests on two pillars. A preprocessing algorithm is used to eliminate redundant Gale\u2013Hoffman inequalities and then proposed models are formulated as computationally efficient mixed-integer linear programs by utilizing a method based on combinatorial patterns. Computational results for a case study and randomly generated problem instances demonstrate the effectiveness of the models and the solution method." 
}, { "instance_id": "R30817xR30774", "comparison_id": "R30817", "paper_id": "R30774", "text": "A two-echelon stochastic facility location model for humanitarian relief logistics We develop a two-stage stochastic programming model for a humanitarian relief logistics problem where decisions are made for pre- and post-disaster rescue centers, the amount of relief items to be stocked at the pre-disaster rescue centers, the amount of relief item flows at each echelon, and the amount of relief item shortage. The objective is to minimize the total cost of facility location, inventory holding, transportation and shortage. The deterministic equivalent of the model is formulated as a mixed-integer linear programming model and solved by a heuristic method based on Lagrangean relaxation. Results on randomly generated test instances show that the proposed solution method exhibits good performance up to 25 scenarios. We also validate our model by calculating the value of the stochastic solution and the expected value of perfect information." }, { "instance_id": "R30817xR30743", "comparison_id": "R30817", "paper_id": "R30743", "text": "A scenario planning approach for the flood emergency logistics preparation problem under uncertainty This paper aims to develop a decision-making tool that can be used by government agencies in planning for flood emergency logistics. In this article, the flood emergency logistics problem with uncertainty is formulated as two stochastic programming models that allow for the determination of a rescue resource distribution system for urban flood disasters. The decision variables include the structure of rescue organizations, locations of rescue resource storehouses, allocations of rescue resources under capacity restrictions, and distributions of rescue resources. 
By applying the data processing and network analysis functions of the geographic information system, flooding potential maps can estimate the possible locations of rescue demand points and the required amount of rescue equipment. The proposed models are solved using a sample average approximation scheme. Finally, a real example of planning for flood emergency logistics is presented to highlight the significance of the proposed model as well as the efficacy of the proposed solution strategy." }, { "instance_id": "R30817xR30763", "comparison_id": "R30817", "paper_id": "R30763", "text": "Stochastic Optimization for Natural Disaster Asset Prepositioning A key strategic issue in pre-disaster planning for humanitarian logistics is the pre-establishment of adequate capacity and resources that enable efficient relief operations. This paper develops a two-stage stochastic optimization model to guide the allocation of budget to acquire and position relief assets, decisions that typically need to be made well in advance before a disaster strikes. The optimization focuses on minimizing the expected number of casualties, so our model includes first-stage decisions to represent the expansion of resources such as warehouses, medical facilities with personnel, ramp spaces, and shelters. Second-stage decisions concern the logistics of the problem, where allocated resources and contracted transportation assets are deployed to rescue critical population (in need of emergency evacuation), deliver required commodities to stay-back population, and transport the transfer population displaced by the disaster. Because of the uncertainty of the event's location and severity, these and other parameters are represented as scenarios. Computational results on notional test cases provide guidance on budget allocation and prove the potential benefit of using stochastic optimization." 
}, { "instance_id": "R30817xR30741", "comparison_id": "R30817", "paper_id": "R30741", "text": "A two-stage stochastic programming framework for transportation planning in disaster response This study proposes a two-stage stochastic programming model to plan the transportation of vital first-aid commodities to disaster-affected areas during emergency response. A multi-commodity, multi-modal network flow formulation is developed to describe the flow of material over an urban transportation network. Since it is difficult to predict the timing and magnitude of any disaster and its impact on the urban system, resource mobilization is treated in a random manner, and the resource requirements are represented as random variables. Furthermore, uncertainty arising from the vulnerability of the transportation system leads to random arc capacities and supply amounts. Randomness is represented by a finite sample of scenarios for capacity, supply and demand triplet. The two stages are defined with respect to information asymmetry, which discloses uncertainty during the progress of the response. The approach is validated by quantifying the expected value of perfect and stochastic information in problem instances generated out of actual data." }, { "instance_id": "R30817xR30757", "comparison_id": "R30817", "paper_id": "R30757", "text": "Stochastic optimization of medical supply location and distribution in disaster management We propose a stochastic optimization approach for the storage and distribution problem of medical supplies to be used for disaster management under a wide variety of possible disaster types and magnitudes. In preparation for disasters, we develop a stochastic programming model to select the storage locations of medical supplies and required inventory levels for each type of medical supply. Our model captures the disaster specific information and possible effects of disasters through the use of disaster scenarios. 
Thus, we balance the preparedness and risk despite the uncertainties of disaster events. A benefit of this approach is that the subproblem can be used to suggest loading and routing of vehicles to transport medical supplies for disaster response, given the evaluation of up-to-date disaster field information. We present a case study of our stochastic optimization approach for disaster planning for earthquake scenarios in the Seattle area. Our modeling approach can aid interdisciplinary agencies to both prepare and respond to disasters by considering the risk in an efficient manner." }, { "instance_id": "R30817xR30769", "comparison_id": "R30817", "paper_id": "R30769", "text": "Sheltering network planning and management with a case in the Gulf Coast region Abstract This paper studies sheltering network planning and operations for natural disaster preparedness and responses with a two-stage stochastic program. The preparedness phase decides the locations, capacities and resources of new Permanent Shelters. Under each disaster scenario, both evacuees and resources are distributed to shelters in the response phase. To address the computational burden, the L-shaped algorithm is applied to decompose the problem into the scenario level with linear programs. A case study for hurricanes in the Gulf Coast region of the US is conducted to demonstrate the implementation of the proposed model." }, { "instance_id": "R30817xR30788", "comparison_id": "R30817", "paper_id": "R30788", "text": "Inventory planning and coordination in disaster relief efforts This research proposes a stochastic programming model to determine how supplies should be positioned and distributed among a network of cooperative warehouses. The model incorporates constraints that enforce equity in service while also considering traffic congestion resulting from possible evacuation behavior and time constraints for providing effective response. 
We make use of short-term information (e.g., hurricane forecasts) to more effectively preposition supplies in preparation for their distribution at an operational level. Through an extensive computational study, we characterize the conditions under which prepositioning is beneficial, as well as discuss the relationship between inventory placement, capacity and coordination within the network." }, { "instance_id": "R30817xR30753", "comparison_id": "R30817", "paper_id": "R30753", "text": "The evacuation optimal network design problem: model formulation and comparisons Abstract The goal of this paper is twofold. First, we present a stochastic programming-based model that provides optimal design solutions for transportation networks in light of possible emergency evacuations. Second, as traffic congestion is a growing problem in metropolitan areas around the world, decision makers might not be willing to design transportation networks solely for evacuation purposes since daily traffic patterns differ tremendously from traffic observed during evacuations. This is especially true when potential disaster locations are limited in number and confined to specific regions (e.g. coastal regions might be more prone to flooding). However, as extreme events such as excessive rainfall become more prevalent everywhere, it is less obvious that the design of transportation networks for evacuation planning and congestion reduction is mutually exclusive. That is, capacity expansion decisions to reduce congestion might also be reasonable from an evacuation planning point of view. Conversely, expansion decisions for evacuation planning might turn out to be effective for congestion relief. To date, no numerical evidence has been presented in the literature to support or disprove these conjectures. Preliminary numerical evidence is provided in this paper." 
}, { "instance_id": "R30817xR30815", "comparison_id": "R30817", "paper_id": "R30815", "text": "Humanitarian logistics network design under mixed uncertainty In this paper, we address a two-echelon humanitarian logistics network design problem involving multiple central warehouses (CWs) and local distribution centers (LDCs) and develop a novel two-stage scenario-based possibilistic-stochastic programming (SBPSP) approach. The research is motivated by the urgent need for designing a relief network in Tehran in preparation for potential earthquakes to cope with the main logistical problems in pre- and post-disaster phases. During the first stage, the locations for CWs and LDCs are determined along with the prepositioned inventory levels for the relief supplies. In this stage, inherent uncertainties in both supply and demand data as well as the availability level of the transportation network's routes after an earthquake are taken into account. In the second stage, a relief distribution plan is developed based on various disaster scenarios aiming to minimize: total distribution time, the maximum weighted distribution time for the critical items, total cost of unused inventories and weighted shortage cost of unmet demands. A tailored differential evolution (DE) algorithm is developed to find good enough feasible solutions within a reasonable CPU time. Computational results using real data reveal promising performance of the proposed SBPSP model in comparison with the existing relief network in Tehran. The paper contributes to the literature on optimization based design of relief networks under mixed possibilistic-stochastic uncertainty and supports informed decision making by local authorities in increasing resilience of urban areas to natural disasters." 
}, { "instance_id": "R30817xR30796", "comparison_id": "R30817", "paper_id": "R30796", "text": "Pre-positioning disaster response facilities at safe locations: An evaluation of deterministic and stochastic modeling approaches Choosing the locations of disaster response facilities for the storage of emergency supplies is critical to the quality of service provided post-occurrence of a large scale emergency like an earthquake. In this paper, we provide two location models that explicitly take into consideration the impact a disaster can have on the disaster response facilities and the population centers in surrounding areas. The first model is a deterministic model that incorporates distance-dependent damages to disaster response facilities and population centers. The second model is a stochastic programming model that extends the first by directly considering the damage intensity as a random variable. For this second model we also develop a novel solution method based on Benders Decomposition that is generalizable to other 2-stage stochastic programming problems. We provide a detailed case study using large-scale emergencies caused by an earthquake in California to demonstrate the performance of these new models. We find that the locations suggested by the stochastic model in this paper significantly reduce the expected cost of providing supplies when one considers the damage a disaster causes to the disaster response facilities and areas near it. We also demonstrate that the cost advantage of the stochastic model over the deterministic model is especially large when only a few facilities can be placed. Thus, the value of the stochastic model is particularly great in realistic, budget-constrained situations." }, { "instance_id": "R30817xR30745", "comparison_id": "R30817", "paper_id": "R30745", "text": "Facility location in humanitarian relief In this study, we consider facility location decisions for a humanitarian relief chain responding to quick-onset disasters. 
In particular, we develop a model that determines the number and locations of distribution centres in a relief network and the amount of relief supplies to be stocked at each distribution centre to meet the needs of people affected by the disasters. Our model, which is a variant of the maximal covering location model, integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions. We conduct computational experiments to illustrate how the proposed model works on a realistic problem. Results show the effects of pre- and post-disaster relief funding on relief system's performance, specifically on response time and the proportion of demand satisfied. Finally, we discuss the managerial implications of the proposed model." }, { "instance_id": "R30817xR30813", "comparison_id": "R30817", "paper_id": "R30813", "text": "An approximation approach to a trade-off among efficiency, efficacy, and balance for relief pre-positioning in disaster management This work develops a multi-objective, two-stage stochastic, non-linear, and mixed-integer mathematical model for relief pre-positioning in disaster management. Improved imbalance and efficacy measures are incorporated into the model based on a new utility level of the delivered relief commodities. This model considers the usage possibility of a set of alternative routes for each of the applied transportation modes and consequently improves the network reliability. An integrated separable programming-augmented e-constraint approach is proposed to address the problem. The best Pareto-optimal solution is selected by PROMETHEE-II. The theoretical improvements of the presented approach are validated by experiments and a real case study." 
}, { "instance_id": "R30817xR30747", "comparison_id": "R30817", "paper_id": "R30747", "text": "Decomposition algorithms for the design of a nonsimultaneous capacitated evacuation tree network In this article, we examine the design of an evacuation tree, in which evacuation is subject to capacity restrictions on arcs. The cost of evacuating people in the network is determined by the sum of penalties incurred on arcs on which they travel, where penalties are determined according to a nondecreasing function of time. Given a discrete set of disaster scenarios affecting network population, arc capacities, transit times, and penalty functions, we seek to establish an optimal a priori evacuation tree that minimizes the expected evacuation penalty. The solution strategy is based on Benders decomposition, in which the master problem is a mixed-integer program and each subproblem is a time-expanded network flow problem. We provide efficient methods for obtaining primal and dual subproblem solutions, and analyze techniques for improving the strength of the master problem formulation, thus reducing the number of master problem solutions required for the algorithm's convergence. We provide computational results to compare the efficiency of our methods on a set of randomly generated test instances. \u00a9 2008 Wiley Periodicals, Inc. NETWORKS, 2009" }, { "instance_id": "R30817xR30751", "comparison_id": "R30817", "paper_id": "R30751", "text": "A two-stage stochastic programming model for transportation network protection Network protection against natural and human-caused hazards has become a topical research theme in engineering and social sciences. This paper focuses on the problem of allocating limited retrofit resources over multiple highway bridges to improve the resilience and robustness of the entire transportation system in question. 
The main modeling challenges in network retrofit problems are to capture the interdependencies among individual transportation facilities and to cope with the extremely high uncertainty in the decision environment. In this paper, we model the network retrofit problem as a two-stage stochastic programming problem that optimizes a mean-risk objective of the system loss. This formulation hedges well against uncertainty, but also imposes computational challenges due to involvement of integer decision variables and increased dimension of the problem. An efficient algorithm is developed, via extending the well-known L-shaped method using generalized Benders decomposition, to efficiently handle the binary integer variables in the first stage and the nonlinear recourse in the second stage of the model formulation. The proposed modeling and solution methods are general and can be applied to other network design problems as well." }, { "instance_id": "R30817xR30778", "comparison_id": "R30817", "paper_id": "R30778", "text": "Pre-positioning hurricane supplies in a commercial supply chain Inventory control for retailers situated in the projected path of an observed hurricane or tropical storm can be challenging due to the inherent uncertainties associated with storm forecasts and demand requirements. In many cases, retailers react to pre- and post-storm demand surge by ordering emergency supplies from manufacturers posthumously. This wait-and-see approach often leads to stockout of the critical supplies and equipment used to support post-storm disaster relief operations, which compromises the performance of emergency response efforts and proliferates lost sales in the commercial supply chain. This paper proposes a proactive approach to managing disaster relief inventories from the perspective of a single manufacturing facility, where emergency supplies are pre-positioned throughout a network of geographically dispersed retailers in anticipation of an observed storm's landfall. 
Once the requirements of a specific disaster scenario are observed, supplies are then transshipped among retailers, with possible direct shipments from the manufacturer, to satisfy any unfulfilled demands. The manufacturer's pre-positioning problem is formulated as a two-stage stochastic programming model which is illustrated via a case study comprised of real-world hurricane scenarios. Our findings indicate that the expected performance of the proposed pre-positioning strategy over a variety of hurricane scenarios is more effective than the wait-and-see approach currently used in practice." }, { "instance_id": "R30817xR30772", "comparison_id": "R30817", "paper_id": "R30772", "text": "A multi-objective robust stochastic programming model for disaster relief logistics under uncertainty Humanitarian relief logistics is one of the most important elements of a relief operation in disaster management. The present work develops a multi-objective robust stochastic programming approach for disaster relief logistics under uncertainty. In our approach, not only demands but also supplies and the cost of procurement and transportation are considered as the uncertain parameters. Furthermore, the model considers uncertainty for the locations where those demands might arise and the possibility that some of the pre-positioned supplies in the relief distribution center or supplier might be partially destroyed by the disaster. Our multi-objective model attempts to minimize the sum of the expected value and the variance of the total cost of the relief chain while penalizing the solution\u2019s infeasibility due to parameter uncertainty; at the same time the model aims to maximize the affected areas\u2019 satisfaction levels through minimizing the sum of the maximum shortages in the affected areas. Considering the global evaluation of two objectives, a compromise programming model is formulated and solved to obtain a non-dominating compromise solution. 
We present a case study of our robust stochastic optimization approach for disaster planning for earthquake scenarios in a region of Iran. Our findings show that the proposed model can help in making decisions on both facility location and resource allocation in cases of disaster relief efforts." }, { "instance_id": "R30817xR30805", "comparison_id": "R30817", "paper_id": "R30805", "text": "Bi-objective stochastic programming models for determining depot locations in disaster relief operations This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX." }, { "instance_id": "R30817xR30792", "comparison_id": "R30817", "paper_id": "R30792", "text": "A dual two-stage stochastic model for flood management with inexact-integer analysis under multiple uncertainties This study introduces a hybrid optimization approach for flood management under multiple uncertainties. 
An inexact two-stage integer programming (ITIP) model and its dual formulation are developed by integrating the concepts of mixed-integer and interval-parameter programming techniques into a general framework of two-stage stochastic programming. The proposed approach provides a linkage to pre-defined management policies, deals with capacity-expansion planning issues, and reflects various uncertainties expressed as probability distributions and discrete intervals for a flood management system. Penalties are imposed when the policies are violated. The marginal costs are determined based on dual formulation of the ITIP model, and their effects on the optimal solutions are investigated. The developed model is applied to a case study of flood management. The solutions of binary variables represent the decisions of flood-diversion\u2013capacity expansion within a multi-region, multi-flow-level, and multi-option context. The solutions of continuous variables are related to decisions of flood diversion toward different regions. The solutions of dual variables indicate the decisions of marginal costs associated with the resources of regions\u2019 capacity, water availability, and allowable diversions. The results show that the proposed approach could obtain reliable solutions and adequately support decision making in flood management." }, { "instance_id": "R30914xR30749", "comparison_id": "R30914", "paper_id": "R30749", "text": "Dual-Interval Two-Stage Optimization for Flood Management and Risk Analyses In this study, a dual interval two-stage restricted-recourse programming (DITRP) method is developed for flood-diversion planning under uncertainty. Compared with other conventional methods, DITRP improves upon them by addressing system uncertainties with complex presentations and incorporating subjective information within its optimization framework. Uncertainties in DITRP can be represented as probability distributions and intervals. 
In addition, the dual-interval concept is presented when the available information is highly uncertain for boundaries of intervals. Moreover, decision makers\u2019 attitudes towards system risk can be reflected using a restricted-recourse measure by controlling the variability of the recourse cost. The method has been applied to a case study of flood management. The results indicate that reasonable solutions for planning flood management practice have been generated which are related to decisions of flood-diversion. Several policy scenarios are analyzed, assisting in gaining insight into the tradeoffs between risk and cost." }, { "instance_id": "R30914xR30741", "comparison_id": "R30914", "paper_id": "R30741", "text": "A two-stage stochastic programming framework for transportation planning in disaster response This study proposes a two-stage stochastic programming model to plan the transportation of vital first-aid commodities to disaster-affected areas during emergency response. A multi-commodity, multi-modal network flow formulation is developed to describe the flow of material over an urban transportation network. Since it is difficult to predict the timing and magnitude of any disaster and its impact on the urban system, resource mobilization is treated in a random manner, and the resource requirements are represented as random variables. Furthermore, uncertainty arising from the vulnerability of the transportation system leads to random arc capacities and supply amounts. Randomness is represented by a finite sample of scenarios for capacity, supply and demand triplet. The two stages are defined with respect to information asymmetry, which discloses uncertainty during the progress of the response. The approach is validated by quantifying the expected value of perfect and stochastic information in problem instances generated out of actual data." 
}, { "instance_id": "R30914xR30761", "comparison_id": "R30914", "paper_id": "R30761", "text": "Pre-positioning planning for emergency response with service quality constraints Pre-positioning of emergency supplies is a means for increasing preparedness for natural disasters. Key decisions in pre-positioning are the locations and capacities of emergency distribution centers, as well as allocations of inventories of multiple relief commodities to those distribution locations. The location and allocation decisions are complicated by uncertainty about if, or where, a natural disaster will occur. An earlier paper (Rawls and Turnquist 44:521\u2013534, 2010) describes a stochastic mixed integer programming formulation to minimize expected costs (including penalties for unmet demand) in such a situation. This paper extends that model with additional service quality constraints. The added constraints ensure that the probability of meeting all demand is at least \u03b1, and that the demand is met with supplies whose average shipment distance is no greater than a specific limit. A case study using hurricane threats is used to illustrate the model and how the additional constraints modify the pre-positioning strategy." }, { "instance_id": "R30914xR30798", "comparison_id": "R30914", "paper_id": "R30798", "text": "A humanitarian logistics model for disaster relief operation considering network failure and standard relief time: A case study on San Francisco district We propose a multi-depot location-routing model considering network failure, multiple uses of vehicles, and standard relief time. The model determines the locations of local depots and routing for last mile distribution after an earthquake. The model is extended to a two-stage stochastic program with random travel time to ascertain the locations of distribution centers. Small instances have been solved to optimality in GAMS. A variable neighborhood search algorithm is devised to solve the deterministic model. 
Computational results of our case study show that the unsatisfied demands can be significantly reduced at the cost of higher number of local depots and vehicles." }, { "instance_id": "R30914xR30792", "comparison_id": "R30914", "paper_id": "R30792", "text": "A dual two-stage stochastic model for flood management with inexact-integer analysis under multiple uncertainties This study introduces a hybrid optimization approach for flood management under multiple uncertainties. An inexact two-stage integer programming (ITIP) model and its dual formulation are developed by integrating the concepts of mixed-integer and interval-parameter programming techniques into a general framework of two-stage stochastic programming. The proposed approach provides a linkage to pre-defined management policies, deals with capacity-expansion planning issues, and reflects various uncertainties expressed as probability distributions and discrete intervals for a flood management system. Penalties are imposed when the policies are violated. The marginal costs are determined based on dual formulation of the ITIP model, and their effects on the optimal solutions are investigated. The developed model is applied to a case study of flood management. The solutions of binary variables represent the decisions of flood-diversion\u2013capacity expansion within a multi-region, multi-flow-level, and multi-option context. The solutions of continuous variables are related to decisions of flood diversion toward different regions. The solutions of dual variables indicate the decisions of marginal costs associated with the resources of regions\u2019 capacity, water availability, and allowable diversions. The results show that the proposed approach could obtain reliable solutions and adequately support decision making in flood management." 
}, { "instance_id": "R30914xR30743", "comparison_id": "R30914", "paper_id": "R30743", "text": "A scenario planning approach for the flood emergency logistics preparation problem under uncertainty This paper aims to develop a decision-making tool that can be used by government agencies in planning for flood emergency logistics. In this article, the flood emergency logistics problem with uncertainty is formulated as two stochastic programming models that allow for the determination of a rescue resource distribution system for urban flood disasters. The decision variables include the structure of rescue organizations, locations of rescue resource storehouses, allocations of rescue resources under capacity restrictions, and distributions of rescue resources. By applying the data processing and network analysis functions of the geographic information system, flooding potential maps can estimate the possible locations of rescue demand points and the required amount of rescue equipment. The proposed models are solved using a sample average approximation scheme. Finally, a real example of planning for flood emergency logistics is presented to highlight the significance of the proposed model as well as the efficacy of the proposed solution strategy." }, { "instance_id": "R30914xR30811", "comparison_id": "R30914", "paper_id": "R30811", "text": "Two-Stage Multiobjective Optimization for Emergency Supplies Allocation Problem under Integrated Uncertainty This paper proposes a new two-stage optimization method for emergency supplies allocation problem with multisupplier, multiaffected area, multirelief, and multivehicle. The triplet of supply, demand, and the availability of path is unknown prior to the extraordinary event and is descriptive with fuzzy random variable. Considering the fairness, timeliness, and economical efficiency, a multiobjective expected value model is built for facility location, vehicle routing, and supply allocation decisions. 
The goals of proposed model aim to minimize the proportion of demand nonsatisfied and response time of emergency reliefs and the total cost of the whole process. When the demand and the availability of path are discrete, the expected values in the objective functions are converted into their equivalent forms. When the supply amount is continuous, the equilibrium chance in the constraint is transformed to its equivalent one. To overcome the computational difficulty caused by multiple objectives, a goal programming model is formulated to obtain a compromise solution. Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method." }, { "instance_id": "R30914xR30774", "comparison_id": "R30914", "paper_id": "R30774", "text": "A two-echelon stochastic facility location model for humanitarian relief logistics We develop a two-stage stochastic programming model for a humanitarian relief logistics problem where decisions are made for pre- and post-disaster rescue centers, the amount of relief items to be stocked at the pre-disaster rescue centers, the amount of relief item flows at each echelon, and the amount of relief item shortage. The objective is to minimize the total cost of facility location, inventory holding, transportation and shortage. The deterministic equivalent of the model is formulated as a mixed-integer linear programming model and solved by a heuristic method based on Lagrangean relaxation. Results on randomly generated test instances show that the proposed solution method exhibits good performance up to 25 scenarios. We also validate our model by calculating the value of the stochastic solution and the expected value of perfect information." 
}, { "instance_id": "R30914xR30763", "comparison_id": "R30914", "paper_id": "R30763", "text": "Stochastic Optimization for Natural Disaster Asset Prepositioning A key strategic issue in pre-disaster planning for humanitarian logistics is the pre-establishment of adequate capacity and resources that enable efficient relief operations. This paper develops a two-stage stochastic optimization model to guide the allocation of budget to acquire and position relief assets, decisions that typically need to be made well in advance before a disaster strikes. The optimization focuses on minimizing the expected number of casualties, so our model includes first-stage decisions to represent the expansion of resources such as warehouses, medical facilities with personnel, ramp spaces, and shelters. Second-stage decisions concern the logistics of the problem, where allocated resources and contracted transportation assets are deployed to rescue critical population (in need of emergency evacuation), deliver required commodities to stay-back population, and transport the transfer population displaced by the disaster. Because of the uncertainty of the event's location and severity, these and other parameters are represented as scenarios. Computational results on notional test cases provide guidance on budget allocation and prove the potential benefit of using stochastic optimization." }, { "instance_id": "R30914xR30802", "comparison_id": "R30914", "paper_id": "R30802", "text": "A scenario planning approach for propositioning rescue centers for urban waterlog disasters A system specification for urban waterlog disasters is developed.A two-stage stochastic programming model is formulated.The economic cost and loss, and environmental and casualty risks are considered.The urban waterlog disasters in Pudong District of Shanghai, China is examined. 
An urban waterlog disaster can produce severe results, such as residents' property loss, environmental damages and pollution, and even casualties. This paper presents a system specification for urban waterlog disasters according to the analysis of urban waterlog disaster risks. Then, a two-stage stochastic mixed-integer programming model is formulated. The model minimizes the total logistics cost, and risk-induced penalties. Moreover, a deterministic counterpart of the stochastic model is proposed to study the expected value of perfect information. The multi-attribute utility theory is used to build assessment functions that assess the utility of the rescue system and the degree contributed to disaster relief for each rescue center. Finally, a real example of rescue logistics is examined for the urban waterlog disasters in Pudong District of Shanghai, China. Using the proposed model, two main results can be obtained. First, the expected value of perfect information experiment reveals that an additional 45,005 logistics cost and an additional 2417 risk-induced penalties can be incurred due to the presence of uncertainty. Second, as the weight of risk-induced penalty increases from 0.1 to 0.9, the logistics cost is increased by 41.21%, which thus contributes to a decrease of risk-induced penalty by 97.44%. Some managerial implications are discussed based on the numerical studies." }, { "instance_id": "R30914xR30790", "comparison_id": "R30914", "paper_id": "R30790", "text": "Prepositioning emergency supplies to support disaster relief: a stochastic programming approach This paper studies the strategic problem of designing emergency supply networks to support disaster relief over a planning horizon. The problem addresses decisions on the location and number of distribution centres needed, their capacity, and the quantity of each emergency item to keep in stock. 
It builds on a case study inspired by real-world data obtained from the North Carolina Emergency Management Division (NCEM) and the Federal Emergency Management Agency (FEMA). To tackle the problem, a scenario-based approach is proposed involving three phases: disaster scenario generation, design generation and design evaluation. Disasters are modelled as stochastic processes and a Monte Carlo procedure is derived to generate plausible catastrophic scenarios. Based on this detailed representation of disasters, a multi-phase modelling framework is proposed to design the emergency supply network. The two-stage stochastic programming model proposed is solved using a sample average approximation method. This scenario-based solution approach is applied to the case study to generate plausible scenarios, to produce alternative designs and to evaluate them on a set of performance measures in order to select the best design." }, { "instance_id": "R30914xR30776", "comparison_id": "R30914", "paper_id": "R30776", "text": "Shelter location and transportation planning under hurricane conditions This paper develops a scenario-based bilevel programming model to optimize the selection of shelter locations with explicit consideration of a range of possible hurricane events and the evacuation needs under each of those events. A realistic case study for the state of North Carolina is presented. Through the case study, we demonstrate (i) the criticality of considering multiple hurricane scenarios in the location of shelters, and; (ii) the importance of considering the transportation demands of all evacuees when selecting locations for public shelters." 
}, { "instance_id": "R30914xR30794", "comparison_id": "R30914", "paper_id": "R30794", "text": "A two-stage mixed-integer fuzzy programming with interval-valued membership functions approach for flood-diversion planning Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates the fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning for demonstrating its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions and attitudes towards the objective-function value and constraints." }, { "instance_id": "R30914xR30751", "comparison_id": "R30914", "paper_id": "R30751", "text": "A two-stage stochastic programming model for transportation network protection Network protection against natural and human-caused hazards has become a topical research theme in engineering and social sciences. This paper focuses on the problem of allocating limited retrofit resources over multiple highway bridges to improve the resilience and robustness of the entire transportation system in question. 
The main modeling challenges in network retrofit problems are to capture the interdependencies among individual transportation facilities and to cope with the extremely high uncertainty in the decision environment. In this paper, we model the network retrofit problem as a two-stage stochastic programming problem that optimizes a mean-risk objective of the system loss. This formulation hedges well against uncertainty, but also imposes computational challenges due to involvement of integer decision variables and increased dimension of the problem. An efficient algorithm is developed, via extending the well-known L-shaped method using generalized Benders decomposition, to efficiently handle the binary integer variables in the first stage and the nonlinear recourse in the second stage of the model formulation. The proposed modeling and solution methods are general and can be applied to other network design problems as well." }, { "instance_id": "R30914xR30772", "comparison_id": "R30914", "paper_id": "R30772", "text": "A multi-objective robust stochastic programming model for disaster relief logistics under uncertainty Humanitarian relief logistics is one of the most important elements of a relief operation in disaster management. The present work develops a multi-objective robust stochastic programming approach for disaster relief logistics under uncertainty. In our approach, not only demands but also supplies and the cost of procurement and transportation are considered as the uncertain parameters. Furthermore, the model considers uncertainty for the locations where those demands might arise and the possibility that some of the pre-positioned supplies in the relief distribution center or supplier might be partially destroyed by the disaster. 
Our multi-objective model attempts to minimize the sum of the expected value and the variance of the total cost of the relief chain while penalizing the solution\u2019s infeasibility due to parameter uncertainty; at the same time the model aims to maximize the affected areas\u2019 satisfaction levels through minimizing the sum of the maximum shortages in the affected areas. Considering the global evaluation of two objectives, a compromise programming model is formulated and solved to obtain a non-dominating compromise solution. We present a case study of our robust stochastic optimization approach for disaster planning for earthquake scenarios in a region of Iran. Our findings show that the proposed model can help in making decisions on both facility location and resource allocation in cases of disaster relief efforts." }, { "instance_id": "R30914xR30851", "comparison_id": "R30914", "paper_id": "R30851", "text": "Pre-positioning of emergency supplies for disaster response Pre-positioning of emergency supplies is one mechanism of increasing preparedness for natural disasters. The goal of this research is to develop an emergency response planning tool that determines the location and quantities of various types of emergency supplies to be pre-positioned, under uncertainty about if, or where, a natural disaster will occur. The paper presents a two-stage stochastic mixed integer program (SMIP) that provides an emergency response pre-positioning strategy for hurricanes or other disaster threats. The SMIP is a robust model that considers uncertainty in demand for the stocked supplies as well as uncertainty regarding transportation network availability after an event. Due to the computational complexity of the problem, a heuristic algorithm referred to as the Lagrangian L-shaped method (LLSM) is developed to solve large-scale instances of the problem. A case study focused on hurricane threat in the Gulf Coast area of the US illustrates application of the model." 
}, { "instance_id": "R30914xR30778", "comparison_id": "R30914", "paper_id": "R30778", "text": "Pre-positioning hurricane supplies in a commercial supply chain Inventory control for retailers situated in the projected path of an observed hurricane or tropical storm can be challenging due to the inherent uncertainties associated with storm forecasts and demand requirements. In many cases, retailers react to pre- and post-storm demand surge by ordering emergency supplies from manufacturers posthumously. This wait-and-see approach often leads to stockout of the critical supplies and equipment used to support post-storm disaster relief operations, which compromises the performance of emergency response efforts and proliferates lost sales in the commercial supply chain. This paper proposes a proactive approach to managing disaster relief inventories from the perspective of a single manufacturing facility, where emergency supplies are pre-positioned throughout a network of geographically dispersed retailers in anticipation of an observed storm's landfall. Once the requirements of a specific disaster scenario are observed, supplies are then transshipped among retailers, with possible direct shipments from the manufacturer, to satisfy any unfulfilled demands. The manufacturer's pre-positioning problem is formulated as a two-stage stochastic programming model which is illustrated via a case study comprised of real-world hurricane scenarios. Our findings indicate that the expected performance of the proposed pre-positioning strategy over a variety of hurricane scenarios is more effective than the wait-and-see approach; currently used in practice." 
}, { "instance_id": "R30914xR30796", "comparison_id": "R30914", "paper_id": "R30796", "text": "Pre-positioning disaster response facilities at safe locations: An evaluation of deterministic and stochastic modeling approaches Choosing the locations of disaster response facilities for the storage of emergency supplies is critical to the quality of service provided post-occurrence of a large scale emergency like an earthquake. In this paper, we provide two location models that explicitly take into consideration the impact a disaster can have on the disaster response facilities and the population centers in surrounding areas. The first model is a deterministic model that incorporates distance-dependent damages to disaster response facilities and population centers. The second model is a stochastic programming model that extends the first by directly considering the damage intensity as a random variable. For this second model we also develop a novel solution method based on Benders Decomposition that is generalizable to other 2-stage stochastic programming problems. We provide a detailed case study using large-scale emergencies caused by an earthquake in California to demonstrate the performance of these new models. We find that the locations suggested by the stochastic model in this paper significantly reduce the expected cost of providing supplies when one considers the damage a disaster causes to the disaster response facilities and areas near it. We also demonstrate that the cost advantage of the stochastic model over the deterministic model is especially large when only a few facilities can be placed. Thus, the value of the stochastic model is particularly great in realistic, budget-constrained situations." 
}, { "instance_id": "R30914xR30757", "comparison_id": "R30914", "paper_id": "R30757", "text": "Stochastic optimization of medical supply location and distribution in disaster management We propose a stochastic optimization approach for the storage and distribution problem of medical supplies to be used for disaster management under a wide variety of possible disaster types and magnitudes. In preparation for disasters, we develop a stochastic programming model to select the storage locations of medical supplies and required inventory levels for each type of medical supply. Our model captures the disaster specific information and possible effects of disasters through the use of disaster scenarios. Thus, we balance the preparedness and risk despite the uncertainties of disaster events. A benefit of this approach is that the subproblem can be used to suggest loading and routing of vehicles to transport medical supplies for disaster response, given the evaluation of up-to-date disaster field information. We present a case study of our stochastic optimization approach for disaster planning for earthquake scenarios in the Seattle area. Our modeling approach can aid interdisciplinary agencies to both prepare and respond to disasters by considering the risk in an efficient manner." }, { "instance_id": "R30914xR30783", "comparison_id": "R30914", "paper_id": "R30783", "text": "The bi-objective stochastic covering tour problem We formulate a bi-objective covering tour model with stochastic demand where the two objectives are given by (i) cost (opening cost for distribution centers plus routing cost for a fleet of vehicles) and (ii) expected uncovered demand. In the model, it is assumed that depending on the distance, a certain percentage of clients go from their homes to the nearest distribution center. An application in humanitarian logistics is envisaged. 
For the computational solution of the resulting bi-objective two-stage stochastic program with recourse, a branch-and-cut technique, applied to a sample-average version of the problem obtained from a fixed random sample of demand vectors, is used within an epsilon-constraint algorithm. Computational results on real-world data for rural communities in Senegal show the viability of the approach." }, { "instance_id": "R30914xR30753", "comparison_id": "R30914", "paper_id": "R30753", "text": "The evacuation optimal network design problem: model formulation and comparisons The goal of this paper is twofold. First, we present a stochastic programming-based model that provides optimal design solutions for transportation networks in light of possible emergency evacuations. Second, as traffic congestion is a growing problem in metropolitan areas around the world, decision makers might not be willing to design transportation networks solely for evacuation purposes since daily traffic patterns differ tremendously from traffic observed during evacuations. This is especially true when potential disaster locations are limited in number and confined to specific regions (e.g. coastal regions might be more prone to flooding). However, as extreme events such as excessive rainfall become more prevalent everywhere, it is less obvious that the design of transportation networks for evacuation planning and congestion reduction is mutually exclusive. That is, capacity expansion decisions to reduce congestion might also be reasonable from an evacuation planning point of view. Conversely, expansion decisions for evacuation planning might turn out to be effective for congestion relief. To date, no numerical evidence has been presented in the literature to support or disprove these conjectures. Preliminary numerical evidence is provided in this paper." 
}, { "instance_id": "R30914xR30755", "comparison_id": "R30914", "paper_id": "R30755", "text": "Solving Stochastic Transportation Network Protection Problems Using the Progressive Hedging-based Method This research focuses on pre-disaster transportation network protection against uncertain future disasters. Given limited resources, the goal of the central planner is to choose the best set of network components to protect while allowing the network users to follow their own best-perceived routes in any resultant network configuration. This problem is formulated as a two-stage stochastic programming problem with equilibrium constraints, where the objective is to minimize the total expected physical and social losses caused by potential disasters. Developing efficient solution methods for such a problem can be challenging. In this work, we will demonstrate the applicability of progressive hedging-based method for solving large scale stochastic network optimization problems with equilibrium constraints. In the proposed solution procedure, we solve each modified scenario sub-problem as a mathematical program with complementary constraints and then gradually aggregate scenario-dependent solutions to the final optimal solution." }, { "instance_id": "R30914xR30747", "comparison_id": "R30914", "paper_id": "R30747", "text": "Decomposition algorithms for the design of a nonsimultaneous capacitated evacuation tree network In this article, we examine the design of an evacuation tree, in which evacuation is subject to capacity restrictions on arcs. The cost of evacuating people in the network is determined by the sum of penalties incurred on arcs on which they travel, where penalties are determined according to a nondecreasing function of time. Given a discrete set of disaster scenarios affecting network population, arc capacities, transit times, and penalty functions, we seek to establish an optimal a priori evacuation tree that minimizes the expected evacuation penalty. 
The solution strategy is based on Benders decomposition, in which the master problem is a mixed-integer program and each subproblem is a time-expanded network flow problem. We provide efficient methods for obtaining primal and dual subproblem solutions, and analyze techniques for improving the strength of the master problem formulation, thus reducing the number of master problem solutions required for the algorithm's convergence. We provide computational results to compare the efficiency of our methods on a set of randomly generated test instances." }, { "instance_id": "R30950xR30877", "comparison_id": "R30950", "paper_id": "R30877", "text": "Risk-averse two-stage stochastic programming with an application to disaster management Traditional two-stage stochastic programming is risk-neutral; that is, it considers the expectation as the preference criterion while comparing the random variables (e.g., total cost) to identify the best decisions. However, in the presence of variability, risk measures should be incorporated into decision-making problems in order to model its effects. In this study, we consider a risk-averse two-stage stochastic programming model, where we specify the conditional-value-at-risk (CVaR) as the risk measure. We construct two decomposition algorithms based on the generic Benders-decomposition approach to solve such problems. Both single-cut and multicut versions of the proposed decomposition algorithms are presented. We adapt the concepts of the value of perfect information (VPI) and the value of the stochastic solution (VSS) for the proposed risk-averse two-stage stochastic programming framework and define two stochastic measures on the VPI and VSS. We apply the proposed model to disaster management, which is one of the research fields that can significantly benefit from risk-averse two-stage stochastic programming models. 
In particular, we consider the problem of determining the response facility locations and the inventory levels of the relief supplies at each facility in the presence of uncertainty in demand and the damage level of the disaster network. We present numerical results to discuss how incorporating a risk measure affects the optimal solutions and demonstrate the computational effectiveness of the proposed methods." }, { "instance_id": "R30950xR30851", "comparison_id": "R30950", "paper_id": "R30851", "text": "Pre-positioning of emergency supplies for disaster response Pre-positioning of emergency supplies is one mechanism of increasing preparedness for natural disasters. The goal of this research is to develop an emergency response planning tool that determines the location and quantities of various types of emergency supplies to be pre-positioned, under uncertainty about if, or where, a natural disaster will occur. The paper presents a two-stage stochastic mixed integer program (SMIP) that provides an emergency response pre-positioning strategy for hurricanes or other disaster threats. The SMIP is a robust model that considers uncertainty in demand for the stocked supplies as well as uncertainty regarding transportation network availability after an event. Due to the computational complexity of the problem, a heuristic algorithm referred to as the Lagrangian L-shaped method (LLSM) is developed to solve large-scale instances of the problem. A case study focused on hurricane threat in the Gulf Coast area of the US illustrates application of the model." 
}, { "instance_id": "R30950xR30809", "comparison_id": "R30950", "paper_id": "R30809", "text": "Stochastic network models for logistics planning in disaster relief Emergency logistics in disasters is fraught with planning and operational challenges, such as uncertainty about the exact nature and magnitude of the disaster, a lack of reliable information about the location and needs of victims, possible random supplies and donations, precarious transport links, scarcity of resources, and so on. This paper develops a new two-stage stochastic network flow model to help decide how to rapidly supply humanitarian aid to victims of a disaster within this context. The model takes into account practical characteristics that have been neglected by the literature so far, such as budget allocation, fleet sizing of multiple types of vehicles, procurement, and varying lead times over a dynamic multiperiod horizon. Attempting to improve demand fulfillment policy, we present some extensions of the model via state-of-art risk measures, such as semideviation and conditional value-at-risk. A simple two-phase heuristic to solve the problem within a reasonable amount of computing time is also suggested. Numerical tests based on the floods and landslides in Rio de Janeiro state, Brazil, show that the model can help plan and organise relief to provide good service levels in most scenarios, and how this depends on the type of disaster and resources. Moreover, we demonstrate that our heuristic performs well for real and random instances." }, { "instance_id": "R30950xR30788", "comparison_id": "R30950", "paper_id": "R30788", "text": "Inventory planning and coordination in disaster relief efforts This research proposes a stochastic programming model to determine how supplies should be positioned and distributed among a network of cooperative warehouses. 
The model incorporates constraints that enforce equity in service while also considering traffic congestion resulting from possible evacuation behavior and time constraints for providing effective response. We make use of short-term information (e.g., hurricane forecasts) to more effectively preposition supplies in preparation for their distribution at an operational level. Through an extensive computational study, we characterize the conditions under which prepositioning is beneficial, as well as discuss the relationship between inventory placement, capacity and coordination within the network." }, { "instance_id": "R30950xR30772", "comparison_id": "R30950", "paper_id": "R30772", "text": "A multi-objective robust stochastic programming model for disaster relief logistics under uncertainty Humanitarian relief logistics is one of the most important elements of a relief operation in disaster management. The present work develops a multi-objective robust stochastic programming approach for disaster relief logistics under uncertainty. In our approach, not only demands but also supplies and the cost of procurement and transportation are considered as the uncertain parameters. Furthermore, the model considers uncertainty for the locations where those demands might arise and the possibility that some of the pre-positioned supplies in the relief distribution center or supplier might be partially destroyed by the disaster. Our multi-objective model attempts to minimize the sum of the expected value and the variance of the total cost of the relief chain while penalizing the solution\u2019s infeasibility due to parameter uncertainty; at the same time the model aims to maximize the affected areas\u2019 satisfaction levels through minimizing the sum of the maximum shortages in the affected areas. Considering the global evaluation of two objectives, a compromise programming model is formulated and solved to obtain a non-dominating compromise solution. 
We present a case study of our robust stochastic optimization approach for disaster planning for earthquake scenarios in a region of Iran. Our findings show that the proposed model can help in making decisions on both facility location and resource allocation in cases of disaster relief efforts." }, { "instance_id": "R30950xR30753", "comparison_id": "R30950", "paper_id": "R30753", "text": "The evacuation optimal network design problem: model formulation and comparisons The goal of this paper is twofold. First, we present a stochastic programming-based model that provides optimal design solutions for transportation networks in light of possible emergency evacuations. Second, as traffic congestion is a growing problem in metropolitan areas around the world, decision makers might not be willing to design transportation networks solely for evacuation purposes since daily traffic patterns differ tremendously from traffic observed during evacuations. This is especially true when potential disaster locations are limited in number and confined to specific regions (e.g. coastal regions might be more prone to flooding). However, as extreme events such as excessive rainfall become more prevalent everywhere, it is less obvious that the design of transportation networks for evacuation planning and congestion reduction is mutually exclusive. That is, capacity expansion decisions to reduce congestion might also be reasonable from an evacuation planning point of view. Conversely, expansion decisions for evacuation planning might turn out to be effective for congestion relief. To date, no numerical evidence has been presented in the literature to support or disprove these conjectures. Preliminary numerical evidence is provided in this paper." 
}, { "instance_id": "R30950xR30757", "comparison_id": "R30950", "paper_id": "R30757", "text": "Stochastic optimization of medical supply location and distribution in disaster management We propose a stochastic optimization approach for the storage and distribution problem of medical supplies to be used for disaster management under a wide variety of possible disaster types and magnitudes. In preparation for disasters, we develop a stochastic programming model to select the storage locations of medical supplies and required inventory levels for each type of medical supply. Our model captures the disaster specific information and possible effects of disasters through the use of disaster scenarios. Thus, we balance the preparedness and risk despite the uncertainties of disaster events. A benefit of this approach is that the subproblem can be used to suggest loading and routing of vehicles to transport medical supplies for disaster response, given the evaluation of up-to-date disaster field information. We present a case study of our stochastic optimization approach for disaster planning for earthquake scenarios in the Seattle area. Our modeling approach can aid interdisciplinary agencies to both prepare and respond to disasters by considering the risk in an efficient manner." }, { "instance_id": "R30950xR30794", "comparison_id": "R30950", "paper_id": "R30794", "text": "A two-stage mixed-integer fuzzy programming with interval-valued membership functions approach for flood-diversion planning Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates the fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. 
A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning for demonstrating its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions and attitudes towards the objective-function value and constraints." }, { "instance_id": "R30950xR30776", "comparison_id": "R30950", "paper_id": "R30776", "text": "Shelter location and transportation planning under hurricane conditions This paper develops a scenario-based bilevel programming model to optimize the selection of shelter locations with explicit consideration of a range of possible hurricane events and the evacuation needs under each of those events. A realistic case study for the state of North Carolina is presented. Through the case study, we demonstrate (i) the criticality of considering multiple hurricane scenarios in the location of shelters, and; (ii) the importance of considering the transportation demands of all evacuees when selecting locations for public shelters." }, { "instance_id": "R30950xR30763", "comparison_id": "R30950", "paper_id": "R30763", "text": "Stochastic Optimization for Natural Disaster Asset Prepositioning A key strategic issue in pre-disaster planning for humanitarian logistics is the pre-establishment of adequate capacity and resources that enable efficient relief operations. 
This paper develops a two-stage stochastic optimization model to guide the allocation of budget to acquire and position relief assets, decisions that typically need to be made well in advance of a disaster. The optimization focuses on minimizing the expected number of casualties, so our model includes first-stage decisions to represent the expansion of resources such as warehouses, medical facilities with personnel, ramp spaces, and shelters. Second-stage decisions concern the logistics of the problem, where allocated resources and contracted transportation assets are deployed to rescue critical population (in need of emergency evacuation), deliver required commodities to stay-back population, and transport the transfer population displaced by the disaster. Because of the uncertainty of the event's location and severity, these and other parameters are represented as scenarios. Computational results on notional test cases provide guidance on budget allocation and prove the potential benefit of using stochastic optimization." }, { "instance_id": "R30950xR30800", "comparison_id": "R30950", "paper_id": "R30800", "text": "Stochastic network design for disaster preparedness This article introduces a risk-averse stochastic modeling approach for a pre-disaster relief network design problem under uncertain demand and transportation capacities. The sizes and locations of the response facilities and the inventory levels of relief supplies at each facility are determined while guaranteeing a certain level of network reliability. A probabilistic constraint on the existence of a feasible flow is introduced to ensure that the demand for relief supplies across the network is satisfied with a specified high probability. Responsiveness is also accounted for by defining multiple regions in the network and introducing local probabilistic constraints on satisfying demand within each region. 
These local constraints ensure that each region is self-sufficient in terms of providing for its own needs with a large probability. In particular, the Gale\u2013Hoffman inequalities are used to represent the conditions on the existence of a feasible network flow. The solution method rests on two pillars. A preprocessing algorithm is used to eliminate redundant Gale\u2013Hoffman inequalities and then the proposed models are formulated as computationally efficient mixed-integer linear programs by utilizing a method based on combinatorial patterns. Computational results for a case study and randomly generated problem instances demonstrate the effectiveness of the models and the solution method." }, { "instance_id": "R30950xR30755", "comparison_id": "R30950", "paper_id": "R30755", "text": "Solving Stochastic Transportation Network Protection Problems Using the Progressive Hedging-based Method This research focuses on pre-disaster transportation network protection against uncertain future disasters. Given limited resources, the goal of the central planner is to choose the best set of network components to protect while allowing the network users to follow their own best-perceived routes in any resultant network configuration. This problem is formulated as a two-stage stochastic programming problem with equilibrium constraints, where the objective is to minimize the total expected physical and social losses caused by potential disasters. Developing efficient solution methods for such a problem can be challenging. In this work, we will demonstrate the applicability of the progressive hedging-based method for solving large-scale stochastic network optimization problems with equilibrium constraints. In the proposed solution procedure, we solve each modified scenario sub-problem as a mathematical program with complementarity constraints and then gradually aggregate scenario-dependent solutions to the final optimal solution." 
}, { "instance_id": "R30950xR30778", "comparison_id": "R30950", "paper_id": "R30778", "text": "Pre-positioning hurricane supplies in a commercial supply chain Inventory control for retailers situated in the projected path of an observed hurricane or tropical storm can be challenging due to the inherent uncertainties associated with storm forecasts and demand requirements. In many cases, retailers react to pre- and post-storm demand surge by ordering emergency supplies from manufacturers posthumously. This wait-and-see approach often leads to stockout of the critical supplies and equipment used to support post-storm disaster relief operations, which compromises the performance of emergency response efforts and proliferates lost sales in the commercial supply chain. This paper proposes a proactive approach to managing disaster relief inventories from the perspective of a single manufacturing facility, where emergency supplies are pre-positioned throughout a network of geographically dispersed retailers in anticipation of an observed storm's landfall. Once the requirements of a specific disaster scenario are observed, supplies are then transshipped among retailers, with possible direct shipments from the manufacturer, to satisfy any unfulfilled demands. The manufacturer's pre-positioning problem is formulated as a two-stage stochastic programming model which is illustrated via a case study comprised of real-world hurricane scenarios. Our findings indicate that the expected performance of the proposed pre-positioning strategy over a variety of hurricane scenarios is more effective than the wait-and-see approach; currently used in practice." 
}, { "instance_id": "R30950xR30741", "comparison_id": "R30950", "paper_id": "R30741", "text": "A two-stage stochastic programming framework for transportation planning in disaster response This study proposes a two-stage stochastic programming model to plan the transportation of vital first-aid commodities to disaster-affected areas during emergency response. A multi-commodity, multi-modal network flow formulation is developed to describe the flow of material over an urban transportation network. Since it is difficult to predict the timing and magnitude of any disaster and its impact on the urban system, resource mobilization is treated in a random manner, and the resource requirements are represented as random variables. Furthermore, uncertainty arising from the vulnerability of the transportation system leads to random arc capacities and supply amounts. Randomness is represented by a finite sample of scenarios for capacity, supply and demand triplet. The two stages are defined with respect to information asymmetry, which discloses uncertainty during the progress of the response. The approach is validated by quantifying the expected value of perfect and stochastic information in problem instances generated out of actual data." }, { "instance_id": "R30950xR30761", "comparison_id": "R30950", "paper_id": "R30761", "text": "Pre-positioning planning for emergency response with service quality constraints Pre-positioning of emergency supplies is a means for increasing preparedness for natural disasters. Key decisions in pre-positioning are the locations and capacities of emergency distribution centers, as well as allocations of inventories of multiple relief commodities to those distribution locations. The location and allocation decisions are complicated by uncertainty about if, or where, a natural disaster will occur. 
An earlier paper (Rawls and Turnquist 44:521\u2013534, 2010) describes a stochastic mixed integer programming formulation to minimize expected costs (including penalties for unmet demand) in such a situation. This paper extends that model with additional service quality constraints. The added constraints ensure that the probability of meeting all demand is at least \u03b1, and that the demand is met with supplies whose average shipment distance is no greater than a specific limit. A case study using hurricane threats is used to illustrate the model and how the additional constraints modify the pre-positioning strategy." }, { "instance_id": "R30950xR30769", "comparison_id": "R30950", "paper_id": "R30769", "text": "Sheltering network planning and management with a case in the Gulf Coast region Abstract This paper studies sheltering network planning and operations for natural disaster preparedness and responses with a two-stage stochastic program. The preparedness phase decides the locations, capacities and resources of new Permanent Shelters. Under each disaster scenario, both evacuees and resources are distributed to shelters in the response phase. To address the computational burden, the L-shaped algorithm is applied to decompose the problem into the scenario level with linear programs. A case study for hurricanes in the Gulf Coast region of the US is conducted to demonstrate the implementation of the proposed model." }, { "instance_id": "R30950xR30749", "comparison_id": "R30950", "paper_id": "R30749", "text": "Dual-Interval Two-Stage Optimization for Flood Management and Risk Analyses In this study, a dual interval two-stage restricted-recourse programming (DITRP) method is developed for flood-diversion planning under uncertainty. Compared with other conventional methods, DITRP improves upon them by addressing system uncertainties with complex presentations and incorporating subjective information within its optimization framework. 
Uncertainties in DITRP can be represented as probability distributions and intervals. In addition, the dual-interval concept is presented when the available information is highly uncertain for boundaries of intervals. Moreover, decision makers\u2019 attitudes towards system risk can be reflected using a restricted-resource measure by controlling the variability of the recourse cost. The method has been applied to a case study of flood management. The results indicate that reasonable solutions for planning flood management practice have been generated which are related to decisions of flood-diversion. Several policy scenarios are analyzed, assisting in gaining insight into the tradeoffs between risk and cost." }, { "instance_id": "R30950xR30743", "comparison_id": "R30950", "paper_id": "R30743", "text": "A scenario planning approach for the flood emergency logistics preparation problem under uncertainty This paper aims to develop a decision-making tool that can be used by government agencies in planning for flood emergency logistics. In this article, the flood emergency logistics problem with uncertainty is formulated as two stochastic programming models that allow for the determination of a rescue resource distribution system for urban flood disasters. The decision variables include the structure of rescue organizations, locations of rescue resource storehouses, allocations of rescue resources under capacity restrictions, and distributions of rescue resources. By applying the data processing and network analysis functions of the geographic information system, flooding potential maps can estimate the possible locations of rescue demand points and the required amount of rescue equipment. The proposed models are solved using a sample average approximation scheme. Finally, a real example of planning for flood emergency logistics is presented to highlight the significance of the proposed model as well as the efficacy of the proposed solution strategy." 
}, { "instance_id": "R30950xR30774", "comparison_id": "R30950", "paper_id": "R30774", "text": "A two-echelon stochastic facility location model for humanitarian relief logistics We develop a two-stage stochastic programming model for a humanitarian relief logistics problem where decisions are made for pre- and post-disaster rescue centers, the amount of relief items to be stocked at the pre-disaster rescue centers, the amount of relief item flows at each echelon, and the amount of relief item shortage. The objective is to minimize the total cost of facility location, inventory holding, transportation and shortage. The deterministic equivalent of the model is formulated as a mixed-integer linear programming model and solved by a heuristic method based on Lagrangean relaxation. Results on randomly generated test instances show that the proposed solution method exhibits good performance up to 25 scenarios. We also validate our model by calculating the value of the stochastic solution and the expected value of perfect information." }, { "instance_id": "R30950xR30796", "comparison_id": "R30950", "paper_id": "R30796", "text": "Pre-positioning disaster response facilities at safe locations: An evaluation of deterministic and stochastic modeling approaches Choosing the locations of disaster response facilities for the storage of emergency supplies is critical to the quality of service provided post-occurrence of a large scale emergency like an earthquake. In this paper, we provide two location models that explicitly take into consideration the impact a disaster can have on the disaster response facilities and the population centers in surrounding areas. The first model is a deterministic model that incorporates distance-dependent damages to disaster response facilities and population centers. The second model is a stochastic programming model that extends the first by directly considering the damage intensity as a random variable. 
For this second model we also develop a novel solution method based on Benders Decomposition that is generalizable to other 2-stage stochastic programming problems. We provide a detailed case study using large-scale emergencies caused by an earthquake in California to demonstrate the performance of these new models. We find that the locations suggested by the stochastic model in this paper significantly reduce the expected cost of providing supplies when one considers the damage a disaster causes to the disaster response facilities and areas near it. We also demonstrate that the cost advantage of the stochastic model over the deterministic model is especially large when only a few facilities can be placed. Thus, the value of the stochastic model is particularly great in realistic, budget-constrained situations." }, { "instance_id": "R30950xR30745", "comparison_id": "R30950", "paper_id": "R30745", "text": "Facility location in humanitarian relief In this study, we consider facility location decisions for a humanitarian relief chain responding to quick-onset disasters. In particular, we develop a model that determines the number and locations of distribution centres in a relief network and the amount of relief supplies to be stocked at each distribution centre to meet the needs of people affected by the disasters. Our model, which is a variant of the maximal covering location model, integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions. We conduct computational experiments to illustrate how the proposed model works on a realistic problem. Results show the effects of pre- and post-disaster relief funding on relief system's performance, specifically on response time and the proportion of demand satisfied. Finally, we discuss the managerial implications of the proposed model." 
}, { "instance_id": "R30950xR30807", "comparison_id": "R30950", "paper_id": "R30807", "text": "Implementation of Equity in Resource Allocation for Regional Earthquake Risk Mitigation Using Two-Stage Stochastic Programming This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decisionmakers (government and authorities) to structure budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two-stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in model formulation. These constraints limit inequity to the user-defined level to achieve the equity-efficiency tradeoff in the decision-making process. To present practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. 
The risk-return tradeoff, equity-reconstruction expenditures tradeoff, and variation of per-capita expected earthquake loss in different income classes are also presented." }, { "instance_id": "R30950xR30798", "comparison_id": "R30950", "paper_id": "R30798", "text": "A humanitarian logistics model for disaster relief operation considering network failure and standard relief time: A case study on San Francisco district We propose a multi-depot location-routing model considering network failure, multiple uses of vehicles, and standard relief time. The model determines the locations of local depots and routing for last mile distribution after an earthquake. The model is extended to a two-stage stochastic program with random travel time to ascertain the locations of distribution centers. Small instances have been solved to optimality in GAMS. A variable neighborhood search algorithm is devised to solve the deterministic model. Computational results of our case study show that the unsatisfied demands can be significantly reduced at the cost of higher number of local depots and vehicles." }, { "instance_id": "R31077xR30790", "comparison_id": "R31077", "paper_id": "R30790", "text": "Prepositioning emergency supplies to support disaster relief: a stochastic programming approach ABSTRACT This paper studies the strategic problem of designing emergency supply networks to support disaster relief over a planning horizon. The problem addresses decisions on the location and number of distribution centres needed, their capacity, and the quantity of each emergency item to keep in stock. It builds on a case study inspired by real-world data obtained from the North Carolina Emergency Management Division (NCEM) and the Federal Emergency Management Agency (FEMA). To tackle the problem, a scenario-based approach is proposed involving three phases: disaster scenario generation, design generation and design evaluation. 
Disasters are modelled as stochastic processes and a Monte Carlo procedure is derived to generate plausible catastrophic scenarios. Based on this detailed representation of disasters, a multi-phase modelling framework is proposed to design the emergency supply network. The two-stage stochastic programming model proposed is solved using a sample average approximation method. This scenario-based solution approach is applied to the case study to generate plausible scenarios, to produce alternative designs and to evaluate them on a set of performance measures in order to select the best design." }, { "instance_id": "R31077xR30776", "comparison_id": "R31077", "paper_id": "R30776", "text": "Shelter location and transportation planning under hurricane conditions This paper develops a scenario-based bilevel programming model to optimize the selection of shelter locations with explicit consideration of a range of possible hurricane events and the evacuation needs under each of those events. A realistic case study for the state of North Carolina is presented. Through the case study, we demonstrate (i) the criticality of considering multiple hurricane scenarios in the location of shelters, and; (ii) the importance of considering the transportation demands of all evacuees when selecting locations for public shelters." }, { "instance_id": "R31077xR30753", "comparison_id": "R31077", "paper_id": "R30753", "text": "The evacuation optimal network design problem: model formulation and comparisons Abstract The goal of this paper is twofold. First, we present a stochastic programming-based model that provides optimal design solutions for transportation networks in light of possible emergency evacuations. Second, as traffic congestion is a growing problem in metropolitan areas around the world, decision makers might not be willing to design transportation networks solely for evacuation purposes since daily traffic patterns differ tremendously from traffic observed during evacuations. 
This is especially true when potential disaster locations are limited in number and confined to specific regions (e.g. coastal regions might be more prone to flooding). However, as extreme events such as excessive rainfall become more prevalent everywhere, it is less obvious that the design of transportation networks for evacuation planning and congestion reduction is mutually exclusive. That is, capacity expansion decisions to reduce congestion might also be reasonable from an evacuation planning point of view. Conversely, expansion decisions for evacuation planning might turn out to be effective for congestion relief. To date, no numerical evidence has been presented in the literature to support or disprove these conjectures. Preliminary numerical evidence is provided in this paper." }, { "instance_id": "R31077xR30765", "comparison_id": "R31077", "paper_id": "R30765", "text": "Identification of optimal strategies for improving eco-resilience to floods in ecologically vulnerable regions of a wetland In this study, a mixed integer fuzzy interval-stochastic programming model was developed for supporting the improvement of eco-resilience to floods in wetlands. This method allows uncertainties that are associated with eco-resilience improvement and can be presented as both probability distributions and interval values to be incorporated within a general modeling framework. Also, capacity-expansion plans of eco-resilience can be addressed through introducing binary variables. Moreover, penalties due to ecological damages which are associated with the violation of predefined targets can be effectively incorporated within the modeling and decision process. Thus, complexities associated with flood resistance and eco-resilience planning in wetlands can be systematically reflected, highly enhancing robustness of the modeling process. The developed method was then applied to a case of eco-resilience enhancement planning in three ecologically vulnerable regions of a wetland. 
Interval solutions under different river flow levels and different ecological damages were generated. They could be used for generating decision alternatives and thus help decision makers identify desired eco-resilience schemes to resist floods without causing too much damages. The application indicates that the model is helpful for supporting: (a) adjustment or justification of allocation patterns of ecological flood-resisting capacities, (b) formulation of local policies regarding eco-resilience enhancement options and policy interventions, and (c) analysis of interactions among multiple administrative targets within a wetland." }, { "instance_id": "R31077xR30796", "comparison_id": "R31077", "paper_id": "R30796", "text": "Pre-positioning disaster response facilities at safe locations: An evaluation of deterministic and stochastic modeling approaches Choosing the locations of disaster response facilities for the storage of emergency supplies is critical to the quality of service provided post-occurrence of a large scale emergency like an earthquake. In this paper, we provide two location models that explicitly take into consideration the impact a disaster can have on the disaster response facilities and the population centers in surrounding areas. The first model is a deterministic model that incorporates distance-dependent damages to disaster response facilities and population centers. The second model is a stochastic programming model that extends the first by directly considering the damage intensity as a random variable. For this second model we also develop a novel solution method based on Benders Decomposition that is generalizable to other 2-stage stochastic programming problems. We provide a detailed case study using large-scale emergencies caused by an earthquake in California to demonstrate the performance of these new models. 
We find that the locations suggested by the stochastic model in this paper significantly reduce the expected cost of providing supplies when one considers the damage a disaster causes to the disaster response facilities and areas near it. We also demonstrate that the cost advantage of the stochastic model over the deterministic model is especially large when only a few facilities can be placed. Thus, the value of the stochastic model is particularly great in realistic, budget-constrained situations." }, { "instance_id": "R31077xR30798", "comparison_id": "R31077", "paper_id": "R30798", "text": "A humanitarian logistics model for disaster relief operation considering network failure and standard relief time: A case study on San Francisco district We propose a multi-depot location-routing model considering network failure, multiple uses of vehicles, and standard relief time. The model determines the locations of local depots and routing for last mile distribution after an earthquake. The model is extended to a two-stage stochastic program with random travel time to ascertain the locations of distribution centers. Small instances have been solved to optimality in GAMS. A variable neighborhood search algorithm is devised to solve the deterministic model. Computational results of our case study show that the unsatisfied demands can be significantly reduced at the cost of higher number of local depots and vehicles." }, { "instance_id": "R31077xR30755", "comparison_id": "R31077", "paper_id": "R30755", "text": "Solving Stochastic Transportation Network Protection Problems Using the Progressive Hedging-based Method This research focuses on pre-disaster transportation network protection against uncertain future disasters. Given limited resources, the goal of the central planner is to choose the best set of network components to protect while allowing the network users to follow their own best-perceived routes in any resultant network configuration. 
This problem is formulated as a two-stage stochastic programming problem with equilibrium constraints, where the objective is to minimize the total expected physical and social losses caused by potential disasters. Developing efficient solution methods for such a problem can be challenging. In this work, we will demonstrate the applicability of progressive hedging-based method for solving large scale stochastic network optimization problems with equilibrium constraints. In the proposed solution procedure, we solve each modified scenario sub-problem as a mathematical program with complementary constraints and then gradually aggregate scenario-dependent solutions to the final optimal solution." }, { "instance_id": "R31077xR30783", "comparison_id": "R31077", "paper_id": "R30783", "text": "The bi-objective stochastic covering tour problem We formulate a bi-objective covering tour model with stochastic demand where the two objectives are given by (i) cost (opening cost for distribution centers plus routing cost for a fleet of vehicles) and (ii) expected uncovered demand. In the model, it is assumed that depending on the distance, a certain percentage of clients go from their homes to the nearest distribution center. An application in humanitarian logistics is envisaged. For the computational solution of the resulting bi-objective two-stage stochastic program with recourse, a branch-and-cut technique, applied to a sample-average version of the problem obtained from a fixed random sample of demand vectors, is used within an epsilon-constraint algorithm. Computational results on real-world data for rural communities in Senegal show the viability of the approach." 
}, { "instance_id": "R31077xR30747", "comparison_id": "R31077", "paper_id": "R30747", "text": "Decomposition algorithms for the design of a nonsimultaneous capacitated evacuation tree network In this article, we examine the design of an evacuation tree, in which evacuation is subject to capacity restrictions on arcs. The cost of evacuating people in the network is determined by the sum of penalties incurred on arcs on which they travel, where penalties are determined according to a nondecreasing function of time. Given a discrete set of disaster scenarios affecting network population, arc capacities, transit times, and penalty functions, we seek to establish an optimal a priori evacuation tree that minimizes the expected evacuation penalty. The solution strategy is based on Benders decomposition, in which the master problem is a mixed-integer program and each subproblem is a time-expanded network flow problem. We provide efficient methods for obtaining primal and dual subproblem solutions, and analyze techniques for improving the strength of the master problem formulation, thus reducing the number of master problem solutions required for the algorithm's convergence. We provide computational results to compare the efficiency of our methods on a set of randomly generated test instances. \u00a9 2008 Wiley Periodicals, Inc. NETWORKS, 2009" }, { "instance_id": "R31077xR30794", "comparison_id": "R31077", "paper_id": "R30794", "text": "A two-stage mixed-integer fuzzy programming with interval-valued membership functions approach for flood-diversion planning Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. 
TMFP-IMF integrates the fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning for demonstrating its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions and attitudes towards the objective-function value and constraints." }, { "instance_id": "R31077xR30802", "comparison_id": "R31077", "paper_id": "R30802", "text": "A scenario planning approach for propositioning rescue centers for urban waterlog disasters A system specification for urban waterlog disasters is developed.A two-stage stochastic programming model is formulated.The economic cost and loss, and environmental and casualty risks are considered.The urban waterlog disasters in Pudong District of Shanghai, China is examined. An urban waterlog disaster can produce severe results, such as residents property loss, environmental damages and pollution, and even casualties. This paper presents a system specification for urban waterlog disasters according to the analysis of urban waterlog disaster risks. Then, a two-stage stochastic mixed-integer programming model is formulated. The model minimizes the total logistics cost, and risk-induced penalties. Moreover, a deterministic counterpart of the stochastic model is proposed to study the expected value of perfect information. 
The multi-attribute utility theory is used to build assessment functions that assess the utility of the rescue system and the degree contributed to disaster relief for each rescue center. Finally, a real example of rescue logistics is examined for the urban waterlog disasters in Pudong District of Shanghai, China. Using the proposed model, two main results can be obtained. First, the expected value of perfect information experiment reveals that an additional 45,005 logistics cost and an additional 2417 risk-induced penalties can be incurred due to the presence of uncertainty. Second, as the weight of risk-induced penalty increases from 0.1 to 0.9, the logistics cost is increased by 41.21%, which thus contributes to a decrease of risk-induced penalty by 97.44%. Some managerial implications are discussed based on the numerical studies." }, { "instance_id": "R31077xR30743", "comparison_id": "R31077", "paper_id": "R30743", "text": "A scenario planning approach for the flood emergency logistics preparation problem under uncertainty This paper aims to develop a decision-making tool that can be used by government agencies in planning for flood emergency logistics. In this article, the flood emergency logistics problem with uncertainty is formulated as two stochastic programming models that allow for the determination of a rescue resource distribution system for urban flood disasters. The decision variables include the structure of rescue organizations, locations of rescue resource storehouses, allocations of rescue resources under capacity restrictions, and distributions of rescue resources. By applying the data processing and network analysis functions of the geographic information system, flooding potential maps can estimate the possible locations of rescue demand points and the required amount of rescue equipment. The proposed models are solved using a sample average approximation scheme. 
Finally, a real example of planning for flood emergency logistics is presented to highlight the significance of the proposed model as well as the efficacy of the proposed solution strategy." }, { "instance_id": "R31077xR30761", "comparison_id": "R31077", "paper_id": "R30761", "text": "Pre-positioning planning for emergency response with service quality constraints Pre-positioning of emergency supplies is a means for increasing preparedness for natural disasters. Key decisions in pre-positioning are the locations and capacities of emergency distribution centers, as well as allocations of inventories of multiple relief commodities to those distribution locations. The location and allocation decisions are complicated by uncertainty about if, or where, a natural disaster will occur. An earlier paper (Rawls and Turnquist 44:521\u2013534, 2010) describes a stochastic mixed integer programming formulation to minimize expected costs (including penalties for unmet demand) in such a situation. This paper extends that model with additional service quality constraints. The added constraints ensure that the probability of meeting all demand is at least \u03b1, and that the demand is met with supplies whose average shipment distance is no greater than a specific limit. A case study using hurricane threats is used to illustrate the model and how the additional constraints modify the pre-positioning strategy." }, { "instance_id": "R31077xR30757", "comparison_id": "R31077", "paper_id": "R30757", "text": "Stochastic optimization of medical supply location and distribution in disaster management We propose a stochastic optimization approach for the storage and distribution problem of medical supplies to be used for disaster management under a wide variety of possible disaster types and magnitudes. In preparation for disasters, we develop a stochastic programming model to select the storage locations of medical supplies and required inventory levels for each type of medical supply. 
Our model captures the disaster-specific information and possible effects of disasters through the use of disaster scenarios. Thus, we balance preparedness and risk despite the uncertainties of disaster events. A benefit of this approach is that the subproblem can be used to suggest loading and routing of vehicles to transport medical supplies for disaster response, given the evaluation of up-to-date disaster field information. We present a case study of our stochastic optimization approach for disaster planning for earthquake scenarios in the Seattle area. Our modeling approach can aid interdisciplinary agencies to both prepare and respond to disasters by considering the risk in an efficient manner." }, { "instance_id": "R31077xR30811", "comparison_id": "R31077", "paper_id": "R30811", "text": "Two-Stage Multiobjective Optimization for Emergency Supplies Allocation Problem under Integrated Uncertainty This paper proposes a new two-stage optimization method for the emergency supplies allocation problem with multisupplier, multiaffected area, multirelief, and multivehicle. The triplet of supply, demand, and the availability of path is unknown prior to the extraordinary event and is described by fuzzy random variables. Considering the fairness, timeliness, and economical efficiency, a multiobjective expected value model is built for facility location, vehicle routing, and supply allocation decisions. The goals of the proposed model are to minimize the proportion of unsatisfied demand and the response time of emergency reliefs and the total cost of the whole process. When the demand and the availability of path are discrete, the expected values in the objective functions are converted into their equivalent forms. When the supply amount is continuous, the equilibrium chance in the constraint is transformed to its equivalent one. To overcome the computational difficulty caused by multiple objectives, a goal programming model is formulated to obtain a compromise solution. 
Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method." }, { "instance_id": "R31077xR30741", "comparison_id": "R31077", "paper_id": "R30741", "text": "A two-stage stochastic programming framework for transportation planning in disaster response This study proposes a two-stage stochastic programming model to plan the transportation of vital first-aid commodities to disaster-affected areas during emergency response. A multi-commodity, multi-modal network flow formulation is developed to describe the flow of material over an urban transportation network. Since it is difficult to predict the timing and magnitude of any disaster and its impact on the urban system, resource mobilization is treated in a random manner, and the resource requirements are represented as random variables. Furthermore, uncertainty arising from the vulnerability of the transportation system leads to random arc capacities and supply amounts. Randomness is represented by a finite sample of scenarios for capacity, supply and demand triplet. The two stages are defined with respect to information asymmetry, which discloses uncertainty during the progress of the response. The approach is validated by quantifying the expected value of perfect and stochastic information in problem instances generated out of actual data." }, { "instance_id": "R31077xR30813", "comparison_id": "R31077", "paper_id": "R30813", "text": "An approximation approach to a trade-off among efficiency, efficacy, and balance for relief pre-positioning in disaster management This work develops a multi-objective, two-stage stochastic, non-linear, and mixed-integer mathematical model for relief pre-positioning in disaster management. Improved imbalance and efficacy measures are incorporated into the model based on a new utility level of the delivered relief commodities. 
This model considers the usage possibility of a set of alternative routes for each of the applied transportation modes and consequently improves the network reliability. An integrated separable programming-augmented e-constraint approach is proposed to address the problem. The best Pareto-optimal solution is selected by PROMETHEE-II. The theoretical improvements of the presented approach are validated by experiments and a real case study." }, { "instance_id": "R31077xR30745", "comparison_id": "R31077", "paper_id": "R30745", "text": "Facility location in humanitarian relief In this study, we consider facility location decisions for a humanitarian relief chain responding to quick-onset disasters. In particular, we develop a model that determines the number and locations of distribution centres in a relief network and the amount of relief supplies to be stocked at each distribution centre to meet the needs of people affected by the disasters. Our model, which is a variant of the maximal covering location model, integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions. We conduct computational experiments to illustrate how the proposed model works on a realistic problem. Results show the effects of pre- and post-disaster relief funding on relief system's performance, specifically on response time and the proportion of demand satisfied. Finally, we discuss the managerial implications of the proposed model." 
}, { "instance_id": "R31077xR30809", "comparison_id": "R31077", "paper_id": "R30809", "text": "Stochastic network models for logistics planning in disaster relief Emergency logistics in disasters is fraught with planning and operational challenges, such as uncertainty about the exact nature and magnitude of the disaster, a lack of reliable information about the location and needs of victims, possible random supplies and donations, precarious transport links, scarcity of resources, and so on. This paper develops a new two-stage stochastic network flow model to help decide how to rapidly supply humanitarian aid to victims of a disaster within this context. The model takes into account practical characteristics that have been neglected by the literature so far, such as budget allocation, fleet sizing of multiple types of vehicles, procurement, and varying lead times over a dynamic multiperiod horizon. Attempting to improve demand fulfillment policy, we present some extensions of the model via state-of-the-art risk measures, such as semideviation and conditional value-at-risk. A simple two-phase heuristic to solve the problem within a reasonable amount of computing time is also suggested. Numerical tests based on the floods and landslides in Rio de Janeiro state, Brazil, show that the model can help plan and organise relief to provide good service levels in most scenarios, and how this depends on the type of disaster and resources. Moreover, we demonstrate that our heuristic performs well for real and random instances." }, { "instance_id": "R31077xR30879", "comparison_id": "R31077", "paper_id": "R30879", "text": "Pre-positioning and dynamic delivery planning for short-term response following a natural disaster Natural disasters often result in large numbers of evacuees being temporarily housed in schools, churches, and other shelters. The sudden influx of people seeking shelter creates demands for emergency supplies, which must be delivered quickly. 
A dynamic allocation model is constructed to optimize pre-event planning for meeting short-term demands (over approximately the first 72h) for emergency supplies under uncertainty about what demands will have to be met and where those demands will occur. The model also includes requirements for reliability in the solutions \u2013 i.e., the solution must ensure that all demands are met in scenarios comprising at least 100\u03b1% of all outcomes. A case study application using shelter locations in North Carolina and a set of hurricane threat scenarios is used to illustrate the model and how it supports an emergency relief strategy." }, { "instance_id": "R31077xR30877", "comparison_id": "R31077", "paper_id": "R30877", "text": "Risk-averse two-stage stochastic programming with an application to disaster management Traditional two-stage stochastic programming is risk-neutral; that is, it considers the expectation as the preference criterion while comparing the random variables (e.g., total cost) to identify the best decisions. However, in the presence of variability risk measures should be incorporated into decision making problems in order to model its effects. In this study, we consider a risk-averse two-stage stochastic programming model, where we specify the conditional-value-at-risk (CVaR) as the risk measure. We construct two decomposition algorithms based on the generic Benders-decomposition approach to solve such problems. Both single-cut and multicut versions of the proposed decomposition algorithms are presented. We adapt the concepts of the value of perfect information (VPI) and the value of the stochastic solution (VSS) for the proposed risk-averse two-stage stochastic programming framework and define two stochastic measures on the VPI and VSS. We apply the proposed model to disaster management, which is one of the research fields that can significantly benefit from risk-averse two-stage stochastic programming models. 
In particular, we consider the problem of determining the response facility locations and the inventory levels of the relief supplies at each facility in the presence of uncertainty in demand and the damage level of the disaster network. We present numerical results to discuss how incorporating a risk measure affects the optimal solutions and demonstrate the computational effectiveness of the proposed methods." }, { "instance_id": "R31077xR30763", "comparison_id": "R31077", "paper_id": "R30763", "text": "Stochastic Optimization for Natural Disaster Asset Prepositioning A key strategic issue in pre-disaster planning for humanitarian logistics is the pre-establishment of adequate capacity and resources that enable efficient relief operations. This paper develops a two-stage stochastic optimization model to guide the allocation of budget to acquire and position relief assets, decisions that typically need to be made well in advance before a disaster strikes. The optimization focuses on minimizing the expected number of casualties, so our model includes first-stage decisions to represent the expansion of resources such as warehouses, medical facilities with personnel, ramp spaces, and shelters. Second-stage decisions concern the logistics of the problem, where allocated resources and contracted transportation assets are deployed to rescue critical population (in need of emergency evacuation), deliver required commodities to stay-back population, and transport the transfer population displaced by the disaster. Because of the uncertainty of the event's location and severity, these and other parameters are represented as scenarios. Computational results on notional test cases provide guidance on budget allocation and prove the potential benefit of using stochastic optimization." 
}, { "instance_id": "R31077xR30774", "comparison_id": "R31077", "paper_id": "R30774", "text": "A two-echelon stochastic facility location model for humanitarian relief logistics We develop a two-stage stochastic programming model for a humanitarian relief logistics problem where decisions are made for pre- and post-disaster rescue centers, the amount of relief items to be stocked at the pre-disaster rescue centers, the amount of relief item flows at each echelon, and the amount of relief item shortage. The objective is to minimize the total cost of facility location, inventory holding, transportation and shortage. The deterministic equivalent of the model is formulated as a mixed-integer linear programming model and solved by a heuristic method based on Lagrangean relaxation. Results on randomly generated test instances show that the proposed solution method exhibits good performance up to 25 scenarios. We also validate our model by calculating the value of the stochastic solution and the expected value of perfect information." }, { "instance_id": "R31077xR30751", "comparison_id": "R31077", "paper_id": "R30751", "text": "A two-stage stochastic programming model for transportation network protection Network protection against natural and human-caused hazards has become a topical research theme in engineering and social sciences. This paper focuses on the problem of allocating limited retrofit resources over multiple highway bridges to improve the resilience and robustness of the entire transportation system in question. The main modeling challenges in network retrofit problems are to capture the interdependencies among individual transportation facilities and to cope with the extremely high uncertainty in the decision environment. In this paper, we model the network retrofit problem as a two-stage stochastic programming problem that optimizes a mean-risk objective of the system loss. 
This formulation hedges well against uncertainty, but also imposes computational challenges due to the involvement of integer decision variables and the increased dimension of the problem. An efficient algorithm is developed, by extending the well-known L-shaped method using generalized Benders decomposition, to efficiently handle the binary integer variables in the first stage and the nonlinear recourse in the second stage of the model formulation. The proposed modeling and solution methods are general and can be applied to other network design problems as well." }, { "instance_id": "R31077xR30778", "comparison_id": "R31077", "paper_id": "R30778", "text": "Pre-positioning hurricane supplies in a commercial supply chain Inventory control for retailers situated in the projected path of an observed hurricane or tropical storm can be challenging due to the inherent uncertainties associated with storm forecasts and demand requirements. In many cases, retailers react to pre- and post-storm demand surge by ordering emergency supplies from manufacturers posthumously. This wait-and-see approach often leads to stockout of the critical supplies and equipment used to support post-storm disaster relief operations, which compromises the performance of emergency response efforts and proliferates lost sales in the commercial supply chain. This paper proposes a proactive approach to managing disaster relief inventories from the perspective of a single manufacturing facility, where emergency supplies are pre-positioned throughout a network of geographically dispersed retailers in anticipation of an observed storm's landfall. Once the requirements of a specific disaster scenario are observed, supplies are then transshipped among retailers, with possible direct shipments from the manufacturer, to satisfy any unfulfilled demands. 
The manufacturer's pre-positioning problem is formulated as a two-stage stochastic programming model which is illustrated via a case study composed of real-world hurricane scenarios. Our findings indicate that the expected performance of the proposed pre-positioning strategy over a variety of hurricane scenarios is more effective than the wait-and-see approach currently used in practice." }, { "instance_id": "R31160xR30790", "comparison_id": "R31160", "paper_id": "R30790", "text": "Prepositioning emergency supplies to support disaster relief: a stochastic programming approach This paper studies the strategic problem of designing emergency supply networks to support disaster relief over a planning horizon. The problem addresses decisions on the location and number of distribution centres needed, their capacity, and the quantity of each emergency item to keep in stock. It builds on a case study inspired by real-world data obtained from the North Carolina Emergency Management Division (NCEM) and the Federal Emergency Management Agency (FEMA). To tackle the problem, a scenario-based approach is proposed involving three phases: disaster scenario generation, design generation and design evaluation. Disasters are modelled as stochastic processes and a Monte Carlo procedure is derived to generate plausible catastrophic scenarios. Based on this detailed representation of disasters, a multi-phase modelling framework is proposed to design the emergency supply network. The two-stage stochastic programming model proposed is solved using a sample average approximation method. This scenario-based solution approach is applied to the case study to generate plausible scenarios, to produce alternative designs and to evaluate them on a set of performance measures in order to select the best design." 
}, { "instance_id": "R31160xR30743", "comparison_id": "R31160", "paper_id": "R30743", "text": "A scenario planning approach for the flood emergency logistics preparation problem under uncertainty This paper aims to develop a decision-making tool that can be used by government agencies in planning for flood emergency logistics. In this article, the flood emergency logistics problem with uncertainty is formulated as two stochastic programming models that allow for the determination of a rescue resource distribution system for urban flood disasters. The decision variables include the structure of rescue organizations, locations of rescue resource storehouses, allocations of rescue resources under capacity restrictions, and distributions of rescue resources. By applying the data processing and network analysis functions of the geographic information system, flooding potential maps can estimate the possible locations of rescue demand points and the required amount of rescue equipment. The proposed models are solved using a sample average approximation scheme. Finally, a real example of planning for flood emergency logistics is presented to highlight the significance of the proposed model as well as the efficacy of the proposed solution strategy." }, { "instance_id": "R31160xR30757", "comparison_id": "R31160", "paper_id": "R30757", "text": "Stochastic optimization of medical supply location and distribution in disaster management We propose a stochastic optimization approach for the storage and distribution problem of medical supplies to be used for disaster management under a wide variety of possible disaster types and magnitudes. In preparation for disasters, we develop a stochastic programming model to select the storage locations of medical supplies and required inventory levels for each type of medical supply. Our model captures the disaster specific information and possible effects of disasters through the use of disaster scenarios. 
Thus, we balance the preparedness and risk despite the uncertainties of disaster events. A benefit of this approach is that the subproblem can be used to suggest loading and routing of vehicles to transport medical supplies for disaster response, given the evaluation of up-to-date disaster field information. We present a case study of our stochastic optimization approach for disaster planning for earthquake scenarios in the Seattle area. Our modeling approach can aid interdisciplinary agencies to both prepare and respond to disasters by considering the risk in an efficient manner." }, { "instance_id": "R31160xR30792", "comparison_id": "R31160", "paper_id": "R30792", "text": "A dual two-stage stochastic model for flood management with inexact-integer analysis under multiple uncertainties This study introduces a hybrid optimization approach for flood management under multiple uncertainties. An inexact two-stage integer programming (ITIP) model and its dual formation are developed by integrating the concepts of mixed-integer and interval-parameter programming techniques into a general framework of two-stage stochastic programming. The proposed approach provides a linkage to pre-defined management policies, deals with capacity-expansion planning issues, and reflects various uncertainties expressed as probability distributions and discrete intervals for a flood management system. Penalties are imposed when the policies are violated. The marginal costs are determined based on dual formulation of the ITIP model, and their effects on the optimal solutions are investigated. The developed model is applied to a case study of flood management. The solutions of binary variables represent the decisions of flood-diversion\u2013capacity expansion within a multi-region, multi-flow-level, and multi-option context. The solutions of continuous variables are related to decisions of flood diversion toward different regions. 
The solutions of dual variables indicate the decisions of marginal costs associated with the resources of regions\u2019 capacity, water availability, and allowable diversions. The results show that the proposed approach could obtain reliable solutions and adequately support decision making in flood management." }, { "instance_id": "R31160xR30807", "comparison_id": "R31160", "paper_id": "R30807", "text": "Implementation of Equity in Resource Allocation for Regional Earthquake Risk Mitigation Using Two-Stage Stochastic Programming This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decisionmakers (government and authorities) to structure budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two-stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in model formulation. These constraints limit inequity to the user-defined level to achieve the equity-efficiency tradeoff in the decision-making process. To present practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. 
These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. The risk-return tradeoff, equity-reconstruction expenditures tradeoff, and variation of per-capita expected earthquake loss in different income classes are also presented." }, { "instance_id": "R31160xR30788", "comparison_id": "R31160", "paper_id": "R30788", "text": "Inventory planning and coordination in disaster relief efforts This research proposes a stochastic programming model to determine how supplies should be positioned and distributed among a network of cooperative warehouses. The model incorporates constraints that enforce equity in service while also considering traffic congestion resulting from possible evacuation behavior and time constraints for providing effective response. We make use of short-term information (e.g., hurricane forecasts) to more effectively preposition supplies in preparation for their distribution at an operational level. Through an extensive computational study, we characterize the conditions under which prepositioning is beneficial, as well as discuss the relationship between inventory placement, capacity and coordination within the network." 
}, { "instance_id": "R31160xR30767", "comparison_id": "R31160", "paper_id": "R30767", "text": "A two\u2010stage procurement model for humanitarian relief supply chains Purpose \u2013 The purpose of this paper is to discuss and to help address the need for quantitative models to support and improve procurement in the context of humanitarian relief efforts.Design/methodology/approach \u2013 This research presents a two\u2010stage stochastic decision model with recourse for procurement in humanitarian relief supply chains, and compares its effectiveness on an illustrative example with respect to a standard solution approach.Findings \u2013 Results show the ability of the new model to capture and model both the procurement process and the uncertainty inherent in a disaster relief situation, in support of more efficient and effective procurement plans.Research limitations/implications \u2013 The research focus is on sudden onset disasters and it does not differentiate between local and international suppliers. A number of extensions of the base model could be implemented, however, so as to address the specific needs of a given organization and their procurement process.Practical implications \u2013 Despi..." }, { "instance_id": "R31160xR30785", "comparison_id": "R31160", "paper_id": "R30785", "text": "A Two-stage Stochastic Programming Model for Emergency Resources Storage Region Division Abstract In recent years the frequent emergencies have become one of the important factors which influenced our social development. Emergency resources storage takes an important role in emergency management research which decides whether the disaster relief process can be carried out smoothly. However the emergency resources storage construction is a complicated project. This paper aims to study the regional emergency resources storage. We analyse the necessity of regional emergency resources storage and the first step of regional emergency resources storage system is region division. 
A two-stage stochastic programming model is proposed to solve the region division problem. Finally, we present a case study to highlight the efficiency of the proposed solution strategy." }, { "instance_id": "R31160xR30813", "comparison_id": "R31160", "paper_id": "R30813", "text": "An approximation approach to a trade-off among efficiency, efficacy, and balance for relief pre-positioning in disaster management This work develops a multi-objective, two-stage stochastic, non-linear, and mixed-integer mathematical model for relief pre-positioning in disaster management. Improved imbalance and efficacy measures are incorporated into the model based on a new utility level of the delivered relief commodities. This model considers the usage possibility of a set of alternative routes for each of the applied transportation modes and consequently improves the network reliability. An integrated separable programming-augmented e-constraint approach is proposed to address the problem. The best Pareto-optimal solution is selected by PROMETHEE-II. The theoretical improvements of the presented approach are validated by experiments and a real case study." }, { "instance_id": "R31160xR30751", "comparison_id": "R31160", "paper_id": "R30751", "text": "A two-stage stochastic programming model for transportation network protection Network protection against natural and human-caused hazards has become a topical research theme in engineering and social sciences. This paper focuses on the problem of allocating limited retrofit resources over multiple highway bridges to improve the resilience and robustness of the entire transportation system in question. The main modeling challenges in network retrofit problems are to capture the interdependencies among individual transportation facilities and to cope with the extremely high uncertainty in the decision environment. 
In this paper, we model the network retrofit problem as a two-stage stochastic programming problem that optimizes a mean-risk objective of the system loss. This formulation hedges well against uncertainty, but also imposes computational challenges due to the involvement of integer decision variables and the increased dimension of the problem. An efficient algorithm is developed, by extending the well-known L-shaped method using generalized Benders decomposition, to efficiently handle the binary integer variables in the first stage and the nonlinear recourse in the second stage of the model formulation. The proposed modeling and solution methods are general and can be applied to other network design problems as well." }, { "instance_id": "R31160xR30809", "comparison_id": "R31160", "paper_id": "R30809", "text": "Stochastic network models for logistics planning in disaster relief Emergency logistics in disasters is fraught with planning and operational challenges, such as uncertainty about the exact nature and magnitude of the disaster, a lack of reliable information about the location and needs of victims, possible random supplies and donations, precarious transport links, scarcity of resources, and so on. This paper develops a new two-stage stochastic network flow model to help decide how to rapidly supply humanitarian aid to victims of a disaster within this context. The model takes into account practical characteristics that have been neglected by the literature so far, such as budget allocation, fleet sizing of multiple types of vehicles, procurement, and varying lead times over a dynamic multiperiod horizon. Attempting to improve demand fulfillment policy, we present some extensions of the model via state-of-the-art risk measures, such as semideviation and conditional value-at-risk. A simple two-phase heuristic to solve the problem within a reasonable amount of computing time is also suggested. 
Numerical tests based on the floods and landslides in Rio de Janeiro state, Brazil, show that the model can help plan and organise relief to provide good service levels in most scenarios, and how this depends on the type of disaster and resources. Moreover, we demonstrate that our heuristic performs well for real and random instances." }, { "instance_id": "R31160xR30765", "comparison_id": "R31160", "paper_id": "R30765", "text": "Identification of optimal strategies for improving eco-resilience to floods in ecologically vulnerable regions of a wetland In this study, a mixed integer fuzzy interval-stochastic programming model was developed for supporting the improvement of eco-resilience to floods in wetlands. This method allows uncertainties that are associated with eco-resilience improvement and can be presented as both probability distributions and interval values to be incorporated within a general modeling framework. Also, capacity-expansion plans of eco-resilience can be addressed by introducing binary variables. Moreover, penalties due to ecological damages which are associated with the violation of predefined targets can be effectively incorporated within the modeling and decision process. Thus, complexities associated with flood resistance and eco-resilience planning in wetlands can be systematically reflected, greatly enhancing the robustness of the modeling process. The developed method was then applied to a case of eco-resilience enhancement planning in three ecologically vulnerable regions of a wetland. Interval solutions under different river flow levels and different ecological damages were generated. They could be used for generating decision alternatives and thus help decision makers identify desired eco-resilience schemes to resist floods without causing too much damage. 
The application indicates that the model is helpful for supporting: (a) adjustment or justification of allocation patterns of ecological flood-resisting capacities, (b) formulation of local policies regarding eco-resilience enhancement options and policy interventions, and (c) analysis of interactions among multiple administrative targets within a wetland." }, { "instance_id": "R31160xR30759", "comparison_id": "R31160", "paper_id": "R30759", "text": "Pre-disaster investment decisions for strengthening a highway network We address a pre-disaster planning problem that seeks to strengthen a highway network whose links are subject to random failures due to a disaster. Each link may be either operational or non-functional after the disaster. The link failure probabilities are assumed to be known a priori, and investment decreases the likelihood of failure. The planning problem seeks connectivity for first responders between various origin-destination (O-D) pairs and hence focuses on uncapacitated road conditions. The decision-maker's goal is to select the links to invest in under a limited budget with the objective of maximizing the post-disaster connectivity and minimizing traversal costs between the origin and destination nodes. The problem is modeled as a two-stage stochastic program in which the investment decisions in the first stage alter the survival probabilities of the corresponding links. We restructure the objective function into a monotonic non-increasing multilinear function and show that using the first order terms of this function leads to a knapsack problem whose solution is a local optimum to the original problem. Numerical experiments on real-world data related to strengthening Istanbul's urban highway system against earthquake risk illustrate the tractability of the method and provide practical insights for decision-makers." 
}, { "instance_id": "R31160xR30798", "comparison_id": "R31160", "paper_id": "R30798", "text": "A humanitarian logistics model for disaster relief operation considering network failure and standard relief time: A case study on San Francisco district We propose a multi-depot location-routing model considering network failure, multiple uses of vehicles, and standard relief time. The model determines the locations of local depots and routing for last mile distribution after an earthquake. The model is extended to a two-stage stochastic program with random travel time to ascertain the locations of distribution centers. Small instances have been solved to optimality in GAMS. A variable neighborhood search algorithm is devised to solve the deterministic model. Computational results of our case study show that the unsatisfied demands can be significantly reduced at the cost of higher number of local depots and vehicles." }, { "instance_id": "R31160xR30753", "comparison_id": "R31160", "paper_id": "R30753", "text": "The evacuation optimal network design problem: model formulation and comparisons Abstract The goal of this paper is twofold. First, we present a stochastic programming-based model that provides optimal design solutions for transportation networks in light of possible emergency evacuations. Second, as traffic congestion is a growing problem in metropolitan areas around the world, decision makers might not be willing to design transportation networks solely for evacuation purposes since daily traffic patterns differ tremendously from traffic observed during evacuations. This is especially true when potential disaster locations are limited in number and confined to specific regions (e.g. coastal regions might be more prone to flooding). However, as extreme events such as excessive rainfall become more prevalent everywhere, it is less obvious that the design of transportation networks for evacuation planning and congestion reduction is mutually exclusive. 
That is, capacity expansion decisions to reduce congestion might also be reasonable from an evacuation planning point of view. Conversely, expansion decisions for evacuation planning might turn out to be effective for congestion relief. To date, no numerical evidence has been presented in the literature to support or disprove these conjectures. Preliminary numerical evidence is provided in this paper." }, { "instance_id": "R31160xR30851", "comparison_id": "R31160", "paper_id": "R30851", "text": "Pre-positioning of emergency supplies for disaster response Pre-positioning of emergency supplies is one mechanism of increasing preparedness for natural disasters. The goal of this research is to develop an emergency response planning tool that determines the location and quantities of various types of emergency supplies to be pre-positioned, under uncertainty about if, or where, a natural disaster will occur. The paper presents a two-stage stochastic mixed integer program (SMIP) that provides an emergency response pre-positioning strategy for hurricanes or other disaster threats. The SMIP is a robust model that considers uncertainty in demand for the stocked supplies as well as uncertainty regarding transportation network availability after an event. Due to the computational complexity of the problem, a heuristic algorithm referred to as the Lagrangian L-shaped method (LLSM) is developed to solve large-scale instances of the problem. A case study focused on hurricane threat in the Gulf Coast area of the US illustrates application of the model." }, { "instance_id": "R31160xR30745", "comparison_id": "R31160", "paper_id": "R30745", "text": "Facility location in humanitarian relief In this study, we consider facility location decisions for a humanitarian relief chain responding to quick-onset disasters. 
In particular, we develop a model that determines the number and locations of distribution centres in a relief network and the amount of relief supplies to be stocked at each distribution centre to meet the needs of people affected by the disasters. Our model, which is a variant of the maximal covering location model, integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions. We conduct computational experiments to illustrate how the proposed model works on a realistic problem. Results show the effects of pre- and post-disaster relief funding on relief system's performance, specifically on response time and the proportion of demand satisfied. Finally, we discuss the managerial implications of the proposed model." }, { "instance_id": "R31160xR30802", "comparison_id": "R31160", "paper_id": "R30802", "text": "A scenario planning approach for propositioning rescue centers for urban waterlog disasters A system specification for urban waterlog disasters is developed.A two-stage stochastic programming model is formulated.The economic cost and loss, and environmental and casualty risks are considered.The urban waterlog disasters in Pudong District of Shanghai, China is examined. An urban waterlog disaster can produce severe results, such as residents property loss, environmental damages and pollution, and even casualties. This paper presents a system specification for urban waterlog disasters according to the analysis of urban waterlog disaster risks. Then, a two-stage stochastic mixed-integer programming model is formulated. The model minimizes the total logistics cost, and risk-induced penalties. Moreover, a deterministic counterpart of the stochastic model is proposed to study the expected value of perfect information. 
The multi-attribute utility theory is used to build assessment functions that assess the utility of the rescue system and the degree contributed to disaster relief for each rescue center. Finally, a real example of rescue logistics is examined for the urban waterlog disasters in Pudong District of Shanghai, China. Using the proposed model, two main results can be obtained. First, the expected value of perfect information experiment reveals that an additional 45,005 logistics cost and an additional 2417 risk-induced penalties can be incurred due to the presence of uncertainty. Second, as the weight of risk-induced penalty increases from 0.1 to 0.9, the logistics cost is increased by 41.21%, which thus contributes to a decrease of risk-induced penalty by 97.44%. Some managerial implications are discussed based on the numerical studies." }, { "instance_id": "R31160xR30783", "comparison_id": "R31160", "paper_id": "R30783", "text": "The bi-objective stochastic covering tour problem We formulate a bi-objective covering tour model with stochastic demand where the two objectives are given by (i) cost (opening cost for distribution centers plus routing cost for a fleet of vehicles) and (ii) expected uncovered demand. In the model, it is assumed that depending on the distance, a certain percentage of clients go from their homes to the nearest distribution center. An application in humanitarian logistics is envisaged. For the computational solution of the resulting bi-objective two-stage stochastic program with recourse, a branch-and-cut technique, applied to a sample-average version of the problem obtained from a fixed random sample of demand vectors, is used within an epsilon-constraint algorithm. Computational results on real-world data for rural communities in Senegal show the viability of the approach." 
}, { "instance_id": "R31160xR30811", "comparison_id": "R31160", "paper_id": "R30811", "text": "Two-Stage Multiobjective Optimization for Emergency Supplies Allocation Problem under Integrated Uncertainty This paper proposes a new two-stage optimization method for emergency supplies allocation problem with multisupplier, multiaffected area, multirelief, and multivehicle. The triplet of supply, demand, and the availability of path is unknown prior to the extraordinary event and is descriptive with fuzzy random variable. Considering the fairness, timeliness, and economical efficiency, a multiobjective expected value model is built for facility location, vehicle routing, and supply allocation decisions. The goals of proposed model aim to minimize the proportion of demand nonsatisfied and response time of emergency reliefs and the total cost of the whole process. When the demand and the availability of path are discrete, the expected values in the objective functions are converted into their equivalent forms. When the supply amount is continuous, the equilibrium chance in the constraint is transformed to its equivalent one. To overcome the computational difficulty caused by multiple objectives, a goal programming model is formulated to obtain a compromise solution. Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method." }, { "instance_id": "R31160xR30763", "comparison_id": "R31160", "paper_id": "R30763", "text": "Stochastic Optimization for Natural Disaster Asset Prepositioning A key strategic issue in pre-disaster planning for humanitarian logistics is the pre-establishment of adequate capacity and resources that enable efficient relief operations. This paper develops a two-stage stochastic optimization model to guide the allocation of budget to acquire and position relief assets, decisions that typically need to be made well in advance before a disaster strikes. 
The optimization focuses on minimizing the expected number of casualties, so our model includes first-stage decisions to represent the expansion of resources such as warehouses, medical facilities with personnel, ramp spaces, and shelters. Second-stage decisions concern the logistics of the problem, where allocated resources and contracted transportation assets are deployed to rescue the critical population (in need of emergency evacuation), deliver required commodities to the stay-back population, and transport the transfer population displaced by the disaster. Because of the uncertainty of the event's location and severity, these and other parameters are represented as scenarios. Computational results on notional test cases provide guidance on budget allocation and prove the potential benefit of using stochastic optimization." }, { "instance_id": "R31160xR30755", "comparison_id": "R31160", "paper_id": "R30755", "text": "Solving Stochastic Transportation Network Protection Problems Using the Progressive Hedging-based Method This research focuses on pre-disaster transportation network protection against uncertain future disasters. Given limited resources, the goal of the central planner is to choose the best set of network components to protect while allowing the network users to follow their own best-perceived routes in any resultant network configuration. This problem is formulated as a two-stage stochastic programming problem with equilibrium constraints, where the objective is to minimize the total expected physical and social losses caused by potential disasters. Developing efficient solution methods for such a problem can be challenging. In this work, we will demonstrate the applicability of a progressive hedging-based method for solving large-scale stochastic network optimization problems with equilibrium constraints. 
In the proposed solution procedure, we solve each modified scenario sub-problem as a mathematical program with complementarity constraints and then gradually aggregate scenario-dependent solutions to the final optimal solution." }, { "instance_id": "R31160xR30778", "comparison_id": "R31160", "paper_id": "R30778", "text": "Pre-positioning hurricane supplies in a commercial supply chain Inventory control for retailers situated in the projected path of an observed hurricane or tropical storm can be challenging due to the inherent uncertainties associated with storm forecasts and demand requirements. In many cases, retailers react to pre- and post-storm demand surge by ordering emergency supplies from manufacturers after the fact. This wait-and-see approach often leads to stockouts of the critical supplies and equipment used to support post-storm disaster relief operations, which compromises the performance of emergency response efforts and proliferates lost sales in the commercial supply chain. This paper proposes a proactive approach to managing disaster relief inventories from the perspective of a single manufacturing facility, where emergency supplies are pre-positioned throughout a network of geographically dispersed retailers in anticipation of an observed storm's landfall. Once the requirements of a specific disaster scenario are observed, supplies are then transshipped among retailers, with possible direct shipments from the manufacturer, to satisfy any unfulfilled demands. The manufacturer's pre-positioning problem is formulated as a two-stage stochastic programming model which is illustrated via a case study comprising real-world hurricane scenarios. Our findings indicate that the expected performance of the proposed pre-positioning strategy over a variety of hurricane scenarios is more effective than the wait-and-see approach currently used in practice." 
}, { "instance_id": "R31160xR30805", "comparison_id": "R31160", "paper_id": "R30805", "text": "Bi-objective stochastic programming models for determining depot locations in disaster relief operations This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX." }, { "instance_id": "R31160xR30772", "comparison_id": "R31160", "paper_id": "R30772", "text": "A multi-objective robust stochastic programming model for disaster relief logistics under uncertainty Humanitarian relief logistics is one of the most important elements of a relief operation in disaster management. The present work develops a multi-objective robust stochastic programming approach for disaster relief logistics under uncertainty. In our approach, not only demands but also supplies and the cost of procurement and transportation are considered as the uncertain parameters. 
Furthermore, the model considers uncertainty for the locations where those demands might arise and the possibility that some of the pre-positioned supplies in the relief distribution center or supplier might be partially destroyed by the disaster. Our multi-objective model attempts to minimize the sum of the expected value and the variance of the total cost of the relief chain while penalizing the solution\u2019s infeasibility due to parameter uncertainty; at the same time the model aims to maximize the affected areas\u2019 satisfaction levels through minimizing the sum of the maximum shortages in the affected areas. Considering the global evaluation of two objectives, a compromise programming model is formulated and solved to obtain a non-dominated compromise solution. We present a case study of our robust stochastic optimization approach for disaster planning for earthquake scenarios in a region of Iran. Our findings show that the proposed model can help in making decisions on both facility location and resource allocation in cases of disaster relief efforts." }, { "instance_id": "R31160xR30747", "comparison_id": "R31160", "paper_id": "R30747", "text": "Decomposition algorithms for the design of a nonsimultaneous capacitated evacuation tree network In this article, we examine the design of an evacuation tree, in which evacuation is subject to capacity restrictions on arcs. The cost of evacuating people in the network is determined by the sum of penalties incurred on arcs on which they travel, where penalties are determined according to a nondecreasing function of time. Given a discrete set of disaster scenarios affecting network population, arc capacities, transit times, and penalty functions, we seek to establish an optimal a priori evacuation tree that minimizes the expected evacuation penalty. 
The solution strategy is based on Benders decomposition, in which the master problem is a mixed-integer program and each subproblem is a time-expanded network flow problem. We provide efficient methods for obtaining primal and dual subproblem solutions, and analyze techniques for improving the strength of the master problem formulation, thus reducing the number of master problem solutions required for the algorithm's convergence. We provide computational results to compare the efficiency of our methods on a set of randomly generated test instances. \u00a9 2008 Wiley Periodicals, Inc. NETWORKS, 2009" }, { "instance_id": "R31174xR30815", "comparison_id": "R31174", "paper_id": "R30815", "text": "Humanitarian logistics network design under mixed uncertainty In this paper, we address a two-echelon humanitarian logistics network design problem involving multiple central warehouses (CWs) and local distribution centers (LDCs) and develop a novel two-stage scenario-based possibilistic-stochastic programming (SBPSP) approach. The research is motivated by the urgent need for designing a relief network in Tehran in preparation for potential earthquakes to cope with the main logistical problems in pre- and post-disaster phases. During the first stage, the locations for CWs and LDCs are determined along with the prepositioned inventory levels for the relief supplies. In this stage, inherent uncertainties in both supply and demand data as well as the availability level of the transportation network's routes after an earthquake are taken into account. In the second stage, a relief distribution plan is developed based on various disaster scenarios aiming to minimize: total distribution time, the maximum weighted distribution time for the critical items, total cost of unused inventories and weighted shortage cost of unmet demands. A tailored differential evolution (DE) algorithm is developed to find good enough feasible solutions within a reasonable CPU time. 
Computational results using real data reveal promising performance of the proposed SBPSP model in comparison with the existing relief network in Tehran. The paper contributes to the literature on optimization based design of relief networks under mixed possibilistic-stochastic uncertainty and supports informed decision making by local authorities in increasing resilience of urban areas to natural disasters." }, { "instance_id": "R31174xR30774", "comparison_id": "R31174", "paper_id": "R30774", "text": "A two-echelon stochastic facility location model for humanitarian relief logistics We develop a two-stage stochastic programming model for a humanitarian relief logistics problem where decisions are made for pre- and post-disaster rescue centers, the amount of relief items to be stocked at the pre-disaster rescue centers, the amount of relief item flows at each echelon, and the amount of relief item shortage. The objective is to minimize the total cost of facility location, inventory holding, transportation and shortage. The deterministic equivalent of the model is formulated as a mixed-integer linear programming model and solved by a heuristic method based on Lagrangean relaxation. Results on randomly generated test instances show that the proposed solution method exhibits good performance up to 25 scenarios. We also validate our model by calculating the value of the stochastic solution and the expected value of perfect information." }, { "instance_id": "R31174xR30755", "comparison_id": "R31174", "paper_id": "R30755", "text": "Solving Stochastic Transportation Network Protection Problems Using the Progressive Hedging-based Method This research focuses on pre-disaster transportation network protection against uncertain future disasters. Given limited resources, the goal of the central planner is to choose the best set of network components to protect while allowing the network users to follow their own best-perceived routes in any resultant network configuration. 
This problem is formulated as a two-stage stochastic programming problem with equilibrium constraints, where the objective is to minimize the total expected physical and social losses caused by potential disasters. Developing efficient solution methods for such a problem can be challenging. In this work, we will demonstrate the applicability of a progressive hedging-based method for solving large-scale stochastic network optimization problems with equilibrium constraints. In the proposed solution procedure, we solve each modified scenario sub-problem as a mathematical program with complementarity constraints and then gradually aggregate scenario-dependent solutions to the final optimal solution." }, { "instance_id": "R31174xR30759", "comparison_id": "R31174", "paper_id": "R30759", "text": "Pre-disaster investment decisions for strengthening a highway network We address a pre-disaster planning problem that seeks to strengthen a highway network whose links are subject to random failures due to a disaster. Each link may be either operational or non-functional after the disaster. The link failure probabilities are assumed to be known a priori, and investment decreases the likelihood of failure. The planning problem seeks connectivity for first responders between various origin-destination (O-D) pairs and hence focuses on uncapacitated road conditions. The decision-maker's goal is to select the links to invest in under a limited budget with the objective of maximizing the post-disaster connectivity and minimizing traversal costs between the origin and destination nodes. The problem is modeled as a two-stage stochastic program in which the investment decisions in the first stage alter the survival probabilities of the corresponding links. We restructure the objective function into a monotonic non-increasing multilinear function and show that using the first-order terms of this function leads to a knapsack problem whose solution is a local optimum to the original problem. 
Numerical experiments on real-world data related to strengthening Istanbul's urban highway system against earthquake risk illustrate the tractability of the method and provide practical insights for decision-makers." }, { "instance_id": "R31174xR30809", "comparison_id": "R31174", "paper_id": "R30809", "text": "Stochastic network models for logistics planning in disaster relief Emergency logistics in disasters is fraught with planning and operational challenges, such as uncertainty about the exact nature and magnitude of the disaster, a lack of reliable information about the location and needs of victims, possible random supplies and donations, precarious transport links, scarcity of resources, and so on. This paper develops a new two-stage stochastic network flow model to help decide how to rapidly supply humanitarian aid to victims of a disaster within this context. The model takes into account practical characteristics that have been neglected by the literature so far, such as budget allocation, fleet sizing of multiple types of vehicles, procurement, and varying lead times over a dynamic multiperiod horizon. Attempting to improve demand fulfillment policy, we present some extensions of the model via state-of-art risk measures, such as semideviation and conditional value-at-risk. A simple two-phase heuristic to solve the problem within a reasonable amount of computing time is also suggested. Numerical tests based on the floods and landslides in Rio de Janeiro state, Brazil, show that the model can help plan and organise relief to provide good service levels in most scenarios, and how this depends on the type of disaster and resources. Moreover, we demonstrate that our heuristic performs well for real and random instances." 
}, { "instance_id": "R31174xR30798", "comparison_id": "R31174", "paper_id": "R30798", "text": "A humanitarian logistics model for disaster relief operation considering network failure and standard relief time: A case study on San Francisco district We propose a multi-depot location-routing model considering network failure, multiple uses of vehicles, and standard relief time. The model determines the locations of local depots and routing for last mile distribution after an earthquake. The model is extended to a two-stage stochastic program with random travel time to ascertain the locations of distribution centers. Small instances have been solved to optimality in GAMS. A variable neighborhood search algorithm is devised to solve the deterministic model. Computational results of our case study show that the unsatisfied demands can be significantly reduced at the cost of higher number of local depots and vehicles." }, { "instance_id": "R31174xR30761", "comparison_id": "R31174", "paper_id": "R30761", "text": "Pre-positioning planning for emergency response with service quality constraints Pre-positioning of emergency supplies is a means for increasing preparedness for natural disasters. Key decisions in pre-positioning are the locations and capacities of emergency distribution centers, as well as allocations of inventories of multiple relief commodities to those distribution locations. The location and allocation decisions are complicated by uncertainty about if, or where, a natural disaster will occur. An earlier paper (Rawls and Turnquist 44:521\u2013534, 2010) describes a stochastic mixed integer programming formulation to minimize expected costs (including penalties for unmet demand) in such a situation. This paper extends that model with additional service quality constraints. 
The added constraints ensure that the probability of meeting all demand is at least \u03b1, and that the demand is met with supplies whose average shipment distance is no greater than a specific limit. A case study using hurricane threats is used to illustrate the model and how the additional constraints modify the pre-positioning strategy." }, { "instance_id": "R31214xR31185", "comparison_id": "R31214", "paper_id": "R31185", "text": "Terrain-based genetic algorithm (TBGA): modeling parameter space as terrain The Terrain-Based Genetic Algorithm (TBGA) is a self-tuning version of the traditional Cellular Genetic Algorithm (CGA). In a TBGA, various combinations of parameter values appear in different physical locations of the population, forming a sort of terrain in which individual solutions evolve. We compare the performance of the TBGA against that of the CGA on a known suite of problems. Our results indicate that the TBGA performs better than the CGA on the test suite, with less parameter tuning, when the CGA is set to parameter values thought in prior studies to be good. While we had hoped that good solutions would cluster around the best parameter settings, this was not observed. However, we were able to use the TBGA to automatically determine better parameter settings for the CGA. The resulting CGA produced even better results than were achieved by the TBGA which found those parameter settings." }, { "instance_id": "R31214xR31198", "comparison_id": "R31214", "paper_id": "R31198", "text": "A patchwork model for evolutionary algorithms with structure and variable size populations The paper investigates a new PATCHWORK model for structured population in evolutionary search, where population size may vary. This model allows control of both population diversity and selective pressure, and its operators are local in scope. Moreover, the PATCHWORK model gives a significant flexibility for introducing many additional concepts, like behavioral rules for individuals. 
First experiments allowed us to observe some interesting patterns which emerged during the evolutionary process." }, { "instance_id": "R31214xR31208", "comparison_id": "R31214", "paper_id": "R31208", "text": "Cheating for problem solving: a genetic algorithm with social interactions We propose a variation of the standard genetic algorithm that incorporates social interaction between the individuals in the population. Our goal is to understand the evolutionary role of social systems and its possible application as a non-genetic new step in evolutionary algorithms. In biological populations, i.e. animals, even human beings and microorganisms, social interactions often affect the fitness of individuals. It is conceivable that the perturbation of the fitness via social interactions is an evolutionary strategy to avoid becoming trapped in a local optimum, thus avoiding a fast convergence of the population. We model the social interactions according to Game Theory. The population is, therefore, composed of cooperator and defector individuals whose interactions produce payoffs according to well-known game models (prisoner\u2019s dilemma, chicken game, and others). Our results on Knapsack problems show, for some game models, a significant performance improvement as compared to a standard genetic algorithm." }, { "instance_id": "R31214xR31191", "comparison_id": "R31214", "paper_id": "R31191", "text": "Multinational evolutionary algorithms Since practical problems are often very complex with a large number of objectives, it can be difficult or impossible to create an objective function expressing all the criteria of good solutions. Sometimes a simpler function can be used where local optima could be both valid and interesting. Because evolutionary algorithms are population-based, they have the best potential for finding more of the best solutions among the possible solutions. 
However, standard EAs often converge to one solution and therefore leave only this single option for a final human selection. So far, at least two methods, sharing and tagging, have been proposed to solve the problem. The paper presents a new method for finding more quality solutions, not only global optima but local ones as well. The method tries to adapt its search strategy to the problem by taking the topology of the fitness landscape into account. The idea is to use the topology to group the individuals into sub-populations, each covering a part of the fitness landscape." }, { "instance_id": "R31214xR31212", "comparison_id": "R31214", "paper_id": "R31212", "text": "MLGA: a multilevel cooperative genetic algorithm This paper incorporates the multilevel selection (MLS) theory into the genetic algorithm. Based on this theory, a Multilevel Cooperative Genetic Algorithm (MLGA) is presented. In MLGA, a species is subdivided into a set of populations, each population is subdivided into groups, and evolution occurs at two levels, the so-called individual and group levels. A fast population dynamics occurs at the individual level. At this level, selection occurs between individuals of the same group. The popular genetic operators such as mutation and crossover are applied within groups. A slow population dynamics occurs at the group level. At this level, selection occurs between groups of a population. A group-level operator, so-called colonization, is applied between groups in which a group is selected as extinct, and replaced by offspring of a colonist group. We used a set of well-known numerical functions to evaluate the performance of the proposed algorithm. The results showed that the MLGA is robust, and provides an efficient way for numerical function optimization." }, { "instance_id": "R31281xR31224", "comparison_id": "R31281", "paper_id": "R31224", "text": "Tracking the Middle-Income Trap: What is it, Who is in it, and Why? 
This paper provides a working definition of what the middle-income trap is. We start by defining four income groups of GDP per capita in 1990 PPP dollars: low-income below $2,000; lower-middle-income between $2,000 and $7,250; upper-middle-income between $7,250 and $11,750; and high-income above $11,750. We then classify 124 countries for which we have consistent data for 1950\u20132010. In 2010, there were 40 low-income countries in the world, 38 lower-middle-income, 14 upper-middle-income, and 32 high-income countries. Then we calculate the threshold number of years for a country to be in the middle-income trap: a country that becomes lower-middle-income (i.e., that reaches $2,000 per capita income) has to attain an average growth rate of per capita income of at least 4.7 percent per annum to avoid falling into the lower-middle-income trap (i.e., to reach $7,250, the upper-middle-income threshold); and a country that becomes upper-middle-income (i.e., that reaches $7,250 per capita income) has to attain an average growth rate of per capita income of at least 3.5 percent per annum to avoid falling into the upper-middle-income trap (i.e., to reach $11,750, the high-income level threshold). Avoiding the middle-income trap is, therefore, a question of how to grow fast enough so as to cross the lower-middle-income segment in at most 28 years, and the upper-middle-income segment in at most 14 years. Finally, the paper proposes and analyzes one possible reason why some countries get stuck in the middle-income trap: the role played by the changing structure of the economy (from low-productivity activities into high-productivity activities), the types of products exported (not all products have the same consequences for growth and development), and the diversification of the economy. 
We compare the exports of countries in the middle-income trap with those of countries that graduated from it, across eight dimensions that capture different aspects of a country\u2019s capabilities to undergo structural transformation, and test whether they are different. Results indicate that, in general, they are different. We also compare Korea, Malaysia, and the Philippines according to the number of products that each exports with revealed comparative advantage. We find that while Korea was able to gain comparative advantage in a significant number of sophisticated products and was well connected, Malaysia and the Philippines were able to gain comparative advantage in electronics only." }, { "instance_id": "R31281xR31217", "comparison_id": "R31281", "paper_id": "R31217", "text": "When Fast-Growing Economies Slow Down: International Evidence and Implications for China Using international data starting in 1957, we construct a sample of cases where fast-growing economies slow down. The evidence suggests that rapidly growing economies slow down significantly, in the sense that the growth rate downshifts by at least 2 percentage points, when their per capita incomes reach around US$ 17,000 in year-2005 constant international prices, a level that China should achieve by or soon after 2015. Among our more provocative findings is that growth slowdowns are more likely in countries that maintain undervalued real exchange rates." }, { "instance_id": "R31281xR31244", "comparison_id": "R31281", "paper_id": "R31244", "text": "Transitioning from Low-Income Growth to High-Income Growth: Is There a Middle Income Trap? Is there a \u201cmiddle-income trap\u201d? Theory suggests that the determinants of growth at low and high income levels may be different. 
If countries struggle to transition from growth strategies that are effective at low income levels to growth strategies that are effective at high income levels, they may stagnate at some middle income level; this phenomenon can be thought of as a \u201cmiddle-income trap.\u201d Defining income levels based on per capita gross domestic product relative to the United States, we do not find evidence for (unusual) stagnation at any particular middle income level. However, we do find evidence that the determinants of growth at low and high income levels differ. These findings suggest a mixed conclusion: middle-income countries may need to change growth strategies in order to transition smoothly to high income growth strategies, but this can be done smoothly and does not imply the existence of a middle-income trap." }, { "instance_id": "R31281xR31231", "comparison_id": "R31281", "paper_id": "R31231", "text": "Growth Slowdowns and the Middle-Income Trap The \u201cmiddle-income trap\u201d is the phenomenon of hitherto rapidly growing economies stagnating at middle-income levels and failing to graduate into the ranks of high-income countries. In this study we examine the middle-income trap as a special case of growth slowdowns, which are identified as large sudden and sustained deviations from the growth path predicted by a basic conditional convergence framework. We then examine their determinants by means of probit regressions, looking into the role of institutions, demography, infrastructure, the macroeconomic environment, output structure and trade structure. Two variants of Bayesian Model Averaging are used as robustness checks. The results\u2014including some that indeed speak to the special status of middle-income countries\u2014are then used to derive policy implications, with a particular focus on Asian economies." 
}, { "instance_id": "R31669xR31653", "comparison_id": "R31669", "paper_id": "R31653", "text": "Neural networks for the identification and control of blast furnace hot metal quality Abstract The operation and control of blast furnaces poses a great challenge because of the difficult measurement and control problems associated with the unit. The measurement of hot metal composition with respect to silica and sulfur is critical to the economic operation of blast furnaces. The measurement of the compositions requires spectrographic techniques which can be performed only off-line. An alternate technique for measuring these variables is a soft sensor based on neural networks. In the present work a neural network based model has been developed and trained relating the output variables with a set of thirty-three process variables. The output variables include the quantity of the hot metal and slag as well as their composition with respect to all the important constituents. These process variables can be measured on-line and hence the soft sensor can be used on-line to predict the output parameters. The soft sensor has been able to predict the variables with an error of less than 3%. A supervisory control system based on the neural network estimator and an expert system has been found to substantially improve the hot metal quality with respect to silicon and sulfur." }, { "instance_id": "R31669xR31503", "comparison_id": "R31669", "paper_id": "R31503", "text": "Radial basis function neural networks-based modeling of the membrane separation process: Hydrogen recovery from refinery gases Abstract Membrane technology has found wide applications in the petrochemical industry, mainly in the purification and recovery of hydrogen resources. Accurate prediction of the membrane separation performance plays an important role in carrying out advanced process control (APC). 
For the first time, a soft-sensor model for the membrane separation process has been established based on radial basis function (RBF) neural networks. The main performance parameters, i.e., permeate hydrogen concentration, permeate gas flux, and residue hydrogen concentration, are estimated quantitatively by measuring the operating temperature, feed-side pressure, permeate-side pressure, residue-side pressure, feed-gas flux, and feed-hydrogen concentration, excluding flow structure, membrane parameters, and other compositions. The predicted results achieve the desired effect. The effectiveness of this novel approach lays a foundation for integrating control technology and optimizing the operation of the gas membrane separation process." }, { "instance_id": "R31669xR31340", "comparison_id": "R31669", "paper_id": "R31340", "text": "Neural network applications for detecting process faults in packed towers Abstract Artificial neural networks can be used as a fault diagnostic tool in chemical process industries. Connection strengths representing the correlation between inputs (sensor measurements) and outputs (faults) are learned by the network using the back-propagation algorithm. Results are presented for diagnosing faults in an ammonia\u2013water packed distillation column. First, a 6-4-6 network architecture (six input nodes corresponding to the state variables and six output nodes corresponding to the six malfunctions) was chosen based on the minimum root-mean-square error and mean absolute percentage error, and a maximum value of the Pearson correlation coefficient (CP). The values of the learning rate, momentum and the gain terms were taken as 0.8, 0.8 and 1.0, respectively. The detection of the designated faults by the network was good. 
Relative importance of the various input variables on the output variables was calculated based on the partitioning of connection weights which showed that bottoms temperature, overhead composition and overhead temperature are not much affected by the disturbances in feed rate, feed composition and vapor rate in the given range. This resulted in a simplified 3-4-6 net architecture with similar capabilities as the 6-4-6 net thereby reducing the number of computations." }, { "instance_id": "R31669xR31320", "comparison_id": "R31669", "paper_id": "R31320", "text": "Dual composition control and soft estimation for a pilot distillation column using a neurogenetic design Abstract Artificial neural networks exhibit a great potential for both model based control and software sensing due to their non-linear identification capabilities. This paper proposes the use of adaptive neural networks applied to the prediction of product composition starting from secondary variable measurements, and to both dual composition control and inventory control for a continuous ethanol\u2013water nonlinear pilot distillation column monitored under LabVIEW. A principal component analysis based algorithm has been applied to select the optimal net input vector for the soft sensor. Genetic algorithms are used for the automatic choice of the optimum control law based on a neural network model of the plant. The proposed real time control scheme offers a high speed of response for changes in set points and null stationary error for both dual composition control and inventory control, and reveals the potential use of this control strategy when an experimental multivariable set-up is addressed." 
}, { "instance_id": "R31669xR31626", "comparison_id": "R31669", "paper_id": "R31626", "text": "Control of a batch polymerization system using hybrid neural network - First principle model In this work, the utilization of neural networks in hybrid with first-principles models for the modelling and control of a batch polymerization process was investigated. Following the steps of the methodology, hybrid neural network (HNN) forward models and an HNN inverse model of the process were first developed, and then the performance of the model in a direct inverse control strategy and an internal model control (IMC) strategy was investigated. For comparison purposes, the performance of a conventional neural network and a PID controller was compared with that of the proposed HNN. The results show that the HNN achieves good control in both set-point tracking and disturbance rejection studies." }, { "instance_id": "R31669xR31599", "comparison_id": "R31669", "paper_id": "R31599", "text": "Melt index prediction based on fuzzy neural networks and PSO algorithm with online correction strategy A black-box modeling scheme to predict melt index (MI) in the industrial propylene polymerization process is presented. MI is one of the most important quality variables determining product specification, and is influenced by a large number of process variables. Considering it is costly and time consuming to measure MI in the laboratory, a much cheaper and faster statistical modeling method is presented here for predicting MI online, which involves technologies of fuzzy neural network, particle swarm optimization (PSO) algorithm, and online correction strategy (OCS). The learning efficiency and prediction precision of the proposed model are checked based on real plant history data, and the comparison between different learning algorithms is carried out in detail to reveal the advantage of the proposed best-neighbor PSO (BNPSO) algorithm with OCS. 
\u00a9 2011 American Institute of Chemical Engineers AIChE J, 2012" }, { "instance_id": "R31669xR31638", "comparison_id": "R31669", "paper_id": "R31638", "text": "Dynamic modeling and optimal control of batch reactors, based on structure approaching hybrid neural networks A novel Structure Approaching Hybrid Neural Network (SAHNN) approach to model batch reactors is presented. The Virtual Supervisor\u2212Artificial Immune Algorithm method is utilized for the training of SAHNN, especially for the batch processes with partial unmeasurable state variables. SAHNN involves the use of approximate mechanistic equations to characterize unmeasured state variables. Since the main interest in batch process operation is on the end-of-batch product quality, an extended integral square error control index based on the SAHNN model is applied to track the desired temperature profile of a batch process. This approach introduces model mismatches and unmeasured disturbances into the optimal control strategy and provides a feedback channel for control. The performance of robustness and antidisturbances of the control system are then enhanced. The simulation result indicates that the SAHNN model and model-based optimal control strategy of the batch process are effective." }, { "instance_id": "R31669xR31643", "comparison_id": "R31669", "paper_id": "R31643", "text": "Estimating biofilm reaction kinetics using hybrid mechanistic-neural network rate function model This work describes an alternative method for estimation of reaction rate of a biofilm process without using a model equation. A first principles model of the biofilm process is integrated with artificial neural networks to derive a hybrid mechanistic-neural network rate function model (HMNNRFM), and this combined model structure is used to estimate the complex kinetics of the biofilm process as a consequence of the validation of its steady state solution. 
The performance of the proposed methodology is studied with the aid of the experimental data of an anaerobic fixed bed biofilm reactor. The statistical significance of the method is also analyzed by means of the coefficient of determination (R2) and model efficiency (ME). The results demonstrate the effectiveness of HMNNRFM for estimating the complex kinetics of the biofilm process involved in the treatment of industry wastewater." }, { "instance_id": "R31669xR31430", "comparison_id": "R31669", "paper_id": "R31430", "text": "Neural network modelling for on-line state estimation in fed-batch culture of l-lysine production Abstract A moving window neural network was applied for dynamic modelling and on-line estimation of consumed sugar, cell mass, and product concentration in l -lysine fed-batch culture. The organisms were auxotrophic mutants for l -homoserine and were resistant to S -(2-aminoethyl)- l -cysteine. The reducing sugar concentration was calculated on line by using the estimated consumed sugar concentration and was maintained at a given value by a simple compensatory feeding strategy. With the estimator, the fed-batch culture could be satisfactorily carried out." }, { "instance_id": "R31669xR31388", "comparison_id": "R31669", "paper_id": "R31388", "text": "Prediction of polymer quality in batch polymerisation reactors using robust neural networks Abstract A technique for predicting polymer quality in batch polymerisation reactors using robust neural networks is proposed in this paper. Robust neural networks are used to learn the relationship between batch recipes and the trajectories of polymer quality variables in batch polymerisation reactors. The robust neural networks are obtained by stacking multiple nonperfect neural networks which are developed based on the bootstrap re-samples of the original training data. 
Neural network generalisation capability can be improved by combining several neural networks and neural network prediction confidence bounds can also be calculated based on the bootstrap technique. A main factor affecting prediction accuracy is reactive impurities which commonly exist in industrial polymerisation reactors. The amount of reactive impurities is estimated on-line during the initial stage of polymerisation using another neural network. From the estimated amount of reactive impurities, the effective batch initial condition can be worked out. Accurate predictions of polymer quality variables can then be obtained from the effective batch initial conditions. The technique can be used to design optimal batch recipes and to monitor polymerisation processes. The proposed techniques are applied to the simulation studies of a batch methylmethacrylate polymerisation reactor." }, { "instance_id": "R31669xR31497", "comparison_id": "R31669", "paper_id": "R31497", "text": "Applying neural networks as software sensors for enzyme engineering The on-line control of enzyme-production processes is difficult, owing to the uncertainties typical of biological systems and to the lack of suitable on-line sensors for key process variables. For example, intelligent methods to predict the end point of fermentation could be of great economic value. Computer-assisted control based on artificial-neural-network models offers a novel solution in such situations. Well-trained feedforward-backpropagation neural networks can be used as software sensors in enzyme-process control; their performance can be affected by a number of factors." }, { "instance_id": "R31669xR31481", "comparison_id": "R31669", "paper_id": "R31481", "text": "The predictions of coal/ char combustion rate using an artificial neural network approach Abstract In this study, the use of an artificial neural network for predicting the reactivity of coal/char combustion was investigated. 
A database containing the combustion rate reactivity of 55 chars derived from 26 coals covering a wide range of rank and geographic origin was established to train and test the neural networks. The heat treatment temperature of the chars ranged from 1000 to 1500\u00b0C and the combustion rate reactivity of the chars were measured using thermogravimetric analysis in a temperature range of 420\u2013600\u00b0C. Three correlation parameter sets were compared, which contained a coal rank parameter (either vitrinite reflectance or fixed carbon content), a parameter representing the extent of pyrolysis, combustion temperature, and char surface area. The results showed that when sufficient amount of training data are available, a neural network model can be developed to predict the combustion rates of coal chars with good accuracy and robustness. Fixed carbon content appeared to correlate better than random vitrinite reflectance R 0 with combustion rates of coal chars. Total surface areas of the chars correlated to the combustion rates and when these values were used as one of the inputs to the neural network, better predictions were achieved." }, { "instance_id": "R31669xR31353", "comparison_id": "R31669", "paper_id": "R31353", "text": "Intelligent process control using neural fuzzy techniques Abstract In this paper, we combine the advantages of fuzzy logic and neural network techniques to develop an intelligent control system for processes having complex, unknown and uncertain dynamics. In the proposed scheme, a neural fuzzy controller (NFC), which is constructed by an equivalent four-layer connectionist network, is adopted as the process feedback controller. With a derived learning algorithm, the NFC is able to learn to control a process adaptively by updating the fuzzy rules and the membership functions. 
To identify the input\u2013output dynamic behavior of an unknown plant and therefore give a reference signal to the NFC, a shape-tunable neural network with an error back-propagation algorithm is implemented. As a case study, we implemented the proposed algorithm to the direct adaptive control of an open-loop unstable nonlinear CSTR. Some important issues were studied extensively. Simulation comparison with a conventional static fuzzy controller was also performed. Extensive simulation results show that the proposed scheme appears to be a promising approach to the intelligent control of complex and unknown plants, which is directly operational and does not require any a priori system information." }, { "instance_id": "R31669xR31602", "comparison_id": "R31669", "paper_id": "R31602", "text": "Neural-fuzzy modelling of polymer quality in batch polymerization reactors The estimation of parameters and obtaining an accurate and comprehensive mathematical model of the polymerization process is of strategic importance to the control engineering purposes in the polymerization industry. It is characteristic for these processes a grate non-linearity and many difficulties applying traditional estimation techniques. This paper describes an approach based upon neural-fuzzy representation of the model. A concrete model is constructed with the Sugeno fuzzy inference technique and a fuzzy-neural network is used to model the dynamic behavior of the polymer process. Such neural-fuzzy models of polymer quality could be used successfully for optimization and control of polymerization processes. Short example for such implementation is included with additional results for modeling of Mn and Mw." }, { "instance_id": "R31669xR31562", "comparison_id": "R31669", "paper_id": "R31562", "text": "An advanced integrated expert system for wastewater treatment plants control. Knowledge-Based Systems The activated sludge process is a commonly used method for treating wastewater. 
Due to the biological nature of the process it is characterized by poorly understood basic biological behavior mechanisms, a lack of reliable on-line instrumentation, and by control goals that are not always clearly stated. It is generally recognized that an Expert System (ES) can cope with many of the common problems related to the operation and control of the activated sludge process. In this work an integrated and distributed ES is developed which supervises the control system of the whole treatment plant. The system has the capability to learn from the correct or wrong solutions given to previous cases. The structure of the suggested ES is analyzed and the supervision of the local controllers is described. In this way, the main problems of conventional control strategies and individual knowledge-based systems are overcome." }, { "instance_id": "R31669xR31309", "comparison_id": "R31669", "paper_id": "R31309", "text": "Soft sensors for product quality monitoring in debutanizer distillation columns The paper deals with the design of neural based soft sensors to improve product quality monitoring and control in a refinery by estimating the stabilized gasoline concentration (C5) in the top flow and the butane (C4) concentration in the bottom flow of a debutanizer column, on the basis of a set of available measurements. Three-step predictive dynamic neural models were implemented in order to evaluate in real time the top and bottom product concentrations in the column. The soft sensors designed overcome the great time delay introduced by the corresponding gas chromatograph, giving on-line estimations that are suitable for monitoring and control purposes." }, { "instance_id": "R31669xR31464", "comparison_id": "R31669", "paper_id": "R31464", "text": "Modelling of the flow behavior of activated carbon cloths using a neural network approach This work investigates the hydrodynamic and aerodynamic behaviors of recent adsorbents, activated carbon cloths (ACC). 
The first part presents their characteristics, with particular attention given to the properties related to their woven structure. The influence of these characteristics on air and water pressure drops through ACC is shown by experimental measurements. It is established that a classic model set up for particulate media, the Ergun model, does not enable a satisfactory modelling of the experimental data. An artificial neural network (ANN) is then used in order to include the cloth properties as explanatory factors. The optimization of the ANN architecture is carried out, in terms of selection of the input neurons and number of hidden neurons. The generalization ability of the ANN is evaluated using a test dataset distinct from the training set. The influence of specific characteristics of cloths on their flow behavior is confirmed by an analysis of input sensitivity, and the determination of their predictive influence." }, { "instance_id": "R31669xR31662", "comparison_id": "R31669", "paper_id": "R31662", "text": "A Fuzzy-based Adaptive Genetic Algorithm and Its Case Study in Chemical Engineering Abstract Considering that the performance of a genetic algorithm (GA) is affected by many factors and their relationships are complex and hard to describe, a novel fuzzy-based adaptive genetic algorithm (FAGA) combining a new artificial immune system with fuzzy system theory is proposed, since fuzzy theory can describe highly complex problems. In FAGA, immune theory is used to improve the performance of the selection operation. In addition, crossover probability and mutation probability are adjusted dynamically by fuzzy inferences, which are developed according to the heuristic fuzzy relationship between algorithm performances and control parameters. The experiments show that FAGA can efficiently overcome the shortcomings of GA, i.e., prematurity and slowness, and obtain better results than two typical fuzzy GAs. 
Finally, FAGA was used for the parameter estimation of a reaction kinetics model, and a satisfactory result was obtained." }, { "instance_id": "R31669xR31439", "comparison_id": "R31669", "paper_id": "R31439", "text": "A neural network approach for non-iterative calculation of heat transfer coefficient in fluid\u2013particle systems Abstract A non-iterative procedure was developed using an artificial neural network (ANN) for calculating the fluid-to-particle heat transfer coefficient, h_fp, in fluid\u2013particle systems. The problem considered has relevance in agitation processing of cans containing liquid/particle mixtures where fluid temperature is time-dependent. In developing the ANN model, two configurations were evaluated: (i) the input parameters (fluid and particle temperatures) and output parameters (Biot number, Bi) were taken initially on a linear scale, and (ii) input/output parameters were transformed using logarithmic and arctangent scales. The second configuration yielded an optimal ANN model with eight neurons in each of the three hidden layers. This configuration was capable of predicting the value of Bi in the range of 0.1 to 10 with an error of less than 2%. The ANN model used information about experimental transient temperatures of the fluid and particle center and predicted Bi. The Bi/heat transfer coefficients evaluated using the developed ANN model were in close agreement with those evaluated using the numerical method under a wide range of experimental conditions." }, { "instance_id": "R31669xR31385", "comparison_id": "R31669", "paper_id": "R31385", "text": "Performance of different types of controllers in tracking optimal temperature profiles in batch reactors Abstract The performance of three different types of controllers in tracking the optimal reactor temperature profiles in a batch reactor is considered here. A complex exothermic batch reaction scheme is used for this purpose. 
The optimal reactor temperature profiles are obtained by solving optimal control problems off-line. Dual-mode (DM) control with proportional-integral (PI) and proportional-integral-derivative (PID) algorithms, and generic model control (GMC), are used to design the controllers to track the optimal temperature profiles (dynamic set points). A neural network technique is used as the on-line estimator of the amount of heat released by the chemical reaction within the GMC algorithm. The GMC controller coupled with a neural network based heat-release estimator is found to be more effective and robust than the PI and PID controllers in tracking the optimal temperature profiles to obtain the desired products on target." }, { "instance_id": "R31669xR31472", "comparison_id": "R31669", "paper_id": "R31472", "text": "Prediction of thermal conductivity detection response factors using an artificial neural network The main aim of the present work was the development of a quantitative structure-activity relationship method using an artificial neural network (ANN) for predicting the thermal conductivity detector response factor. As a first step a multiple linear regression (MLR) model was developed and the descriptors appearing in this model were considered as inputs for the ANN. The descriptors of molecular mass, number of vibrational modes of the molecule, molecular surface area and Balaban index appeared in the MLR model. In agreement with the molecular diameter approach, molecular mass and molecular surface area play a major role in estimating the thermal conductivity detector response factor (TCD-RF). A 4-7-1 neural network was generated for the prediction of the TCD-RFs of a collection of 110 organic compounds including hydrocarbons, benzene derivatives, esters, alcohols, aldehydes, ketones and heterocyclics. The mean absolute error between the ANN-calculated and the experimental values of the response factors was 0.02 for the prediction set." 
}, { "instance_id": "R31669xR31519", "comparison_id": "R31669", "paper_id": "R31519", "text": "Design of a fuzzy logic controller for regulating substrate feed to fed-batch fermentation Fuzzy logic control based on the Takagi-Sugeno inference method has been applied for the regulation of feed rate to a fed-batch fermentation process. The process chosen is the baker's yeast fermentation. The simulation results show that the conventional fuzzy logic controller produces oscillations in the process response. To improve the performance of the conventional scheme, implementation of adaptive and hybrid control schemes are proposed. Significant improvements in the controller performance could be achieved by combining these two approaches. The adaptive control scheme reduces severe oscillations and the hybrid control scheme enhances control precision." }, { "instance_id": "R31669xR31323", "comparison_id": "R31669", "paper_id": "R31323", "text": "Linearizing control of a binary distillation column based on a neuro-estimator Abstract In this work, the LV-control problem in binary distillation columns is addressed. With least prior knowledge, a linear reference model with unknown terms is obtained. The time variations of the unknown terms are estimated using two on-line trained perceptrons. These estimates are subsequently used to design a feedback linearizing-like controller. The closed-loop behavior is analyzed through numerical examples. The resulting controller shows robustness against external disturbances and set-point changes." }, { "instance_id": "R31669xR31593", "comparison_id": "R31669", "paper_id": "R31593", "text": "Composition Estimation of Reactive Batch Distillation by Using Adaptive Neuro-Fuzzy Inference System Abstract Composition estimation plays very important role in plant operation and control. 
The extended Kalman filter (EKF) is one of the most common estimators and has been used in composition estimation of reactive batch distillation, but its performance is heavily dependent on the thermodynamic modeling of vapor-liquid equilibrium, which is difficult to initialize and tune. In this paper an inferential state estimation scheme based on an adaptive neuro-fuzzy inference system (ANFIS), which is a model-based estimator, is employed for composition estimation by using temperature measurements in multicomponent reactive batch distillation. The state estimator is supported by data from a complete dynamic model that includes component and energy balance equations accompanied by thermodynamic relations and reaction kinetics. The mathematical model is verified by pilot plant data. The simulation results show that the ANFIS estimator provides reliable and accurate estimation of component concentrations in reactive batch distillation. The estimated states form a basis for improving the performance of reactive batch distillation either through the decision making of an operator or through an automatic closed-loop control scheme." }, { "instance_id": "R31669xR31630", "comparison_id": "R31669", "paper_id": "R31630", "text": "A generalised approach to process state estimation using hybrid artificial neural network/mechanistic models Abstract In this work, a hybrid model which combines mechanistic elements with Artificial Neural Networks (ANNs) is used as a basis for a generalised on-line state estimation technique. Balance equations, which are more clearly defined, constitute the mechanistic side of the model whilst the ANN element is exclusively applied to modelling the more complex non-linear rate relationships present. The black-box approach offered by ANNs avoids the call made on both manpower and laboratory resources in modelling such complex system features mechanistically. 
The standard extended Kalman filter algorithm is modified to accommodate the hybrid model and, along with the stochastic process and measurement noises, handles intrinsic ANN error explicitly. Application of the approach is demonstrated in a simulation case study based on a pilot scale process involving three tanks in series. Results demonstrate the effectiveness of both the estimation scheme and an inferential estimate-based control scheme for the system." }, { "instance_id": "R31669xR31360", "comparison_id": "R31669", "paper_id": "R31360", "text": "Static and dynamic neural network models for estimating biomass concentration during thermophilic lactic acid bacteria batch cultures Neural networks were used to elaborate static and dynamic models for the on-line estimation of biomass concentration during batch cultures of Streptococcus salivarius ssp. thermophilus 404 and Lactobacillus delbrueckii ssp. bulgaricus 398 conducted at controlled pH and temperature. Four static models with different structures and input variables were tested. The model relating the increase of lactic acid concentration and the working conditions (pH and temperature) to the increase of biomass was the most appropriate. Nevertheless, all the static models could furnish biased estimations when initial values of biomass were erroneous or when lactic acid measurements were perturbed or noisy. To overcome these drawbacks, recurrent neural networks were used to model the dynamic behaviour of fermentations. These dynamic models, when acting as estimators, performed just as well as the static models but offered more stable responses, due to an implicit corrective action arising from the training methodology and the associated method for biomass estimation." 
}, { "instance_id": "R31669xR31345", "comparison_id": "R31669", "paper_id": "R31345", "text": "Accounts of Experiences in the Application of Artificial Neural Networks in Chemical Engineering Considerable literature describing the use of artificial neural networks (ANNs) has evolved for a diverse range of applications such as fitting experimental data, machine diagnostics, pattern recognition, quality control, signal processing, process modeling, and process control, all topics of interest to chemists and chemical engineers. Because ANNs are nets of simple functions, they can provide satisfactory empirical models of complex nonlinear processes useful for a wide variety of purposes. This article describes the characteristics of ANNs including their advantages and disadvantages, focuses on two types of neural networks that have proved in our experience to be effective in practical applications, and presents short examples of four specific applications. In the competitive field of modeling, ANNs have secured a niche that now, after two decades, seems secure." }, { "instance_id": "R31669xR31608", "comparison_id": "R31669", "paper_id": "R31608", "text": "An adaptive neuro-fuzzy approach for modeling of water-in-oil emulsion formation Abstract Oil composition and properties including density, viscosity, asphaltene, saturate, aromatics and resin contents are responsible factors for the formation of water-in-crude-oil emulsions. These factors can be used to develop an stability index which determines states of water-in-oil emulsion in terms of either an unstable, entrained, mesostable or stable conditions. It is important to note that most of the regression models cannot capture the non-linear relationships involved in the formation of these emulsions. 
This study deals with the prediction of water-in-oil emulsions stability by an adaptive neuro-fuzzy inference system (ANFIS) with basic compositional factors such as density, viscosity and percentages of SARA (saturates, aromatics, resins, and asphaltenes) components. In the computational method, grid partition and subtractive clustering fuzzy inference systems were tried to generate the optimum fuzzy rule base sets. The stability estimation was conducted by applying a hybrid learning algorithm and the model performance was tested by means of a distinct test data set randomly selected from the experimental domain. The ANFIS-based predictions were also compared to the conventional regression approach by means of various descriptive statistical indicators, such as root mean-square error (RMSE), index of agreement (IA), the factor of two (FA2), fractional variance (FV), proportion of systematic error (PSE), etc. By trying various types of fuzzy inference system (FIS) structures and numbers of training epochs ranging from 1 to 100, the lowest root mean square error (RMSE = 2.0907) and the highest determination coefficient (R\u00b2 = 0.967) were obtained with the subtractive clustering method of a first-order Sugeno-type FIS. For the optimum ANFIS structure, input variables were fuzzified with four Gaussian membership functions, and the number of training epochs was computed as 21. In the computational analysis, the predictive performance of the ANFIS model was examined for the following ranges of the clustering parameters: range of influence (ROI) = 0.45\u20130.60, squash factor (SF) = 1.20\u20131.35, accept ratio (AR) = 0.40\u20130.55, and reject ratio (RR) = 0.10\u20130.20. Results indicated that ROI, SF, AR and RR were found to be 0.54, 1.25, 0.50 and 0.15, respectively, for the best FIS structure. It was clearly concluded that the proposed ANFIS model demonstrated a superior predictive performance on forecasting of water-in-oil emulsions stability. 
Findings of this study clearly indicated that the neuro-fuzzy modeling could be successfully used for predicting the stability of a specific water-in-oil mixture to provide a good discrimination between several visual stability conditions." }, { "instance_id": "R31669xR31536", "comparison_id": "R31669", "paper_id": "R31536", "text": "Application of fuzzy logic for state estimation of a microbial fermentation with dual inhibition and variable product kinetics. Food and Bioproducts Processing Fuzzy logic has been applied to a batch microbial fermentation described by a model with two adjustable parameters which associate product formation with the increasing and/or stationary phases of cell growth. The fermentation is inhibited by its product and, beyond a critical concentration, also by the substrate. To mimic an industrial condition, Gaussian noise was added and the resulting performance was simulated by fuzzy estimation systems. Simple rules with a few membership functions were able to portray bioreactor performance and the feedback interactions between cell growth and the concentrations of substrate and product. Through careful choices of the membership functions and the fuzzy logic, accuracies better than previously reported for ideal fermentations could be obtained, suggesting the suitability of fuzzy estimations for on-line applications." }, { "instance_id": "R31669xR31667", "comparison_id": "R31669", "paper_id": "R31667", "text": "A genetic neural fuzzy system-based quality prediction model for injection process In this paper, a genetic neural fuzzy system (GNFS) is presented and a hybrid learning algorithm divided into two stages is proposed to train GNFS. During first learning stage, Genetic algorithm is used to optimize the structure of GNFS and the membership function of each fuzzy term because of its capability of parallel and global search. 
On the basis of the optimized training stage, the back-propagation algorithm (B-P algorithm) is chosen to update the parameters of GNFS to improve the system precision. The proposed GNFS is used to predict the weight of the molded part in the injection process. The process of constructing a quality prediction model for the injection process based on GNFS is introduced. The results predicted by the constructed model show that it can perform very well. A comparison is made between the presented GNFS and other models based on regression and neural networks. The comparison verifies that the proposed GNFS has superior performance and good generalization capability and can also be applied to other industrial processes." }, { "instance_id": "R31669xR31580", "comparison_id": "R31669", "paper_id": "R31580", "text": "Temperature control of a pilot plant reactor system using a genetic algorithm model-based control approach The work described in this paper aims at exploring the use of an artificial intelligence technique, i.e. genetic algorithm (GA), for designing an optimal model-based controller to regulate the temperature of a reactor. GA is utilized to identify the best control action for the system by creating possible solutions and thereby to propose the correct control action to the reactor system. This value is then used as the set point for the closed loop control system of the heat exchanger. A continuous stirred tank reactor is chosen as a case study, where the controller is then tested with multiple set-point tracking and changes in its parameters. The GA model-based control (GAMBC) is then implemented experimentally to control the reactor temperature of a pilot plant, where an irreversible exothermic chemical reaction is simulated by using the calculated steam flow rate. The dynamic behavior of the pilot plant reactor during the online control studies is highlighted, and a comparison with the conventional tuned proportional integral derivative (PID) controller is presented. 
It is found that both controllers are able to control the process with comparable performance." }, { "instance_id": "R31669xR31552", "comparison_id": "R31669", "paper_id": "R31552", "text": "On-line soft sensor for polyethylene process with multiple production grades Abstract Since online measurement of the melt index (MI) of polyethylene is difficult, a virtual sensor model is desirable. However, a polyethylene process usually produces products with multiple grades. The relation between process and quality variables is highly nonlinear. Besides, a virtual sensor model in a real plant process with many inputs has to deal with collinearity and time-varying issues. A new recursive algorithm, which models a multivariable, time-varying and nonlinear system, is presented. Principal component analysis (PCA) is used to eliminate the collinearity. Fuzzy c-means (FCM) and fuzzy Takagi\u2013Sugeno (FTS) modeling are used to decompose the nonlinear system into several linear subsystems. Effectiveness of the model is demonstrated using real plant data from a polyethylene process." }, { "instance_id": "R31669xR31468", "comparison_id": "R31669", "paper_id": "R31468", "text": "Neural network approximation of iron oxide reduction process Abstract The kinetics of the Fe2O3 to FeO reduction process was investigated using the thermogravimetric data. The authors\u2019 previous experimental results indicated that initially the reduction of hematite is a surface controlled process, however once a thin layer of lower oxidation state iron oxides (magnetite, wustite) is formed on the surface, it changes to diffusion control. In order to analyze the time-behavior of Fe2O3 reduction under various process conditions, an artificial neural network (ANN) was tested for modeling of these complex reaction pathways. The data used included the reduction of hematite at various temperatures by CO, H2 and a mixture of CO and H2. 
The ANN model proved its applicability and its capability to mimic an extremum (minimum) of the reaction rate within a specific temperature range, where the classical Arrhenius equation is of limited use." }, { "instance_id": "R31669xR31634", "comparison_id": "R31669", "paper_id": "R31634", "text": "Prediction of pores formation (porosity) in foods during drying: Generic models by the use of hybrid neural network Abstract General porosity prediction models of food during air-drying have been developed using regression analysis and hybrid neural network techniques. Porosity data of apple, carrot, pear, potato, starch, onion, lentil, garlic, calamari, squid, and celery were used to develop the model using 286 data points obtained from the literature. The best generic model was developed based on four inputs: temperature of drying, moisture content, initial porosity, and product type. The error for predicting porosity using the best generic model developed is 0.58%, thus it is identified as an accurate prediction model." }, { "instance_id": "R31669xR31304", "comparison_id": "R31669", "paper_id": "R31304", "text": "Application of feedforward neural networks for soft sensors in the sugar industry Neural networks have been successfully applied as intelligent sensors for process modeling and control. In this paper, the application of soft sensors in the cane sugar industry is discussed. A neural network is trained on historical data to predict process quality variables so that it can replace the lab-test procedure. An immediate benefit of building intelligent sensors is that the neural network can predict product quality in a timely manner." 
}, { "instance_id": "R31669xR31413", "comparison_id": "R31669", "paper_id": "R31413", "text": "Online prediction of polymer product quality in an industrial reactor using recurrent neural networks In this paper, internally recurrent neural networks (IRNN) are used to predict a key polymer product quality variable from an industrial polymerization reactor. IRNN are selected as the modeling tools for two reasons: 1) over the wide range of operating regions required to make multiple polymer grades, the process is highly nonlinear; and 2) the finishing of the polymer product after it leaves the reactor imparts significant dynamics to the process by \"mixing\" effects. IRNN are shown to be very effective tools for predicting key polymer quality variables from secondary measurements taken around the reactor." }, { "instance_id": "R31669xR31489", "comparison_id": "R31669", "paper_id": "R31489", "text": "Development of an artificial neural network correlation for prediction of hold-up of slurry transport in pipelines In the literature, very few correlations have been proposed for hold-up prediction in slurry pipelines. However, these correlations fail to predict hold-up over a wide range of conditions. Based on a databank of around 220 measurements collected from the open literature, a correlation for hold-up was derived using artificial neural network (ANN) modeling. The hold-up for slurry was found to be a function of nine parameters such as solids concentration, particle dia, slurry velocity, pressure drop and solid and liquid properties. Statistical analysis showed that the proposed correlation has an average absolute relative error (AARE) of 2.5% and a standard deviation of 3.0%. A comparison with selected correlations in the literature showed that the developed ANN correlation noticeably improved prediction of hold-up over a wide range of operating conditions, physical properties and pipe diameters. 
This correlation also properly predicts the trend of the effect of the operating and design parameters on hold-up." }, { "instance_id": "R31669xR31449", "comparison_id": "R31669", "paper_id": "R31449", "text": "Prediction of simple physical properties of mixed solvent systems by artificial neural networks Abstract Artificial neural networks (ANNs) are used to predict the density, viscosity and refractive index of several ternary and quaternary solvent systems based on training data from binary systems. These networks employed a relatively simple topology consisting of one hidden layer with three nodes and a single linear output node. The topology was optimized empirically using the water\u2013methanol\u2013acetonitrile\u2013tetrahydrofuran system and applied to data for four other solvent systems obtained from the literature. The Bertrand\u2013Acree\u2013Burchfield (BAB) equation is used to predict the viscosity and refractive index for the same systems and the results are compared. The BAB equation and the ANNs performed comparably for most of the mixtures, but the BAB equation provided somewhat better predictions in a number of cases. The relative standard error of prediction using the ANNs was generally less than 1% for density and refractive index for all of the systems examined but ranged from 1% to 15% for the viscosity." }, { "instance_id": "R31669xR31426", "comparison_id": "R31669", "paper_id": "R31426", "text": "Application of artificial neural network and fuzzy control for fed-batch cultivation of recombinant Saccharomyces cerevisiae Abstract A recombinant Saccharomyces cerevisiae expressing \u03b2-galactosidase under the control of the GAL10 promoter was constructed. The strain was used as a model to study fuzzy control with neural networks, which served as state estimators for fed-batch cultivation of recombinant cells. 
To optimize the expression of \u03b2-galactosidase, the effects of medium enrichment and induction on cell growth and expression were investigated. The activity of \u03b2-galactosidase was 2-fold higher in the presence of yeast extract in the basal medium than in its absence. Furthermore, the specific activity of \u03b2-galactosidase increased with increasing galactose concentration up to 30 g/l. Two artificial neural networks (ANNs) were developed to estimate glucose and galactose concentrations using on-line measurements of ethanol and biomass concentrations, culture volume and the amount of carbon source fed to the fermentor. To improve the productivity and product yield of \u03b2-galactosidase, two multi-variable fuzzy controllers were used to control the glucose and galactose feed rates during the cell growth and production phases, respectively. Experimental data show that under fuzzy control with neural network estimators, the productivity was 2.7-fold higher than that in the case of exponential feeding, and 1.7-fold higher than that in the case of exponential feeding with feedback compensation using ethanol concentration." }, { "instance_id": "R31669xR31313", "comparison_id": "R31669", "paper_id": "R31313", "text": "ANN based estimator for distillation- inferential control Abstract Typical production objectives in a distillation process require the delivery of products whose compositions meet certain specifications. The distillation control system, therefore, must hold product compositions as near the set points as possible in the face of upsets. A distillation column is generally subjected to disturbances in the feed, and the control of product quality is often achieved by maintaining a suitable tray temperature near its set point. Secondary measurements are used to adjust the values of the manipulated variables, as the controlled variables are not easily measured or not economically viable to measure (inferential control). 
In the present paper, an artificial neural network (ANN) based estimator to estimate the composition of the distillate is proposed. Nowadays, with the advent of digital computers, the demand is to integrate the control of various variables to achieve the best results in optimal time. It is therefore required to monitor all the desired variables and perform the control action (feed forward, feed back and inferential) as per the adopted algorithm. The developed estimator is tested and the results are compared. The comparison shows that the predictions made by the neural network are in good agreement with the simulation results." }, { "instance_id": "R31669xR31547", "comparison_id": "R31669", "paper_id": "R31547", "text": "A fuzzy-logic-based model to predict biogas and methane production rates in a pilot-scale mesophilic UASB reactor treating molasses wastewater A MIMO (multiple inputs and multiple outputs) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables such as volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R(V)), influent alkalinity, influent pH and effluent pH were fuzzified by the use of an artificial intelligence-based approach. Trapezoidal membership functions with eight levels were conducted for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and defuzzification methods, respectively. Fuzzy-logic predicted results were compared with the outputs of two exponential non-linear regression models derived in this study. 
The UASB reactor showed a remarkable performance in the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (+/-3)% and an average volumetric TCOD removal rate of 6.87 (+/-3.93) kg TCOD(removed)/m(3)-day. Findings of this study clearly indicated that, compared to non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited a superior predictive performance on forecasting of both biogas and methane production rates with satisfactory determination coefficients over 0.98." }, { "instance_id": "R31669xR31658", "comparison_id": "R31669", "paper_id": "R31658", "text": "Froth collapse in column flotation: A prevention method using froth density estimation and fuzzy expert systems This paper aims at describing the use of two well-known techniques, i.e. expert systems and soft sensors, in order to develop a prevention method for the froth collapse problem. This will be achieved by monitoring the concentrate froth solid-to-liquids ratio. This scheme is outlined as an alternative solution to artificial vision based methods, and it was tested by simulation under several perturbations affecting the metallurgical performance." }, { "instance_id": "R31669xR31507", "comparison_id": "R31669", "paper_id": "R31507", "text": "Prediction of moisture content in pre-osmosed and ultrasounded dried banana using genetic algorithm and neural network. Food and Bioproducts Processing In this study, application of a versatile approach for estimating the moisture content of dried banana using a neural network and a genetic algorithm has been presented. The banana samples were dehydrated using two non-thermal processes, namely osmotic and ultrasound pretreatments, at different solution concentrations and dehydration times and were then subjected to air drying at 60 and 80 \u00b0C for 4, 5 and 6 h. The processing conditions were considered as inputs of the neural network to predict the final moisture content of the banana. 
Network structure and learning parameters were optimized using a genetic algorithm. It was found that the designed networks containing 7 and 10 neurons in the first and second hidden layers, respectively, give the best fit to the experimental data. This configuration could predict the moisture content of dried banana with a correlation coefficient of 0.94. In addition, sensitivity analysis showed that the two most sensitive input variables towards such prediction were drying time and temperature." }, { "instance_id": "R31669xR31528", "comparison_id": "R31669", "paper_id": "R31528", "text": "FuREAP: A fuzzy-rough estimator of algae populations Abstract Concern for environmental issues has increased in recent years. Waste production influences humanity's future. The alga, a ubiquitous single-celled plant, can thrive on industrial waste, to the detriment of water clarity and human activities. To avoid this, biologists need to isolate the chemical parameters of these rapid population fluctuations. This paper proposes a Fuzzy\u2013Rough Estimator of Algae Populations (FuREAP), a hybrid system involving Fuzzy Set and Rough Set theories that estimates the size of algae populations given certain water characteristics. Through dimensionality reduction, FuREAP significantly reduces computer time and space requirements. Also, it decreases the cost of obtaining measurements and increases runtime efficiency, making the system more viable economically. By retaining only information required for the estimation task, FuREAP offers higher accuracy than conventional rule induction systems. Finally, FuREAP does not alter the domain semantics, making the distilled knowledge human-readable. The paper addresses the problem domain, architecture and modus operandi of FuREAP, and provides and discusses detailed experimental results." 
}, { "instance_id": "R31669xR31573", "comparison_id": "R31669", "paper_id": "R31573", "text": "Prediction of thermal and fluid flow characteristics in helically coiled tubes using ANFIS and GA based correlations Abstract This study introduces the ability of Adaptive Neuro-Fuzzy Inference System (ANFIS) and genetic algorithm (GA) based correlations for estimating the hydrodynamics and heat transfer characteristics in coiled tubes. The experimental data related to the heat transfer and pressure drop in helically coiled tubes with deferent geometrical parameters (coil diameter and pitch) were used. In the experiments, hot water was passed in the coiled tubes, which were placed in a cold bath. Two ANFIS models were developed for predicting the Nusselt number (Nu) and friction factor (f) in the coiled tubes and the geometric parameters were employed as input data. Moreover, empirical correlations for estimating the Nu and f were developed by a phenomenological argument in the form of classical power\u2013law correlations and their constants were found using the GA technique. The mean relative errors (MRE) of the developed ANFIS models for estimation of Nu and f are 6.24% and 3.54%, respectively. On the other hand, for empirical correlations, a MRE of 8.06% was found for prediction Nu while MRE of 5.03% was obtained for f. The results show that the ANFIS models can predict Nu and f with the higher accuracy than the developed correlations." }, { "instance_id": "R31669xR31604", "comparison_id": "R31669", "paper_id": "R31604", "text": "Automatization of a penicillin production process with soft sensors and an adaptive controller based on neuro fuzzy systems Abstract This paper addresses the automatization of a penicillin production process with the development of soft sensors as well as Internal Model Controllers (IMC) for a penicillin fermentation plant using modules based on FasArt and FasBack neuro-fuzzy systems. 
While soft sensors are intended to aid the human supervision of the process currently being conducted at pilot plants, the proposed controller will make the automatization feasible and eliminate the need for a human operator. FasArt and FasBack feature fast stable learning and good MIMO identification, which makes them suitable for the development of adaptive controllers and soft sensors. In this paper, these modules are evaluated by training the neuro-fuzzy systems first on simulated data and then applying the resulting IMC controllers to a simulated plant. Moreover, the systems are trained on data coming from a real pilot plant, and the controller performance is evaluated on the same real plant. Results show that the reference trend is captured, thus allowing high penicillin production. Moreover, soft sensors derived for biomass, viscosity and penicillin are very accurate. In addition, on-line adaptive capabilities were implemented and tested with FasBack, since this system presents learning guided by error minimization as new data samples arrive. With these features, adaptive IMC controllers can be implemented and are helpful when dynamics have been poorly learned or the plant parameters vary with time, since the performance of static models and controllers can be improved through adaptation." }, { "instance_id": "R31669xR31624", "comparison_id": "R31669", "paper_id": "R31624", "text": "Hybrid neural network\u2014prior knowledge model in temperature control of a semi-batch polymerization process Nonlinear process control is a challenging research topic at present. In recent years, neural networks and hybrid neural networks have been much studied, especially for the modeling of nonlinear systems. They have however been applied mainly as estimators in parts of various control systems, and the idea of utilizing them directly as neural controllers has not been studied. 
Hence the contribution of this work is to use an inverse neural network in hybrid with a first principle model for the direct control of a nonlinear semi-batch polymerization process. These hybrid models were utilized in the direct inverse control strategy to track the set point of the temperature of the polymerization reactor under nominal condition and with various disturbances. For comparison purposes, the standard neural network and proportional-integral-derivative controller were also implemented in these control strategies. Adaptation mechanisms to improve the results have also been carried out to test the capability of these hybrid methods in control. The simulation results show the advantages and robustness of utilizing the neural network in this hybrid strategy especially when an adaptive algorithm is implemented." }, { "instance_id": "R31669xR31443", "comparison_id": "R31669", "paper_id": "R31443", "text": "Neural networks applied to the prediction of fed-batch fermentation kinetics of Bacillus thuringiensis Abstract. This paper proposes using a new recurrent neural network model (RNNM) to predict and control fed batch fermentations of Bacillus thuringiensis. The control variables are the limiting substrate and the feeding conditions. The multi-input multi-output RNNM proposed has twelve inputs, seven outputs, nineteen neurons in the hidden layer, and global and local feedbacks. The weight update learning algorithm designed is a version of the well known backpropagation through time algorithm directed to the RNNM learning. The error approximation for the last epoch of learning is 2% and the total learning time is 51 epochs, where the size of an epoch is 162 iterations. The RNNM generalization was carried out reproducing a B. thuringiensis fermentation not included in the learning process. It attains an error approximation of 1.8%." 
}, { "instance_id": "R31669xR31597", "comparison_id": "R31669", "paper_id": "R31597", "text": "Neuro-fuzzy networks and their application to fault detection of dynamical systems The paper tackles the problem of robust fault detection using Takagi-Sugeno neuro-fuzzy (N-F) models. A model-based strategy is employed to generate residuals in order to make a decision about the state of the process. Unfortunately, such an approach is corrupted by model uncertainty due to the fact that in real applications there exists a model-reality mismatch. In order to ensure reliable fault detection, the adaptive threshold technique is used to deal with the problem. The paper focuses also on the N-F model design procedure. The bounded-error approach is applied to generate rules for the model using available data. The proposed algorithms are applied to fault detection in a valve that is a part of the technical installation at the Lublin sugar factory in Poland. Experimental results are presented in the final part of the paper to confirm the effectiveness of the method." }, { "instance_id": "R31669xR31648", "comparison_id": "R31669", "paper_id": "R31648", "text": "Applications of genetic neural network for prediction of critical heat flux Abstract In this study, the data set of the 2006 CHF look-up table is partitioned into five subsets by using Fuzzy c-means (FCM) clustering algorithm. The elements of the same subset are \u2018similar\u2019 to each other in some sense while those assign to different subsets are \u2018dissimilar\u2019. At the same time, a Genetic Neural Network (GNN) model for predicting critical heat flux (CHF) is set up. It has some advantages of its globe optimal searching, quick convergence speed and solving non-linear problem. The methods of establishing the model and training of GNN are discussed particularly in the article. Local condition type CHF is predicted by GNN on the basis of 6930 CHF data from the 2006 CHF look-up table. 
The prediction results agree very well with the database. Next, the main parametric trends of the CHF are analyzed by applying the GNN. Finally, prediction of the dryout point is investigated by the GNN for distilled water flowing upward through narrow annular channels with 0.95 mm and 1.5 mm gaps, respectively. The GNN predictions are in good agreement with experimental data. Simulation and analysis results show that the network model can effectively predict CHF." }, { "instance_id": "R31669xR31578", "comparison_id": "R31669", "paper_id": "R31578", "text": "Optimization of a large scale industrial reactor by genetic algorithms The present work aims to employ genetic algorithms (GAs) to optimize an industrial chemical process that is difficult to optimize by conventional methods. The chemical process considered is the three-phase catalytic slurry reactor in which the hydrogenation of o-cresol to 2-methyl-cyclohexanol occurs. In order to describe the dynamic behavior of the multivariable process, a rigorous non-linear mathematical model is used. Due to the high dimensionality and non-linearity of this model, the solution of the optimization problem through conventional algorithms does not always converge. This fact justifies the use of an evolutionary method, based on GAs, to deal with this process. To optimize the process, the GA code is coupled with the rigorous model of the reactor. The aim of the optimization through GAs is to search for the process inputs that maximize the productivity of 2-methyl-cyclohexanol subject to the environmental constraint on conversion. Many simulations are conducted in order to maximize the objective function without violating the constraint. The results show that the GAs are applied successfully in the process optimization.
The selection of the most important GA parameters by means of a fractional factorial design is proposed. A central composite design is also proposed in order to determine the values of the GA parameters that lead to the optimal solution of the optimization problem." }, { "instance_id": "R31669xR31393", "comparison_id": "R31669", "paper_id": "R31393", "text": "Estimation of impurity and fouling in batch polymerisation reactors through the application of neural networks The estimation of the amount of reactive impurities and the level of fouling in a batch polymerisation reactor is of strategic importance to the polymerisation industry. It is essential that the level of impurities and reactor fouling are known (estimated) in order to be able to develop robust and reliable monitoring and control strategies. This paper describes two approaches based upon stacked neural network representations. In the first approach, an inverse neural network model of the polymer process is constructed and the initial reaction conditions are predicted. The amount of impurities and reactor wall fouling are then calculated by comparing the predicted values with the nominal initial conditions. In the second approach, a neural network is used to model the dynamic behaviour of the polymer process. The predicted trajectories are then compared with the on-line measurements of conversion and coolant temperatures. The techniques are compared on a first-principles-based simulation of a pilot-scale batch methyl methacrylate (MMA) polymerisation reactor." }, { "instance_id": "R31669xR31532", "comparison_id": "R31669", "paper_id": "R31532", "text": "Intelligent fuzzy weighted input estimation method applied to inverse heat conduction problems Abstract An innovative intelligent fuzzy weighted input estimation method, which efficiently and robustly estimates the unknown time-varying heat flux in real time, is presented in this paper.
The algorithm includes the Kalman Filter (KF) and the recursive least square estimator (RLSE), which is weighted by the fuzzy weighting factor proposed on the basis of the fuzzy logic inference system. To directly synthesize the Kalman filter with the estimator, this work presents an efficient robust forgetting zone, which is capable of providing a reasonable compromise between the tracking capability and the flexibility against noises. The capabilities of this inverse method are demonstrated in one- and two-dimensional time-varying estimation cases, and the proposed algorithm is compared by alternating between the constant and adaptive weighting factors. The results show that this method has the properties of faster convergence in the initial response, better target tracking capability and more effective noise reduction." }, { "instance_id": "R31669xR31541", "comparison_id": "R31669", "paper_id": "R31541", "text": "Further developments in chemical plant cost estimating using fuzzy matching Abstract Capital cost estimates for chemical plants are needed to assess economic viability. Simple methods for producing accurate estimates are needed at the beginning of a project. However, existing methods have limited applicability and disappointing accuracy. Plants with similar specifications (capacities, process conditions, etc.) have similar costs; therefore, we have tried using the capital cost of the existing plant that is \u2018most like\u2019 the proposed plant as an estimate of the cost of the new plant. We have developed fuzzy matching, a method based on fuzzy logic, which finds the \u2018most like\u2019 plant from a database of existing plants by quantifying the closeness of their specifications. Fuzzy matching has since been enhanced by testing different functions for quantifying closeness, and \u2018optimising\u2019 parameters in these functions. The method has been extended to take account of plant materials of construction.
Finally, the closeness of individual specifications has been weighted in the calculations of the quantity expressing overall closeness, to account for the different degrees of influence of the specifications on the capital cost. Fuzzy matching offers accurate preliminary capital cost estimates, especially when compared with existing methods, and with minimal estimating effort." }, { "instance_id": "R31669xR31476", "comparison_id": "R31669", "paper_id": "R31476", "text": "Neural net-based softsensor for dynamic particle size estimation in grinding circuits Abstract A softsensor was developed to estimate the hydrocyclone overflow dynamic particle size distribution of a grinding circuit using neural network models. Because of the inherent dynamics associated with the grinding process, the time histories of the four variables measured around the hydrocyclone, along with the past value of the cyclone overflow particle size or the past percent passing 53 \u03bcm, were utilized as the inputs to formulate the neural model for the estimation of the present value of the particle size. The fact that the neural net-based softsensor performs well in the particle size estimation suggests that the model developed has captured the essence of the process dynamics. To cope with the time-varying nature of the grinding circuit, on-line adaptation of the neural network was considered. A simplified neural net-based softsensor was thus developed by making use of the principal component analysis for the data reduction so as to simplify the neural model structure and to render it more suitable for on-line adaptation." }, { "instance_id": "R31669xR31406", "comparison_id": "R31669", "paper_id": "R31406", "text": "Neural network modeling of temperature behavior in an exothermic polymerization process Abstract Neural networks are applied to modeling the behavior of temperature caused by exothermic reactions in a polymerization process.
In batch emulsion polymerization, unexpected thermal reaction runaway may occur. Therefore, it is difficult to control the behavior of the system in order to keep uniform fine product quality in each batch job. Besides, it is not easy to formulate physical expressions for the thermal transport phenomena of polymerization processes. Neural networks are used to model the energy balance in the batch polymerization process using normal operational data and additional operational data without an initiator. In this study, the input nodes of cooling and heating operations are each integrated into a single node to improve the adaptability and efficiency of the networks. It was shown that the temperature changes caused by exothermic reactions could be easily estimated and predicted by such neural networks in the complicated polymerization processes. The onset point of an exothermic reaction could be precisely distinguished. It was found that the NN models could be applied as useful tools in developing temperature control systems for batch emulsion polymerization processes. Moreover, it was found that the number of input nodes affected the learning efficiency and the recalling adaptability in case of insufficient data in practical polymerization processes." }, { "instance_id": "R31669xR31586", "comparison_id": "R31669", "paper_id": "R31586", "text": "Emission control in palm oil mills using artificial neural network and genetic algorithm Abstract The present study utilized a combination of artificial neural network (ANN) and genetic algorithms (GA) to optimize the release of emissions from the palm oil mill. A model based on ANN is developed from the actual data taken from the palm oil mill. The predicted data agree well with the actual data taken. GA is then employed to find the optimal operating conditions so that the over-limit emission release is reduced to the allowable limit."
}, { "instance_id": "R31669xR31364", "comparison_id": "R31669", "paper_id": "R31364", "text": "Artificial neural networks to infer biomass and product concentration during the production of penicillin G acylase from Bacillus megaterium BACKGROUND: Production of microbial enzymes in bioreactors is a complex process including such phenomena as metabolic networks and mass transport resistances. The use of neural networks (NNs) to infer the state of bioreactors may be an interesting option that may handle the nonlinear dynamics of biomass growth and protein production. RESULTS: Feedforward multilayer perceptron (MLP) NNs were used for identification of the cultivation phase of Bacillus megaterium to produce the enzyme penicillin G acylase (EC. 3.5.1.11). The following variables were used as input to the net: run time and carbon dioxide concentration in the exhausted gas. The NN output associates a numerical value to the metabolic state of the cultivation, close to 0 during the lag phase, close to 1 during the exponential phase and approximately 2 for the stationary phase. This is a non-conventional approach for pattern recognition. During the exponential phase, another MLP was used to infer cellular concentration. Time, carbon dioxide concentration and stirrer speed form an integrated net input vector. Cellular concentrations provided by the NN were used in a hybrid approach to estimate product concentrations of the enzyme. The model employed a first-order approximation. CONCLUSION: Results showed that the algorithm was able to infer accurate values of cellular and product concentrations up to the end of the exponential growth phase, where an industrial run should stop. 
" }, { "instance_id": "R31669xR31403", "comparison_id": "R31669", "paper_id": "R31403", "text": "Intelligent modelling in the chemical process industry with neural networks: A case study Abstract Nowadays, the increasing complexity of most processes increases the demand for performant models. Most of these processes are highly non-linear and dynamic, which requires complex modelling techniques. Neural networks are eligible modelling candidates for such processes, since they have the ability to map a variety of input-output patterns quite easily. Moreover, certain types of networks (the so-called spatio-temporal networks) can model not only spatial but also temporal patterns. Nevertheless, a continuous search for improvement is mandatory. Therefore, in this paper combinations of spatio-temporal neural network types with other modelling techniques are discussed whilst applied to a complex problem from the chemical process industry, i.e. a polymerisation reactor." }, { "instance_id": "R31669xR31376", "comparison_id": "R31669", "paper_id": "R31376", "text": "Catalytic reaction performed in the liquid-liquid system: Comparison of conventional and neural networks modelling methods A comparative analysis of two modelling methods has been performed for the hydrolysis of propionic anhydride catalysed with sulphuric acid. The first modelling method is based on a theory of mass transfer with simultaneous catalytic reaction; in the second method the reaction kinetics and mass transfer phenomena are approximated with the neural network." }, { "instance_id": "R31689xR31685", "comparison_id": "R31689", "paper_id": "R31685", "text": "Active learning using pre-clustering The paper is concerned with two-class active learning. While the common approach for collecting data in active learning is to select samples close to the classification boundary, better performance can be achieved by taking into account the prior data distribution.
The main contribution of the paper is a formal framework that incorporates clustering into active learning. The algorithm first constructs a classifier on the set of the cluster representatives, and then propagates the classification decision to the other samples via a local noise model. The proposed model makes it possible to select the most representative samples and to avoid repeatedly labeling samples in the same cluster. During the active learning process, the clustering is adjusted using the coarse-to-fine strategy in order to balance between the advantage of large clusters and the accuracy of the data representation. The results of experiments in image databases show that our algorithm performs better than the current methods." }, { "instance_id": "R31689xR31677", "comparison_id": "R31689", "paper_id": "R31677", "text": "Active learning using adaptive resampling Classification modeling (a.k.a. supervised learning) is an extremely useful analytical technique for developing predictive and forecasting applications. The explosive growth in data warehousing and internet usage has made large amounts of data potentially available for developing classification models. For example, natural language text is widely available in many forms (e.g., electronic mail, news articles, reports, and web page contents). Categorization of data is a common activity which can be automated to a large extent using supervised learning methods. Examples of this include routing of electronic mail, satellite image classification, and character recognition. However, these tasks require labeled data sets of sufficiently high quality with adequate instances for training the predictive models. Much of the on-line data, particularly the unstructured variety (e.g., text), is unlabeled. Labeling is usually an expensive manual process done by domain experts.
Active learning is an approach to solving this problem and works by identifying a subset of the data that needs to be labeled and using this subset to generate classification models. We present an active learning method that uses adaptive resampling in a natural way to significantly reduce the size of the required labeled set and generates a classification model that achieves the high accuracies possible with current adaptive resampling methods." }, { "instance_id": "R31689xR31670", "comparison_id": "R31689", "paper_id": "R31670", "text": "Less is more: Active learning with support vector machines We describe a simple active learning heuristic which greatly enhances the generalization behavior of support vector machines (SVMs) on several practical document classification tasks. We observe a number of benefits, the most surprising of which is that an SVM trained on a well-chosen subset of the available corpus frequently performs better than one trained on all available data. The heuristic for choosing this subset is simple to compute, and makes no use of information about the test set. Given that the training time of SVMs depends heavily on the training set size, our heuristic not only offers better performance with fewer data, it frequently does so in less time than the naive approach of training on all available data." }, { "instance_id": "R31689xR31679", "comparison_id": "R31689", "paper_id": "R31679", "text": "Toward optimal active learning through sampling estimation of error reduction This paper presents an active learning method that directly optimizes expected future error. This is in contrast to many other popular techniques that instead aim to reduce version space size. These other methods are popular because for many learning models, closed-form calculation of the expected future error is intractable. Our approach is made feasible by taking a sampling approach to estimating the expected reduction in error due to the labeling of a query.
In experiments on two real-world data sets, we reach high accuracy very quickly, sometimes with four times fewer labeled examples than competing methods." }, { "instance_id": "R31689xR31683", "comparison_id": "R31689", "paper_id": "R31683", "text": "A probabilistic active support vector learning algorithm The paper describes a probabilistic active learning strategy for support vector machine (SVM) design in large data applications. The learning strategy is motivated by the statistical query model. While most existing methods of active SVM learning query for points based on their proximity to the current separating hyperplane, the proposed method queries for a set of points according to a distribution as determined by the current separating hyperplane and a newly defined concept of an adaptive confidence factor. This enables the algorithm to have more robust and efficient learning capabilities. The confidence factor is estimated from local information using the k nearest neighbor principle. The effectiveness of the method is demonstrated on real-life data sets in terms of generalization performance, query complexity, and training time." }, { "instance_id": "R31725xR31715", "comparison_id": "R31725", "paper_id": "R31715", "text": "State Design Pattern Implementation of a DSP processor: A case study of TMS5416C This paper presents an empirical study of the impact of State Design Pattern Implementation on the memory and execution time of a popular fixed-point DSP processor from Texas Instruments, the TMS320VC5416. The object-oriented approach is known to introduce a significant performance penalty compared to classical procedural programming [1]. Studies of the object-oriented penalty on system performance, in terms of execution time and memory overheads, can be found in the literature.
However, to the author's best knowledge, studies of the overheads of Design Patterns (DPs) in embedded system programming are not widely published in the literature. The main contribution of the paper is to bring further evidence that embedded system software developers have to consider the memory and the execution time overheads of DPs in their implementations. The results of the experiment show that implementation in C++ with DPs increases the memory usage and the execution time, but these overheads should not prevent embedded system software developers from using DPs." }, { "instance_id": "R31725xR31711", "comparison_id": "R31725", "paper_id": "R31711", "text": "Design Patterns and Change Proneness: A Replication Using Proprietary C# Software This paper documents a study of change in commercial, proprietary software and attempts to determine whether a relationship exists between a class\u2019 propensity to change and its design context; more specifically: whether a class is a participant in a design pattern. We identify specific design patterns and their propensity for change. Design pattern participants were found to have a higher propensity to change than classes that did not participate in a design pattern, supporting an earlier study by Bieman et al.; some design patterns, such as the Adaptor, Factory Method and Singleton, were found to have a higher change propensity than others." }, { "instance_id": "R31725xR31707", "comparison_id": "R31725", "paper_id": "R31707", "text": "Myth or Reality? Analyzing the Effect of Design Patterns on Software Maintainability Although the belief that utilizing design patterns creates better quality software is fairly widespread, there is relatively little research objectively indicating that their usage is indeed beneficial."
}, { "instance_id": "R31725xR31719", "comparison_id": "R31725", "paper_id": "R31719", "text": "An empirical investigation on the impact of design pattern application on computer game defects In this paper, we investigate the correlation between design pattern application and software defects. In order to achieve this goal we conducted an empirical study on java open source games. More specifically, we examined several successful open source games, identified the number of defects, the debugging rate and performed design pattern related measurements. The results of the study suggest that the overall number of design pattern instances is not correlated to defect frequency and debugging effectiveness. However, specific design patterns appear to have a significant impact on the number of reported bugs and debugging rate." }, { "instance_id": "R31725xR31713", "comparison_id": "R31725", "paper_id": "R31713", "text": "Design patterns and change proneness: an examination of five evolving systems Design patterns are recognized, named solutions to common design problems. The use of the most commonly referenced design patterns should promote adaptable and reusable program code. When a system evolves, changes to code involving a design pattern should, in theory, consist of creating new concrete classes that are extensions or subclasses of previously existing classes. Changes should not, in theory, involve direct modifications to the classes in prior versions that play roles in a design patterns. We studied five systems, three proprietary systems and two open source systems, to identify the observable effects of the use of design patterns in early versions on changes that occur as the systems evolve. In four of the five systems, pattern classes are more rather than less change prone. Pattern classes in one of the systems were less change prone. These results held up after normalizing for the effect of class size - larger classes are more change prone in two of the five systems. 
These results provide insight into how design patterns are actually used, and should help us to learn to develop software designs that are more easily adapted." }, { "instance_id": "R31725xR31723", "comparison_id": "R31725", "paper_id": "R31723", "text": "Design patterns and fault-proneness: a study of commercial C# software In this paper, we document a study of design patterns in commercial, proprietary software and determine whether design pattern participants (i.e. the constituent classes of a pattern) had a greater propensity for faults than non-participants. We studied a commercial software system for a 24-month period and identified design pattern participants by inspecting the design documentation and source code; we also extracted fault data for the same period to determine whether those participant classes were more fault-prone than non-participant classes. Results showed that design pattern participant classes were marginally more fault-prone than non-participant classes. The Adaptor, Method, and Singleton patterns were found to be the most fault-prone of the thirteen patterns explored. However, the primary reason for this fault-proneness was the propensity of design pattern classes to be changed more often than non-design pattern classes." }, { "instance_id": "R31725xR31701", "comparison_id": "R31725", "paper_id": "R31701", "text": "Design Patterns in Software Maintenance: An Experiment Replication at Brigham Young University In 2001 Prechelt et al. published the results of a controlled experiment in software maintenance comparing design patterns to simpler solutions. Since that time, only one replication of the experiment has been performed (published in 2004). The replication found remarkably (though not surprisingly) different results. In this paper we present the results of another replication of Prechelt's experiment, conducted at Brigham Young University (BYU) in 2010.
This replication was performed as part of a joint replication project hosted by the 2011 Workshop on Replication in Empirical Software Engineering Research (RESER). The data and results from this experiment are meant to be considered in connection with the results of other contributions to the joint replication project." }, { "instance_id": "R31725xR31693", "comparison_id": "R31725", "paper_id": "R31693", "text": "A Controlled Experiment Comparing the Maintainability of Programs Designed with and without Design Patterns\u2014A Replication in a Real Programming Environment Software \u201cdesign patterns\u201d seek to package proven solutions to design problems in a form that makes it possible to find, adapt and reuse them. To support the industrial use of design patterns, this research investigates when, and how, using patterns is beneficial, and whether some patterns are more difficult to use than others. This paper describes a replication of an earlier controlled experiment on design patterns in maintenance, with major extensions. Experimental realism was increased by using a real programming environment instead of pen and paper, and paid professionals from multiple major consultancy companies as subjects. Measurements of elapsed time and correctness were analyzed using regression models and an estimation method that took into account the correlations present in the raw data. Together with on-line logging of the subjects\u2019 work, this made possible a better qualitative understanding of the results. The results indicate quite strongly that some patterns are much easier to understand and use than others. In particular, the Visitor pattern caused much confusion. Conversely, the patterns Observer and, to a certain extent, Decorator were grasped and used intuitively, even by subjects with little or no knowledge of patterns. The implication is that design patterns are not universally good or bad, but must be used in a way that matches the problem and the people. 
When maintainers approach a program with documented design patterns, even basic training can improve both the speed and quality of maintenance activities." }, { "instance_id": "R31725xR31697", "comparison_id": "R31725", "paper_id": "R31697", "text": "Design Patterns in Software Maintenance: An Experiment Replication at UPM - Experiences with the RESER'11 Joint Replication Project Replication of software engineering experiments is crucial for dealing with validity threats to experiments in this area. Even though the empirical software engineering community is aware of the importance of replication, the replication rate is still very low. The RESER'11 Joint Replication Project aims to tackle this problem by simultaneously running a series of several replications of the same experiment. In this article, we report the results of the replication run at the Universidad Polit\u00e9cnica de Madrid. Our results are inconsistent with the original experiment. However, we have identified possible causes for them. We also discuss our experiences (in terms of pros and cons) during the replication." }, { "instance_id": "R31725xR31691", "comparison_id": "R31725", "paper_id": "R31691", "text": "A controlled experiment in maintenance: comparing design patterns to simpler solutions Software design patterns package proven solutions to recurring design problems in a form that simplifies reuse. We are seeking empirical evidence of whether using design patterns is beneficial. In particular, one may prefer using a design pattern even if the actual design problem is simpler than that solved by the pattern, i.e., if not all of the functionality offered by the pattern is actually required. Our experiment investigates software maintenance scenarios that employ various design patterns and compares designs with patterns to simpler alternatives. The subjects were professional software engineers.
In most of our nine maintenance tasks, we found positive effects from using a design pattern: either its inherent additional flexibility was achieved without requiring more maintenance time, or maintenance time was reduced compared to the simpler alternative. In a few cases, we found negative effects: the alternative solution was less error-prone or required less maintenance time. Overall, we conclude that, unless there is a clear reason to prefer the simpler solution, it is probably wise to choose the flexibility provided by the design pattern because unexpected new requirements often appear. We identify several questions for future empirical research." }, { "instance_id": "R31725xR31703", "comparison_id": "R31725", "paper_id": "R31703", "text": "Do Rules and Patterns Affect Design Maintainability? At the present time, best rules and patterns have reached a zenith in popularity and diffusion, thanks to the software community\u2019s efforts to discover, classify and spread knowledge concerning all types of rules and patterns. Rules and patterns are useful elements, but many features remain to be studied if we wish to apply them in a rational manner. The improvement in quality that rules and patterns can inject into design is a key issue to be analyzed, so a complete body of empirical knowledge dealing with this is therefore necessary. This paper tackles the question of whether design rules and patterns can help to improve the extent to which designs are easy to understand and modify. An empirical study, composed of one experiment and a replication, was conducted with the aim of validating our conjecture. The results suggest that the use of rules and patterns affects the understandability and modifiability of the design, as the diagrams with rules and patterns are more difficult to understand than non-rule/pattern versions and more effort is required to carry out modifications to designs with rules and patterns."
}, { "instance_id": "R31768xR31731", "comparison_id": "R31768", "paper_id": "R31731", "text": "Comparison of the Loop-Mediated Isothermal Amplification (LAMP) Method and Conventional Culture Method for the Detection of Campylobacter Species from Retail Chickens Following the design of new LAMP primers for C. jejuni and C. coli, detection of Campylobacter was attempted in 134 retail chicken samples using the LAMP method, and its usefulness was evaluated against the conventional culture method. The two methods agreed on 24 positive samples (17.9%) and 99 negative samples (73.9%), a high concordance rate of 91.8%. Among the discordant samples, 10 (7.5%) were LAMP-positive but culture-negative, and only 1 (0.7%) was culture-positive but LAMP-negative. A chi-square test of the detection results showed no significant difference between the two methods. By sample type, 5 of 23 wing samples (21.7%) were LAMP-positive while none yielded an isolate by culture, 
whereas for thigh meat, liver, ground meat, breast meat, tenderloin and other cuts no difference in detection rates was observed between the two methods. Overall, the performance of the LAMP method compared favorably with the culture method. An abstract of this work was presented at the 33rd Annual Meeting of the Society for Antibacterial and Antifungal Agents, Japan (Tokyo)." }, { "instance_id": "R31768xR31749", "comparison_id": "R31768", "paper_id": "R31749", "text": "Survival of Campylobacter jejuni in Frozen Chicken Meat and Genetic Analysis of Isolates by Pulsed-Field Gel Electrophoresis A survey of Campylobacter contamination in retail chicken meat found 49 of 100 samples (49.0%) positive for Campylobacter jejuni. For these 49 samples, changes in Campylobacter counts during frozen storage were followed by the MPN method: after 7 days at -20\u00b0C, counts fell to 1/10-1/100 of pre-storage levels, and 25/49 samples (51.0%) dropped below the detection limit (MPN < 15/100 g). Genetic analysis of the isolates by PFGE 
suggested that retail chicken meat is contaminated by multiple genotypes rather than a single one, and in 8/24 samples (33.3%) different genotypes were isolated before and after frozen storage. To trace the source of food-poisoning incidents it is therefore necessary to isolate as many strains as possible from food samples and subject them to genetic analysis. In an inoculation test of C. jejuni into chicken meat, samples kept continuously frozen without thawing showed only a slight decrease in counts compared with samples subjected to repeated freeze-thaw cycles, suggesting that cell death occurs mainly during freezing or thawing." }, { "instance_id": "R31809xR31787", "comparison_id": "R31809", "paper_id": "R31787", "text": "DNA methylation and embryogenic competence in leaves and callus of napiergrass (Pennisetum purpureum Schum Quantitative and qualitative levels of DNA methylation were evaluated in leaves and callus of Pennisetum purpureum Schum. The level of methylation did not change during leaf differentiation or aging and similar levels of methylation were found in embryogenic and nonembryogenic callus."
}, { "instance_id": "R31809xR31801", "comparison_id": "R31809", "paper_id": "R31801", "text": "Genetic characterization of late-flowering traits induced by DNA hypomethylation mutation in Arabidopsis thaliana Arabidopsis DNA hypomethylation mutation, ddm1, results in a variety of developmental abnormalities by slowly inducing heritable lesions at unlinked loci. Here, late-flowering traits observed at high frequencies in independently-established ddm1 lines were genetically characterized. In all of the four late-flowering lines examined the traits were dominant and mapped to the same chromosomal region, which is close or possibly identical to the FWA locus. The ddm1-induced phenotypic onsets are apparently not random mutation events, but specific to a group of genes, suggesting the underlying epigenetic mechanism. The DNA methylation mutants provide a useful system for identifying epigenetically-regulated genes important for plant development." }, { "instance_id": "R31809xR31796", "comparison_id": "R31809", "paper_id": "R31796", "text": "Rapid quantification of global DNA methylation by isocratic cation exchange high-performance liquid chromatography The DNA of many eukaryotes is methylated at specific cytosine residues in connection with gene regulation. Here we report a method for the quantification of global cytosine methylation based on enzymatic hydrolysis of DNA, dephosphorylation, and subsequent high-performance cation exchange chromatography. Nucleosides are separated in less than 3 min under isocratic conditions on a benzenesulfonic acid-modified silica phase and detected by UV absorption. As little as 1 microg of DNA is sufficient to measure 5-methyldeoxycytosine levels with a typical relative standard deviation of less than 3%. As a proof of concept, the method was applied for analysis of DNA from several Arabidopsis thaliana mutants affected in DNA methylation and from Medicago sativa seedlings treated with the environmental pollutant chromium(VI)." 
}, { "instance_id": "R31809xR31781", "comparison_id": "R31809", "paper_id": "R31781", "text": "Brassica oleracea displays a high level of DNA methylation polymorphism Brassica oleracea is a species displaying a high level of phenotypic variability. In order to evaluate the extent of genome methylation and to relate methylation polymorphism to phenotypic variability, we used the methylation-sensitive amplification polymorphism (MSAP) technique on 30 B. oleracea populations and lines representing the species diversity. We first observed that most MSAP fragments were inherited from one generation to the next one and were mainly additive in a progeny. A high mean rate of methylation estimated by MSAP was revealed in this species (range of 52-60% depending on the accessions), 30-41% of MSAP fragments being detected by MspI and 17-27% by HpaII. Most of the MSAP-methylated fragments (95%) were polymorphic between the populations and lines analysed. We performed a phenetic analysis to group populations/lines by using MSAP-methylated fragments. The phenetic relationships revealed a populations/lines classification that did not correlate completely with a classification by morphotype as obtained using AFLP fragments insensitive to methylation polymorphism. The high methylation level and polymorphism reported in this study could be related with the high structural genome plasticity already reported in the Brassica species to explain the phenotypic variability of this species." }, { "instance_id": "R31809xR31804", "comparison_id": "R31809", "paper_id": "R31804", "text": "Maize chromomethylase Zea methyltransferase2 is required for CpNpG methylation A cytosine DNA methyltransferase containing a chromodomain, Zea methyltransferase2 (Zmet2), was cloned from maize. 
The sequence of ZMET2 is similar to that of the Arabidopsis chromomethylases CMT1 and CMT3, with C-terminal motifs characteristic of eukaryotic and prokaryotic DNA methyltransferases. We used a reverse genetics approach to determine the function of the Zmet2 gene. Plants homozygous for a Mutator transposable element insertion into motif IX had a 13% reduction in methylated cytosines. DNA gel blot analysis of these plants with methylation-sensitive restriction enzymes and bisulfite sequencing of a 180-bp knob sequence showed reduced methylation only at CpNpG sites. No reductions in methylation were observed at CpG or asymmetric sites in heterozygous or homozygous mutant plants. Our research shows that chromomethylase Zmet2 is required for in vivo methylation of CpNpG sequences." }, { "instance_id": "R31809xR31790", "comparison_id": "R31809", "paper_id": "R31790", "text": "Cytosine methylation levels in the genome of Stellaria longipes Environment-induced alteration of DNA methylation levels was investigated in Stellaria longipes (Caryophyllaceae). Total cytosine methylation levels were measured using HPLC in 6 genets representing two ecotypes (alpine and prairie) grown in short day photoperiod and cold temperature (SDC) and long day photoperiod and warm temperature (LDW) conditions. The levels of methylated cytosine were 16.54-22.20% among the three genets from the alpine and 12.62\u201324.70% in the three prairie genets when they were grown in SDC conditions. After the plants were moved to the LDW conditions, all of the three genets from the alpine showed decreasing levels of DNA methylation up to 6 days of growing in LDW. When the plants continued to grow in LDW for 10 days the average methylation level in the prairie genotypes showed no significant change. Cytosine methylation level was also detected in HpaII and Sau3AI restriction sites using the coupled restriction enzyme digestion and random amplification (CRED-RA) procedure, in which 15 random primers were used. 
Fifty per cent of the amplified bands with either or both of these two restriction sites were identified as being methylated in an alpine genotype (1C) and approximately 66% were found to be methylated in a prairie genotype (7C). It was observed that the change in growing conditions from SDC to LDW induced a decrease of methylation levels in HpaII sites." }, { "instance_id": "R31809xR31771", "comparison_id": "R31809", "paper_id": "R31771", "text": "MSAP markers and global cytosine methylation in plants: a literature survey and comparative analysis for a wild-growing species Methylation of DNA cytosines affects whether transposons are silenced and genes are expressed, and is a major epigenetic mechanism whereby plants respond to environmental change. Analyses of methylation\u2010sensitive amplification polymorphism (MS\u2010AFLP or MSAP) have been often used to assess methyl\u2010cytosine changes in response to stress treatments and, more recently, in ecological studies of wild plant populations. MSAP technique does not require a sequenced reference genome and provides many anonymous loci randomly distributed over the genome for which the methylation status can be ascertained. Scoring of MSAP data, however, is not straightforward, and efforts are still required to standardize this step to make use of the potential to distinguish between methylation at different nucleotide contexts. Furthermore, it is not known how accurately MSAP infers genome\u2010wide cytosine methylation levels in plants. Here, we analyse the relationship between MSAP results and the percentage of global cytosine methylation in genomic DNA obtained by HPLC analysis. A screening of literature revealed that methylation of cytosines at cleavage sites assayed by MSAP was greater than genome\u2010wide estimates obtained by HPLC, and percentages of methylation at different nucleotide contexts varied within and across species. 
Concurrent HPLC and MSAP analyses of DNA from 200 individuals of the perennial herb Helleborus foetidus confirmed that methyl\u2010cytosine was more frequent in CCGG contexts than in the genome as a whole. In this species, global methylation was unrelated to methylation at the inner CG site. We suggest that global HPLC and context\u2010specific MSAP methylation estimates provide complementary information whose combination can improve our current understanding of methylation\u2010based epigenetic processes in nonmodel plants." }, { "instance_id": "R31878xR31857", "comparison_id": "R31878", "paper_id": "R31857", "text": "Exploration of the possibilities for production of Fischer Tropsch liquids and power via biomass gasification This paper reviews the technical feasibility and economics of biomass integrated gasification\u2013Fischer Tropsch (BIG-FT) processes in general, identifies most promising system configurations and identifies key R&D issues essential for the commercialisation of BIG-FT technology. The FT synthesis produces hydrocarbons of different length from a gas mixture of H2 and CO. The large hydrocarbons can be hydrocracked to form mainly diesel of excellent quality. The fraction of short hydrocarbons is used in a combined cycle with the remainder of the syngas. Overall LHV energy efficiencies, calculated with the flowsheet modelling tool Aspenplus, are 33\u201340% for atmospheric gasification systems and 42\u201350% for pressurised gasification systems. Investment costs of such systems are MUS$ 280\u2013450, depending on the system configuration. In the short term, production costs of FT-liquids will be about US$ 16/GJ. In the longer term, with large-scale production, higher CO conversion and higher C5+ selectivity in the FT process, production costs of FT-liquids could drop to US$ 9/GJ. These perspectives for this route and use of biomass-derived FT-fuels in the transport sector are promising. 
Research and development should be aimed at the development of large-scale (pressurised) biomass gasification-based systems and special attention must be given to the gas cleaning section." }, { "instance_id": "R31878xR31865", "comparison_id": "R31878", "paper_id": "R31865", "text": "Performance of entrained flow and fluidised bed biomass gasifiers on different scales This biomass gasification process study compares the energetic and economic efficiencies of a dual fluidised bed and an oxygen-blown entrained flow gasifier from 10 MWth to 500 MWth. While fluidised bed gasification became the most applied technology for biomass in small and medium scale facilities, entrained flow gasification technology is still used exclusively for industrial scale coal gasification. Therefore, it is analysed whether and for which capacity the entrained flow technology is an energetically and economically efficient option for the thermo-chemical conversion of biomass. Special attention is given to the pre-conditioning methods for biomass to enable the application in an entrained flow gasifier. Process chains are selected for the two gasifier types and subsequently transformed to simulation models. The simulation results show that the performance of both gasifier types is similar for the production of a pressurised product gas (2.5 MPa). The cold gas efficiency of the fluidised bed is 76\u201379" }, { "instance_id": "R31878xR31825", "comparison_id": "R31878", "paper_id": "R31825", "text": "Production of Fischer\u2013Tropsch fuels and electricity from bituminous coal based on steam hydrogasification A new thermochemical process for Fischer\u2013Tropsch (FT) fuels and electricity coproduction based on steam hydrogasification is addressed and evaluated in this study. The core parts include the Steam Hydrogasification Reactor (SHR), Steam Methane Reformer (SMR) and Fischer\u2013Tropsch Reactor (FTR). 
A key feature of SHR is the enhanced conversion of carbon into methane in a high steam environment with hydrogen and no need for catalyst or the use of oxygen. Facilities utilizing bituminous coal for coproduction of FT fuels and electricity with carbon dioxide sequestration are designed in detail. Cases with design capacity of either 400 or 4000 tonnes per day (TPD, dry basis) are investigated with process modeling and cost estimation. A cash flow analysis is performed to determine the fuels production cost (PC). The analysis shows that the 400 TPD case due to a FT fuels PC of 5.99 $/gallon diesel equivalent results in a plant design that is totally uneconomic. The 4000 TPD plant design is expected to produce 7143 bbl/day FT liquids with PC of 2.02 $/gallon and 2.27 $/gallon diesel equivalent at overall carbon capture ratio of 65% and 90%, respectively. Prospective commercial economics benefit with increasing plant size and improvements from large-scale demonstration efforts on steam hydrogasification." }, { "instance_id": "R31878xR31829", "comparison_id": "R31878", "paper_id": "R31829", "text": "Comparison of coal IGCC with and without CO2 capture and storage: shell gasification with standard vs. partial water quench This work provides a techno-economic assessment of Shell coal gasification-based IGCC, with and without CO2 capture and storage (CCS), focusing on the comparison between the standard Shell configuration with dry gas quench and syngas coolers versus partial water quench cooling." }, { "instance_id": "R31878xR31849", "comparison_id": "R31878", "paper_id": "R31849", "text": "Biofuels and biochemicals production from forest biomass in western Canada Biomass can be used for the production of fuels and chemicals with reduced life cycle greenhouse gas emissions. Currently, these fuels and chemicals are produced mainly from natural gas and other fossil fuels. 
In Western Canada, forest residue biomass is gasified for the production of syngas which is further synthesized to produce different fuels and chemicals. Two types of gasifiers: the atmospheric pressure gasifier (commercially known as SilvaGas) and the pressurized gasifier (commercially known as RENUGAS) are considered for syngas production. The production costs of methanol, dimethyl ether (DME), Fischer-Tropsch (F-T) fuels, and ammonia are $0.29/kg, $0.47/kg, $0.97/kg, and $2.09/kg, respectively, for a SilvaGas-based gasification plant with a capacity of 2000 dry tonnes/day. The cost of producing methanol, DME, F-T fuels, and ammonia in a RENUGAS-based plant are $0.45/kg, $0.69/kg, $1.53/kg, and $2.72/kg, respectively, for a plant capacity of 2000 dry tonnes/day. The minimum cost of producing methanol, DME, F-T fuels, and ammonia are $0.28/kg, $0.44/kg, $0.94/kg, and $2.06/kg at plant capacities of 3000, 3500, 4000, and 3000 dry tonnes/day, respectively, using the SilvaGas-based gasification process. Biomass-based fuels and chemicals are expensive compared to fuels and chemicals derived from fossil fuels, and carbon credits can help them become competitive." }, { "instance_id": "R31878xR31819", "comparison_id": "R31878", "paper_id": "R31819", "text": "Large-scale gasification-based coproduction of fuels and electricity from switchgrass Large-scale gasification-based systems for producing Fischer-Tropsch (F-T) fuels (diesel and gasoline blendstocks), dimethyl ether (DME), or hydrogen from switchgrass \u2013 with electricity as a coproduct in each case are assessed using a self-consistent design, simulation, and cost analysis framework. We provide an overview of alternative process designs for coproducing these fuels and power assuming commercially mature technology performance and discuss the commercial status of key component technologies. 
Overall efficiencies (lower-heating-value basis) of producing fuels plus electricity in these designs range from 57% for F-T fuels, 55\u201361% for DME, and 58\u201364% for hydrogen. Detailed capital cost estimates for each design are developed, on the basis of which prospective commercial economics of future large-scale facilities that coproduce fuels and power are evaluated." }, { "instance_id": "R31878xR31841", "comparison_id": "R31878", "paper_id": "R31841", "text": "Techno-economic performance analysis of bio-oil based Fischer\u2013Tropsch and CHP synthesis platform The techno-economic potential of the UK poplar wood and imported oil palm empty fruit bunch derived bio-oil integrated gasification and Fischer-Tropsch (BOIG-FT) systems for the generation of transportation fuels and combined heat and power (CHP) was investigated. The bio-oil was represented in terms of main chemical constituents, i.e. acetic acid, acetol and guaiacol. The compositional model of bio-oil was validated based on its performance through a gasification process. Given the availability of large scale gasification and FT technologies and logistic constraints in transporting biomass in large quantities, distributed bio-oil generations using biomass pyrolysis and centralised bio-oil processing in BOIG-FT system are technically more feasible. Heat integration heuristics and composite curve analysis were employed for once-through and full conversion configurations, and for a range of economies of scale, 1 MW, 675 MW and 1350 MW LHV of bio-oil. The economic competitiveness increases with increasing scale. A cost of production of FT liquids of 78.7 Euro/MWh was obtained based on 80.12 Euro/MWh of electricity, 75 Euro/t of bio-oil and 116.3 million Euro/y of annualised capital cost." 
}, { "instance_id": "R31878xR31867", "comparison_id": "R31878", "paper_id": "R31867", "text": "Fischer\u2013Tropsch diesel production in a well-to-wheel perspective: a carbon, energy flow and cost analysis We calculated carbon and energy balances and costs of 14 different Fischer\u2013Tropsch (FT) fuel production plants in 17 complete well-to-wheel (WTW) chains. The FT plants can use natural gas, coal, biomass or mixtures as feedstock. Technical data, and technological and economic assumptions for developments for 2020 were derived from the literature, recalculating to 2005 euros for (capital) costs. Our best-guess WTW estimates indicate BTL production costs break even when oil prices rise above $75/bbl, CTL above $60/bbl and GTL at $36/bbl. CTL, and GTL without carbon capture and storage (CCS), will emit more CO2 than diesel from conventional oil. Driving on fuel from GTL with CCS may reduce GHG emissions to around 123 g CO2/km. Driving on BTL may cause emissions of 32\u201363 g CO2/km and these can be made negative by application of CCS. It is possible to have net climate neutral driving by combining fuels produced from fossil resources with around 50% BTL with CCS, if biomass gasification and CCS can be made to work on an industrial scale and the feedstock is obtained in a climate-neutral manner. However, the uncertainties in these numbers are in the order of tens of percents, due to uncertainty in the data for component costs, variability in prices of feedstocks and by-products, and the GHG impact of producing biomass." 
}, { "instance_id": "R31903xR31857", "comparison_id": "R31903", "paper_id": "R31857", "text": "Exploration of the possibilities for production of Fischer Tropsch liquids and power via biomass gasification This paper reviews the technical feasibility and economics of biomass integrated gasification\u2013Fischer Tropsch (BIG-FT) processes in general, identifies most promising system configurations and identifies key R&D issues essential for the commercialisation of BIG-FT technology. The FT synthesis produces hydrocarbons of different length from a gas mixture of H2 and CO. The large hydrocarbons can be hydrocracked to form mainly diesel of excellent quality. The fraction of short hydrocarbons is used in a combined cycle with the remainder of the syngas. Overall LHV energy efficiencies, calculated with the flowsheet modelling tool Aspenplus, are 33\u201340% for atmospheric gasification systems and 42\u201350% for pressurised gasification systems. Investment costs of such systems are MUS$ 280\u2013450, depending on the system configuration. In the short term, production costs of FT-liquids will be about US$ 16/GJ. In the longer term, with large-scale production, higher CO conversion and higher C5+ selectivity in the FT process, production costs of FT-liquids could drop to US$ 9/GJ. These perspectives for this route and use of biomass-derived FT-fuels in the transport sector are promising. Research and development should be aimed at the development of large-scale (pressurised) biomass gasification-based systems and special attention must be given to the gas cleaning section." }, { "instance_id": "R31903xR31853", "comparison_id": "R31903", "paper_id": "R31853", "text": "Techno-economic analysis of biomass-to-liquids production based on gasification This study compares capital and production costs of two biomass-to-liquid production plants based on gasification utilizing 2000 dry Mg per day of corn stover. 
The focus is to produce liquid transportation fuels with electricity as co-product using commercially available technology within the next 5\u20138 years. The first biorefinery scenario is a low temperature (870 \u00b0C, 1598 \u00b0F), fluidized bed gasifier and the second scenario a high temperature (1300 \u00b0C, 2372 \u00b0F), entrained flow gasifier, both followed by catalytic Fischer\u2013Tropsch synthesis and hydroprocessing. The scenarios were modeled and investment costs estimated. A discounted cash flow rate of return analysis was performed to determine a fuel product value (PV). The analysis shows that despite higher investment costs for the high temperature gasification scenario, a lower PV results due to higher fuel yield. These nth plant scenarios are expected to produce fuels with a PV in the range of $4\u20135 per gallon of gasoline equivalent ($1.06\u20131.32 per liter) and require $500\u2013650 million capital investment. Main factors responsible for this relatively high PV are feedstock cost and return on capital investment. Biomass-to-liquid based on gasification has yet to be commercialized. Hence, a pioneer plant is expected to be more costly to build and operate than an nth plant. Pioneer plant analysis found PV to increase by 60\u201390% and capital investment to more than double." }, { "instance_id": "R31903xR31887", "comparison_id": "R31903", "paper_id": "R31887", "text": "Technical and economic prospects of coal- and biomass-fired integrated gasification facilities equipped with CCS over time This study analyses the impacts of technological improvements and increased operating experience on the techno-economic performance of integrated gasification (IG) facilities. The facilities investigated produce electricity (IGCC) or FT-liquids with electricity as by-product (IG\u2013FT). 
Results suggest that a state-of-the-art (SOTA) coal-fired IGCC without CO2 capture has electricity production costs of 17 \u20ac/GJ (60 \u20ac/MWh) with the potential to decrease to 11 \u20ac/GJ (40 \u20ac/MWh) in the long term. Specific direct CO2 emissions may drop from about 0.71 kg CO2/kWh to 0.59 kg CO2/kWh. If CO2 is captured, production costs may increase to 23 \u20ac/GJ (83 \u20ac/MWh), with the potential to drop to 14 \u20ac/GJ (51 \u20ac/MWh) in the long term. As a result, CO2 avoidance costs would decrease from 35 \u20ac/t CO2 to 18 \u20ac/t CO2. The efficiency penalty due to CCS may decrease from 8.8%pt to 3.7%pt. CO2 emissions can also be reduced by using torrefied biomass (TOPS) instead of coal. Production costs of a SOTA TOPS-fired IGCC without CO2 capture are 18\u201325 \u20ac/GJ (64\u201392 \u20ac/MWh). In the long term, this may drop to 12 \u20ac/GJ (44 \u20ac/MWh), resulting in CO2 avoidance costs of 7 \u20ac/t CO2. The greatest reduction in anthropogenic CO2 emissions is obtained by using biomass combined with carbon capture and storage (CCS). A SOTA TOPS-fired IGCC with CCS has, depending on the biomass price, production costs of 25\u201335 \u20ac/GJ (91\u2013126 \u20ac/MWh) with CO2 avoidance costs of 19\u201340 \u20ac/t CO2. These values may decrease to 15 \u20ac/GJ (55 \u20ac/MWh) and 12 \u20ac/t CO2 avoided in the long term. As carbon from biomass is captured, specific direct CO2 emissions are negative and estimated at \u22120.93 kg CO2/kWh for SOTA and \u22120.59 kg CO2/kWh in the long term. Even though more carbon is sequestered in the future concepts, specific emissions drop due to an increase in the energetic conversion efficiency of the future facilities. New technologies in IG-FT facilities have a slightly smaller impact on production costs. In the long term, production costs of FT-liquids from coal may drop from 13 \u20ac/GJ to 9 \u20ac/GJ if CO2 is vented and from 15 \u20ac/GJ to 10 \u20ac/GJ if CCS is applied. 
The use of TOPS results in 15\u201323 \u20ac/GJ (Vent) and 17\u201324 \u20ac/GJ (CCS) for SOTA facilities. These production costs may drop to 11\u201318 \u20ac/GJ (Vent) and 12\u201319 \u20ac/GJ (CCS) in the long term. Contrary to the IGCC cases, the coal-fired IG-FT facility shows the lowest CO2 avoidance costs. The CO2 emission of coal to FT-liquids with CCS is, however, similar to gasoline/diesel production from crude oil." }, { "instance_id": "R31903xR31822", "comparison_id": "R31903", "paper_id": "R31822", "text": "Making Fischer\u2013Tropsch fuels and electricity from coal and biomass: performance and cost analysis Major challenges posed by crude-oil-derived transportation fuels are high current and prospective oil prices, insecurity of liquid fuel supplies, and climate change risks from the accumulation of fossil fuel CO2 and other greenhouse gases in the atmosphere. One option for addressing these challenges simultaneously involves producing ultraclean synthetic fuels from coal and lignocellulosic biomass with CO2 capture and storage. Detailed process simulations, lifecycle greenhouse gas emissions analyses, and cost analyses carried out in a comprehensive analytical framework are presented for 16 alternative system configurations that involve gasification-based coproduction of Fischer\u2212Tropsch liquid (FTL) fuels and electricity from coal and/or biomass, with and without capture and storage of byproduct CO2. Systematic comparisons are made to cellulosic ethanol as an alternative low GHG-emitting liquid fuel and to alternative options for decarbonizing stand-alone fossil-fuel power plants. The analysis indicates tha..." 
}, { "instance_id": "R31903xR31862", "comparison_id": "R31903", "paper_id": "R31862", "text": "Thermochemical production of liquid fuels from biomass: thermo-economic modelling, process design and process integration analysis A detailed thermo-economic model combining thermodynamics with economic analysis and considering different technological alternatives for the thermochemical production of liquid fuels from lignocellulosic biomass is presented. Energetic and economic models for the production of Fischer-Tropsch fuel (FT), methanol (MeOH) and dimethyl ether (DME) by the means of biomass drying with steam or flue gas, directly or indirectly heated fluidized bed or entrained flow gasification, hot or cold gas cleaning, fuel synthesis and upgrading are reviewed and developed. The process is integrated and the optimal utility system is computed. The competitiveness of the different process options is compared systematically with regard to energetic, economic and environmental considerations. At several examples, it is highlighted that process integration is a key element that allows for considerably increasing the performance by optimal utility integration and energy conversion. The performance computations of some exemplary technology scenarios of integrated plants yield overall energy efficiencies of 59.8% (crude FT-fuel), 52.5% (MeOH) and 53.5% (DME), and production costs of 89, 128 and 113 EUR/MWh on fuel basis. The applied process design approach allows to evaluate the economic competitiveness compared to fossil fuels, to study the influence of the biomass and electricity price and to project for different plant capacities. Process integration reveals in particular potential energy savings and waste heat valorization. Based on this work, the most promising options for the polygeneration of fuel, power and heat will be determined in a future thermo-economic optimization." 
}, { "instance_id": "R31903xR31879", "comparison_id": "R31903", "paper_id": "R31879", "text": "Design/economics of a once-through natural gas Fischer\u2013Tropsch plant with power co-production In 1994, Bechtel, along with Amoco as the main subcontractor, developed a Baseline Design (and an ASPEN Plus computer process simulation model) for indirect coal liquefaction using advanced Fischer-Tropsch (F-T) technology to produce high-quality liquid transportation fuels. This was done under DOE Contract No. DE-AC22-91PC90027. In 1995, the original study was extended to develop a case in which natural gas, instead of coal, is used as the feedstock. The results, presented at last year\u2019s Contractors\u2019 Conference, show that a natural gas F-T plant is less capital intensive, and as a consequence, attractive F-T economics may be attained with low cost remote gas." }, { "instance_id": "R31903xR31881", "comparison_id": "R31903", "paper_id": "R31881", "text": "Second generation BtL type biofuels \u2013 a production cost analysis The objective of this paper is to address the issue of the production cost of second generation biofuels via the thermo-chemical route. The last decade has seen a large number of technical\u2013economic studies of second generation biofuels. As there is a large variation in the announced production costs of second generation biofuels in the literature, this paper clarifies some of the reasons for these variations and helps obtain a clearer picture. This paper presents simulations for two pathways and comparative production pathways previously published in the literature in the years between 2000 and 2011. It also includes a critical comparison and analysis of previously published studies. This paper does not include studies where the production is boosted with a hydrogen injection to improve the carbon yield. The only optimisation included is the recycle of tail gas. 
It is shown that the fuel can be produced on a large scale at prices of around 1.0\u20131.4 \u20ac per l. Large uncertainties remain however with regard to the precision of the economic predictions, the technology choices, the investment cost estimation and even the financial models to calculate the production costs. The benefit of a tail gas recycle is also examined; its benefit largely depends on the selling price of the produced electricity." }, { "instance_id": "R31928xR31841", "comparison_id": "R31928", "paper_id": "R31841", "text": "Techno-economic performance analysis of bio-oil based Fischer-Tropsch and CHP synthesis platform The techno-economic potential of the UK poplar wood and imported oil palm empty fruit bunch derived bio-oil integrated gasification and Fischer-Tropsch (BOIG-FT) systems for the generation of transportation fuels and combined heat and power (CHP) was investigated. The bio-oil was represented in terms of main chemical constituents, i.e. acetic acid, acetol and guaiacol. The compositional model of bio-oil was validated based on its performance through a gasification process. Given the availability of large scale gasification and FT technologies and logistic constraints in transporting biomass in large quantities, distributed bio-oil generations using biomass pyrolysis and centralised bio-oil processing in BOIG-FT system are technically more feasible. Heat integration heuristics and composite curve analysis were employed for once-through and full conversion configurations, and for a range of economies of scale, 1 MW, 675 MW and 1350 MW LHV of bio-oil. The economic competitiveness increases with increasing scale. A cost of production of FT liquids of 78.7 Euro/MWh was obtained based on 80.12 Euro/MWh of electricity, 75 Euro/t of bio-oil and 116.3 million Euro/y of annualised capital cost." 
}, { "instance_id": "R31928xR31857", "comparison_id": "R31928", "paper_id": "R31857", "text": "Exploration of the possibilities for production of Fischer Tropsch liquids and power via biomass gasification This paper reviews the technical feasibility and economics of biomass integrated gasification\u2013Fischer Tropsch (BIG-FT) processes in general, identifies most promising system configurations and identifies key R&D issues essential for the commercialisation of BIG-FT technology. The FT synthesis produces hydrocarbons of different length from a gas mixture of H2 and CO. The large hydrocarbons can be hydrocracked to form mainly diesel of excellent quality. The fraction of short hydrocarbons is used in a combined cycle with the remainder of the syngas. Overall LHV energy efficiencies,1 calculated with the flowsheet modelling tool Aspenplus, are 33\u201340% for atmospheric gasification systems and 42\u201350% for pressurised gasification systems. Investment costs of such systems () are MUS$ 280\u2013450,2 depending on the system configuration. In the short term, production costs of FT-liquids will be about US$ 16/GJ. In the longer term, with large-scale production, higher CO conversion and higher C5+ selectivity in the FT process, production costs of FT-liquids could drop to US$ 9/GJ. These perspectives for this route and use of biomass-derived FT-fuels in the transport sector are promising. Research and development should be aimed at the development of large-scale (pressurised) biomass gasification-based systems and special attention must be given to the gas cleaning section." }, { "instance_id": "R31928xR31908", "comparison_id": "R31928", "paper_id": "R31908", "text": "Cost estimate for biosynfuel production via biosyncrude gasification Production of synthetic fuels from lignocellulose like wood or straw involves complex technology. There- fore, a large BTL (biomass to liquid) plant for biosynfuel production is more economic than many small facilities. 
A reasonable BTL-plant capacity is \u2265 1 Mt/a biosynfuel similar to the already existing commercial CTL and GTL (coal to liquid, gas to liquid) plants of SASOL and SHELL, corresponding to at least 10% of the capacity of a modern oil refinery. BTL-plant cost estimates are therefore based on reported experience with CTL and GTL plants. Direct supply of large BTL plants with low bulk density biomass by trucks is limited by high transport costs and intolerable local traffic density. Biomass densification by liquefaction in a fast pyrolysis process generates a compact bioslurry or biopaste, also denoted as biosyncrude as produced by the bioliq\u00ae process. The densified biosyncrude intermediate can now be cheaply transported from many local facilities in silo wagons by electric rail over long distances to a large and more economic central biosynfuel plant. In addition to the capital expenditure (capex) for the large and complex central biosynfuel plant, a comparable investment effort is required for the construction of several dozen regional pyrolysis plants with simpler technology. Investment costs estimated for fast pyrolysis plants reported in the literature have been complemented by own studies for plants with ca. 100 MWth biomass input. The breakdown of BTL synfuel manufacturing costs of ca. 1 \u20ac/kg in central EU shows that about half of the costs are caused by the biofeedstock, including transport. This helps to generate new income for farmers. The other half is caused by technical costs, which are about proportional to the total capital investment (TCI) for the pyrolysis and biosynfuel production plants. Labor is a minor contribution in the relatively large facilities. 
" }, { "instance_id": "R31928xR31921", "comparison_id": "R31928", "paper_id": "R31921", "text": "Technoeconomic analysis of a lignocellulosic biomass indirect gasification process to make ethanol via mixed alcohols synthesis A technoeconomic analysis of a 2000 tonne/day lignocellulosic biomass conversion process to make mixed alcohols via gasification and catalytic synthesis was completed. The process, modeled using ASPEN Plus process modeling software for mass and energy calculations, included all major process steps to convert biomass into liquid fuels, including gasification, gas cleanup and conditioning, synthesis conversion to mixed alcohols, and product separation. The gas cleanup area features a catalytic fluidized-bed steam reformer to convert tars and hydrocarbons into syngas. Conversions for both the reformer and the synthesis catalysts were based on research targets expected to be achieved by 2012 through ongoing research. The mass and energy calculations were used to estimate capital and operating costs that were used in a discounted cash flow rate of return analysis for the process to calculate a minimum ethanol selling price of $0.267/L ($1.01/gal) ethanol (U.S.$2005)." }, { "instance_id": "R31928xR31829", "comparison_id": "R31928", "paper_id": "R31829", "text": "Comparison of coal IGCC with and without CO2 capture and storage: shell gasification with standard vs. partial water quench This work provides a techno-economic assessment of Shell coal gasification-based IGCC, with and without CO2 capture and storage (CCS), focusing on the comparison between the standard Shell configuration with dry gas quench and syngas coolers versus partial water quench cooling." 
}, { "instance_id": "R31928xR31853", "comparison_id": "R31928", "paper_id": "R31853", "text": "Techno-economic analysis of biomass-to-liquids production based on gasification Abstract This study compares capital and production costs of two biomass-to-liquid production plants based on gasification utilizing 2000 dry Mg per day of corn stover. The focus is to produce liquid transportation fuels with electricity as co-product using commercially available technology within the next 5\u20138 years. The first biorefinery scenario is a low temperature (870 \u00b0C, 1598 \u00b0F), fluidized bed gasifier and the second scenario a high temperature (1300 \u00b0C, 2372 \u00b0F), entrained flow gasifier both followed by catalytic Fischer\u2013Tropsch synthesis and hydroprocessing. The scenarios were modeled and investment costs estimated. A discounted cash flow rate of return analysis was performed to determine a fuel product value (PV). The analysis shows that despite higher investment costs for the high temperature gasification scenario, a lower PV results due to higher fuel yield. These n th plant scenarios are expected to produce fuels with a PV in the range of $4\u20135 per gallon of gasoline equivalent ($1.06\u20131.32 per liter) and require $500\u2013650 million capital investment. Main factors responsible for this relatively high PV are feedstock cost and return on capital investment. Biomass-to-liquid based on gasification has yet to be commercialized. Hence, a pioneer plant is expected to be more costly to build and operate than an n th plant. Pioneer plant analysis found PV to increase by 60\u201390% and capital investment to more than double." 
}, { "instance_id": "R31954xR31952", "comparison_id": "R31954", "paper_id": "R31952", "text": "Process evaluations and design studies in the UCG project The results of the process-evaluation work carried out in the project, Development of Ultra-Clean Gas (UCG) Technologies for Biomass Gasification, are presented in the publication. The UCG project was directed towards the development of innovative biomass gasification and gas-cleaning technologies for the production of ultra-clean synthesis gas. The process-evaluation work in the UCG project covered a number of topics, including: \u2212 process-configuration screening \u2212 production-chain screening \u2212 techno-economic evaluations of plants producing FT liquids, methanol, SNG or hydrogen; processes simulated with Excel-based codes \u2212 techno-economic evaluations of competing technologies / production chains \u2212 evaluations of benefits of integration; including novel concepts \u2212 design studies of front-end operations by Foster-Wheeler and Pohjolan Voima. The results of the work have proved useful in: \u2212 developing and improving the UCG concept for bio-syngas production \u2212 planning the experimental R&D work in the UCG project \u2212 providing input data for sustainability analyses, CO2-balance calculations, etc. (carried out elsewhere) \u2212 compiling business plans and developmental strategies." }, { "instance_id": "R31954xR31857", "comparison_id": "R31954", "paper_id": "R31857", "text": "Exploration of the possibilities for production of Fischer Tropsch liquids and power via biomass gasification This paper reviews the technical feasibility and economics of biomass integrated gasification\u2013Fischer Tropsch (BIG-FT) processes in general, identifies most promising system configurations and identifies key R&D issues essential for the commercialisation of BIG-FT technology. The FT synthesis produces hydrocarbons of different length from a gas mixture of H2 and CO. 
The large hydrocarbons can be hydrocracked to form mainly diesel of excellent quality. The fraction of short hydrocarbons is used in a combined cycle with the remainder of the syngas. Overall LHV energy efficiencies, calculated with the flowsheet modelling tool Aspenplus, are 33\u201340% for atmospheric gasification systems and 42\u201350% for pressurised gasification systems. Investment costs of such systems are MUS$ 280\u2013450, depending on the system configuration. In the short term, production costs of FT-liquids will be about US$ 16/GJ. In the longer term, with large-scale production, higher CO conversion and higher C5+ selectivity in the FT process, production costs of FT-liquids could drop to US$ 9/GJ. These perspectives for this route and use of biomass-derived FT-fuels in the transport sector are promising. Research and development should be aimed at the development of large-scale (pressurised) biomass gasification-based systems and special attention must be given to the gas cleaning section." }, { "instance_id": "R31954xR31881", "comparison_id": "R31954", "paper_id": "R31881", "text": "Second generation BtL type biofuels \u2013 a production cost analysis The objective of this paper is to address the issue of the production cost of second generation biofuels via the thermo-chemical route. The last decade has seen a large number of technical\u2013economic studies of second generation biofuels. As there is a large variation in the announced production costs of second generation biofuels in the literature, this paper clarifies some of the reasons for these variations and helps obtain a clearer picture. This paper presents simulations for two pathways and comparative production pathways previously published in the literature in the years between 2000 and 2011. It also includes a critical comparison and analysis of previously published studies. 
This paper does not include studies where the production is boosted with a hydrogen injection to improve the carbon yield. The only optimisation included is the recycle of tail gas. It is shown that the fuel can be produced on a large scale at prices of around 1.0\u20131.4 \u20ac per l. Large uncertainties remain however with regard to the precision of the economic predictions, the technology choices, the investment cost estimation and even the financial models to calculate the production costs. The benefit of a tail gas recycle is also examined; its benefit largely depends on the selling price of the produced electricity." }, { "instance_id": "R31954xR31938", "comparison_id": "R31954", "paper_id": "R31938", "text": "Production of FT transportation fuels from biomass; technical options, process analysis and optimisation, and development potential Fischer\u2013Tropsch (FT) diesel derived from biomass via gasification is an attractive clean and carbon neutral transportation fuel, directly usable in the present transport sector. System components necessary for FT diesel production from biomass are analysed and combined to a limited set of promising conversion concepts. The main variations are in gasification pressure, the oxygen or air medium, and in optimisation towards liquid fuels only, or towards the product mix of liquid fuels and electricity. The technical and economic performance is analysed. For this purpose, a dynamic model was built in Aspen Plus\u00ae, allowing for direct evaluation of the influence of each parameter or device, on investment costs, FT and electricity efficiency and resulting FT diesel costs. FT diesel produced by conventional systems on the short term and at moderate scale would probably cost 16 \u20ac/GJ. In the longer term (large scale, technological learning, and selective catalyst), this could decrease to 9 \u20ac/GJ. 
Biomass integrated gasification FT plants can only become economically viable when crude oil price levels rise substantially, or when the environmental benefits of green FT diesel are valued. Green FT diesel also seems 40\u201350% more expensive than biomass derived methanol or hydrogen, but has clear advantages with respect to applicability to the existing infrastructure and car technology." }, { "instance_id": "R31954xR31867", "comparison_id": "R31954", "paper_id": "R31867", "text": "Fischer\u2013Tropsch diesel production in a well-to-wheel perspective: a carbon, energy flow and cost analysis We calculated carbon and energy balances and costs of 14 different Fischer\u2013Tropsch (FT) fuel production plants in 17 complete well-to-wheel (WTW) chains. The FT plants can use natural gas, coal, biomass or mixtures as feedstock. Technical data, and technological and economic assumptions for developments for 2020 were derived from the literature, recalculating to 2005 euros for (capital) costs. Our best-guess WTW estimates indicate BTL production costs break even when oil prices rise above $75/bbl, CTL above $60/bbl and GTL at $36/bbl. CTL, and GTL without carbon capture and storage (CCS), will emit more CO2 than diesel from conventional oil. Driving on fuel from GTL with CCS may reduce GHG emissions to around 123 g CO2/km. Driving on BTL may cause emissions of 32\u201363 g CO2/km and these can be made negative by application of CCS. It is possible to have net climate neutral driving by combining fuels produced from fossil resources with around 50% BTL with CCS, if biomass gasification and CCS can be made to work on an industrial scale and the feedstock is obtained in a climate-neutral manner. However, the uncertainties in these numbers are in the order of tens of percents, due to uncertainty in the data for component costs, variability in prices of feedstocks and by-products, and the GHG impact of producing biomass." 
}, { "instance_id": "R31954xR31887", "comparison_id": "R31954", "paper_id": "R31887", "text": "Technical and economic prospects of coal- and biomass-fired integrated gasification facilities equipped with CCS over time Abstract This study analyses the impacts of technological improvements and increased operating experience on the techno-economic performance of integrated gasification (IG) facilities. The facilities investigated produce electricity (IGCC) or FT-liquids with electricity as by-product (IG\u2013FT). Results suggest that a state-of-the-art (SOTA) coal-fired IGCC without CO2 capture has electricity production costs of 17 \u20ac/GJ (60 \u20ac/MWh) with the potential to decrease to 11 \u20ac/GJ (40 \u20ac/MWh) in the long term. Specific direct CO2 emissions may drop from about 0.71 kg CO2/kWh to 0.59 kg CO2/kWh. If CO2 is captured, production costs may increase to 23 \u20ac/GJ (83 \u20ac/MWh), with the potential to drop to 14 \u20ac/GJ (51 \u20ac/MWh) in the long term. As a result, CO2 avoidance costs would decrease from 35 \u20ac/t CO2 to 18 \u20ac/t CO2. The efficiency penalty due to CCS may decrease from 8.8%pt to 3.7%pt. CO2 emissions can also be reduced by using torrefied biomass (TOPS) instead of coal. Production costs of a SOTA TOPS-fired IGCC without CO2 capture are 18\u201325 \u20ac/GJ (64\u201392 \u20ac/MWh). In the long term, this may drop to 12 \u20ac/GJ (44 \u20ac/MWh), resulting in CO2 avoidance costs of 7 \u20ac/t CO2. The greatest reduction in anthropogenic CO2 emissions is obtained by using biomass combined with carbon capture and storage (CCS). A SOTA TOPS-fired IGCC with CCS has, depending on the biomass price, production costs of 25\u201335 \u20ac/GJ (91\u2013126 \u20ac/MWh) with CO2 avoidance costs of 19\u201340 \u20ac/t CO2. These values may decrease to 15 \u20ac/GJ (55 \u20ac/MWh) and 12 \u20ac/t CO2 avoided in the long term. 
As carbon from biomass is captured, specific direct CO2 emissions are negative and estimated at \u22120.93 kg CO2/kWh for SOTA and \u22120.59 kg CO2/kWh in the long term. Even though more carbon is sequestered in the future concepts, specific emissions drop due to an increase in the energetic conversion efficiency of the future facilities. New technologies in IG-FT facilities have a slightly smaller impact on production costs. In the long term, production costs of FT-liquids from coal may drop from 13 \u20ac/GJ to 9 \u20ac/GJ if CO2 is vented and from 15 \u20ac/GJ to 10 \u20ac/GJ if CCS is applied. The use of TOPS results in 15\u201323 \u20ac/GJ (Vent) and 17\u201324 \u20ac/GJ (CCS) for SOTA facilities. These production costs may drop to 11\u201318 \u20ac/GJ (Vent) and 12\u201319 \u20ac/GJ (CCS) in the long term. Contrary to the IGCC cases, the coal-fired IG-FT facility shows the lowest CO2 avoidance costs. The CO2 emission of coal to FT-liquids with CCS is, however, similar to gasoline/diesel production from crude oil." }, { "instance_id": "R31954xR31853", "comparison_id": "R31954", "paper_id": "R31853", "text": "Techno-economic analysis of biomass-to-liquids production based on gasification This study compares capital and production costs of two biomass-to-liquid production plants based on gasification utilizing 2000 dry Mg per day of corn stover. The focus is to produce liquid transportation fuels with electricity as co-product using commercially available technology within the next 5\u20138 years. The first biorefinery scenario is a low temperature (870 \u00b0C, 1598 \u00b0F), fluidized bed gasifier and the second scenario a high temperature (1300 \u00b0C, 2372 \u00b0F), entrained flow gasifier both followed by catalytic Fischer\u2013Tropsch synthesis and hydroprocessing. The scenarios were modeled and investment costs estimated. A discounted cash flow rate of return analysis was performed to determine a fuel product value (PV). 
The analysis shows that despite higher investment costs for the high temperature gasification scenario, a lower PV results due to higher fuel yield. These nth plant scenarios are expected to produce fuels with a PV in the range of $4\u20135 per gallon of gasoline equivalent ($1.06\u20131.32 per liter) and require $500\u2013650 million capital investment. Main factors responsible for this relatively high PV are feedstock cost and return on capital investment. Biomass-to-liquid based on gasification has yet to be commercialized. Hence, a pioneer plant is expected to be more costly to build and operate than an nth plant. Pioneer plant analysis found PV to increase by 60\u201390% and capital investment to more than double." }, { "instance_id": "R31991xR31873", "comparison_id": "R31991", "paper_id": "R31873", "text": "Liquid transportation fuels via large-scale fluidised bed gasification of lignocellulosic biomass With the objective of gaining a better understanding of the system design tradeoffs and economics that pertain to biomass-to-liquids processes, 20 individual BTL plant designs were evaluated based on their technical and economic performance. The investigation was focused on gasification-based processes that enable the conversion of biomass to methanol, dimethyl ether, Fischer-Tropsch liquids or synthetic gasoline at a large (300 MWth of biomass) scale. The biomass conversion technology was based on pressurised steam/O2-blown fluidised-bed gasification, followed by hot-gas filtration and catalytic conversion of hydrocarbons and tars. This technology has seen extensive development and demonstration activities in Finland during the recent years and newly generated experimental data has been incorporated into the simulation models. Our study included conceptual design issues, process descriptions, mass and energy balances and production cost estimates. 
Several studies exist that discuss the overall efficiency and economics of biomass conversion to transportation liquids, but very few studies have presented a detailed comparison between various syntheses using consistent process designs and uniform cost database. In addition, no studies exist that examine and compare BTL plant designs using the same front-end configuration as described in this work. Our analysis shows that it is possible to produce sustainable low-carbon fuels from lignocellulosic biomass with first-law efficiency in the range of 49.6\u201366.7% depending on the end-product and process conditions. Production cost estimates were calculated assuming Nth plant economics and without public investment support, CO2 credits or tax assumptions. They are 58\u201365 \u20ac/MWh for methanol, 58\u201366 \u20ac/MWh for DME, 64\u201375 \u20ac/MWh for Fischer-Tropsch liquids and 68\u201378 \u20ac/MWh for synthetic gasoline." }, { "instance_id": "R31991xR31938", "comparison_id": "R31991", "paper_id": "R31938", "text": "Production of FT transportation fuels from biomass; technical options, process analysis and optimisation, and development potential Fischer\u2013Tropsch (FT) diesel derived from biomass via gasification is an attractive clean and carbon neutral transportation fuel, directly usable in the present transport sector. System components necessary for FT diesel production from biomass are analysed and combined to a limited set of promising conversion concepts. The main variations are in gasification pressure, the oxygen or air medium, and in optimisation towards liquid fuels only, or towards the product mix of liquid fuels and electricity. The technical and economic performance is analysed. For this purpose, a dynamic model was built in Aspen Plus\u00ae, allowing for direct evaluation of the influence of each parameter or device, on investment costs, FT and electricity efficiency and resulting FT diesel costs. 
FT diesel produced by conventional systems on the short term and at moderate scale would probably cost 16 \u20ac/GJ. In the longer term (large scale, technological learning, and selective catalyst), this could decrease to 9 \u20ac/GJ. Biomass integrated gasification FT plants can only become economically viable when crude oil price levels rise substantially, or when the environmental benefits of green FT diesel are valued. Green FT diesel also seems 40\u201350% more expensive than biomass derived methanol or hydrogen, but has clear advantages with respect to applicability to the existing infrastructure and car technology." }, { "instance_id": "R31991xR31929", "comparison_id": "R31991", "paper_id": "R31929", "text": "Hardwood biomass to gasoline, diesel, and jet fuel: I. Process synthesis and global optimization of a thermochemical refinery A process synthesis framework is introduced for the conversion of hardwood biomass to liquid (BTL) transportation fuels. A process superstructure is postulated that considers multiple thermochemical pathways for the production of gasoline, diesel, and jet fuel from a synthesis gas intermediate. The hardwood is dried and gasified to generate the synthesis gas, which is converted to hydrocarbons via Fischer\u2013Tropsch or methanol synthesis. Six different types of Fischer\u2013Tropsch units and two methanol conversion pathways are analyzed to determine the topology for liquid fuel production that minimizes the overall system cost. Several upgrading technologies, namely, ZSM-5 catalytic conversion, oligomerization, hydrocracking, isomerization, alkylation, and hydrotreating, are capable of outputting fuels that meet all necessary physical property standards. The costs associated with utility production and wastewater treatment are directly included within the process synthesis framework using a simultaneous heat, pow..." 
}, { "instance_id": "R31991xR31952", "comparison_id": "R31991", "paper_id": "R31952", "text": "Process evaluations and design studies in the UCG project The results of the process-evaluation work carried out in the project, Development of Ultra-Clean Gas (UCG) Technologies for Biomass Gasification, are presented in the publication. The UCG project was directed towards the development of innovative biomass gasification and gas-cleaning technologies for the production of ultra-clean synthesis gas. The process-evaluation work in the UCG project covered a number of topics, including: \u2212 process-configuration screening \u2212 production-chain screening \u2212 techno-economic evaluations of plants producing FT liquids, methanol, SNG or hydrogen; processes simulated with Excel-based codes \u2212 techno-economic evaluations of competing technologies / production chains \u2212 evaluations of benefits of integration; including novel concepts \u2212 design studies of front-end operations by Foster-Wheeler and Pohjolan Voima. The results of the work have proved useful in: \u2212 developing and improving the UCG concept for bio-syngas production \u2212 planning the experimental R&D work in the UCG project \u2212 providing input data for sustainability analyses, CO2-balance calculations, etc. (carried out elsewhere) \u2212 compiling business plans and developmental strategies." }, { "instance_id": "R31991xR31881", "comparison_id": "R31991", "paper_id": "R31881", "text": "Second generation BtL type biofuels \u2013 a production cost analysis The objective of this paper is to address the issue of the production cost of second generation biofuels via the thermo-chemical route. The last decade has seen a large number of technical\u2013economic studies of second generation biofuels. 
As there is a large variation in the announced production costs of second generation biofuels in the literature, this paper clarifies some of the reasons for these variations and helps obtain a clearer picture. This paper presents simulations for two pathways and comparative production pathways previously published in the literature in the years between 2000 and 2011. It also includes a critical comparison and analysis of previously published studies. This paper does not include studies where the production is boosted with a hydrogen injection to improve the carbon yield. The only optimisation included is the recycle of tail gas. It is shown that the fuel can be produced on a large scale at prices of around 1.0\u20131.4 \u20ac per l. Large uncertainties remain however with regard to the precision of the economic predictions, the technology choices, the investment cost estimation and even the financial models to calculate the production costs. The benefit of a tail gas recycle is also examined; its benefit largely depends on the selling price of the produced electricity." }, { "instance_id": "R31991xR31857", "comparison_id": "R31991", "paper_id": "R31857", "text": "Exploration of the possibilities for production of Fischer Tropsch liquids and power via biomass gasification This paper reviews the technical feasibility and economics of biomass integrated gasification\u2013Fischer Tropsch (BIG-FT) processes in general, identifies most promising system configurations and identifies key R&D issues essential for the commercialisation of BIG-FT technology. The FT synthesis produces hydrocarbons of different length from a gas mixture of H2 and CO. The large hydrocarbons can be hydrocracked to form mainly diesel of excellent quality. The fraction of short hydrocarbons is used in a combined cycle with the remainder of the syngas. 
Overall LHV energy efficiencies, calculated with the flowsheet modelling tool Aspenplus, are 33\u201340% for atmospheric gasification systems and 42\u201350% for pressurised gasification systems. Investment costs of such systems are MUS$ 280\u2013450, depending on the system configuration. In the short term, production costs of FT-liquids will be about US$ 16/GJ. In the longer term, with large-scale production, higher CO conversion and higher C5+ selectivity in the FT process, production costs of FT-liquids could drop to US$ 9/GJ. These perspectives for this route and use of biomass-derived FT-fuels in the transport sector are promising. Research and development should be aimed at the development of large-scale (pressurised) biomass gasification-based systems and special attention must be given to the gas cleaning section." }, { "instance_id": "R31991xR31887", "comparison_id": "R31991", "paper_id": "R31887", "text": "Technical and economic prospects of coal- and biomass-fired integrated gasification facilities equipped with CCS over time This study analyses the impacts of technological improvements and increased operating experience on the techno-economic performance of integrated gasification (IG) facilities. The facilities investigated produce electricity (IGCC) or FT-liquids with electricity as by-product (IG\u2013FT). Results suggest that a state-of-the-art (SOTA) coal-fired IGCC without CO2 capture has electricity production costs of 17 \u20ac/GJ (60 \u20ac/MWh) with the potential to decrease to 11 \u20ac/GJ (40 \u20ac/MWh) in the long term. Specific direct CO2 emissions may drop from about 0.71 kg CO2/kWh to 0.59 kg CO2/kWh. If CO2 is captured, production costs may increase to 23 \u20ac/GJ (83 \u20ac/MWh), with the potential to drop to 14 \u20ac/GJ (51 \u20ac/MWh) in the long term. As a result, CO2 avoidance costs would decrease from 35 \u20ac/t CO2 to 18 \u20ac/t CO2. The efficiency penalty due to CCS may decrease from 8.8%pt to 3.7%pt. 
CO2 emissions can also be reduced by using torrefied biomass (TOPS) instead of coal. Production costs of a SOTA TOPS-fired IGCC without CO2 capture are 18\u201325 \u20ac/GJ (64\u201392 \u20ac/MWh). In the long term, this may drop to 12 \u20ac/GJ (44 \u20ac/MWh), resulting in CO2 avoidance costs of 7 \u20ac/t CO2. The greatest reduction in anthropogenic CO2 emissions is obtained by using biomass combined with carbon capture and storage (CCS). A SOTA TOPS-fired IGCC with CCS has, depending on the biomass price, production costs of 25\u201335 \u20ac/GJ (91\u2013126 \u20ac/MWh) with CO2 avoidance costs of 19\u201340 \u20ac/t CO2. These values may decrease to 15 \u20ac/GJ (55 \u20ac/MWh) and 12 \u20ac/t CO2 avoided in the long term. As carbon from biomass is captured, specific direct CO2 emissions are negative and estimated at \u22120.93 kg CO2/kWh for SOTA and \u22120.59 kg CO2/kWh in the long term. Even though more carbon is sequestered in the future concepts, specific emissions drop due to an increase in the energetic conversion efficiency of the future facilities. New technologies in IG-FT facilities have a slightly smaller impact on production costs. In the long term, production costs of FT-liquids from coal may drop from 13 \u20ac/GJ to 9 \u20ac/GJ if CO2 is vented and from 15 \u20ac/GJ to 10 \u20ac/GJ if CCS is applied. The use of TOPS results in 15\u201323 \u20ac/GJ (Vent) and 17\u201324 \u20ac/GJ (CCS) for SOTA facilities. These production costs may drop to 11\u201318 \u20ac/GJ (Vent) and 12\u201319 \u20ac/GJ (CCS) in the long term. Contrary to the IGCC cases, the coal-fired IG-FT facility shows the lowest CO2 avoidance costs. The CO2 emission of coal to FT-liquids with CCS is, however, similar to gasoline/diesel production from crude oil." 
}, { "instance_id": "R31991xR31962", "comparison_id": "R31991", "paper_id": "R31962", "text": "Biofuels e economic aspects Assuming an oil price of US$60 per barrel, both biodiesel and bioethanol produced from wheat are not profitable in Europe. The producers' high margins are only due to the current mineral oil tax concessions. At present, biomass-to-liquid (BTL) fuel also cannot be produced competitively. At the assumed oil price, only bioethanol and biobutanol produced on a large scale from lignocellulose-containing raw materials have the potential to be produced competitively. Analyses of the technologies used in this field show that in Europe there are interesting new technological developments for the hydrolysis, fermentation and purification step." }, { "instance_id": "R32025xR32006", "comparison_id": "R32025", "paper_id": "R32006", "text": "Performance, cost and emissions of coal-to- liquids (CTLs) plants using low-quality coals under carbon constraints Prior studies of coal-to-liquids (CTLs) processes that produce synthetic transportation fuels from coal have focused mainly on designs using bituminous coal with no or limited constraints on carbon emissions. In this study, plant-level techno-economic models are applied to evaluate the performance, emissions and costs of CTL plants using low quality sub-bituminous coal and lignite as feedstock for both a slurry-feed and dry-feed gasification system. The additional cost of carbon dioxide capture and storage (CCS) is also studied for two plant configurations\u2014a liquids-only plant and a co-production plant that produces both liquids and electricity. The effect of uncertainty and variability of key parameters on the cost of liquids products is also quantified, as well as the effects of a carbon constraint in the form of a price or tax on plant-level CO2 emissions. For liquids-only plants, net plant efficiency is higher and CO2 emissions and costs are lower when sub-bituminous coal is used. 
For both coals, performance of plants with a dry-feed gasifier is better compared to plants with slurry-feed gasifiers, but the costs are comparable to each other, with slurry-feed plants having a minor advantage. A major concern for CTL plants is the high level of CO2 emissions, the major greenhouse gas linked to global climate change. However, this study shows that for the liquids-only plant most of the CO2 emissions can be avoided using CCS, with only a small (<1%) increase in capital cost. Depending on the coal type, gasifier type and CO2 constraint (up to $25/tonne CO2), the nominal cost of liquid product ranges from $75 to $110/barrel. Parameter uncertainties increase this range to $50\u2013140/barrel (90% confidence interval). With or without CCS, co-production plants are found to have higher capital costs than liquids-only plants, but produce cheaper liquid products when the electricity is sold at a sufficiently high price ($50\u2013120/MWh, depending on plant design and carbon constraint). For co-production plants, net plant efficiency, which depends both on coal consumption as well as electricity generation, is higher for plants with a dry-feed gasifier while CO2 emissions are lower from plants with a slurry-feed gasifier. For both coals, capital cost is lower for plants with dry-feed gasifier, with plants using sub-bituminous coal being cheaper than the ones using lignite. A CO2 tax of $25/tonne is not enough to make CCS more economical when the electricity price exceeds about $80/MWh." }, { "instance_id": "R32025xR31912", "comparison_id": "R32025", "paper_id": "R31912", "text": "Techno-economic evaluation of coal-to-liquids (CTL) plants with carbon capture and sequestration Coal-to-liquids (CTL) processes that generate synthetic liquid fuels from coal are of increasing interest in light of the substantial rise in world oil prices in recent years. 
A major concern, however, is the large emissions of CO2 from the process, which would add to the burden of atmospheric greenhouse gases. To assess the options, impacts and costs of controlling CO2 emissions from a CTL plant, a comprehensive techno-economic assessment model of CTL plants has been developed, capable of incorporating technology options for carbon capture and storage (CCS). The model was used to study the performance and cost of a liquids-only plant as well as a co-production plant, which produces both liquids and electricity. The effect of uncertainty and variability of key parameters on the cost of liquids production was quantified, as were the effects of alternative carbon constraints such as choice of CCS technology and the effective price (or tax) on CO2 emissions imposed by a climate regulatory policy. The efficiency and CO2 emissions from a co-production plant also were compared to the separate production of liquid fuels and electricity. The results for a 50,000 barrels/day case study plant are presented." }, { "instance_id": "R32025xR32018", "comparison_id": "R32025", "paper_id": "R32018", "text": "Policy drivers and barriers for coal-to-liquids (CtL) technologies in the United States Because of a growing dependence on oil imports, powerful industrial, political and societal stakeholders in the United States are trying to enhance national energy security through the conversion of domestic coal into synthetic hydrocarbon liquid fuels--so-called coal-to-liquids (CtL) processes. However, because of the technology's high costs and carbon intensity, its market deployment is strongly affected by the US energy, technology and climate policy setting. 
This paper analyses and discusses policy drivers and barriers for CtL technologies in the United States and reaches the conclusion that an increasing awareness of global warming among US policy-makers raises the requirements for the technology's environmental performance and, thus, limits its potential to regional niche markets in coal-producing states or strategic markets, such as the military, with specific security and fuel requirements." }, { "instance_id": "R32025xR31825", "comparison_id": "R32025", "paper_id": "R31825", "text": "Production of Fischer\u2013Tropsch fuels and electricity from bituminous coal based on steam hydrogasification A new thermochemical process for (Fischer\u2013Tropsch) FT fuels and electricity coproduction based on steam hydrogasification is addressed and evaluated in this study. The core parts include (Steam Hydrogasification Reactor) SHR, (Steam Methane Reformer) SMR and (Fischer\u2013Tropsch Reactor) FTR. A key feature of SHR is the enhanced conversion of carbon into methane in a high-steam environment with hydrogen, with no need for a catalyst or the use of oxygen. Facilities utilizing bituminous coal for coproduction of FT fuels and electricity with carbon dioxide sequestration are designed in detail. Cases with design capacity of either 400 or 4000 TPD (Tonne Per Day) (dry basis) are investigated with process modeling and cost estimation. A cash flow analysis is performed to determine the fuels (Production Cost) PC. The analysis shows that the 400 TPD case, due to an FT fuels PC of 5.99 $/gallon diesel equivalent, results in a plant design that is totally uneconomic. The 4000 TPD plant design is expected to produce 7143 bbl/day FT liquids with PC of 2.02 $/gallon and 2.27 $/gallon diesel equivalent at overall carbon capture ratio of 65% and 90%, respectively. Commercial economics are expected to benefit from increasing plant size and from improvements gained through large-scale demonstration efforts on steam hydrogasification." 
}, { "instance_id": "R32025xR32012", "comparison_id": "R32025", "paper_id": "R32012", "text": "Producing liquid fuels from coal, prospects and policy issues The increase in world oil prices since 2003 has prompted renewed interest in producing and using liquid fuels from unconventional resources, such as biomass, oil shale, and coal. This book focuses on issues and options associated with establishing a commercial coal-to-liquids (CTL) industry within the United States. It describes the technical status, costs, and performance of methods that are available for producing liquids from coal; the key energy and environmental policy issues associated with CTL development; the impediments to early commercial experience; and the efficacy of alternative federal incentives in promoting early commercial experience. Because coal is not the only near-term option for meeting liquid-fuel needs, this book also briefly reviews the benefits and limitations of other approaches, including the development of oil shale resources, the further development of biomass resources, and increasing dependence on imported petroleum. A companion document provides a detailed description of incentive packages that the federal government could offer to encourage private-sector investors to pursue early CTL production experience while reducing the probability of bad outcomes and limiting the costs that might be required to motivate those investors. (See Rand Technical Report TR586, Camm, Bartis, and Bushman, 2008.) 114 refs., 2 figs., 16 tabs., 3 apps." }, { "instance_id": "R32025xR31822", "comparison_id": "R32025", "paper_id": "R31822", "text": "Making FischereTropsch fuels and electricity from coal and biomass: performance and cost analysis Major challenges posed by crude-oil-derived transportation fuels are high current and prospective oil prices, insecurity of liquid fuel supplies, and climate change risks from the accumulation of fossil fuel CO2 and other greenhouse gases in the atmosphere. 
One option for addressing these challenges simultaneously involves producing ultraclean synthetic fuels from coal and lignocellulosic biomass with CO2 capture and storage. Detailed process simulations, lifecycle greenhouse gas emissions analyses, and cost analyses carried out in a comprehensive analytical framework are presented for 16 alternative system configurations that involve gasification-based coproduction of Fischer\u2212Tropsch liquid (FTL) fuels and electricity from coal and/or biomass, with and without capture and storage of byproduct CO2. Systematic comparisons are made to cellulosic ethanol as an alternative low GHG-emitting liquid fuel and to alternative options for decarbonizing stand-alone fossil-fuel power plants. The analysis indicates tha..." }, { "instance_id": "R32061xR32052", "comparison_id": "R32061", "paper_id": "R32052", "text": "Estimating class priors in domain adaptation for word sense disambiguation Instances of a word drawn from different domains may have different sense priors (the proportions of the different senses of a word). This in turn affects the accuracy of word sense disambiguation (WSD) systems trained and applied on different domains. This paper presents a method to estimate the sense priors of words drawn from a new domain, and highlights the importance of using well calibrated probabilities when performing these estimations. By using well calibrated probabilities, we are able to estimate the sense priors effectively to achieve significant improvements in WSD accuracy." }, { "instance_id": "R32061xR32031", "comparison_id": "R32061", "paper_id": "R32031", "text": "Frustratingly easy domain adaptation Over the last years, several authors have signaled that state of the art categorization methods fail to perform well when trained and tested on data from different databases. 
The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases always achieves very strong performance. We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings." }, { "instance_id": "R32061xR32055", "comparison_id": "R32061", "paper_id": "R32055", "text": "Domain adaptation via pseudo in-domain data selection We explore efficient domain adaptation for the task of statistical machine translation based on extracting sentences from a large general-domain parallel corpus that are most relevant to the target domain. These sentences may be selected with simple cross-entropy based methods, of which we present three. As these sentences are not themselves identical to the in-domain data, we call them pseudo in-domain subcorpora. These subcorpora -- 1% the size of the original -- can then be used to train small domain-adapted Statistical Machine Translation (SMT) systems which outperform systems trained on the entire corpus. Performance is further improved when we use these domain-adapted models in combination with a true in-domain model. 
The results show that more training data is not always better, and that best results are attained via proper domain-relevant data selection, as well as combining in- and general-domain systems during decoding." }, { "instance_id": "R32061xR32057", "comparison_id": "R32061", "paper_id": "R32057", "text": "Instance level transfer learning for cross lingual opinion analysis This paper presents two instance-level transfer learning based algorithms for cross lingual opinion analysis by transferring useful translated opinion examples from other languages as the supplementary training data for improving the opinion classifier in target language. Starting from the union of small training data in target language and large translated examples in other languages, the Transfer AdaBoost algorithm is applied to iteratively reduce the influence of low quality translated examples. Alternatively, starting only from the training data in target language, the Transfer Self-training algorithm is designed to iteratively select high quality translated examples to enrich the training data set. These two algorithms are applied to sentence- and document-level cross lingual opinion analysis tasks, respectively. The evaluations show that these algorithms effectively improve the opinion analysis by exploiting small target language training data and large cross lingual training data." }, { "instance_id": "R32061xR32034", "comparison_id": "R32061", "paper_id": "R32034", "text": "Domain adaptation with structural correspondence learning Discriminative learning methods are widely used in natural language processing. These methods work best when their training and test data are drawn from the same distribution. For many NLP tasks, however, we are confronted with new domains in which labeled data is scarce or non-existent. In such cases, we seek to adapt existing models from a resource-rich source domain to a resource-poor target domain. 
We introduce structural correspondence learning to automatically induce correspondences among features from different domains. We test our technique on part of speech tagging and show performance gains for varying amounts of source and target training data, as well as improvements in target domain parsing accuracy using our improved tagger." }, { "instance_id": "R32061xR32059", "comparison_id": "R32061", "paper_id": "R32059", "text": "Instance weighting for domain adaptation in nlp Domain adaptation is an important problem in natural language processing (NLP) due to the lack of labeled data in novel domains. In this paper, we study the domain adaptation problem from the instance weighting perspective. We formally analyze and characterize the domain adaptation problem from a distributional view, and show that there are two distinct needs for adaptation, corresponding to the different distributions of instances and classification functions in the source and the target domains. We then propose a general instance weighting framework for domain adaptation. Our empirical results on three NLP tasks show that incorporating and exploiting more information from the target domain through instance weighting is effective." }, { "instance_id": "R32061xR32040", "comparison_id": "R32061", "paper_id": "R32040", "text": "A two-stage approach to domain adaptation for statistical classifiers In this paper, we consider the problem of adapting statistical classifiers trained from some source domains where labeled examples are available to a target domain where no labeled example is available. One characteristic of such a domain adaptation problem is that the examples in the source domains and the target domain are known to follow different distributions. Thus a regular classification method would tend to overfit the source domains. 
We present a two-stage approach to domain adaptation, where at the first >" }, { "instance_id": "R32189xR32145", "comparison_id": "R32189", "paper_id": "R32145", "text": "A novel metaheuristic approach for the flow shop scheduling problem Abstract Advances in modern manufacturing systems such as CAD/CAM, FMS, CIM, have increased the use of intelligent techniques for solving various combinatorial and NP-hard sequencing and scheduling problems. Production process in these systems consists of workshop problems such as grouping similar parts into manufacturing cells and proceeds by passing these parts on machines in the same order. This paper presents a new hybrid simulated annealing algorithm (hybrid SAA) for solving the flow-shop scheduling problem (FSSP); an NP-hard scheduling problem with a strong engineering background. The hybrid SAA integrates the basic structure of a SAA together with features borrowed from the fields of genetic algorithms (GAs) and local search techniques. Particularly, the algorithm works from a population of candidate schedules and generates new populations of neighbor schedules by applying suitable small perturbation schemes. Further, during the annealing process, an iterated hill climbing procedure is stochastically applied on the population of schedules with the hope to improve its performance. The proposed approach is fast and easily implemented. Computational results on several public benchmarks of FSSP instances with up to 500 jobs and 20 machines show the effectiveness and the high quality performance of the approach. In comparison to the performance of previous SA and GA methods, the performance of the proposed one was found superior." 
}, { "instance_id": "R32189xR32111", "comparison_id": "R32189", "paper_id": "R32111", "text": "GA-based discrete dynamic programming approach for scheduling in FMS environments The paper presents a new genetic algorithm (GA)-based discrete dynamic programming (DDP) approach for generating static schedules in a flexible manufacturing system (FMS) environment. This GA-DDP approach adopts a sequence-dependent schedule generation strategy, where a GA is employed to generate feasible job sequences and a series of discrete dynamic programs are constructed to generate legal schedules for a given sequence of jobs. In formulating the GA, different performance criteria could be easily included. The developed DDF algorithm is capable of identifying locally optimized partial schedules and shares the computation efficiency of dynamic programming. The algorithm is designed In such a way that it does not suffer from the state explosion problem inherent in pure dynamic programming approaches in FMS scheduling. Numerical examples are reported to illustrate the approach." }, { "instance_id": "R32189xR32113", "comparison_id": "R32189", "paper_id": "R32113", "text": "An evolutionary hybrid scheduler based in Petri net structures for FMS scheduling Addresses a hybrid scheduling methodology for flexible manufacturing systems (FMS) that uses Petri nets (PNs) as a modeling tool and several successfully employed scheduling methods: conflict-solving based on heuristic dispatching algorithms, artificial intelligence (AI) heuristic search, problem decomposition and evolutionary approximation algorithms as search tools. PNs have been traditionally employed in scheduling approaches based on discrete event simulation and more recently, the combination of PNs and AI heuristic search has produced interesting results. PNs also allow easy structural analysis towards a decomposition of the problem. 
In this paper PNs are employed as a representation paradigm and a decomposition-construction scheduling method is based on them. A PN-based AI systematic heuristic search is used to solve sub-problems which are progressively joined by an evolutionary building procedure. Experimental results based on a preliminary implementation of the method are presented." }, { "instance_id": "R32189xR32182", "comparison_id": "R32189", "paper_id": "R32182", "text": "Appropriate evolutionary algorithm for scheduling in FMS Driven by open global competition, rapidly changing technology, and shorter product life cycles, manufacturing organizations come across a significant amount of uncertainty and hence continuous change. Customers' demand for a greater variety, high quality and competitive cost is on an increasing trend. Flexible Manufacturing Systems (FMS) have brought in significant advantages and benefits to manufacturing industries. The ability of FMSs to flex and adapt to both internal and external changes gives rise to improvement in throughput, product quality, information flows, reliability, and other strategic advantages. However, an appropriate scheduling methodology can better derive these benefits. Powerful evolutionary algorithms like the genetic algorithm (GA) and simulated annealing (SA) can be beneficially utilized for optimizing FMS scheduling. The present work utilizes these powerful approaches and tries to find out their appropriateness for planning & scheduling of FMS producing a variety of parts in batch mode." }, { "instance_id": "R32189xR32171", "comparison_id": "R32189", "paper_id": "R32171", "text": "Due date and cost-based FMS loading, scheduling and tool management In this study, we consider flexible manufacturing system loading, scheduling and tool management problems simultaneously. 
Our aim is to determine relevant tool management decisions, which are machining conditions selection and tool allocation, and to load and schedule parts on non-identical parallel CNC machines. The dual objectives are minimization of the manufacturing cost and total weighted tardiness. The manufacturing cost is comprised of machining and tooling costs (which are affected by machining conditions) and non-machining cost (which is affected by tool replacement decisions). We used both sequential and simultaneous approaches to solve our problem to show the superiority of the simultaneous approach. The proposed heuristics are used in a problem space genetic algorithm in order to generate a series of approximately efficient solutions." }, { "instance_id": "R32189xR32108", "comparison_id": "R32189", "paper_id": "R32108", "text": "An enhanced MPS solution for FMS using GAs Presents the master production scheduling (MPS) problem of a flexible manufacturing system (FMS). Earliness/tardiness production scheduling and planning (ETPSP) is one of the solutions used to integrate MRP and JIT effectively. Previous research on ETPSP has only been applied to problems of single/parallel machines with earliness and tardiness penalties on a common due date and capacity. Proposes a revised ETPSP for the purpose of developing an MPS that can fit into the FMS environment where a multiple machine capacity on multi\u2010processes and batch sizes is included. Outlines the application of an enhanced ETPSP method using a genetic algorithm (GA) solution to solve a multi\u2010product FMS production problem. Shows that the use of the improved ETPSP model can represent a real-life FMS environment and that a solution can be effectively and efficiently obtained using the GA approach." 
}, { "instance_id": "R32189xR32091", "comparison_id": "R32189", "paper_id": "R32091", "text": "Genetically tuned fuzzy scheduling for flexible manufacturing systems This paper focuses on the development and implementation of a genetically tuned fuzzy scheduler (GTFS) for heterogeneous FMS under uncertainty. The scheduling system takes input from a table and creates an optimum master schedule. The GTFS uses fuzzy rulebase and inferencing where fuzzy sets are generated by a genetic algorithm to tune the optimization. The fuzzy optimization is based on time criticality in deadline and machine need, taking into account machine availability, uniformity, process time and selectability." }, { "instance_id": "R32189xR32147", "comparison_id": "R32189", "paper_id": "R32147", "text": "An intelligent integrated scheduling model for flexible manufacturing system This paper deals with the simultaneous scheduling of incoming jobs, machines, and vehicle dispatching in a flexible manufacturing system (FMS) having a single device, an automated guided vehicle (AGV). The objective is to find an optimal sequence of incoming parts, which will reduce the waiting times due to blocking and starving of resources and deadheading times, resulting in overall minimization of makespan. In this work a genetic algorithm based iterative procedure which accommodates the combinatorial nature of the problem is proposed to approximately solve the integrated scheduling problem. The procedure is evaluated through different benchmark problems and the outcome of the study is encouraging and paves the way for further research in this area." }, { "instance_id": "R32189xR32085", "comparison_id": "R32189", "paper_id": "R32085", "text": "A GA embedded dynamic search algorithm over a Petri Net model an FMS scheduling In this paper, a genetic algorithm (GA) embedded dynamic search strategy over a Petri net model provides a new scheduling method for a flexible manufacturing system (FMS). 
The chromosome representation of the search nodes is constructed directly from the Petri net model of an FMS, recording the information about all conflict resolutions, such as resource assignments and orders for resource allocation. The GA operators may enforce some change to the chromosome information in the next generation. A Petri net based schedule builder receives a chromosome and an initial marking as input, and then produces a near-optimal schedule. Due to the NP-complete nature of the scheduling problem of an FMS, we also propose a dynamic FMS scheduler incorporating the proposed GA embedded search scheme, which generates successive partial schedules, instead of generating a full schedule for all raw parts, as the production evolves." }, { "instance_id": "R32189xR32187", "comparison_id": "R32189", "paper_id": "R32187", "text": "HPA-PN: a new algorithm for scheduling FMS using combinational genetic algorithm and Timed Petri Net In flexible manufacturing systems (FMS), each job is formed of a set of operations that should be executed consecutively. Determining the sequence of operations and assigning a proper machine to each operation are two important problems in scheduling FMS\u2019s. This is an NP-hard problem. Recently, numerous heuristic algorithms have been presented for solving this problem. In this paper, for scheduling of flexible manufacturing systems, a new algorithm called Hybrid Genetic Algorithm-Petri Net (HGA-PN) is presented using a Timed Petri Net and a combinational genetic algorithm. In our proposed algorithm, first the flexible manufacturing system is modeled using a timed Petri net, and then an appropriate schedule is constructed using the combinational genetic algorithm. The experimental results illustrate that our proposed algorithm has higher efficiency than other existing algorithms." 
}, { "instance_id": "R32189xR32097", "comparison_id": "R32189", "paper_id": "R32097", "text": "A genetic algorithm for scheduling flexible manufacturing systems General job shop scheduling and rescheduling with alternative route choices for an FMS environment is addressed in this paper. A genetic algorithm is proposed to derive an optimal combination of priority dispatching rules \u201cpdrs\u201d (independentpdrs one each for one Work Cell \u201cWC\u201d), to resolve the conflict among the contending jobs in the Giffler and Thompson \u201cGT\u201d procedure. The performance is compared with regard to makes-pan criteria and computational time. The optimal WCwise-pdr is proved to be efficient in providing optimal solutions in a reasonable computational time. Also, the proposed GA based heuristic method is extended to revise schedules on the arrival of new jobs, and on the failure of equipment to address the dynamic operation mode of flexible manufacturing systems. An iterative search technique is proposed to find the best route choice for all operations to provide a feasible and optimal solution. The applicability and usefulness of the proposed methodology for the operation and control of FMS in real-time are illustrated with examples. The scope of the genetic search process and future research directions are discussed." }, { "instance_id": "R32189xR32162", "comparison_id": "R32189", "paper_id": "R32162", "text": "Simultaneous scheduling of parts and automated guided vehicles in an FMS environment using adaptive genetic algorithm Automated Guided Vehicles (AGVs) are among various advanced material handling techniques that are finding increasing applications today. They can be interfaced to various other production and storage equipment and controlled through an intelligent computer control system. 
Both the scheduling of operations on machine centers as well as the scheduling of AGVs are essential factors contributing to the efficiency of the overall flexible manufacturing system (FMS). An increase in the performance of the FMS under consideration would be expected as a result of making the scheduling of AGVs an integral part of the overall scheduling activity. In this paper, simultaneous scheduling of parts and AGVs is done for a particular type of FMS environment by using a non-traditional optimization technique called the adaptive genetic algorithm (AGA). The problem considered here is a large variety problem (16 machines and 43 parts) and combined objective function (minimizing penalty cost and minimizing machine idle time). If the parts and AGVs are properly scheduled, then the idle time of the machining center can be minimized; as such, their utilization can be maximized. Minimizing the penalty cost for not meeting the delivery date is also considered in this work. Two contradictory objectives are to be achieved simultaneously by scheduling parts and AGVs using the adaptive genetic algorithm. The results are compared to those obtained by a conventional genetic algorithm." }, { "instance_id": "R32189xR32103", "comparison_id": "R32189", "paper_id": "R32103", "text": "Knowledge-based workcell attribute oriented dynamic schedulers for flexible manufacturing systems The economy of production in flexible manufacturing systems (FMS) depends mainly on how effectively the production is planned and how the resources are used. This requires efficient and dynamic factory scheduling and control procedures. This paper addresses two knowledge-based scheduling schemes (work cell attribute oriented dynamic schedulers \u201cWCAODSs\u201d) to control the flow of parts efficiently in real-time for FMS in which the part-mix varies continually with the planning horizon. The present work employs a hybrid optimisation approach in the generalised AI framework. 
A genetic algorithm that provides an optimal combination of a set of priority dispatching rules, one for each work cell \u201cWC\u201d (WCwisepdr set), for each of the problem instances characterised by their WC attributes, is used for generating examples. The WC attributes reflect the information about the operating environment of each individual WC. Two inductive learning algorithms are employed to learn the examples, and scheduling rules are formulated as a knowledge base. The learning algorithms employed are: the Genetic CID3 (Continuous Interactive Dichotomister3 algorithm extended with genetic program for weight optimisation) and the Classification Decision Tree algorithm. The knowledge base obtained through the above learning schemes generates robust and effective schedules intelligently with respect to the part-mix changes in real-time, for makespan criteria. The comparison made with a GA-based scheduling methodology shows that WCAODSs provide solutions closer to the optimum." }, { "instance_id": "R32189xR32173", "comparison_id": "R32189", "paper_id": "R32173", "text": "An asymmetric multileveled symbiotic evolutionary algorithm for integrated FMS scheduling This paper considers the integrated FMS (flexible manufacturing system) scheduling problem (IFSP) consisting of loading, routing, and sequencing subproblems that are interrelated to each other. In scheduling FMS, the decisions for the subproblems should be appropriately made to improve resource utilization. It is also important to fully exploit the potential of the inherent flexibility of FMS. In this paper, a symbiotic evolutionary algorithm, named asymmetric multileveled symbiotic evolutionary algorithm (AMSEA), is proposed to solve the IFSP. AMSEA imitates the natural process of symbiotic evolution and endosymbiotic evolution. Genetic representations and operators suitable for the subproblems are proposed. A neighborhood-based coevolutionary strategy is employed to maintain the population diversity. 
AMSEA has the strength to simultaneously solve subproblems for loading, routing, and sequencing and to easily handle a variety of FMS flexibilities. The extensive experiments are carried out to verify the performance of AMSEA, and the results are reported." }, { "instance_id": "R32189xR32176", "comparison_id": "R32189", "paper_id": "R32176", "text": "An introduction of dominant genes in genetic algorithm for FMS This paper proposes a new idea, namely genetic algorithms with dominant genes (GADG) in order to deal with FMS scheduling problems with alternative production routing. In the traditional genetic algorithm (GA) approach, crossover and mutation rates should be pre-defined. However, different rates applied in different problems will directly influence the performance of genetic search. Determination of optimal rates in every run is time-consuming and not practical in reality due to the infinite number of possible combinations. In addition, this crossover rate governs the number of genes to be selected to undergo crossover, and this selection process is totally arbitrary. The selected genes may not represent the potential critical structure of the chromosome. To tackle this problem, GADG is proposed. This approach does not require a defined crossover rate, and the proposed similarity operator eliminates the determination of the mutation rate. This idea helps reduce the computational time remarkably and improve the performance of genetic search. The proposed GADG will identify and record the best genes and structure of each chromosome. A new crossover mechanism is designed to ensure the best genes and structures to undergo crossover. The performance of the proposed GADG is testified by comparing it with other existing methodologies, and the results show that it outperforms other approaches." 
}, { "instance_id": "R32424xR32197", "comparison_id": "R32424", "paper_id": "R32197", "text": "Chemical Composition of the Essential Oil of Artemisia herba-alba Asso Grown in Algeria Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila-Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identified, 33 of them being reported for the first time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and \u03b2-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield." }, { "instance_id": "R32424xR32385", "comparison_id": "R32424", "paper_id": "R32385", "text": "Composition and intraspecific chemical variability of the essential oil from Artemisia herba alba growing wild in a Tunisian arid zone The intraspecific chemical variability of essential oils (50 samples) isolated from the aerial parts of Artemisia herba\u2010alba Asso growing wild in the arid zone of Southeastern Tunisia was investigated. Analysis by GC (RI) and GC/MS allowed the identification of 54 essential oil components. The main compounds were \u03b2\u2010thujone and \u03b1\u2010thujone, followed by 1,8\u2010cineole, camphor, chrysanthenone, trans\u2010sabinyl acetate, trans\u2010pinocarveol, and borneol. Chemometric analysis (k\u2010means clustering and PCA) led to the partitioning into three groups. The composition of two thirds of the samples was dominated by \u03b1\u2010thujone or \u03b2\u2010thujone. Therefore, it could be expected that wild plants of A.
herba\u2010alba randomly harvested in the area of Kirchaou and transplanted by local farmers for the cultivation in arid zones of Southern Tunisia produce an essential oil belonging to the \u03b1\u2010thujone/\u03b2\u2010thujone chemotype and containing also 1,8\u2010cineole, camphor, and trans\u2010sabinyl acetate at appreciable amounts." }, { "instance_id": "R32424xR32369", "comparison_id": "R32424", "paper_id": "R32369", "text": "The essential oil from Artemisia herba-alba Asso cultivated in Arid Land (South Tunisia) Abstract Seedlings of Artemisia herba-alba Asso collected from Kirchaou area were transplanted in an experimental garden near the Institut des R\u00e9gions Arides of M\u00e9denine (Tunisia). During three years, the aerial parts were harvested (three levels of cutting, 25%, 50% and 75% of the plant), at full blossom and during the vegetative stage. The essential oil was isolated by hydrodistillation and its chemical composition was determined by GC(RI) and 13C-NMR. With respect to the quantity of vegetable material and the yield of hydrodistillation, it appears that the best results were obtained for plants cut at 50% of their height and during the full blossom. The chemical composition of the essential oil was dominated by \u03b2-thujone, \u03b1-thujone, 1,8-cineole, camphor and trans-sabinyl acetate, irrespective of the level of cutting and the period of harvest. It remains similar to that of plants growing wild in the same area." }, { "instance_id": "R32424xR32203", "comparison_id": "R32424", "paper_id": "R32203", "text": "Inhibition of steel corrosion in 2M H3PO4 by artemisia oil Abstract Artemisia oil (Ar) is extracted from artemisia herba alba collected in Ain es-sefra-Algeria, and tested as corrosion inhibitor of steel in 2 M H3PO4 using weight loss measurements, electrochemical polarisation and EIS methods. The natural oil reduces the corrosion rate. The inhibition efficiency was found to increase with oil content to attain 79% at 6 g/l.
Ar acts as a cathodic inhibitor. The effect of temperature on the corrosion behaviour of steel indicates that inhibition efficiency of the natural substance decreases with the rise of temperature. The adsorption isotherm of natural product on the steel has been determined." }, { "instance_id": "R32424xR32210", "comparison_id": "R32424", "paper_id": "R32210", "text": "Extraction by Steam Distillation of Artemisia herba-alba Essential Oil from Algeria: Kinetic Study and Optimization of the Operating Conditions Abstract In order to study the extraction process of essential oil from Artemisia herba-alba, kinetic studies as well as an optimization of the operating conditions were achieved. The optimization was carried out by a parametric study and experiments planning method. Three operational parameters were chosen: Artemisia mass to be treated, steam flow rate and extraction time. The optimal extraction conditions obtained by the parametric study correspond to: a mass of 30 g, a steam flow rate of 1.65 mL.min\u22121 and the extraction time of 60 min. The results reveal that the combined effects of two parameters, the steam water flow rate and the extraction time, are the most significant. The yield is also affected by the interaction of the three parameters. The essential oil obtained with optimal conditions was analyzed by GC-MS and a kinetic study was realised." }, { "instance_id": "R32424xR32215", "comparison_id": "R32424", "paper_id": "R32215", "text": "Chemical composition of Algerian Artemisia herba-alba essential oils isolated by microwave and hydrodistillation Abstract Isolation of the essential oil from Artemisia herba-alba collected in the North Sahara desert has been conducted by hydrodistillation (HD) and a microwave distillation process (MD). The chemical composition of the two oils was investigated by GC and GC/MS. In total, 94 constituents were identified.
The main components were camphor (49.3 and 48.1% in HD and MD oils, respectively), 1,8-cineole (13.4\u201312.4%), borneol (7.3\u20137.1%), pinocarvone (5.6\u20135.5%), camphene (4.9\u20134.5%) and chrysanthenone (3.2\u20133.3%). In comparison with HD, MD allows one to obtain an oil in a very short time, with similar yields, comparable qualities and substantial savings of energy." }, { "instance_id": "R32424xR32422", "comparison_id": "R32424", "paper_id": "R32422", "text": "Chemical constituents and antioxidant activity of the essential oil from aerial parts of Artemisia herba-alba grown in Tunisian semi-arid region Essential oils and their components are becoming increasingly popular as naturally occurring antioxidant agents. In this work, the composition of the essential oil of Artemisia herba-alba from southwest Tunisia, obtained by hydrodistillation, was determined by GC/MS. Eighteen compounds were identified, the main constituents being \u03b1-thujone (24.88%), germacrene D (14.48%), camphor (10.81%), 1,8-cineole (8.91%) and \u03b2-thujone (8.32%). The oil was screened for its antioxidant activity with 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, \u03b2-carotene bleaching and reducing power assays. The essential oil of A. herba-alba exhibited a good antioxidant activity in all assays in a dose-dependent manner, which can be attributed to the compounds present in the oil. Key words: Artemisia herba alba, essential oil, chemical composition, antioxidant activity." }, { "instance_id": "R32424xR32413", "comparison_id": "R32424", "paper_id": "R32413", "text": "Chemical composition and biological activities of a new essential oil chemotype of Tunisian Artemisia herba alba Asso The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin I-converting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia.
The essential oil from the air dried leaves and flowers of Aha was extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds were detected, out of which 48 were identified. The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which cis-chrysanthenyl acetate (10.60%), sabinyl acetate (9.13%) and \u03b1-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones, were identified in the essential oil at relatively high rates. The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the \u03b2-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil were evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in South-Western Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances." }, { "instance_id": "R32424xR32324", "comparison_id": "R32424", "paper_id": "R32324", "text": "Composition and infraspecific variability of Artemisia herba-alba from southern Spain The composition of the essential oils of 16 individual plants of Artemisia herba-alba Asso ssp. valentina (Lam.) Marcl. (at the full bloom stage) growing wild in four different locations from southern Spain were investigated by capillary GC and GC\u2013MS in combination with retention indices.
Among the 60 identified constituents (accounting for 80.6\u201395.0% of the oils), 33 have been reported for the first time in Spanish A. herba-alba oil and 17 of them have not been previously described in A. herba-alba oil. From the analysis of the oil samples, it could be deduced that a noticeable chemical polymorphism typified this taxon. Four groups of essential oils exhibited a single compound with percentages near 30% or higher: davanone, 1,8-cineole, chrysanthenone and cis-chrysanthenol. Two further oil types showed p-cymene and cis-chrysanthenyl acetate as major components in moderate amounts (ca. 20%). All of these types of essential oils have not been previously found in A. herba-alba from Spain and the appearance of such a considerable amount of p-cymene is described here for the first time in A. herba-alba. © 2003 Published by Elsevier Ltd." }, { "instance_id": "R32424xR32277", "comparison_id": "R32424", "paper_id": "R32277", "text": "APPLICATION OF ESSENTIAL OIL OF ARTEMISIA HERBA ALBA AS GREEN CORROSION INHIBITOR FOR STEEL IN 0.5 M H2SO4 Essential oil from Artemisia herba alba (Art) was hydrodistilled and tested as corrosion inhibitor of steel in 0.5 M H2SO4 using weight loss measurements and electrochemical polarization methods. Results gathered show that this natural oil reduced the corrosion rate by the cathodic action. Its inhibition efficiency attains the maximum (74%) at 1 g/L. The inhibition efficiency of Art oil increases with the rise of temperature. The adsorption isotherm of natural product on the steel has been determined. A. herba alba essential oil was obtained by hydrodistillation and its chemical composition was investigated by capillary GC and GC/MS. The major components were chrysanthenone (30.6%) and camphor (24.4%)."
}, { "instance_id": "R32424xR32248", "comparison_id": "R32424", "paper_id": "R32248", "text": "The constitution of essential oils from Artemisia herba alba populations of Israel and Sinai Abstract The essential oils of Artemisia herba alba populations, four from Israel and one from Sinai, were analysed. Identification of components was achieved either by isolation of pure components or by GC and GC/MS. The composition of the oils differed in the various populations. All the oils contained 1,8-cineole in varying concentrations. Irregular monoterpenes were found in two populations, in one of them at high concentration. Two main types of oils were discerned, the cineole-thujane-bornane type and the pinane type. The differences in the composition of the essential oils in the A. herba alba populations investigated are in line with the variations of their sesquiterpene lactones." }, { "instance_id": "R32424xR32293", "comparison_id": "R32424", "paper_id": "R32293", "text": "Chemical composition and antiproliferative activity of essential oil from aerial parts of a medicinal herb Artemisia herba-alba Artemisia herba-alba Asso., Asteraceae, is widely used in Moroccan folk medicine for the treatment of different health disorders. However, no scientific or medical studies were carried out to assess the cytotoxicity of A. herba-alba essential oil against cancer cell lines. In this study, eighteen volatile compounds were identified by GC-MS analysis of the essential oil obtained from the plant's aerial parts. The main volatile constituent in A. herba-alba was found to be a monoterpene, verbenol, contributing to about 22% of the total volatile components. The essential oil showed significant antiproliferative activity against the acute lymphoblastic leukaemia (CEM) cell line, with 3 \u00b5g/mL as IC50 value. The anticancer bioactivity of Moroccan A. herba-alba essential oil is described here for the first time."
}, { "instance_id": "R32424xR32377", "comparison_id": "R32424", "paper_id": "R32377", "text": "IMPACT OF SEASON AND HARVEST FREQUENCY ON BIOMASS AND ESSENTIAL OIL YIELDS OF ARTEMISIA HERBA-ALBA CULTIVATED IN SOUTHERN TUNISIA SUMMARY Artemisia herba-alba Asso has been successfully cultivated in the Tunisian arid zone. However, information regarding the effects of the harvest frequency on its biomass and essential oil yields is very limited. In this study, the effects of three different frequencies of harvesting the upper half of the A. herba-alba plant tuft were compared. The harvest treatments were: harvesting the same individual plants at the flowering stage annually; harvesting the same individual plants at the full vegetative growth stage annually and harvesting the same individual plants every six months. Statistical analyses indicated that all properties studied were affected by the harvest frequency. Essential oil yield depended both on the dry biomass and its essential oil content, and was significantly higher from plants harvested annually at the flowering stage than the other two treatments. The composition of the \u03b2- and \u03b1-thujone-rich oils did not vary throughout the experimental period." }, { "instance_id": "R32424xR32330", "comparison_id": "R32424", "paper_id": "R32330", "text": "Chemical composition, mutagenic and antimutagenic activities of essential oils from (Tunisian) Artemisia campestris and Artemisia herba-alba Abstract The essential oil composition from the aerial parts of Artemisia campestris var. glutinosa Gay ex Bess and Artemisia herba-alba Asso (Asteraceae) of Tunisian origin has been studied by GC and GC/MS. The main constituents of the oil from A. campestris collected in Benguerdane (South of Tunisia) were found to be \u03b2-pinene (41.0%), p-cymene (9.9%), \u03b1-terpinene (7.9%), limonene (6.5%), myrcene (4.1%), \u03b2-phellandrene (3.4%) and \u03b1-pinene (3.2%). Whereas the oil from A.
herba-alba collected in Tataouine (South of Tunisia) showed pinocarvone (38.3%), \u03b1-copaene (12.18%), limonene (11.0%), isoamyl 2-methylbutyrate (19.5%) as major compounds. The mutagenic and antimutagenic activities of the two oils were investigated by the Salmonella typhimurium/microsome assay, with and without addition of an extrinsic metabolic activation system. The oils showed no mutagenicity when tested with Salmonella typhimurium strains TA98 and TA97. On the other hand, we showed that each oil had antimutagenic activity against the carcinogen benzo(a)pyrene (B[a]P) when tested with TA97 and TA98 assay systems." }, { "instance_id": "R32424xR32258", "comparison_id": "R32424", "paper_id": "R32258", "text": "Chemovariation of Artemisia herba alba Asso. Aromatic Plants of the Holy Land and the Sinai. Part XVI. Abstract In continuation of our investigation of aromatic flora of the Holy Land, the systematic study of Artemisia herba alba essential oils has been conducted. The detailed composition of five relatively rare chemotypes of A. herba alba obtained through GC and GC/MS analysis are presented. To ensure the integrity of each chemotype the volatiles were extracted from individual plant specimens and bulked only if the GC profiles were substantially similar. The major constituents were: Type 1: 1,8 cineole (10.8%), \u03b1-thujone (40.9%) and \u03b2-thujone (34.9%); Type 2: 1,8 cineole (26.0%) and camphor (42.1%); Type 3: 1,8 cineole (26.6%) and \u03b2-thujone (44.0%); Type 4: cis-chrysanthenyl acetate (8.9%) and cis-chrysanthenol (30.0%); Type 5: cis-chrysanthenol (6.8%) and cis-chrysanthenyl acetate (69.0%). This study showed that the population of A. herba alba in Israel consists of a much greater number of chemovarieties than was previously believed. Though chemovarieties are unevenly distributed in different geographic areas, no clear relation between the plant type and environmental conditions could be established."
}, { "instance_id": "R32424xR32308", "comparison_id": "R32424", "paper_id": "R32308", "text": "Chemical composition of the essential oil of Artemisia herba-alba Asso ssp. valentina (Lam.) Marcl. Abstract The composition of the oil, steam-distilled from aerial parts of Artemisia herba-alba Asso ssp. valentina (Lam.) Marcl. (Asteraceae) collected from the south of Spain, has been analyzed by GC/MS. Among the 65 constituents investigated (representing 93.6% of the oil composition), 61 were identified (90.3% of the oil composition). The major constituents detected were the sesquiterpene davanone (18.1%) and monoterpenes p-cymene (13.5%), 1,8-cineole (10.2%), chrysanthenone (6.7%), cis-chrysanthenyl acetate (5.6%), \u03b3-terpinene (5.5%), myrcene (5.1%) and camphor (4.0%). The oil was dominated by monoterpenes (ca. 66% of the oil), p-menthane and pinane being the most representative skeleta of the group. The oil sample studied did not contain thujones, unlike most A. herba-alba oils described in the literature." }, { "instance_id": "R32541xR32448", "comparison_id": "R32541", "paper_id": "R32448", "text": "\u00abEducation, Educational Mismatch and Wage Inequality: Evidence from Spain In this paper, we explore the connection between education and wage inequality in Spain for the period 1994-2001. Drawing on quantile regression, we describe the conditional wage distribution of different population groups. We find that higher education is associated with higher wage dispersion. A contribution of the paper is that we explicitly take into account the fact that workers who are and workers who are not in jobs commensurate with their qualifications have a different distribution of earnings. We differentiate between three different types of educational mismatch: 'over-qualification', 'incorrect qualification', and 'strong mismatch'.
We find that while over-qualification and incorrect qualification are not associated with lower wages, strong mismatch carries a pay penalty that ranges from 13% to 27%. Thus, by driving a wedge between matched and mismatched workers, the incidence of strong mismatch contributes to enlarge wage differences within education groups. We find that over the recent years, the proportion of strongly mismatched workers rose markedly in Spain, contributing toward further within-groups dispersion." }, { "instance_id": "R32541xR32489", "comparison_id": "R32541", "paper_id": "R32489", "text": "The incidence of, and returns to overeducation in the UK The 1991 wave of the British Household Panel Survey is used to examine the extent of, and the returns to overeducation in the UK. About 11% of the workers are overeducated, while another 9% are undereducated for their job. The results show that the allocation of female workers is more efficient than the allocation of males. The probability of being overeducated decreases with work experience, but increases with tenure. Overeducated workers earn less, while undereducated workers earn more than correctly allocated workers. Both the hypothesis that productivity is fully embodied and the hypothesis that productivity is completely job determined are rejected by the data. It is found that there are substantial wage gains obtainable from a more efficient allocation of skills over jobs." }, { "instance_id": "R32541xR32521", "comparison_id": "R32541", "paper_id": "R32521", "text": "The rising incidence of overeducation in the U.S. Labor market Abstract Research is reported that examines changes between 1960 and 1976 in the incidence of overeducation, defined as the discrepancy between the educational attainments of workers and the educational requirements of their jobs. Estimates are derived at the aggregate level and separately for race and sex groups. 
Results suggest that the overall incidence of overeducation has increased in recent times because the skill requirements of jobs have changed only slightly, while the educational attainments of workers have increased substantially. Whites continue to be less overeducated than blacks, but the gap has narrowed." }, { "instance_id": "R32541xR32465", "comparison_id": "R32541", "paper_id": "R32465", "text": "\u00abPremiums and Penalties for Surplus and Deficit Education. Evidence from the United States and Germany Abstract An intriguing finding in the literature on the role of education in the labor market concerns workers who have acquired either more or less education than they say their jobs require. Contrary to predictions from a rigid, structural view of jobs, several authors have found that the labor market rewards workers for having completed more schooling than their jobs require and penalizes workers who have \u2018too little\u2019 schooling. We investigate whether the structural changes in the labor market in the United States over the 1970s and 1980s (see Levy, F., & Murnane, R. (1992). US earnings levels and earnings inequality: a review of recent trends and proposed explanations. Journal of Economic Literature, 30, 1333\u20131381) affected the rewards and penalties associated with having too much or too little schooling for a job. We then examine whether the same rewards and penalties for surplus and deficit education observed in the United States apply in Germany, a country with a much more structured educational system and labor market. We test explicitly for differences over time in the United States and at a point in time between the United States and Germany. We find, consistent with a universalistic view of labor markets, more similarities across countries than over time." 
}, { "instance_id": "R32541xR32469", "comparison_id": "R32541", "paper_id": "R32469", "text": "\u00abThe Effects of Over-education on Earnings in the Graduate Labour Market This paper uses a new survey of graduates from one large civil university in the UK to examine the determinants of over-education and its subsequent impact on labour market earnings. Multiple measurements of over-education were collected to assess the effect of measurement error on the estimated pay penalty associated with over-education. Panel estimates suggest that the upward bias in standard OLS estimates is offset by an equal downward bias resulting from measurement error." }, { "instance_id": "R32541xR32537", "comparison_id": "R32541", "paper_id": "R32537", "text": "The Impact of Schooling Surplus on Earnings: Some Additional Findings\u00bb This paper examines the impact of overeducation (or surplus schooling) on earnings. Overeducated workers are defined as those with educational attainments substantially above the mean for their specific occupations. Two models are estimated using data from the 1980 census. Though our models, data, and measure of overeducation are different from those used by Rumberger (1987), our results are similar. Our results show that overeducated workers often earn less than their adequately educated and undereducated counterparts." }, { "instance_id": "R32541xR32487", "comparison_id": "R32541", "paper_id": "R32487", "text": "Overeducation and the returns to enterprise-related schooling Abstract This paper examines the relation between overeducation and enterprise-related schooling. If overeducation and enterprise-related schooling are substitutes the social costs of overeducation are less. We find that correctly allocated workers have the highest probability of participation in enterprise-related schooling, while undereducated workers have the lowest probability of participation.
There is no evidence of overeducation and enterprise-related schooling being either substitutes or complements. If we do not correct for self-selection, the average return on a year of education for correctly allocated workers is higher than the average rate of return to education for under- and overeducated workers. If we correct for self-selection in the participation in enterprise-related schooling the rate of return to education increases. The rates of return to under- and overeducation increase as well. If we correct for self-selection the rate of return to a year of undereducation becomes higher than the rate of return to a year of actual education. For undereducated workers the wage gain of participation in enterprise-related schooling is higher than for a correctly allocated worker. A year of overeducation decreases the wage gain of participation in enterprise-related schooling for participants." }, { "instance_id": "R32541xR32433", "comparison_id": "R32541", "paper_id": "R32433", "text": "Educational Mismatches vs. Skill Mismatches: Effects on Wages, Job Satisfaction and On-the-job Search Education-job mismatches are reported to have serious effects on wages and other labour market outcomes. Such results are often cited in support of assignment theory, but can also be explained by institutional and human capital models. To test the assignment explanation, we examine the relation between educational mismatches and skill mismatches. In line with earlier research, educational mismatches affect wages strongly. Contrary to the assumptions of assignment theory, this effect is not explained by skill mismatches. Conversely, skill mismatches are much better predictors of job satisfaction and on-the-job search than are educational mismatches. Copyright 2001 by Oxford University Press."
}, { "instance_id": "R32541xR32526", "comparison_id": "R32541", "paper_id": "R32526", "text": "A Theory of Career Mobility This paper analyzes theoretically and empirically the role and significance of occupational mobility in the labor market focusing on individuals' careers. It provides additional dimensions to the analysis of investment in human capital, wage differences across individuals, and the relationships among promotions, quits, and interfirm occupational mobility. It is shown that part of the returns to education is in the form of higher probabilities of occupational upgrading, within or across firms. Given an origin occupation, schooling increases the likelihood of occupational upgrading. Furthermore, workers who are not promoted despite a high probability of promotion are more likely to quit." }, { "instance_id": "R32541xR32499", "comparison_id": "R32541", "paper_id": "R32499", "text": "\u00abFitting to the Job: The Role of Generic and Vocational Competencies in Adjustment and Performance This paper provides new insight into the role of generic and vocational competencies during the transition from education to the labor market. Using data on the labor market situation of Dutch higher education graduates, we analyze the allocation over different educational domains, the incidence of on-the-job training and its impact on wages. The results reveal the different roles of competencies. Vocational competencies influence positively the chance of being matched to an occupation inside the own domain. Generic competencies influence positively both the chance of being matched to an occupation outside the own domain and the training participation." 
}, { "instance_id": "R32541xR32539", "comparison_id": "R32541", "paper_id": "R32539", "text": "The great Canadian training robbery: evidence on the returns to educational mismatch Abstract In this paper, I use data from the National Survey of Class Structure and Labour Process in Canada (NSCS) to estimate the returns to over and undereducation. I find that there are positive returns to overeducation for males in jobs that require a university bachelor's degree; but for other levels of required education, the returns are insignificant. I also find evidence of lower pay for undereducated males in jobs with low education requirements. For females, the returns to over and undereducation are insignificant for all levels of required education." }, { "instance_id": "R32541xR32429", "comparison_id": "R32541", "paper_id": "R32429", "text": "\u00abMismatch in the Spanish Labor Market. Overeducation?\u00bb The objective of this article is to explain the job match, which is assessed by comparing attained education and job-required education as reported by workers. We frame our empirical work according to the occupational mobility theory. Using a cross-section of workers from a representative survey of the Spanish labor force, we consider overeducated workers to be those who report that the level of education their jobs require is below the level of education they have attained. Our results indicate that overeducated workers have less experience, decreased on-the-job training and higher turnover than other comparable workers. We also observe an improvement in the job match over age and mobility." }, { "instance_id": "R32541xR32519", "comparison_id": "R32541", "paper_id": "R32519", "text": "College quality and overeducation Abstract This paper examines the relationship between college quality and overeducation. 
If workers attending lower quality colleges receive less human capital in a year of schooling, they may require more schooling than the typical person to be qualified for their job. Using the Panel Study of Income Dynamics, a negative relationship is found between college quality and the likelihood of being overeducated. In addition college quality is found to influence the ability of overeducated workers to exit the classification." }, { "instance_id": "R32541xR32444", "comparison_id": "R32541", "paper_id": "R32444", "text": "\u00abEducational Mismatch and Labour Mobility of People with Disabilities: The Spanish Case In this paper we analyze the job-matching quality of people with disabilities. We do not find evidence of a greater importance of over-education in this group in comparison to the rest of the population. We find that people with disabilities have a lower probability of being over-educated for a period of 3 or more years, a higher probability of leaving mismatch towards inactivity or marginal employment, a lower probability of leaving mismatch towards a better match, and a higher probability of employment mobility towards inactivity or marginal employment. The empirical analysis is based on Spanish data from the European Community Household Panel from 1995 to 2000." }, { "instance_id": "R32541xR32480", "comparison_id": "R32541", "paper_id": "R32480", "text": "\u00abIs There a Genuine Under-utilization of Skills Amongst the Over- qualified? Two theories of over-qualification are considered, namely mismatch, whereby workers do not find the most appropriate jobs for their skills, because of imperfect information or labour market rigidities, and \u2018heterogeneous workers\u2019, whereby individuals with the same qualifications have different actual skill levels, so that they can be over-qualified in terms of formal qualifications, while their skills are actually appropriate for the jobs that they do. 
The evidence suggests that both theories are relevant in certain situations." }, { "instance_id": "R32541xR32451", "comparison_id": "R32541", "paper_id": "R32451", "text": "The Social and Political Consequences of Overeducation This study employs national survey data to estimate the extent of overeducation in the U.S. labor force and its impact on a variety of worker attitudes. Estimates are made of the extent of overeducation and its distribution among different categories of workers, according to sex, race, age, and class background. The effects of overeducation are examined in four areas of worker attitudes: job satisfaction, political leftism, political alienation, and stratification ideology. Evidence is found of significant effects of overeducation on job satisfaction and several aspects of stratification ideology. The magnitude of these effects is small, however, and they are concentrated almost exclusively among very highly overeducated workers. No evidence is found of generalized political effects of overeducation, either in the form of increased political leftism or in the form of increased political alienation. These findings fail to support the common prediction of major political repercussions of overeducation and suggest the likelihood of alternative forms of adaptation among overeducated workers." }, { "instance_id": "R32541xR32473", "comparison_id": "R32541", "paper_id": "R32473", "text": "The Incidence and Wage Effects of Overeducation Abstract Data gathered from a recent national sample of workers on educational requirements and attainments are used to examine the extent and economic effects of overeducation. Nearly 40 percent of the U.S. work force-and about 50 percent of black males-have more education than their jobs require. But we also find that \u201csurplus\u201d education does have economic value. The individual return to an additional year of surplus education was positive and significant for all major demographic groups. 
The estimated return is, however, only about half the size of the return to an additional year of required education." }, { "instance_id": "R32541xR32502", "comparison_id": "R32541", "paper_id": "R32502", "text": "OPTIMAL \u2018MISMATCH\u2019 AND PROMOTIONS Seeming 'mismatches,' in which workers are either under- or overqualified, are shown to be optimal. From the firm's point of view, although turnover will be positively related to overqualification, training costs will be inversely related to overqualification. Further, overqualified workers constitute a pool from which promotions are made. Workers enter seeming mismatches due to search and mobility costs and because of opportunities for promotion. Estimates using a unique data set indicate that workers who are overqualified at hire receive less training and more promotions and that workers overqualified for their current job are more likely to quit. Copyright 1995 by Oxford University Press." }, { "instance_id": "R32541xR32460", "comparison_id": "R32541", "paper_id": "R32460", "text": "The Wage Effects of Overschooling Revisited\u00bb Abstract This study replicates models developed by Verdugo and Verdugo (1989) and Sicherman (1991) to study the wage effects of overschooling. Using the 1985 wave of the Panel Study of Income Dynamics, our results confirm earlier work showing that the rate of return to required schooling exceeds the rate of return to overschooling, and that the rate of return to underschooling is negative. At the same time, our results also confirm that, on the average, persons whose schooling exceeds (is less than) the required schooling for their occupation or job, respectively, receive lower (higher) wages than workers with similar levels of schooling in occupations or jobs having the required schooling. These results remain robust for alternative definitions of required, over- and underschooling, as well as for alternative specifications of the wage equations." 
}, { "instance_id": "R32541xR32475", "comparison_id": "R32541", "paper_id": "R32475", "text": "\u00abRecruitment of Overeducated Personnel: Insider-Outsider Effects on Fair Employee Selection Practice We analyze a standard employee selection model given two institutional constraints: First, professional experience perfectly substitutes insufficient formal education for insiders while this substitution is imperfect for outsiders. Second, in the latter case the respective substitution rate increases with the advertised minimum educational requirement. Optimal selection implies that the expected level of formal education is higher for outsider than for insider recruits. Moreover, this difference in educational attainments increases with lower optimal minimum educational job requirements. Investigating data of a large US public employer confirms both of the above theoretical implications. Generally, the econometric model exhibits a \ufffdgood fit\ufffd." }, { "instance_id": "R32541xR32484", "comparison_id": "R32541", "paper_id": "R32484", "text": "\u00abOvereducation, Wages and Promotions Within the Firm\u00bb, Labor Economics We analyse data from personnel records of a large firm producing energy and telecommunication and test for the effect of deviations between required and attained education of workers. Required education is measured as hiring standards set by the firm. We find the usual effects of over- and undereducation in a wage regression, thus rejecting the argument that such effects are exclusively due to firm fixed effects. Distinguishing, within the firm, between a sheltered internal labour market and an exposed external labour market, we find that at the internal labour market over- and undereducation significantly affect career development, in particular at younger ages, but that such effects are mostly absent at the firm's external labour market." 
}, { "instance_id": "R32541xR32532", "comparison_id": "R32541", "paper_id": "R32532", "text": "\u00abThe Training of School-leavers. Complementarity or Substitution? Abstract In theoretical discussions about the relation between education and training, the question of complementarity or substitutability between these two different forms of human capital is raised. If initial education and industrial training are substitutes, overeducated workers will participate less in additional training than workers who are adequately educated. It could explain the persistence of overeducation and implies that the social wastage of overeducation will be less. On the other hand, if initial education and industrial training are complements, existing differences in human capital will only increase by industrial training, implying the risk for some workers of `missing the boat'. Supplementary to Groot we not only look at the impact of over- and undereducation (level) but also at non-matching fields of studies and the `narrowness' of types of education. A sample of labour market entrants was used, so we did not have to cope with the disturbing influence of other forms of human capital: life and labour market experience. The paper gives evidence in support of both substitutability and complementarity between initial education and firm training." }, { "instance_id": "R32541xR32455", "comparison_id": "R32541", "paper_id": "R32455", "text": "Measuring Over-education Previous work on over-education has assumed homogeneity of workers and jobs. Relaxing these assumptions, we find that over-educated workers have lower education credentials than matched graduates. Among the over-educated graduates we distinguish between the apparently over-educated workers, who have similar unobserved skills as matched graduates, and the genuinely over-educated workers, who have a much lower skill endowment. 
Over-education is associated with a pay penalty of 5%-11% for apparently over-educated workers compared with matched graduates and of 22%-26% for the genuinely over-educated. Over-education originates from the lack of skills of graduates. This should be taken into consideration in the current debate on the future of higher education in the UK. Copyright The London School of Economics and Political Science 2003." }, { "instance_id": "R32541xR32462", "comparison_id": "R32541", "paper_id": "R32462", "text": "\u00abIncidence and Wage Effects of Overschooling and Underschooling in Hong Kong Abstract Data from the 1986 Hong Kong By-census and the 1991 Hong Kong Census were used to study the following issues: (1) What is the incidence of adequate schooling, overschooling and underschooling in Hong Kong, and has it changed between 1986 and 1991? (2) What are the wage consequences of adequate schooling, overschooling and underschooling, and have they changed over time? Also, are the results influenced by potential labor-market experience? The empirical results are discussed in the context of recent changes in the structure of the Hong Kong economy and the labor market. [ JEL I21, J31]" }, { "instance_id": "R32541xR32440", "comparison_id": "R32541", "paper_id": "R32440", "text": "Educational mismatch and wages: a panel analysis Abstract This paper contributes to the literature considering the wage effects of educational mismatch. It uses a large German panel data set for the period 1984\u20131998 and stresses the importance of controlling for unobserved heterogeneity when analyzing the labor market effects of over- and undereducation. Using pooled OLS, the estimation results confirm those found in the existing literature. The estimated differences between adequately and inadequately educated workers become smaller or disappear totally, when controlling for unobserved heterogeneity." 
}, { "instance_id": "R32871xR32649", "comparison_id": "R32871", "paper_id": "R32649", "text": "Ship recognition from high resolution remote sensing imagery aided by spatial relationship Target recognition is of great importance for information extraction from high resolution remote sensing image. As an important kind of man-made objects, ship recognition is a key point to many applications, such as vessel monitoring and marine traffic. As spatial relationship is invariant to topology change, a method of ship recognition from high resolution remote sensing imagery aided by spatial relationship is proposed and implemented. The method includes four critical steps: water segmentation, potential ship detection, seed growing and result creation. Experiments show that this method is robust to object position, orientation, scale, and intensity, and achieve a high accuracy of ship recognition." }, { "instance_id": "R32871xR32789", "comparison_id": "R32871", "paper_id": "R32789", "text": "OBIA ship detection with multispectral and SAR images: A simulation for Copernicus security applications Every day, ships of different type, size and origin cross the world seas. Not only for commerce and transport, but also for illegal activities. In addition to conventional positioning and tracking systems, detection with Earth observation satellites is an effective means to monitor human movements across the sea. The European Copernicus Programme operates towards this goal, through the definition of border and maritime surveillance as one of its main tasks. This paper describes an Object Based Image Analysis (OBIA) workflow developed for ship detection, monitoring and tracking with high-resolution satellite images. Here, it has been used to simulated medium-resolution multispectral (MS) and Synthetic Aperture Radar (SAR) images representative of the Sentinel components of Copernicus. 
First results confirm that the method proposed can be efficiently used by European agencies for monitoring the explosive growth of illegal flows in the Mediterranean Sea." }, { "instance_id": "R32871xR32691", "comparison_id": "R32871", "paper_id": "R32691", "text": "Object Detection in Image with Complex Background Abstract. Object detection is the key technology in computer vision, with broad application prospects. Object detection has great research value and practical significance as a hot spot of video surveillance in recent years. This paper proposes an algorithm for ship detection in image with complex harbor background. We test the performance of several texture descriptors, and a region growing method based on contrast texture feature is proposed to implement sea-land separation. Then, we apply a method combined with adaptive threshold segmentation and shape analysis for offshore ship detection. Furthermore, the salient boundary template matching in the sea-land border area is used for docked ship detection. The experimental results show that our algorithm is able to implement ship object detection in complex image with good robustness and real-time performance." }, { "instance_id": "R32871xR32612", "comparison_id": "R32871", "paper_id": "R32612", "text": "Ship detection by salient convex boundaries Automatic ship detection from remote sensing imagery has many applications, such as maritime security, traffic surveillance, fisheries management. However, it is still a difficult task for noise and distractors. This paper is concerned with perceptual organization, which detects salient convex structures of ships from noisy images. Because the line segments of contour of ships compose a convex set, a local gradient analysis is adopted to filter out the edges which are not on the contour as preprocess. Since convexity is the significant feature, we apply salience as the prior probability for detection.
Feature angle constraints help us compute probability estimates and choose the correct contour among many candidate closed line groups. Finally, experimental results are demonstrated on satellite imagery from Google Earth." }, { "instance_id": "R32871xR32578", "comparison_id": "R32871", "paper_id": "R32578", "text": "Using SPOT-5 HRG Data in Panchromatic Mode for Operational Detection of Small Ships in Tropical Area Nowadays, there is a growing interest in applications of space remote sensing systems for maritime surveillance which includes among others traffic surveillance, maritime security, illegal fisheries survey, oil discharge and sea pollution monitoring. Within the framework of several French and European projects, an algorithm for automatic ship detection from SPOT-5 HRG data was developed to complement existing fishery control measures, in particular the Vessel Monitoring System. The algorithm focused on feature-based analysis of satellite imagery. Genetic algorithms and Neural Networks were used to deal with the feature-borne information. Based on the described approach, a first prototype was designed to classify small targets such as shrimp boats and tested on panchromatic SPOT-5, 5-m resolution product taking into account the environmental and fishing context. The ability to detect shrimp boats with satisfactory detection rates is an indicator of the robustness of the algorithm. Still, the benchmark revealed problems related to increased false alarm rates on particular types of images with a high percentage of cloud cover and a sea cluttered background." }, { "instance_id": "R32871xR32856", "comparison_id": "R32871", "paper_id": "R32856", "text": "Region of interest extraction in remote sensing images by saliency analysis with the normal directional lifting wavelet transform Region of interest (ROI) extraction techniques based on saliency comprise an important branch of remote sensing image analysis.
In this study, we propose a novel ROI extraction method for high spatial resolution remote sensing images. High spatial resolution remote sensing images contain complex spatial information, clear details, and well-defined geographical objects, where the structure, edge, and texture information has important roles. To fully exploit these features, we construct a novel normal directional lifting wavelet transform to preserve local detail features in the wavelet domain, which is beneficial for the generation of edge and texture saliency maps. We also improve the extraction results by calculating the amount of self-information contained in the spectra to obtain a spectral saliency map. The final saliency map is a weighted fusion of the two maps. Our experimental results demonstrate that the proposed extraction algorithm can eliminate background information effectively as well as highlighting the ROIs with well-defined boundaries and shapes, thereby facilitating more accurate ROI extraction." }, { "instance_id": "R32871xR32589", "comparison_id": "R32871", "paper_id": "R32589", "text": "Surveying coastal ship traffic with LANDSAT A semi-automated algorithm was developed to detect ships in LANDSAT 7 images. The algorithm combines multispectral and pattern recognition methods to discriminate ships from ocean clutter. Automated processing enables us to process a large number of images and gather a statistical picture of ship traffic patterns. As a test case we applied the algorithm on 54 LANDSAT images in the area of Jacksonville, FL, from the period 1999\u20132003. The area and time period are the same as an earlier ship traffic study by Ward-Geiger et al. using ship reports in the Mandatory Ship Reporting System (MSRS). The similarities between the two studies suggest that LANDSAT is a good alternative for surveying nearshore ship traffic." 
}, { "instance_id": "R32871xR32583", "comparison_id": "R32871", "paper_id": "R32583", "text": "Number estimation of small-sized ships in remote sensing image based on cumulative projection curve Ship detection is an important stage for the sea- area surveillance and many algorithms have been proposed for dealing with such tasks. Nevertheless, most of them are designed for large-sized ships and are not efficient for the small ones. In this paper, we present a novel method based on cumulative projection curve(CPC) to estimate the number of ships of small size. We firstly compute the Mahalanobis distance between each pixel of the image and the pixel intensities distribution of water, and then project these Mahalanobis distances to their near coastline vertically. The projected one-dimension curve is called cumulative projection curve. By doing this, each ship along the coastline will incur a fluctuating, the ship response, in the CPC. Thus, the number of ships can be estimated through the estimation for the number of ship responses in the CPC. This method simplifies the detection problem by converting a two-dimension problem to an one-dimension problem, and its efficiency is illustrated by the experimental results in the paper." }, { "instance_id": "R32871xR32669", "comparison_id": "R32871", "paper_id": "R32669", "text": "A method of ship detection from spaceborne optical image Operational SDSOI and Novel hierarchical complete approach based on shape and texture properties, whic h is considered a sequential coarse-to-fine deleting pro cess of fake alarms. 
Simple shape analysis is adopted to delete evident fake candidates generated by image segmentation with world and local information and to extract ship candidates with missing alarms as low as possible, and a novel semi-supervised hierarchical classification approach based on different features is presented to distinguish between ships and non-ships. Besides a complete and operational SDSOI approach, the other contributions of our approach include the following three aspects: 1) it identifies ship candidates by using their class probability distributions rather than the extracted features; 2) the related classes are automatically built by the samples\u2019 appearances and their feature attributes in a semi-supervised mode; and 3) besides commonly used shape and texture features, a new texture operator, i.e., local multiple patterns, is introduced to enhance the representation ability of the feature set in feature extraction. Experimental results of SDSOI on a big image set captured by optical sensors from multiple satellites show that our approach is effective in distinguishing between ships and non-ships and obtains good ship detection performance." }, { "instance_id": "R32871xR32628", "comparison_id": "R32871", "paper_id": "R32628", "text": "A novel ship detection method based on sea state analysis from optical imagery This paper proposes a novel ship detection method based on analyzing the sea state in optical images. This method is composed of three phases. First, the image is segmented with the improved region splitting and merging method, which divides the sea into separated regions. Then, the sea state of each divided region of sea is analyzed by extracting texture roughness and ripple density of a modified differential box counting (DBC) method. Finally, an appropriate algorithm is applied to detect ships for each region of sea.
Experiments on 36 real remote sensing images and 133 images obtained from Google Earth demonstrate that the method is independent of image resolution and imposes few restrictions on sea conditions." }, { "instance_id": "R32871xR32712", "comparison_id": "R32871", "paper_id": "R32712", "text": "Ship extraction and categorization from ASTER VNIR imagery We present a methodology for ship extraction and categorization from relatively low resolution multispectral ASTER imagery, corresponding to the sea region south east of Athens in Greece. At a first level, in the radiometrically corrected image, quad tree decomposition and bounding rectangular extraction automatically outline location of objects - possible ships, by statistically evaluating spectral responses throughout the segmented image. Subsequently, the object borders within the rectangular regions are extracted, while connected component labelling combined by size and shape filtering allows ship characterization. The ships\u2019 spectral signature is determined in green, red and infrared bands while cluster analysis allows the identification of ship categories on the basis of their size and reflectance. Additional pixel-based measures reveal estimated ship orientation, direction, movement, stability and turning. The results are complemented with additional geographic information and inference tools are formed towards the determination of probable ship type and its destination." }, { "instance_id": "R32871xR32863", "comparison_id": "R32871", "paper_id": "R32863", "text": "Inshore Ship Detection in Remote Sensing Images via Weighted Pose Voting Inshore ship detection from high-resolution satellite images is a useful yet challenging task in remote surveillance and military reconnaissance. It is difficult to detect the inshore ships with high precision because various interferences are present in the harbor scene.
An inshore ship detection method based on the weighted voting and rotation\u2013scale-invariant pose is proposed to improve the detection performance. The proposed method defines the rotation angle pose and the scaling factor of the detected ship to detect the ship with different directions and different sizes. For each pixel on the ship template, the possible poses of a detection window are estimated according to all possible pose-related pixels. To improve robustness to the shape-similar distractor and various interferences, the score of the detection window is obtained by designing a pose weighted voting method. Moreover, the values of some parameters such as similarity threshold and the weight of \u201cV\u201d are investigated. The experimental results on actual satellite images demonstrate that the proposed method is invariant to rotation and scale and robust in the inshore ship detection. In addition, better detection performance is observed in comparison with the existing inshore ship detection algorithms in terms of precision rate and recall rate. The target pose of the detected ship can also be obtained as a byproduct of the ship detection." }, { "instance_id": "R32871xR32766", "comparison_id": "R32871", "paper_id": "R32766", "text": "Space shepherd: Search and rescue of illegal immigrants in the mediterranean sea through satellite imagery This paper presents the preliminary results obtained within a research project aimed to assess the feasibility of a system to monitor the immigration flows in the Southern Mediterranean Sea by solely relying on images coming from scientific and commercial satellites, which already operates on a regular basis. 
\u201cSpace Shepherd\u201d, a project funded by Politecnico di Milano, Italy, has the ultimate goal of integrating information coming from a number of satellites to 1) monitor remotely the Southern Mediterranean Sea, 2) detect the presence of possible vessels, 3) identify the migrant vessels and keep the authorities informed, 4) track the vessels and issue warnings in case of danger, 5) support the search and rescue operations. The methodology for scheduling image acquisitions is presented, as well as the algorithms for automatic detection of vessels in both optical and SAR images. The performances of the system are discussed, and its feasibility is assessed." }, { "instance_id": "R32871xR32701", "comparison_id": "R32871", "paper_id": "R32701", "text": "A new method for detection of ship docked in harbor in high resolution remote sensing image Ship detection using high resolution remote sensing images is a hot research topic in both military and civilian applications. In this paper, a new method for detection of ships docked in a harbor was proposed, in which, Harris corner detector combined with local salient region analysis were used to extract the obvious sharp-angled feature related to the fore part of a ship in satellite images. This method can determine the direction of the ship when the ship is detected. The results of experiments on several high resolution remote sensing images verified the effectiveness of the proposed method." }, { "instance_id": "R32871xR32752", "comparison_id": "R32871", "paper_id": "R32752", "text": "Unsupervised ship detection based on saliency and S-HOG descriptor from optical satellite images With the development of high-resolution imagery, ship detection in optical satellite images has attracted a lot of research interest because of the broad applications in fishery management, vessel salvage, etc. Major challenges for this task include cloud, wave, and wake clutters, and even the variability of ship sizes. 
In this letter, we propose an unsupervised ship detection method toward overcoming these existing issues. Visual saliency, which focuses on highlighting salient signals from scenes, is applied to extract candidate regions followed by a homogeneous filter presented to confirm suspected ship targets with complete profiles. Then, a novel descriptor, ship histogram of oriented gradient, which characterizes the gradient symmetry of ship sides, is provided to discriminate real ships. Experimental results on numerous panchromatic satellite images demonstrate the good performance of our method compared to state-of-the-art methods." }, { "instance_id": "R32871xR32718", "comparison_id": "R32871", "paper_id": "R32718", "text": "A Novel Sea-Land Segmentation Algorithm Based on Local Binary Patterns for Ship Detection Ship detection is an important application of optical remote sensing image processing. Sea-land segmentation is the key step in ship detection. Traditional sea-land segment methods only based on the gray-level information of an image to choose a gray threshold to segment the image; however, it is very difficult to establish a self-adapting mechanism to select a suitable threshold for different images. Thus, the segmentation result is greatly influenced by the threshold chosen for sea-land segmentation. In this paper, we are integrating the LBP feature information to propose a novel sea-land segmentation algorithm. Moreover, a new ship detection method based on our sea-land segmentation algorithm is proposed for optical remote sensing images. The performance of ship detection is measured in terms of precision and false-alarm-rate. Experimental results show that, as compared to minimum error method, the proposed algorithm can decrease the false-alarm-rate from 23.2% to 9.24%. And compared to Otsu method, the proposed algorithm improve the precision from 82.9% to 90.2%." 
}, { "instance_id": "R32871xR32709", "comparison_id": "R32871", "paper_id": "R32709", "text": "A new method on inshore ship detection in high-resolution satellite images using shape and context information In this letter, we present a new method to detect inshore ships using shape and context information. We first propose a new energy function based on an active contour model to segment water and land and minimize it with an iterative global optimization method. The proposed energy performs well on the different intensity distributions between water and land and produces a result that can be well used in shape and context analyses. In the segmented image, ships are detected with successive shape analysis, including shape analysis in the localization of ship head and region growing in computing the width and length of ship. Finally, to locate ships accurately and remove the false alarms, we unify them with a binary linear programming problem by utilizing the context information. Experiments on QuickBird images show the robustness and precision of our method." }, { "instance_id": "R32871xR32747", "comparison_id": "R32871", "paper_id": "R32747", "text": "Multi-layer Sparse Coding Based Ship Detection for Remote Sensing Images With the development of remote sensing technology, it becomes possible for the detection and identification of targets from remote sensing images. In this paper, we propose a new method integrating the bottom-up and the top-down mechanisms for the ship detection in high resolution satellite images. We use the multi-layer sparse coding to extract the features of the RS images. Then, we get the ship candidate regions by calculating the global saliency map which may have ships in it. Deformable part model is used to extract the ship features and latent support vector machine is used for the ship identification. As demonstrated in our experiments, the proposed approach can effectively detect ship in remote sensing images." 
}, { "instance_id": "R32871xR32851", "comparison_id": "R32871", "paper_id": "R32851", "text": "Ship detection in panchromatic images: a new method and its DSP implementation In this paper, a new ship detection method is proposed after analyzing the characteristics of panchromatic remote sensing images and ship targets. Firstly, AdaBoost(Adaptive Boosting) classifiers trained by Haar features are utilized to make coarse detection of ship targets. Then LSD (Line Segment Detector) is adopted to extract the line features in target slices to make fine detection. Experimental results on a dataset of panchromatic remote sensing images with a spatial resolution of 2m show that the proposed algorithm can achieve high detection rate and low false alarm rate. Meanwhile, the algorithm can meet the needs of practical applications on DSP (Digital Signal Processor)." }, { "instance_id": "R32871xR32828", "comparison_id": "R32871", "paper_id": "R32828", "text": "Fusion detection of ship targets in low resolution multi-spectral images Aiming at ship detection in multi-spectral images at low resolution, this paper proposes a new method for ship detection based on fusion detection which combines spectral feature with thermal feature. Firstly, it selects infrared band instead of visible band image to detect cloud according to size feature. With cloud available, it does segmentation work in thermal infrared image for the removal of cloud pixels. Then, the result is mapped to the IR images and cloud masking is completed. Next, the fusion of two kinds of images using wavelet transform is adopted. At last, the fused image is used to detect ships and morphological operations are used to discriminate ships. The experiment result on multi-spectral data of Landsat 8 shows that the proposed method which is robust against clutter can detect ships effectively." 
}, { "instance_id": "R32871xR32608", "comparison_id": "R32871", "paper_id": "R32608", "text": "A complete processing chain for ship detection using optical satellite imagery Ship detection from remote sensing imagery is a crucial application for maritime security, which includes among others traffic surveillance, protection against illegal fisheries, oil discharge control and sea pollution monitoring. In the framework of a European integrated project Global Monitoring for Environment and Security (GMES) Security/Land and Sea Integrated Monitoring for European Security (LIMES), we developed an operational ship detection algorithm using high spatial resolution optical imagery to complement existing regulations, in particular the fishing control system. The automatic detection model is based on statistical methods, mathematical morphology and other signal-processing techniques such as the wavelet analysis and Radon transform. This article presents current progress made on the detection model and describes the prototype designed to classify small targets. The prototype was tested on panchromatic Satellite Pour l'Observation de la Terre (SPOT) 5 imagery taking into account the environmental and fishing context in French Guiana. In terms of automatic detection of small ship targets, the proposed algorithm performs well. Its advantages are manifold: it is simple and robust, but most of all, it is efficient and fast, which is a crucial point in performance evaluation of advanced ship detection strategies." }, { "instance_id": "R32871xR32658", "comparison_id": "R32871", "paper_id": "R32658", "text": "An effective method on ship target detection in remote sensing image of complex background This paper presents a method for ship target detecting in complex background. It aims at solving two difficulties in detection. 
The first difficulty is that ships docking inshore cannot be segmented because of their gray-level similarity to land, and the second is that ships moored side by side cannot easily be located as separate targets. The first is solved by extracting the water region via harbor-template matching. In order to reduce the impact of the angle difference, which leads to error, we update the template with the corresponding angle computed from line features. We then match finely with the updated template to extract the whole water region, within which segmentation is effective. For the second difficulty, the smallest minimum bounding rectangle (SMBR) of each segmented area is obtained by contour tracing, and the area is projected onto the two directions of its SMBR to acquire the projection curves. If ships are linked together, a peak-valley-peak pattern appears in the projection curve, and the valley point indicates the ships' connection position. The ships can then be separated by cutting the area at the connection position along the projection direction. The experimental results verify the efficiency and accuracy of our method." }, { "instance_id": "R32871xR32816", "comparison_id": "R32871", "paper_id": "R32816", "text": "A multi-scale fractal dimension based onboard ship saliency detection algorithm Detection of ship targets in the sea area is an important field in remote sensing image target detection. Since ships and their surrounding areas differ greatly in texture, texture features offer a possible means of detecting ships. Aiming at the detection of ship targets, a novel ship target detection algorithm for large scenes of optical remote sensing images is proposed in this paper. This algorithm is based on the conspicuity of ship targets in the multi-scale fractal dimension feature against the sea background, and the detection of ship targets is then realized by means of a visual saliency model. 
In this paper, the accuracy of the fractal dimension feature for small and medium-sized windows, computed using the differential box counting algorithm, has been improved. The novel algorithm proposed in this paper is based on the significant difference between natural background and man-made objects in the multi-scale fractal dimension feature. The conspicuous fractal feature is then obtained using a center-surround difference operator; to highlight the target in the saliency map, normalization is needed in the final step. On the basis of the saliency map, rapid detection of ship targets against the sea background can be realized. Experimental results show that ship targets in the sea background can be detected accurately with this algorithm, and the false alarm rate is effectively reduced." }, { "instance_id": "R32871xR32697", "comparison_id": "R32871", "paper_id": "R32697", "text": "A remote sensing ship recognition method based on dynamic probability generative model Aiming to detect sea targets reliably and in a timely manner, a novel ship recognition method using optical remote sensing data based on a dynamic probability generative model is presented. First, with the visual saliency detection method, prior shape information of target objects in input images, which is used to describe the initial curve adaptively, is extracted, and an improved Chan\u2013Vese (CV) model based on entropy and local neighborhood information is utilized for image segmentation. Second, based on rough set theory, the common discernibility degree is used to compute the significance weight of each candidate feature and select valid recognition features automatically. Finally, for each node, its neighbor nodes are sorted by their e-neighborhood distances to the node. Using the classes of the selected nodes from the top of the sorted neighbor node list, a dynamic probability generative model is built to recognize ships in data from the optical remote sensing system. 
Experimental results on real data show that the proposed approach can achieve better classification rates at a higher speed than the k-nearest neighbor (KNN), support vector machine (SVM) and traditional hierarchical discriminant regression (HDR) methods." }, { "instance_id": "R32871xR32741", "comparison_id": "R32871", "paper_id": "R32741", "text": "A remote sensing ship recognition using random forest In order to detect marine targets reliably and in a timely manner, a novel ship recognition method using optical remote sensing data based on random forest is presented. First, in the feature extraction part, in addition to the common features, we introduce the visual saliency features of the target; second, an improved random forest based on mutual information (MIRF) is utilized to recognize ships in data from the optical remote sensing system; finally, we compare MIRF to classical algorithms. MIRF accelerates the operation speed of the algorithm while the classification accuracy remains robust. Theoretical analysis and experimental results show that the proposed method can achieve a high recognition rate; therefore, this approach is feasible and efficient for marine target recognition." }, { "instance_id": "R32871xR32794", "comparison_id": "R32871", "paper_id": "R32794", "text": "A Direct and Fast Methodology for Ship Recognition in Sentinel-2 Multispectral Imagery The European Space Agency satellite Sentinel-2 provides multispectral images with pixel sizes down to 10 m. This high resolution allows for ship detection and recognition by determining a number of important ship parameters. We are able to show how a ship position, its heading, length and breadth can be determined down to a subpixel resolution. If the ship is moving, its velocity can also be determined from its Kelvin waves. The 13 spectrally different visual and infrared images taken using multispectral imagery (MSI) are \u201cfingerprints\u201d that allow for the recognition and identification of ships. 
Furthermore, the multispectral image profiles along the ship allow for discrimination between the ship, its turbulent wakes, and the Kelvin waves, such that the ship\u2019s length and breadth can be determined more accurately even when sailing. The ship\u2019s parameters are determined by using satellite imagery taken from several ships, which are then compared to known values from the automatic identification system. The agreement is on the order of the pixel resolution or better." }, { "instance_id": "R32871xR32777", "comparison_id": "R32871", "paper_id": "R32777", "text": "Ship detection from optical satellite images based on visual search mechanism Automatic ship detection from high-resolution optical satellite images has attracted great interest in the wide applications of maritime security and traffic control. However, most of the popular methods have much difficulty in extracting targets without false alarms due to the variable appearances of ships and complicated background. In this paper, we propose a ship detection approach based on visual search mechanism to solve this problem. First, salient regions are extracted by a global contrast model fast and easily. Second, geometric properties and neighborhood similarity of targets are used for discriminating the ship candidates with ambiguous appearance effectively. Furthermore, we utilize the SVM algorithm to classify each image as including target(s) or not according to the LBP feature of each ship candidate. Extensive experiments validate our proposed scheme outperforms the state-of-the-art methods in terms of detection time and accuracy." 
}, { "instance_id": "R32871xR32769", "comparison_id": "R32871", "paper_id": "R32769", "text": "Salient target detection in remote sensing image via cellular automata In order to detect salient targets in remote sensing images effectively and accurately, this paper proposes a target segmentation method based on cellular automata, which are usually used as a dynamic evolution model. First, we introduce the background-based map to obtain a saliency map with the help of a widely used superpixel segmentation method named simple linear iterative clustering. Secondly, cellular automata are employed to produce the elementary saliency map. An enhanced saliency map can then be obtained by the maximum-contrast image patch method. An adaptive threshold is calculated to segment the enhanced saliency map. Consequently, the salient target detection and segmentation result can be obtained. Experiments on optical remote sensing images and synthetic aperture radar (SAR) images demonstrate that the proposed algorithm outperforms other methods such as K-means, Otsu and region growing." }, { "instance_id": "R32871xR32653", "comparison_id": "R32871", "paper_id": "R32653", "text": "An Invariant Generalized Hough Transform Based Method of Inshore Ships Detection NA" }, { "instance_id": "R32871xR32865", "comparison_id": "R32871", "paper_id": "R32865", "text": "Ship detection in optical remote sensing image based on visual saliency and AdaBoost classifier In this paper, firstly, target candidate regions are extracted by combining the maximum symmetric surround saliency detection algorithm with a cellular automata dynamic evolution model. Secondly, an eigenvector independent of the ship target size is constructed by combining the shape feature with the ship histogram of oriented gradient (S-HOG) feature, and the target can be recognized by an AdaBoost classifier. As demonstrated in our experiments, the proposed method, with a detection accuracy of over 96%, outperforms the state-of-the-art method." 
}, { "instance_id": "R32871xR32687", "comparison_id": "R32871", "paper_id": "R32687", "text": "Maritime situation awareness capabilities from satellite and terrestrial sensor systems Maritime situation awareness is supported by a combination of satellite, airborne, and terrestrial sensor systems. This paper presents several solutions to process that sensor data into information that supports operator decisions. Examples are vessel detection algorithms based on multispectral image techniques in combination with background subtraction, feature extraction techniques that estimate the vessel length to support vessel classification, and data fusion techniques to combine image based information, detections from coastal radar, and reports from cooperative systems such as (satellite) AIS. Other processing solutions include persistent tracking techniques that go beyond kinematic tracking, and include environmental information from navigation charts, and if available, ELINT reports. And finally rule-based and statistical solutions for the behavioural analysis of anomalous vessels. With that, trends and future work will be presented." }, { "instance_id": "R32871xR32774", "comparison_id": "R32871", "paper_id": "R32774", "text": "Remote Sensing of Ships and Offshore Oil Platforms and Mapping the Marine Oil Spill Risk Source in the Bohai Sea Abstract Oil spills from marine transportation and oil exploration are of great importance in marine environment protection. Marine oil platforms and ships which have similar spectral characteristics and size can be simultaneously extracted from satellite images. Oil platforms and ships have different modes of behavior: offshore ships usually move with time, and even if they are moored, this mooring status won\u2019t last for months or years; while oil platforms are usually in the same place for a long time till they are removed or destroyed in disasters. 
In this paper, a detection strategy for ships and offshore oil platforms on the basis of remote sensing images and a geographic information system was proposed. Satellite images were used to detect the distribution of ships and oil platforms in the Bohai Sea, and a map of oil spill risk from ships and oil rigs in the Bohai Sea was generated on the basis of an averaged oil spill probability." }, { "instance_id": "R32871xR32689", "comparison_id": "R32871", "paper_id": "R32689", "text": "Ship detection from optical satellite image using optical flow and saliency This paper presents an effective method for ship detection from optical satellite images using optical flow and saliency methods, which can identify multiple ship targets in a complex dynamic sea background and succeeds in reducing the false positive rate compared to traditional methods. In this paper, moving targets in the image are highlighted through the classical optical flow method, and the dynamic waves are restrained by combining the state-of-the-art saliency method. We make the best of the low-level (size, color, etc.) and high-level (adjacent frame information, etc.) features of the image, which can adapt to different dynamic background situations. Compared to existing methods, experimental results demonstrate the robustness and high performance of the proposed method." }, { "instance_id": "R32871xR32849", "comparison_id": "R32871", "paper_id": "R32849", "text": "Ship detection and extraction using visual saliency and histogram of oriented gradient A novel unsupervised ship detection and extraction method is proposed. A combination model based on visual saliency is constructed for searching the ship target regions and suppressing the false alarms. The salient target regions are extracted and marked through segmentation. Radon transform is applied to confirm the suspected ship targets with symmetry profiles. 
Then, a new descriptor, improved histogram of oriented gradient (HOG), is introduced to discriminate the real ships. The experimental results on real optical remote sensing images demonstrate that plenty of ships can be extracted and located successfully, and the number of ships can be accurately acquired. Furthermore, the proposed method is superior to the contrastive methods in terms of both accuracy rate and false alarm rate." }, { "instance_id": "R32871xR32799", "comparison_id": "R32871", "paper_id": "R32799", "text": "Active deep belief networks for ship recognition based on BvSB Abstract During remote image classification, accurately classifying ships with insufficient label data is a well-known challenge. In this paper, we propose an intelligent, semi-supervised learning algorithm called active deep network based on BvSB (BvSB-ADN). BvSB-ADN is initially constructed based on the structure of restricted Boltzmann machines (RBM), then active learning is used to identify samples which can be labeled as training data. In the sample-identification phase, the best versus second-best (BvSB) rule is applied to determine the most useful samples; the labeled samples as-selected and all unlabeled samples are then combined to train the BvSB-ADN architecture. The BvSB and classifier are based on the same architecture, which makes selecting the most important samples relatively very simple. We applied BvSB-ADN to a ship classification task to verify its effectiveness and feasibility, and found that it outperforms other classification methods. BvSB-ADN also showed impressive performance on the MNIST dataset." 
}, { "instance_id": "R32871xR32811", "comparison_id": "R32871", "paper_id": "R32811", "text": "Ship detection in high spatial resolution remote sensing image based on improved sea-land segmentation A new method to detect ship targets at sea based on an improved segmentation algorithm is proposed in this paper, in which the improved segmentation algorithm is applied to precisely segment land and sea. Firstly, the mean value is used instead of the average variance value in the Otsu method in order to improve adaptability. Secondly, the Mean Shift algorithm is performed to separate the original high spatial resolution remote sensing image into several homogeneous regions. At last, the final sea-land segmentation result can be located by combining these regions with the preliminary sea-land segmentation result. The proposed segmentation algorithm performs well on the segmentation between water and land with rich texture features and background noise, and produces a result that can be well used in shape and context analyses. Ships are detected with fixed shape characteristics, including width, length and compactness. The Mean Shift algorithm can smooth the background noise, utilize the wave\u2019s texture features and help highlight offshore ships. The Mean Shift algorithm is combined with the improved Otsu threshold method in order to maximize their advantages. Experimental results show that the improved sea-land segmentation algorithm performs well on high spatial resolution remote sensing images with complex texture and background noise: it not only enhances the accuracy of the land-sea border, but also preserves the detail characteristics of ships. Compared with traditional methods, this method can achieve accuracy over 90 percent. Experiments on Worldview images show the superiority, robustness and precision of the proposed method." 
}, { "instance_id": "R32871xR32559", "comparison_id": "R32871", "paper_id": "R32559", "text": "The Potential for Using Very High Spatial Resolution Imagery for Marine Search and Rescue Surveillance Abstract Recreational boating activities represent one of the highest risk populations in the marine environment. Moreover, there is a trend of increased risk exposure by recreational boaters such as those who undertake adventure tourism, sport fishing/hunting, and personal watercraft (PWC) activities. When trying to plan search and rescue activities, there are data deficiencies regarding inventories, activity type, and spatial location of small, recreational boats. This paper examines the current body of research in the application of remote sensing technology in marine search and rescue. The research suggests commercially available very high spatial resolution satellite (VHSR) imagery can be used to detect small recreational vessels using a sub\u2010pixel detection methodology. The sub\u2010pixel detection method utilizes local image statistics based on spatio\u2010spectral considerations. This methodology would have to be adapted for use with VHSR imagery as it was originally used in hyperspectral imaging. Further, the authors examine previous research on \u2018target characterization\u2019 which uses a combination of spectral based classification, and context based feature extraction to generate information such as: length, heading, position, and material of construction for target vessels. This technique is based on pixel\u2010based processing used in generic digital image processing and computer vision. Finally, a preliminary recreational vessel surveillance system \u2010 called Marine Recreational Vessel Reconnaissance (MRV Recon) is tested on some modified VHSR imagery." 
}, { "instance_id": "R32871xR32723", "comparison_id": "R32871", "paper_id": "R32723", "text": "Ship detection from optical satellite images based on sea surface analysis Automatic ship detection in high-resolution optical satellite images with various sea surfaces is a challenging task. In this letter, we propose a novel detection method based on sea surface analysis to solve this problem. The proposed method first analyzes whether the sea surface is homogeneous or not by using two new features. Then, a novel linear function combining pixel and region characteristics is employed to select ship candidates. Finally, Compactness and Length-width ratio are adopted to remove false alarms. Specifically, based on the sea surface analysis, the proposed method cannot only efficiently block out no-candidate regions to reduce computational time, but also automatically assign weights for candidate selection function to optimize the detection performance. Experimental results on real panchromatic satellite images demonstrate the detection accuracy and computational efficiency of the proposed method." }, { "instance_id": "R32871xR32858", "comparison_id": "R32871", "paper_id": "R32858", "text": "S-CNN-BASED SHIP DETECTION FROM HIGH-RESOLUTION REMOTE SENSING IMAGES Abstract. Reliable ship detection plays an important role in both military and civil fields. However, it makes the task difficult with high-resolution remote sensing images with complex background and various types of ships with different poses, shapes and scales. Related works mostly used gray and shape features to detect ships, which obtain results with poor robustness and efficiency. To detect ships more automatically and robustly, we propose a novel ship detection method based on the convolutional neural networks (CNNs), called SCNN, fed with specifically designed proposals extracted from the ship model combined with an improved saliency detection method. 
Firstly we creatively propose two ship models, the \u201cV\u201d ship head model and the \u201c||\u201d ship body one, to localize the ship proposals from the line segments extracted from a test image. Next, for offshore ships with relatively small sizes, which cannot be efficiently picked out by the ship models due to the lack of reliable line segments, we propose an improved saliency detection method to find these proposals. Therefore, these two kinds of ship proposals are fed to the trained CNN for robust and efficient detection. Experimental results on a large amount of representative remote sensing images with different kinds of ships with varied poses, shapes and scales demonstrate the efficiency and robustness of our proposed S-CNN-Based ship detector." }, { "instance_id": "R32871xR32656", "comparison_id": "R32871", "paper_id": "R32656", "text": "A sea-land segmentation scheme based on statistical model of sea Sea-land segmentation is a key step for target detection. Due to the complex texture and uneven gray value of the land in optical remote sensing image, traditional sea-land segmentation algorithms often recognize land as sea incorrectly. A new segmentation scheme is presented in this paper to solve this problem. This scheme determines the threshold according to the adaptively established statistical model of the sea area, and removes the incorrectly classified land according to the difference of the variance in the statistical model between land and sea. Experimental results show our segmentation scheme has small computation complexity, and it has better performance and higher robustness compared to the traditional algorithms." }, { "instance_id": "R32871xR32821", "comparison_id": "R32871", "paper_id": "R32821", "text": "Rotation and scale invariant target detection in optical remote sensing images based on pose-consistency voting Rotation and scaling are two problems that must be solved in remote sensing detection. 
Most current methods focus only on rotation invariance. In this paper, a novel target detection method based on pose-consistency voting is proposed to solve both the rotation and scaling problems and improve detection precision in complicated optical remote sensing images. The proposed method defines a target pose to describe the direction and scale of the detected target relative to the target template. To detect the target in a detection window, an estimation-voting strategy is used. In the estimation stage, a large set of possible poses for the target in the detection window is predicted by pairs of pose-related pixels. Each pair of pose-related pixels is obtained through a pixel matching method based on the radial-gradient angle (RGA). In the voting stage, based on the pose consistency property, all possible target poses vote in the angle-scale space to generate a pose histogram. The maximum value of the pose histogram is defined as the detection score of the current detection window, and the pose corresponding to this maximum is considered the pose of the detected target. Experimental results demonstrate that the proposed method is rotation-scale invariant and robust to the interference of shadow and occlusion. The detection performance in complicated backgrounds is better than that of other state-of-the-art detection methods." }, { "instance_id": "R32871xR32756", "comparison_id": "R32871", "paper_id": "R32756", "text": "Fusing local texture description of saliency map and enhanced global statistics for ship scene detection In this paper, we introduce a new feature representation based on fusing local texture description of the saliency map and enhanced global statistics for ship scene detection in very high-resolution remote sensing images in inland, coastal, and oceanic regions. First, two low computational complexity methods are adopted. 
Specifically, the Itti attention model is used to extract the saliency map, from which local texture histograms are extracted by LBP with uniform pattern. Meanwhile, Gabor filters with multiple scales and orientations are convolved with the input image to extract Gist, means and variances, which are used to form the enhanced global statistics. Second, sliding window-based detection is applied to obtain local image patches and extract the fusion of local and global features. An SVM with RBF kernel is then used for training and classification. Such a detection manner can remove coastal and oceanic regions effectively. Moreover, the ship scene region of interest can be detected accurately. Experiments on 20 very high-resolution remote sensing images collected by Google Earth show that the fusion feature has advantages over LBP, saliency map-based LBP and Gist, respectively. Furthermore, desirable results can be obtained in ship scene detection." }, { "instance_id": "R32871xR32635", "comparison_id": "R32871", "paper_id": "R32635", "text": "Sea object detection using colour and texture classification Sea target detection from remote sensing imagery is very important, with a wide array of applications in areas such as fishery management, vessel traffic services, and naval warfare. This paper focuses on the issue of ship detection from spaceborne optical images (SDSOI). Although the advantages of synthetic aperture radar (SAR) mean that most current ship detection approaches are based on SAR images, disadvantages of SAR still exist, such as the limited number of SAR sensors, the relatively long revisit cycle, and the relatively lower resolution. To overcome these disadvantages, a new classification algorithm using colour and texture is introduced for ship detection. Colour information is computationally cheap to learn and process. However, in many cases, colour alone does not provide enough information for classification. 
Texture information can also improve classification performance. This algorithm uses both colour and texture features. In this approach, mutual information is used for the construction of a hybrid colour-texture space. Feature extraction is done by the co-occurrence matrix with an SVM (Support Vector Machine) as a classifier. Therefore this algorithm may attain a very good classification rate." }, { "instance_id": "R32871xR32745", "comparison_id": "R32871", "paper_id": "R32745", "text": "Ship detection for high resolution optical imagery with adaptive target filter Ship detection is important due to both its civil and military use. In this paper, we propose a novel ship detection method, the Adaptive Target Filter (ATF), for high resolution optical imagery. The proposed framework can be grouped into two stages: in the first stage, a test image is densely divided into different detection windows and each window is transformed to a feature vector in its feature space. The Histograms of Oriented Gradients (HOG) are accumulated as a basic feature descriptor. In the second stage, the proposed ATF highlights all the ship regions and suppresses the undesired backgrounds adaptively. Each detection window is assigned a score, which represents the degree to which the window belongs to a certain ship category. The ATF can be adaptively obtained by weighted Logistic Regression (WLR) according to the distribution of backgrounds and targets in the input image. The main innovation of our method is that we only need to collect positive training samples to build the filter, while the negative training samples are adaptively generated from the input image. This is different from other classification methods such as Support Vector Machine (SVM) and Logistic Regression (LR), which need both positive and negative training samples. 
The experimental result on 1-m high resolution optical images shows the proposed method achieves a desired ship detection performance with higher quality and robustness than other methods, e.g., SVM and LR." }, { "instance_id": "R32871xR32667", "comparison_id": "R32871", "paper_id": "R32667", "text": "A Line Segment Based Inshore Ship Detection Method On the problem of ship detection in optical remote sensing images, it is difficult to separate the ships from the harbor background to detect the inshore ships. This paper shows a new method based on the line segment feature, which can detect the shape of the ship by the line segment feature and remove the false alarm by the bounding box of the ship. The experimental results present that the method is effective to detect the inshore ships in linear time." }, { "instance_id": "R32871xR32646", "comparison_id": "R32871", "paper_id": "R32646", "text": "An Auto-Adapt Multi-Level Threshold Segmentation Method of Ships Detection in Remote Sensing Images with Complex Sea Surface Background A new multi-level threshold segmentation approach proposed for ship detection. This method is designed to detect targets in remote sensing images, especially for which with complex sea surface background image. An experiment over 1104 ship samples and 11600 no-ship samples, those from Spot, Quickbird, Ikonos, Landsat, shows that the target detection rate of the new method can be as high as 99.5% and the false alarm rate is low. Experiments over images with various content and from different aircraft testify to the new method's robustness." }, { "instance_id": "R32871xR32743", "comparison_id": "R32871", "paper_id": "R32743", "text": "Ship detection from high-resolution imagery based on land masking and cloud filtering High resolution satellite images play an important role in target detection application presently. This article focuses on the ship target detection from the high resolution panchromatic images. 
Taking advantage of geographic information such as the coastline vector data provided by the NOAA Medium Resolution Coastline program, the land region, which is a main noise source in the ship detection process, is masked. After that, the algorithm addresses cloud noise, which appears frequently in ocean satellite images and is another cause of false alarms. Based on an analysis of the cloud noise's features in the frequency domain, we introduce a windowed noise filter to remove the cloud noise. With the help of morphological processing algorithms adapted to target detection, we are able to acquire ship targets in fine shapes. In addition, we display the extracted information, such as the length and width of ship targets, in a user-friendly way, i.e., as a KML file interpreted by Google Earth." }, { "instance_id": "R32871xR32843", "comparison_id": "R32871", "paper_id": "R32843", "text": "Multi-class remote sensing object recognition based on discriminative sparse representation The automatic recognition of multi-class objects with various backgrounds is a big challenge in the field of remote sensing (RS) image analysis. In this paper, we propose a novel recognition framework for multi-class RS objects based on the discriminative sparse representation. In this framework, the recognition problem is implemented in two stages. In the first, or discriminative dictionary learning stage, considering the characterization of remote sensing objects, the scale-invariant feature transform descriptor is first combined with an improved bag-of-words model for multi-class object feature extraction and representation. Then, information about each class of training samples is fused into the dictionary learning process; by using the K-singular value decomposition algorithm, a discriminative dictionary can be learned for sparse coding.
In the second, or recognition, stage, to improve the computational efficiency, the phase spectrum of a quaternion Fourier transform model is applied to the test image to predict a small set of object candidate locations. Then, a multi-scale sliding window mechanism is utilized to scan the image over those candidate locations to obtain the object candidates (or objects of interest). Subsequently, the sparse coding coefficients of these candidates under the discriminative dictionary are mapped to the discriminative vectors that have a good ability to distinguish different classes of objects. Finally, multi-class object recognition can be accomplished by analyzing these vectors. The experimental results show that the proposed work outperforms a number of state-of-the-art methods for multi-class remote sensing object recognition." }, { "instance_id": "R32871xR32762", "comparison_id": "R32871", "paper_id": "R32762", "text": "Compressed-Domain Ship Detection on Spaceborne Optical Image Using Deep Neural Network and Extreme Learning Machine Ship detection on spaceborne images has attracted great interest in the applications of maritime security and traffic control. Optical images stand out from other remote sensing images in object detection due to their higher resolution and more visualized contents. However, most of the popular techniques for ship detection from optical spaceborne images have two shortcomings: 1) Compared with infrared and synthetic aperture radar images, their results are affected by weather conditions, like clouds and ocean waves, and 2) the higher resolution results in larger data volume, which makes processing more difficult. Most of the previous works mainly focus on solving the first problem by improving segmentation or classification with complicated algorithms. These methods face difficulty in efficiently balancing performance and complexity. 
In this paper, we propose a ship detection approach that addresses the aforementioned two issues using wavelet coefficients extracted from the JPEG2000 compressed domain combined with a deep neural network (DNN) and an extreme learning machine (ELM). The compressed domain is adopted for fast ship candidate extraction, the DNN is exploited for high-level feature representation and classification, and the ELM is used for efficient feature pooling and decision making. Extensive experiments demonstrate that, in comparison with the existing relevant state-of-the-art approaches, the proposed method requires less detection time and achieves higher detection accuracy." }, { "instance_id": "R32871xR32665", "comparison_id": "R32871", "paper_id": "R32665", "text": "A novel method of ship detection from spaceborne optical image based on spatial pyramid matching In this paper we propose an automatic ship detection method for High Resolution optical satellite images based on neighbor context information. First, a pre-detection of targets gives us candidates. For each candidate, we choose an extended region, called the candidate with neighborhood, which comprises the candidate and its neighboring area. Second, the patches of the candidate with neighborhood are obtained over a regular grid, and their SIFT (Scale Invariant Feature Transform) features are extracted. Then the SIFT features of training images are clustered with the K-means algorithm to form a codebook of the patches. We quantize the patches of the candidate with neighborhood according to this codebook and get the visual word representation. Finally, by applying spatial pyramid matching, the candidates are classified with an SVM (support vector machine). Experimental results on a set of images show that our method achieves superior performance.
}, { "instance_id": "R32871xR32597", "comparison_id": "R32871", "paper_id": "R32597", "text": "Ship detection and classification in high-resolution remote sensing imagery using shape-driven segmentation method High-resolution remote sensing imagery provides an important data source for ship detection and classification. However, due to shadow effects, noise, and the low contrast between objects and background in this kind of data, traditional segmentation approaches have much difficulty in separating ship targets from a complex sea-surface background. In this paper, we propose a novel coarse-to-fine segmentation strategy for identifying ships in 1-meter resolution imagery. This approach starts from a coarse segmentation that selects local intensity variance as the detection feature to segment ship objects from the background. After roughly obtaining the regions containing ship candidates, a shape-driven level-set segmentation is used to extract the precise boundary of each object, which benefits the following stages such as detection and classification. Experimental results show that the proposed approach outperforms other algorithms in terms of recognition accuracy." }, { "instance_id": "R32871xR32779", "comparison_id": "R32871", "paper_id": "R32779", "text": "Object Detection Based on Sparse Representation and Hough Voting for Optical Remote Sensing Imagery We present a novel method for detecting instances of an object class or specific object in high-spatial-resolution optical remote sensing images. The proposed method integrates sparse representations for local-feature detection into generalized-Hough-transform object detection. Object parts are detected via class-specific sparse image representations of patches using learned target and background dictionaries, and their co-occurrence is spatially integrated by Hough voting, which enables object detection.
We aim to efficiently detect target objects using a small set of positive training samples by matching essential object parts with a target dictionary while the residuals are explained by a background dictionary. Experimental results show that the proposed method achieves state-of-the-art performance for several examples including object-class detection and specific-object identification." }, { "instance_id": "R32871xR32749", "comparison_id": "R32871", "paper_id": "R32749", "text": "Ship Recognition Based on Active Learning and Composite Kernel SVM Aiming to recognize ship targets efficiently and accurately, a novel method based on active learning and the Composite Kernel Support Vector Machine (CK-SVM) is proposed. First, we build a ship recognition dataset which contains the major warship models and a large number of civil ships. Second, to reduce the cost of manual labeling, an active learning algorithm is run to select the most informative and representative samples to label. Finally, we construct a composite-kernel SVM combining shape and texture features to recognize ships. The composite-kernel strategy markedly enhances the quality of feature fusion. Experiments demonstrate that our method not only improves the efficiency of sample selection, but also achieves satisfactory results." }, { "instance_id": "R32871xR32808", "comparison_id": "R32871", "paper_id": "R32808", "text": "Moving ship detection based on visual saliency for video satellite Moving ship detection is an important issue with the development of video satellites. However, registering sea scenes imaged by a moving camera is difficult. In this paper, we propose a new method to detect moving ships for video satellites, without image registration, by combining optical flow and video attention saliency. Video visual attention is a consequence of saliency features such as optical flow, Gabor features, and intensity. The model proposed for ship detection mainly includes three stages.
Firstly, the moving region is estimated based on optical flow with corners. Secondly, a Gabor filter is used to extract texture features from the video images. Finally, the above saliency features are integrated as channels into a quaternion, which can indicate where the ships are located. The experimental results show that the proposed model can effectively extract moving ships in video images without image registration." }, { "instance_id": "R32871xR32674", "comparison_id": "R32871", "paper_id": "R32674", "text": "Multi-evidence fusion recognition of ship targets in sea battlefield's remote sensing images Reliably recognizing ship targets in the sea battlefield has become an increasingly pressing need, as the capabilities for image acquisition are growing rapidly. In this research, a modified ship target fusion recognition model based on the Dempster-Shafer evidence theory is proposed. The algorithm first detects ship targets in the sea battlefield\u2019s remote sensing image. It then extracts multiple image features of these target candidate areas as evidence for recognizing the ship targets. Finally, it recognizes the ship targets from the image using the Dempster-Shafer evidence theory based on multiple ship features, and outputs the recognition result. Experiments show that this method can reliably and effectively recognize target information in the sea battlefield." }, { "instance_id": "R32871xR32591", "comparison_id": "R32871", "paper_id": "R32591", "text": "Ship detection and recognition in high-resolution satellite images Nowadays, the availability of high-resolution images taken from satellites, like Quickbird, Orbview, and others, offers the remote sensing community the possibility of monitoring and surveying vast areas of the Earth for different purposes, e.g. monitoring forest regions for ecological reasons.
A particular application is the use of satellite images to survey the bottom of the seas around the Iberian Peninsula, which is flooded with innumerable treasures that are being plundered by specialized ships. In this paper we present a GIS-based application aimed at cataloging areas of the sea with archeological interest and at monitoring the risk of plundering by ships that stay within such areas during a suspicious period of time." }, { "instance_id": "R32871xR32573", "comparison_id": "R32871", "paper_id": "R32573", "text": "An Enhanced Spatio-spectral Template for Automatic Small Recreational Vessel Detection This paper examines the performance of a spatiospectral template on Ikonos imagery to automatically detect small recreational boats. The spatiospectral template is utilized and then enhanced through the use of a weighted Euclidean distance metric adapted from the Mahalanobis distance metric. The aim is to assist the Canadian Coast Guard in gathering data on recreational boating for the modeling of search and rescue incidence risk. To test the detection accuracy of the enhanced spatiospectral template, a dataset was created by gathering position and attribute data for 53 recreational vessel targets purposely moored for this research within Cadboro Bay, British Columbia, Canada. The Cadboro Bay study site containing the targets was imaged using Ikonos. Overall detection accuracy was 77%. Targets were broken down into 2 categories: 1) Category A, less than 6 m in length, and 2) Category B, more than 6 m long. The detection rate for Category B targets was 100%, while the detection rate for Category A targets was 61%. It is important to note that some Category A targets were intentionally selected for their small size to test the detection limits of the enhanced spatiospectral template. The smallest target detected was 2.2 m long and 1.1 m wide.
The analysis also revealed that the ability to detect targets between 2.2 and 6 m long was diminished if the target was dark in color." }, { "instance_id": "R32871xR32638", "comparison_id": "R32871", "paper_id": "R32638", "text": "Saliency and gist features for target detection in satellite images Reliably detecting objects in broad-area overhead or satellite images has become an increasingly pressing need, as the capabilities for image acquisition are growing rapidly. The problem is particularly difficult in the presence of large intraclass variability, e.g., finding \u201cboats\u201d or \u201cbuildings,\u201d where model-based approaches tend to fail because no good model or template can be defined for the highly variable targets. This paper explores an automatic approach to detect and classify targets in high-resolution broad-area satellite images, which relies on detecting statistical signatures of targets, in terms of a set of biologically-inspired low-level visual features. Broad-area images are cut into small image chips, analyzed in two complementary ways: \u201cattention/saliency\u201d analysis exploits local features and their interactions across space, while \u201cgist\u201d analysis focuses on global nonspatial features and their statistics. Both feature sets are used to classify each chip as containing target(s) or not, using a support vector machine. Four experiments were performed to find \u201cboats\u201d (Experiments 1 and 2), \u201cbuildings\u201d (Experiment 3) and \u201cairplanes\u201d (Experiment 4). In experiment 1, 14 416 image chips were randomly divided into training (300 boat, 300 nonboat) and test sets (13 816), and classification was performed on the test set (ROC area: 0.977 \u00b10.003). In experiment 2, classification was performed on another test set of 11 385 chips from another broad-area image, keeping the same training set as in experiment 1 (ROC area: 0.952 \u00b10.006). 
In experiment 3, 600 training chips (300 for each type) were randomly selected from 108 885 chips, and classification was conducted (ROC area: 0.922 \u00b10.005). In experiment 4, 20 training chips (10 for each type) were randomly selected to classify the remaining 2581 chips (ROC area: 0.976 \u00b10.003). The proposed algorithm outperformed the state-of-the-art SIFT, HMAX, and hidden-scale salient structure methods, and previous gist-only features in all four experiments. This study shows that the proposed target search method can reliably and effectively detect highly variable target objects in large image datasets." }, { "instance_id": "R32871xR32760", "comparison_id": "R32871", "paper_id": "R32760", "text": "Combined use of optical imaging satellite data and electronic intelligence satellite data for large scale ship group surveillance We propose a novel framework for large-scale maritime ship group surveillance using spaceborne optical imaging satellite data and Electronic Intelligence (ELINT) satellite data. Considering that the size of a ship is usually less than the distance between different ships for large-scale maritime surveillance, we treat each ship as a mass point and ship groups are modelled as point sets. Motivated by the observation that ship groups performing tactical or strategic operations often have a stable topology and their attributes remain unchanged, we combine both topological features and attributive features within the framework of Dempster-Shafer (D-S) theory for coherent ship group analysis. Our method has been tested using different sets of simulated data and recorded data. Experimental results demonstrate our method is robust and efficient for large-scale maritime surveillance." 
}, { "instance_id": "R32871xR32594", "comparison_id": "R32871", "paper_id": "R32594", "text": "Automatic ship detection in HJ-1A satellite data In this paper, we use HJ-1A satellite data for ship detection and present a ship target detection algorithm for optical remote sensing images based on a moving window and maximum entropy. The method uses a moving window to get ship candidates and Shannon entropy for image segmentation. The basic principle is that the segmentation threshold is chosen so that the entropy of the segmented image is maximized. After completing the image segmentation, an automatic discriminator is used. The identification algorithm removes false alarms caused by spray, clouds, and solar flare. Features considered include area, length ratio, and extent. The detection results indicate that most ship targets can be detected regardless of cloud cover." }, { "instance_id": "R32871xR32565", "comparison_id": "R32871", "paper_id": "R32565", "text": "Object oriented ship detection from VHR satellite images Within today's security environment and with increasing worldwide travel and transport of dangerous goods, the need for vessel traffic services, ship routing, and monitoring of ship movements at sea and along coastlines becomes more time consuming and an important responsibility for coastal authorities. This paper describes the architecture of a ship detection prototype based on an object-oriented methodology to support these monitoring tasks. The system\u2019s architecture comprises a fully-automatic coastline detection tool, a tool for fully or semi-automatic ship detection in off-shore areas, and a semi-automatic tool for ship detection within harbour areas. Its core is based on the client-server environment of the first object-oriented image analysis software on the market, named eCognition.
The described ship detection system has been developed for panchromatic VHR satellite image data and has proven its capabilities on Ikonos and QuickBird imagery under different weather conditions and for various regions of the world. With the capability of eCognition to combine raster data with imported thematic data, it is possible to work with available non-remote-sensing data, e.g. detailed harbour GIS information in ESRI shape file format or weather information, which can be attached to the results. Finally, the system\u2019s ability to generate customized reports in HTML format and to export results in standard raster or vector formats offers new opportunities toward interoperability of technology where a great number of heterogeneous networks and operators are involved in the surveillance process." }, { "instance_id": "R32871xR32804", "comparison_id": "R32871", "paper_id": "R32804", "text": "On-board ship targets detection method based on multi-scale salience enhancement for remote sensing image An on-board ship target detection method based on multi-scale salience enhancement is proposed. Unlike traditional wavelet filter enhancement methods, the proposed method uses wavelet decomposition to obtain the high- and low-frequency parts and estimates the salience feature from both parts, which enhances the ship targets efficiently. First, the remote sensing image is decomposed by a 2-D DWT to obtain the low-frequency part and the horizontal, vertical, and diagonal high-frequency parts; then, the Otsu threshold is computed and subtracted from the low-frequency coefficients to get the low-frequency salience image; next, the high-frequency parts are composed into the high-frequency salience image; finally, the two parts are fused by addition and normalized to obtain the salience map.
Experiments are conducted on the original data of multiple sets of remote sensing images, and the results are compared with the same method without the proposed salience enhancement. The proposed method shows obvious salience enhancement for low-resolution, high-noise remote sensing images." }, { "instance_id": "R32871xR32833", "comparison_id": "R32871", "paper_id": "R32833", "text": "Attribute learning for ship category recognition in remote sensing imagery Object category recognition, in remote sensing imagery, usually relies on exemplar-based training. The latter is achieved by modeling intricate relationships between object categories and visual features. However, for real-world and fine-grained object categories - exhibiting complex visual appearance and strong variability - these models may fail, especially when training data are scarce. In this paper, we introduce an effective object category recognition approach that alleviates the limitation caused by small training sets. The method learns discriminant mid-level representations (a.k.a. attributes) through nonlinear mappings that make these attributes highly discriminant while being easily trainable and predictable. We demonstrate the effectiveness of our method, through extensive experiments on the challenging task of ship recognition in maritime environments, and we show how our attribute learning model generalizes well in spite of the scarcity of training data." }, { "instance_id": "R32871xR32614", "comparison_id": "R32871", "paper_id": "R32614", "text": "Characterization of a Bayesian Ship Detection Method in Optical Satellite Images This letter presents the experimental results obtained for an automatic predetection of small ships (about 5 × 5 pixels) in high-resolution optical satellite images. Our images are panchromatic SPOT 5 images, whose resolution is 5 m per pixel. Our detection method is based on the Bayesian decision theory and does not need any preprocessing.
Here, we describe the method precisely and the tuning of its two parameters, namely, the size of the analysis window and the threshold used to make a decision. Both are fixed from the receiver operating-characteristic curves that we draw from different sets of tests. Finally, the overall results of the method are given for a set of images, as close as possible to the operational conditions." }, { "instance_id": "R32871xR32562", "comparison_id": "R32871", "paper_id": "R32562", "text": "A Shape Constraints Based Method to Recognize Ship Objects from High Spatial Resolution Remote Sensed Imagery Automatically extracting moored ships from high spatial resolution remote sensed imagery is more difficult than extracting offshore ships because their spectral values and textures are very close to those of the harbor. For this reason, different routes are designed and applied to extract the two kinds of ships. Our whole method can be divided into three main steps: 1) extract water polygons with histogram segmentation, 2) extract holes in the water polygons with morphological operations as possible offshore ships, and extract possible moored ships by identifying salients toward the sea along the water boundary, and then 3) screen real ships out of these possible ships with more shape constraints. A case study is carried out on Spot 5 imagery to validate our method." }, { "instance_id": "R32871xR32621", "comparison_id": "R32871", "paper_id": "R32621", "text": "Graph-based ship extraction scheme for optical satellite image Automatic detection and recognition of ships in satellite images is very important and has a wide array of applications. This paper concentrates on optical satellite sensors, which provide an important approach for ship monitoring. A graph-based fore/background segmentation scheme is used to extract ship candidates from optical satellite image chips after the detection step, from coarse to fine. Shadows on the ship are extracted in a CFAR scheme.
Because all the parameters in the graph-based algorithms and CFAR are adaptively determined by the algorithms, no parameter tuning problem exists in our method. Experiments on measured optical satellite images show our method achieves a good balance between computation speed and ship extraction accuracy." }, { "instance_id": "R32871xR32704", "comparison_id": "R32871", "paper_id": "R32704", "text": "A unified algorithm for ship detection on optical and SAR spaceborne images Synthetic Aperture Radar (SAR) is the most widely used sensor for ship detection from space, but optical sensors are increasingly used in addition to it. The combined use of these sensors in an operational framework is a major factor in the efficiency of current systems. It is also a source of increased complexity in these systems. Optical and SAR signals of a maritime scene have many similarities. These similarities allow us to define the common detection approach presented in this paper. Beyond the definition of a single algorithm for both types of data, this study aims to define an algorithm for the detection of vessels of any size in images of any resolution. After studying the signatures of vessels, this second goal leads us to define a detection strategy based on multi-scale processes. It has been implemented in a processing chain with two major steps: first, targets that are potentially vessels are identified using a Discrete Wavelet Transform (DWT) and a Constant False Alarm Rate (CFAR) detector. Second, among these targets, false alarms are rejected using multi-scale reasoning on the contours of the targets. The definition of this processing chain is made with respect to three constraints: the detection rate should be 100%, the false alarm rate should be as low as possible, and finally the processing time must be compatible with operations at sea. The method was developed and tested on the basis of a very large data set containing real images and associated detections.
The obtained results validate this approach, but with limitations mainly related to the sea state." }, { "instance_id": "R32871xR32813", "comparison_id": "R32871", "paper_id": "R32813", "text": "A Novel Inshore Ship Detection via Ship Head Classification and Body Boundary Determination In this letter, we propose a novel method for inshore ship detection via ship head classification and body boundary determination. Compared with some traditional ship head detection methods depending on accurate ship head segmentation, we generate novel ship head features in the transformed domain of polar coordinates, where the ship heads have an approximately trapezoidal shape and can be more easily detected. Then, these features are used in classification based on a support vector machine to detect the ship head candidates and give important information on the initial ship head direction. Next, the surrounding consistent line segments are utilized to refine the ship direction, and the ship boundary is determined based on the saliency of directional gradient information symmetrical about the ship body. Finally, the context information of sea areas is introduced to remove false alarms. Experimental results show that the proposed method can accurately and robustly detect inshore ships in high-resolution optical remote sensing images." }, { "instance_id": "R32871xR32716", "comparison_id": "R32871", "paper_id": "R32716", "text": "Automatic ship detection for optical satellite images based on visual attention model and LBP Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is extremely difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust algorithm based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP).
Different from traditional studies, the proposed algorithm is simple, general, and not designed for specific types of images. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution across images. Then these features are employed to classify each chip as containing a ship target or not, using a support vector machine method. Experimental results show the proposed method is insensitive to waves, clouds, and illumination, and achieves high precision with a low false alarm rate." }, { "instance_id": "R32871xR32754", "comparison_id": "R32871", "paper_id": "R32754", "text": "In-shore ship extraction from HR optical remote sensing image via salience structure and GIS information In order to solve the problem of in-shore ship extraction from remote sensing images, a novel method for in-shore ship extraction from high resolution (HR) optical remote sensing images is proposed via salience structure features and GIS information. Firstly, the berth ROI is located in the image with the aid of prior GIS auxiliary information. Secondly, the salient corner features at the ship bow are extracted precisely from the berth ROI. Finally, a recursive algorithm concerning the symmetric geometry of the ship target is conducted to separate multiple docked in-shore targets into individual in-shore ships. The results of the experiments show that the method proposed in this paper can detect the majority of large and medium scale in-shore ships in optical remote sensing images, including both single and multiple adjacently docked in-shore ship cases.
}, { "instance_id": "R32871xR32737", "comparison_id": "R32871", "paper_id": "R32737", "text": "Variational approximate inferential probability generative model for ship recognition using remote sensing data Abstract Aiming to detect sea targets reliably and in a timely manner, a discriminative ship recognition method using optical remote sensing data, based on a variational-method probability generative model, is presented. First, an improved Hough transformation is utilized for pretreatment of the target candidate region, which reduces the amount of computation by filtering the edge points; our experiments indicate that the targets (ships) can be detected quickly and accurately. Second, based on rough set theory, the common discernibility degree is used to compute the significance weight of each candidate feature and select valid recognition features automatically. Finally, for each node, its neighbor nodes are sorted by their manifold similarity to the node. Using the classes of the selected nodes from the top of the sorted neighbor list, a dynamic probability generative model is built to recognize ships in data from optical remote sensing systems. Experimental results on real data show that the proposed approach achieves better classification rates at higher speed than the k-nearest neighbor (KNN), support vector machine (SVM), and traditional hierarchical discriminant regression (HDR) methods." }, { "instance_id": "R32871xR32823", "comparison_id": "R32871", "paper_id": "R32823", "text": "The Application of GF-1 Imagery to Detect Ships on the Yangtze River Various satellite data are currently used to detect ships on the sea surface. However, no study on the use of Gaofen-1 (GF-1) data to monitor ships on the surface of inland rivers has been reported. Therefore, we propose a method to extract inland river-surface ships from GF-1 imagery.
The Normalized Difference Water Index was calculated to enhance the contrast between water and non-water areas after the preprocessing procedure. The multi-resolution segmentation method and object-oriented classification rule sets were used to detect the ships in the image. Results show that most of the ships, whose length-to-width ratio ranges from 3.0 to 7.2, could be identified correctly regardless of their size. The results also indicate that detecting ships on inland rivers using GF-1 imagery is feasible." }, { "instance_id": "R32871xR32672", "comparison_id": "R32871", "paper_id": "R32672", "text": "Ship target segmentation and detection in complex optical remote sensing image based on component tree characteristics discrimination In the context of sea-surface target surveillance based on optical remote sensing images, this paper discusses automatic sea-surface ship target recognition against complicated backgrounds. The technology this article focuses on is divided into two parts: feature classification training and component class discrimination. In the feature classification training process, large numbers of sample images are used for feature selection and classifier determination of ship targets and false targets. Component tree characteristics discrimination extracts suspected target areas from complicated remote sensing images, and their features are fed to a Fisher classifier for ship target recognition. Experimental results show that the method discussed in this paper can deal with complex sea-surface environments and can avoid the interference of cloud cover, sea clutter, and islands. The method can effectively achieve ship target recognition against complex sea backgrounds.
}, { "instance_id": "R32871xR32640", "comparison_id": "R32871", "paper_id": "R32640", "text": "Object recognition in ocean imagery using feature selection and compressive sensing Ship recognition and classification in electro-optical satellite imagery is a challenging problem with important military applications. The problem is similar to that of face recognition, but with many unique considerations. A ship's appearance can vary dramatically from image to image depending on factors such as lighting condition, sensor angle, and ocean state, and there is often wide variation between ships of the same class. Collecting and labeling sufficient training data is another challenge. We consider how appropriate feature selection and description can assist in addressing these challenges. Our proposed algorithm for vessel classification combines shape invariant features such as SIFT with a well known face recognition algorithm from the theory of sparse representation and compressive sensing. We demonstrate improved classification accuracy using invariant features at significant key points instead of random features to represent images. We also discuss how algorithms such as this are currently implemented to detect and classify ships and other objects in ocean imagery." }, { "instance_id": "R32871xR32568", "comparison_id": "R32871", "paper_id": "R32568", "text": "Ship detection and classification from overhead imagery This paper presents a sequence of image-processing algorithms suitable for detecting and classifying ships from nadir panchromatic electro-optical imagery. Results are shown of techniques for overcoming the presence of background sea clutter, sea wakes, and non-uniform illumination. Techniques are presented to measure vessel length, width, and direction-of-motion. Mention is made of the additional value of detecting identifying features such as unique superstructure, weaponry, fuel tanks, helicopter landing pads, cargo containers, etc. 
Various shipping databases are then described as well as a discussion of how measured features can be used as search parameters in these databases to pull out positive ship identification. These are components of a larger effort to develop a low-cost solution for detecting the presence of ships from readily-available overhead commercial imagery and comparing this information against various open-source ship-registry databases to categorize contacts for follow-on analysis." }, { "instance_id": "R32871xR32791", "comparison_id": "R32871", "paper_id": "R32791", "text": "A NOVEL SHIP DETECTION METHOD FOR LARGE-SCALE OPTICAL SATELLITE IMAGES BASED ON VISUAL LBP FEATURE AND VISUAL ATTENTION MODEL Abstract. Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combined with a visual attention model with local binary pattern (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helps focus on the suspected ship areas, avoiding the separation step of land and sea. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in the images. Then these features are employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). 
After the suspicious areas are obtained, some false alarms such as microwaves and small ribbon clouds still remain, so simple shape and texture analyses are adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show that the proposed method is insensitive to waves, clouds, illumination, and ship size." }, { "instance_id": "R32871xR32720", "comparison_id": "R32871", "paper_id": "R32720", "text": "Automatic detection of inshore ships in high-resolution remote sensing images using robust invariant generalized Hough transform In this letter, we propose a new detection framework based on the robust invariant generalized Hough transform (RIGHT) to solve the problem of detecting inshore ships in high-resolution remote sensing imagery. The invariant generalized Hough transform is an effective shape extraction technique, but it does not adapt well to shape deformation. In order to improve its adaptability, we use an iterative training method to learn a robust shape model automatically. The model can capture the shape variability of the target contained in the training data set, and every point in the model is equipped with an individual weight according to its importance, which greatly reduces the false-positive rate. Through the iteration process, the model performance is gradually improved by extending the shape model with these necessary weighted points. Experimental results demonstrate the precision, robustness, and effectiveness of our detection framework based on RIGHT." }, { "instance_id": "R32871xR32610", "comparison_id": "R32871", "paper_id": "R32610", "text": "Ship detection in satellite imagery using rank-order grayscale hit-or-miss transforms Ship detection from satellite imagery has great utility in various communities. Knowing where ships are and their types provides useful intelligence information. However, detecting and recognizing ships is a difficult problem. 
Existing techniques suffer from too many false alarms. We describe approaches we have taken in trying to build ship detection algorithms that have reduced false alarms. Our approach uses a version of the grayscale morphological Hit-or-Miss transform. While this is well known and used in its standard form, we use a version that applies rank-order selection for the dilation and erosion parts of the transform, instead of the standard maximum and minimum operators. This provides some slack in the fitting that the algorithm employs and provides a method for tuning the algorithm's performance for particular detection problems. We describe our algorithms, show the effect of the rank-order parameter on the algorithm's performance, and illustrate the use of this approach for real ship detection problems with panchromatic satellite imagery." }, { "instance_id": "R32914xR32901", "comparison_id": "R32914", "paper_id": "R32901", "text": "An empirical analysis of theories on factors influencing state government accounting disclosure Abstract This study develops a politico-economic model based on the theoretical and empirical work in public choice and political science to explain state government accounting disclosure choice. Measures of the theoretical constructs hypothesized to influence accounting disclosure choice are selected from the literature. An updated 1986 practice index, based on Ingram's (1984) 12-practice categories, is used as the indicator of accounting disclosure choice. The model is then tested for other indicators of accounting disclosure choice and a reanalysis is performed for 1978. Because of the complexity of the political context, LISREL methodology was used to test the model. The evidence supports the implication that state government accounting disclosure choice is dependent on factors in the political environment and on institutional forces. The model is robust over time and for different measures of accounting disclosure choice." 
}, { "instance_id": "R32914xR32903", "comparison_id": "R32914", "paper_id": "R32903", "text": "The association between municipal disclosure practices and audit quality Abstract This study examines the impact of two proxies for audit quality on a model of public sector disclosure for a sample of municipal governments. I argue that more complete disclosures enhance the reputation of an independent auditing firm and that independent auditors, seeking to maintain a reputation of higher quality, positively influence the level of financial disclosures appearing in their clients' financial statements. Specifically, a variable indicating the presence of a (then) Big Eight auditor and regression residuals from a model of audit fees were used as surrogates of audit quality. These were included in a model designed to explain variation in an index representing financial disclosures required under generally accepted accounting principles for local governments. The results provide evidence in support of the hypothesized relationship between audit quality and disclosure." }, { "instance_id": "R32914xR32881", "comparison_id": "R32914", "paper_id": "R32881", "text": "E-Government and Public Financial Reporting: The Case of Spanish Regional Governments Technology has changed the way public organizations relate to the public. Government's use of the Internet and other associated technologies, known as e-government, could become the instrument that makes regular timely information on public finances more forthcoming. New technologies can improve government responsiveness and empower individual citizens. By making government financial information available, the public could continuously assess a government agency through everyday interaction. The financial accountability of government and its response to public demands for information and services are thus a contribution to government openness. 
It is therefore relevant to determine whether public organizations are also becoming more aware of the importance of placing financial information on their Web sites to help in decision-making processes. This article focuses on the e-democracy process, specifically the transparency of government information, by analyzing governmental financial disclosures on the Web as a tool for the public to assess its financial accountability. To this end, an empirical study was carried out on regional governments in Spain." }, { "instance_id": "R32914xR32891", "comparison_id": "R32914", "paper_id": "R32891", "text": "Governance structures and accounting at large municipalities Most cities in the US operate under either the Mayor\u2013Council or the Council\u2013Manager form of governance, with a slow trend toward Council\u2013Manager cities. Theoretical modeling suggests that the Council\u2013Manager form should be more efficient, since the city manager has greater incentives to increase financial and accounting performance relative to the mayor as chief executive officer. However, two sets of factors may be more important for municipal comparisons. Since the mid-1980s, regulations of state and local governments have intensified. At the same time, economic conditions improved dramatically. Consequently, these two factors might be more relevant to evaluate the financial and accounting conditions in large cities. The purpose of this paper is to test the significance of governance structure on accounting disclosure levels and financial condition, based on samples of large cities from the early 1980s and the mid-1990s. The findings support the perspective that city manager cities substantially outperform Mayor\u2013Council cities on major dimensions examined in both univariate and multivariate tests. Large municipalities improved on key financial and accounting variables from 1983 to 1996. 
Council\u2013Manager cities maintained superiority over Mayor\u2013Council cities for accounting disclosure in both periods. Council\u2013Manager cities were significantly better in financial condition in 1983, but the evidence for 1996 was mixed." }, { "instance_id": "R32914xR32897", "comparison_id": "R32914", "paper_id": "R32897", "text": "Municipal Government Financial Reporting: Administrative and Ethical Climate This article examines financial disclosure in U.S. cities. It considers factors that affect the level of municipal financial disclosure, in particular the effect of administrative factors. It finds that participation in the Government Finance Officers Association Certificate of Excellence in Financial Reporting program, and the Chief Financial Officer's familiarity with the activities of the Governmental Accounting Standards Board are positively associated with more disclosure. These latter factors are interpreted as measures of professionalism and are furthered by the adoption of municipal codes of ethics which stress openness and responsiveness to stakeholder interests. Such general policies are indirectly associated with heightened levels of financial disclosure. Financial disclosure is also associated with city size and demands from capital markets." 
}, { "instance_id": "R32914xR32873", "comparison_id": "R32914", "paper_id": "R32873", "text": "e-Government process and incentives for online public financial information Purpose \u2013 The aim of this paper is to examine the extent of financial information made available by public administrations on their web sites and to discover whether this communications policy is influenced by the context in which the public entity operates.Design/methodology/approach \u2013 The study took as its reference the prior literature and distinguished three dimensions \u2013 information content, qualitative characteristics of information and accessibility \u2013 which were converted into a disclosure index that was used to assess government web sites. A multivariable linear regression analysis was performed in search of a relationship between seven external factors and the provision of public financial information online.Findings \u2013 The empirical research revealed that the sample municipalities were not fully aware of the potential importance of the internet in enabling the achievement of e\u2010democracy initiatives as a tool of new public management. The factors previously found to be important in paper\u2010based repo..." }, { "instance_id": "R32914xR32885", "comparison_id": "R32914", "paper_id": "R32885", "text": "Determinants of voluntary Internet financial reporting by local government authorities Abstract The reform of public sector (local and central government) financial reporting in New Zealand in the early 1990s has aligned such reporting with reporting practices in the private sector (business enterprises). Literature examining the behaviour of managers in the public (government) sector suggests that agency relationships in the sector motivate such managers to provide information to enable the monitoring of their actions. This literature identifies a number of characteristics and variables that proxy for agency costs in the public sector. 
The recent development of the Internet provides an opportunity for examining voluntary disclosure in the public sector and, in particular, in the local government environment. Some New Zealand local government authorities elect to voluntarily provide financial information on their websites. This paper examines the voluntary Internet financial reporting practices of local authorities. Six variables associated with voluntary disclosure are examined: political competition, size, leverage, municipal wealth, press visibility, and type of local authority. Results indicate that leverage, municipal wealth, press visibility, and type of council are associated with the Internet financial reporting practices of local authorities in New Zealand. Policy implications and possible limitations of the study as well as suggestions for future research are discussed in the paper." }, { "instance_id": "R32914xR32879", "comparison_id": "R32914", "paper_id": "R32879", "text": "The Role of Generally Accepted Reporting Methods in The Public Sector: An Empirical Test Abstract This study explores the role of standard or generally accepted accounting and reporting methods in the public sector. It differs from prior studies that address public sector accounting issues in that it considers more directly how the political process influences decisions to report financial information. The primary contention is that adopting standard reporting methods reduces costs to public officials that arise from factors that characterize political markets. Empirical evidence based on data from the state governments is consistent with this contention, but theoretical and methodological problems restrict our ability to ascertain which specific factors are relevant." 
}, { "instance_id": "R32914xR32883", "comparison_id": "R32914", "paper_id": "R32883", "text": "Cultural contexts and governmental digital reporting The way in which public sector entities disseminate information publicly is affected by the degree of transparency adopted, and the construction and management of websites are increasingly essential elements of modern public administration. Nonetheless, differences in this process exist among governments worldwide, probably due to different contextual factors. This article examines and discusses the approach of Anglo-Saxon, South American and Continental European central governments to the use of the Web as a means of making financial disclosures. To measure the disclosure of governmental financial information on the Internet, an index has been defined, taking into consideration the data considered to be relevant for a potential user, gathering the data visiting their websites. The results show that the way different countries use the Web for financial disclosure is deeply rooted in and follows from their administrative culture. In conclusion, the Continental European and South American governments should improve their digital reporting." }, { "instance_id": "R32914xR32899", "comparison_id": "R32914", "paper_id": "R32899", "text": "Voluntary disclosure by NSW statutory authorities: The influence of political visibility Abstract This study examines the influence of political visibility on the voluntary disclosure practices of 50 commercial and semi-commercial statutory authorities in the Australian State of New South Wales in 1984, the year prior to the enactment of legislation mandating detailed disclosure requirements. A positive correlation is hypothesized and found between the political visibility of these authorities and their voluntary disclosure of financial and non-financial information of a non-sensitive nature. 
However, as predicted, no positive correlation is found between the authorities' political visibility and their disclosure of financial and non-financial information of a sensitive nature." }, { "instance_id": "R32914xR32905", "comparison_id": "R32914", "paper_id": "R32905", "text": "Political interests and governmental accounting disclosure Abstract In this paper, disclosure indexes of municipalities are developed based on the anticipated needs of political groups. Next, disclosure quality relationships are modeled on political and economic incentives of the groups actively involved in governmental processes of municipalities. The results suggest that each group with political power has only limited influence on disclosure quality." }, { "instance_id": "R32914xR32907", "comparison_id": "R32914", "paper_id": "R32907", "text": "The effect of regulation on local government disclosure practices The purpose of this study is to examine the relationship between financial disclosures of local governments and the economic incentives of the local political manager to disclose. These economic incentives include the regulatory structure of the local government's financial reporting. Unlike publicly held corporations which are all subject to the regulatory authority of the Securities and Exchange Commission, local governments face different state government regulations. Some states require GAAP compliance, same require compliance with state designated (non-GAAP) disclosure practices, and some do not regulate local government financial disclosures. The primary findings are that disclosure practices of cities in states that do not regulate local government practices, do not differ significantly from the practices of cities in GAAP regulated states. Further, when considered in conjunction with other political and socioeconomic variables, GAAP regulation appears to have a negligible effect on the financial reporting practices of local governments in GAAP regulated states. 
Finally, the disclosures of cities in states that require non-GAAP practices are significantly fewer than those of cities in the other categories. When considered in conjunction with other political and socioeconomic variables, non-GAAP reporting requirements appear to have a significant effect on the financial reporting practices of local governments in these states." }, { "instance_id": "R32940xR32920", "comparison_id": "R32940", "paper_id": "R32920", "text": "Gallstone shrapnel contamination during laparoscopic cholecystectomy The fate of lost gallstones in the peritoneal cavity following laparoscopic cholecystectomy is unknown. We report a case of microabscesses and granuloma formation in the peritoneal cavity and abdominal wall caused by infected gallstone shrapnel due to rupture of the gallbladder during extraction." }, { "instance_id": "R32940xR32938", "comparison_id": "R32940", "paper_id": "R32938", "text": "Factors influencing the outcome of intestinal anastomosis Anastomotic leak (AL) is one of the most serious complications after gastrointestinal surgery. All patients aged 16 years or older who underwent a surgery with single intestinal anastomosis at Morristown Medical Center from January 2006 to June 2008 were entered into a prospective database. To compare the rate of AL, patients were divided into the following surgery-related groups: 1) stapled versus hand-sewn, 2) small bowel versus large bowel, 3) right versus left colon, 4) emergent versus elective, 5) laparoscopic versus converted (laparoscopic to open) versus open, 6) inflammatory bowel disease versus non inflammatory bowel disease, and 7) diverticulitis versus nondiverticulitis. We also looked for surgical site infection, estimated intraoperative blood loss, blood transfusion, comorbidities, preoperative chemotherapy, radiation, and anticoagulation treatment. The overall rate of AL was 3.8 per cent. Mortality rate was higher among patients with ALs (13.3%) versus patients with no AL (1.7%). 
Open surgery had greater risk of AL than laparoscopic operations. Surgical site infection and intraoperative blood transfusions were also associated with significantly higher rates of AL. Operations involving the left colon had greater risk of AL when compared with those of the right colon, sigmoid, and rectum. Prior chemotherapy, anticoagulation, and intraoperative blood loss all increased the AL rates. In conclusion, we identified several significant risk factors for ALs. This knowledge should help us better understand and prevent this serious complication, which has significant morbidity and mortality rates." }, { "instance_id": "R32940xR32936", "comparison_id": "R32940", "paper_id": "R32936", "text": "Smoking is a major risk factor for anastomotic leak in patients undergoing low anterior resection Aim To examine modifiable risk factors for anastomotic leak in patients undergoing low anterior resection." }, { "instance_id": "R32940xR32928", "comparison_id": "R32940", "paper_id": "R32928", "text": "Risk Factors for Anastomotic Leak and Mortality in Diabetic Patients Undergoing Colectomy OBJECTIVES To determine the risk factors in diabetic patients that are associated with increased postcolectomy mortality and anastomotic leak. DESIGN A prospectively acquired statewide database of patients who underwent colectomy was reviewed. Primary risk factors were diabetes mellitus, hyperglycemia (glucose level \u2265 140 mg/dL), steroid use, and emergency surgery. Categorical analysis, univariate logistic regression, and multivariate regression were used to evaluate the effects of these risk factors on outcomes. SETTING Participating hospitals within the Michigan Surgical Quality Collaborative. PATIENTS Database review of patients from hospitals within the Michigan Surgical Quality Collaborative. MAIN OUTCOME MEASURES Anastomotic leak and 30- day mortality rate. RESULTS Of 5123 patients, 153 (3.0%) had leaks and 153 (3.0%) died. 
Preoperative hyperglycemia occurred in 15.6% of patients, only 54% of whom were known to have diabetes. Multivariate analysis showed that the risk of leak for patients with and without diabetes increased only by preoperative steroid use (P<.05). Mortality among diabetic patients was associated with emergency surgery (P<.01) and anastomotic leak (P<.05); it was not associated with hyperglycemia. Mortality among nondiabetic patients was associated with hyperglycemia (P<.005). The presence of an anastomotic leak was associated with increased mortality among diabetic patients (26.3% vs 4.5%; P<.001) compared with nondiabetic patients (6.0% vs 2.5%; P<.05). CONCLUSIONS The presence of diabetes did not have an effect on the presence of an anastomotic leak, but diabetic patients who had a leak had more than a 4-fold higher mortality compared with nondiabetic patients. Preoperative steroid use led to increased rates of anastomotic leak in diabetic patients. Mortality was associated with hyperglycemia for nondiabetic patients only. Improved screening may identify high-risk patients who would benefit from perioperative intervention." }, { "instance_id": "R32940xR32918", "comparison_id": "R32940", "paper_id": "R32918", "text": "Risk Factors for Anastomotic Leakage after Surgery for Colorectal Cancer: Results of Prospective Surveillance BACKGROUND Anastomotic leakage in operations for colorectal cancer not only results in morbidity and mortality, but also increases the risk of local recurrence and worsens prognosis. So a better understanding of risk factors for developing anastomotic leakage in colorectal cancer surgery is important to surgeons. The aim of this study was to determine the incidence and risk factors for clinical anastomotic leakage after elective surgery for colorectal cancer. STUDY DESIGN We conducted prospective surveillance of all elective colorectal resections performed by a single surgeon in a single university hospital from November 2000 to July 2004. 
The outcome of interest was clinical anastomotic leakage. Eighteen independent clinical variables were examined by univariate and multivariate analyses. RESULTS A total of 391 patients undergoing elective operations for colorectal cancer were admitted to the program. Clinical anastomotic leakage was identified in 11 (2.8%) patients. Univariate and multivariate analyses showed that preoperative steroid use (odds ratio=8.7), longer duration of operation (odds ratio=9.9), and wound contamination (odds ratio=7.8) were independently predictive of clinical anastomotic leakage. Although there were no statistical differences in leakage rates between patients with and without a covering stoma, all four patients requiring reoperation for leakage were without a covering stoma. CONCLUSIONS Preoperative steroid use, longer duration of operation, and contamination of the operative field were independent risk factors for developing clinical anastomotic leakage after elective resection for colorectal cancer. Surgeons should be aware of such high-risk patients, which would help them to decide whether to create a diversion stoma during surgery." }, { "instance_id": "R32940xR32916", "comparison_id": "R32940", "paper_id": "R32916", "text": "Effect of systemic corticosteroids on elective left-sided colorectal resection with colorectal anastomosis BACKGROUND The impact of systemic steroid therapy on surgical outcome after elective left-sided colorectal resection with rectal anastomosis is not well known. METHODS We compared 606 consecutive patients, including 53 patients who were on steroids and underwent surgery between 1995 and 2005. RESULTS Postoperative mortality and anastomotic leakage rates were equivalent. The rate of postoperative complications, especially infections, was higher in steroid-treated patients than in non-steroid-treated patients: 38% (20 of 53 patients) versus 25% (139 of 553 patients), respectively (P = .046). 
In the steroid group, univariate analysis revealed 3 significant risk factors for postoperative complications: blood transfusion, preoperative anticoagulation, and chronic respiratory failure. In a multivariate analysis, blood transfusion and chronic respiratory failure remained independent factors for postoperative complications. CONCLUSION Patients on steroids have a higher incidence of postoperative complications after elective left-sided colorectal resection with rectal anastomosis." }, { "instance_id": "R32940xR32932", "comparison_id": "R32940", "paper_id": "R32932", "text": "Multivariate analysis suggests improved perioperative outcome in Crohn's disease patients receiving immunomodulator therapy after segmental resection and/or strictureplasty BACKGROUND Medical management of moderate to severe Crohn's disease (CD) using immunomodulator agents has not eliminated surgical treatment of disease complications. The effect of improved medical treatment on perioperative CD surgical outcome is not known. We analyzed the impact of immunomodulator therapy on the rate of intraabdominal septic complications (IASC) in CD patients undergoing bowel reanastomosis or strictureplasty. METHODS Surgical outcome was reviewed in 100 consecutive CD patients who underwent segmental resection with primary anastomosis or strictureplasty between 1998 and 2002. Multivariate analysis was performed to determine the effect of immunomodulator therapy on rate of IASC (intraabdominal abscess, anastomotic leak, or enterocutaneous fistulae). Immunomodulator agents included azathioprine, 6-MP, methotrexate, and infliximab. RESULTS IASC developed in 11 of 100 (11%) operations. Immunomodulator use was associated with fewer IASC (4/72 procedures; 5.6%), compared with 7/28 (25%) cases with patients not on therapy (P<.01). IASC were not influenced by steroid use, smoking status, preoperative abscess, or fistula or albumin levels. 
Immunomodulator use did not affect the length of resection or the rate and number of strictureplasties. CONCLUSION Medical management with immunomodulator therapy is safe and significantly decreases postoperative IASC in CD patients undergoing surgical procedures requiring bowel anastomosis or strictureplasty." }, { "instance_id": "R32940xR32922", "comparison_id": "R32940", "paper_id": "R32922", "text": "Effect of Prednisolone on Local and Systemic Response in Laparoscopic vs. Open Colon Surgery PURPOSE: This study was designed to assess whether preoperative, short-term, intravenously administered high doses of methylprednisolone (30 mg/kg 90 minutes before surgery) influence local and systemic biohumoral responses in patients undergoing laparoscopic or open resection of colon cancer. METHODS: Fifty-two patients who were candidates for curative colon resection were randomly assigned to laparoscopic or open surgery and, in a double-blind design, assigned to receive methylprednisolone (n = 26) or placebo (n = 26). Pulmonary function, postoperative pain, C-reactive protein, interleukins 6 and 8, and tumor necrosis factor \u03b1 were analyzed, as was patient outcome. RESULTS: The steroid and placebo groups were well balanced for preoperative variables, as were the subgroups of patients who underwent laparoscopic (methylprednisolone, n = 13; placebo, n = 13) and open surgery (methylprednisolone, n = 13; placebo, n = 13). No adverse events related to steroid administration occurred. In the methylprednisolone groups, significant improvement in pulmonary performance (P = 0.01), pain control (P = 0.001), and length of stay (P = 0.03) were observed independent of the surgical technique. No differences in morbidity or anastomotic leak rate were observed among groups. 
CONCLUSION: Preoperative administration of methylprednisolone in colon cancer patients may improve pulmonary performance and postoperative pain, and shorten length of stay regardless of the surgical technique used (laparoscopy, open colon resection)." }, { "instance_id": "R33008xR32990", "comparison_id": "R33008", "paper_id": "R32990", "text": "Chromosomal abnormalities in Philadelphia chromosome negative metaphases appearing during imatinib mesylate therapy in patients with newly diagnosed chronic myeloid leukemia in chronic phase Abstract The development of chromosomal abnormalities (CAs) in the Philadelphia chromosome (Ph)\u2013negative metaphases during imatinib (IM) therapy in patients with newly diagnosed chronic myecloid leukemia (CML) has been reported only anecdotally. We assessed the frequency and significance of this phenomenon among 258 patients with newly diagnosed CML in chronic phase receiving IM. After a median follow-up of 37 months, 21 (9%) patients developed 23 CAs in Ph-negative cells; excluding \u2212Y, this incidence was 5%. Sixteen (70%) of all CAs were observed in 2 or more metaphases. The median time from start of IM to the appearance of CAs was 18 months. The most common CAs were \u2212Y and + 8 in 9 and 3 patients, respectively. CAs were less frequent in young patients (P = .02) and those treated with high-dose IM (P = .03). In all but 3 patients, CAs were transient and disappeared after a median of 5 months. One patient developed acute myeloid leukemia (associated with \u2212 7). At last follow-up, 3 patients died from transplantation-related complications, myocardial infarction, and progressive disease and 2 lost cytogenetic response. CAs occur in Ph-negative cells in a small percentage of patients with newly diagnosed CML treated with IM. In rare instances, these could reflect the emergence of a new malignant clone." 
}, { "instance_id": "R33008xR32984", "comparison_id": "R33008", "paper_id": "R32984", "text": "The importance of diagnostic cytogenetics on outcome in AML: analysis of 1,612 patients entered into the MRC AML 10 trial Abstract Cytogenetics is considered one of the most valuable prognostic determinants in acute myeloid leukemia (AML). However, many studies on which this assertion is based were limited by relatively small sample sizes or varying treatment approach, leading to conflicting data regarding the prognostic implications of specific cytogenetic abnormalities. The Medical Research Council (MRC) AML 10 trial, which included children and adults up to 55 years of age, not only affords the opportunity to determine the independent prognostic significance of pretreatment cytogenetics in the context of large patient groups receiving comparable therapy, but also to address their impact on the outcome of subsequent transplantation procedures performed in first complete remission (CR). On the basis of response to induction treatment, relapse risk, and overall survival, three prognostic groups could be defined by cytogenetic abnormalities detected at presentation in comparison with the outcome of patients with normal karyotype. AML associated with t(8;21), t(15;17) or inv(16) predicted a relatively favorable outcome. Whereas in patients lacking these favorable changes, the presence of a complex karyotype, \u22125, del(5q), \u22127, or abnormalities of 3q defined a group with relatively poor prognosis. The remaining group of patients including those with 11q23 abnormalities, +8, +21, +22, del(9q), del(7q) or other miscellaneous structural or numerical defects not encompassed by the favorable or adverse risk groups were found to have an intermediate prognosis. The presence of additional cytogenetic abnormalities did not modify the outcome of patients with favorable cytogenetics. 
Subgroup analysis demonstrated that the three cytogenetically defined prognostic groups retained their predictive value in the context of secondary as well as de novo AML, within the pediatric age group and furthermore were found to be a key determinant of outcome from autologous or allogeneic bone marrow transplantation (BMT) in first CR. This study highlights the importance of diagnostic cytogenetics as an independent prognostic factor in AML, providing the framework for a stratified treatment approach of this disease, which has been adopted in the current MRC AML 12 trial." }, { "instance_id": "R33008xR32992", "comparison_id": "R33008", "paper_id": "R32992", "text": "Cytogenetic abnormalities in adult acute lymphoblastic leukemia: correlations with hematologic findings and outcome A Collaborative Study of the Groupe Francais de Cytogenetique Hematologique Cytogenetic analyses performed at diagnosis on 443 adult patients with acute lymphoblastic leukemia (ALL) were reviewed by the Groupe Fran\u00e7ais de Cytog\u00e9n\u00e9tique H\u00e9matologique, correlated with hematologic data, and compared with findings for childhood ALL. This study showed that the same recurrent abnormalities as those reported in childhood ALL are found in adults, and it determined their frequencies and distribution according to age. Hyperdiploidy greater than 50 chromosomes with a standard pattern of chromosome gains had a lower frequency (7%) than in children, and was associated with the Philadelphia chromosome (Ph) in 11 of 30 cases. Tetraploidy (2%) and triploidy (3%) were more frequent than that in childhood ALL. Hypodiploidy 30-39 chromosomes (2%), characterized by a specific pattern of chromosome losses, might be related to the triploid group that evoked a duplication of the 30-39 hypodiploidy. Both groups shared similar hematologic features. Ph+ ALL (29%) peaked in the 40- to 50-year-old age range (49%) and showed a high frequency of myeloid antigens (24%). 
ALL with t(1;19) (3%) occurred in young adults (median age, 22 years). In T-cell ALL (T-ALL), frequencies of 14q11 breakpoints (26%) and of t(10;14)(q24;q11) (14%) were higher than those in childhood ALL. New recurrent changes were identified, ie, monosomies 7 present in Ph-ALL (17%) and also in other ALL (8%) and two new recurrent translocations, t(1;11)(p34;pll) in T-ALL and t(1;7)(q11-21;q35-36) in Ph+ ALL. The ploidy groups with a favorable prognostic impact were hyperdiploidy greater than 50 without Ph chromosome (median event-free survival [EFS], 46 months) and tetraploidy (median EFS, 46 months). The recurrent abnormalities associated with better response to therapy were also significantly correlated to T-cell lineage. Among them, t(10;14)(q24;q11) (median EFS, 46 months) conferred the best prognostic impact (3-year EFS, 75%). Hypodiploidy 30-39 chromosomes and the related triploidy were associated with poor outcome. All Ph-ALL had short EFS (median EFS, 5 months), and no additional change affected this prognostic impact. Most patients with t(1;19) failed therapy within 1 year. Patients with 11q23 changes not because of t(4;11) had a poor outcome, although they did not present the high-risk factors found in t(4;11)." }, { "instance_id": "R33008xR32972", "comparison_id": "R33008", "paper_id": "R32972", "text": "Cytogenetic abnormalities in essential thrombocythemia: prevalence and prognostic significance Objectives: In the current study we describe cytogenetic findings as well as clinical correlates and long\u2010term prognostic relevance of abnormal cytogenetics at the time of diagnosis of essential thrombocythemia (ET)." 
}, { "instance_id": "R33008xR32962", "comparison_id": "R33008", "paper_id": "R32962", "text": "Cytogenetic abnormalities and their prognostic significance in idiopathic myelo- fibrosis: a study of 106 cases The prognostic significance of cytogenetic abnormalities was determined in 106 patients with well\u2010characterized idiopathic myelofibrosis who were successfully karyotyped at diagnosis. 35% of the cases exhibited a clonal abnormality (37/106), whereas 65% (69/106) had a normal karyotype. Three characteristic defects, namely del(13q) (nine cases), del(20q) (eight cases) and partial trisomy 1q (seven cases), were present in 64.8% (24/37) of patients with clonal abnormalities. Kaplan\u2010Meier plots and log rank analysis demonstrated an abnormal karyotype to be an adverse prognostic variable (P < 0.001). Of the eight additional clinical and haematological parameters recorded at diagnosis, age (P < 0.01), anaemia (haemoglobin \u226410 g/dl; P < 0.001), platelet (\u2264100 \u00d7 109/l, P < 0.0001) and leucocyte count (>10.3 \u00d7 109/l; P = 0.06) were also associated with a shorter survival. In contrast, sex, spleen and liver size, and percentage blast cells were not found to be significant. Multivariate analysis, using Cox's regression, revealed karyotype, haemoglobin concentration, platelet and leucocyte counts to retain their unfavourable prognostic significance. A simple and useful schema for predicting survival in idiopathic myelofibrosis has been produced by combining age, haemoglobin concentration and karyotype with median survival times varying from 180 months (good\u2010risk group) to 16 months (poor\u2010risk group)." 
}, { "instance_id": "R33008xR32977", "comparison_id": "R33008", "paper_id": "R32977", "text": "Karyotypic abnorma- lities in myelofibrosis following polycythemia vera Polycythemia vera (PV) is a chronic myeloproliferative disease characterized by an increase of total red cell volume; in 10% to 15% of cases, bone marrow fibrosis complicates the course of the disease after several years, resulting in a hematologic picture mimicking myelofibrosis with myelocytic metaplasia (MMM). This condition is known as post polycythemic myelofibrosis (PPMF). Among 30 patients with PPMF followed in Northern France, 27 (90%) expressed one or two abnormal clones in myelocytic cell cultures. Of these, 19 (70%) had partial or complete trisomy 1q. This common anomaly either resulted from unbalanced translocations with acrocentric chromosomes, that is, 13, 14, and 15, or other chromosomes, that is, 1, 6, 7, 9, 16, 19, and Y, or from partial or total duplication of long arm of chromosome 1. A single patient had an isochromosome 1q leading to tetrasomy 1q. In all cases, a common trisomic region spanning 1q21 to 1q32 has been identified. Given that most patients had previously received chemotherapy or radio-phosphorus to control the polycythemic phase of their disease, this study illustrates the increased frequency of cytogenetic abnormalities after such treatments: 90% versus 50% in de novo MMM. Moreover, karyotype can be used to distinguish PPMF-where trisomy 1q is the main anomaly-from primary MMM where trisomy 1q is rare and deletions 13q or 20q are far more common. Whether trisomy 1q is or is not a secondary event remains a matter of debate, as well as the role of cytotoxic treatments." }, { "instance_id": "R33008xR32974", "comparison_id": "R33008", "paper_id": "R32974", "text": "Cytogenetic abnormalities in essential thrombocythemia at presentation and transformation Cytogenetic abnormalities in patients with essential thrombocythemia (ET) are infrequent. 
Their role in survival of patients and disease transformation is not extensively studied. We describe cytogenetic abnormalities in 172 patients with ET at a single institution. At presentation nine (5.2%) patients had cytogenetic abnormality and three (1.7%) additional patients acquired them during follow-up. Survival of patients with cytogenetic changes at presentation did not differ when compared to the patients with normal karyotype. The more common were abnormalities of chromosome 9 (n = 4), 20 (n = 2), 5 (n = 2), and complex abnormalities (n = 2). Forty-one patients (23.8%) had additional cytogenetic tests performed for monitoring purposes during follow-up. Five patients (2.9%) with normal karyotype transformed to myelofibrosis (MF) without developing new cytogenetic changes at transformation. Two patients (1.2%) with normal karyotypes at presentation transformed to myelodysplastic syndrome and acute myeloid leukemia, respectively. Both acquired complex cytogenetic changes at the time of transformation. There is no rationale for repeating cytogenetic tests in ET patients on follow up, unless blood cell count changes suggest possible transformation." }, { "instance_id": "R33008xR33001", "comparison_id": "R33008", "paper_id": "R33001", "text": "Prognostic and biologic significance of chromosomal imbalances assessed by comparative genomic hybridization in multiple myeloma Abstract Cytogenetic abnormalities, evaluated either by karyotype or by fluorescence in situ hybridization (FISH), are considered the most important prognostic factor in multiple myeloma (MM). However, there is no information about the prognostic impact of genomic changes detected by comparative genomic hybridization (CGH). We have analyzed the frequency and prognostic impact of genetic changes as detected by CGH and evaluated the relationship between these chromosomal imbalances and IGH translocation, analyzed by FISH, in 74 patients with newly diagnosed MM. 
Genomic changes were identified in 51 (69%) of the 74 MM patients. The most recurrent abnormalities among the cases with genomic changes were gains on chromosome regions 1q (45%), 5q (24%), 9q (24%), 11q (22%), 15q (22%), 3q (16%), and 7q (14%), while losses mainly involved chromosomes 13 (39%), 16q (18%), 6q (10%), and 8p (10%). Remarkably, the 6 patients with gains on 11q had IGH translocations. Multivariate analysis selected chromosomal losses, 11q gains, age, and type of treatment (conventional chemotherapy vs autologous transplantation) as independent parameters for predicting survival. Genomic losses retained the prognostic value irrespective of treatment approach. According to these results, losses of chromosomal material evaluated by CGH represent a powerful prognostic factor in MM patients. (Blood. 2004;104:2661-2666)" }, { "instance_id": "R33008xR33003", "comparison_id": "R33008", "paper_id": "R33003", "text": "Abnormalities of chromosome 1p/q are highly associated with chromosome 13/13q deletions and are an adverse prognostic factor for the outcome of high-dose chemotherapy in patients with multiple myeloma The prognostic value of chromosomal abnormalities was studied in untreated multiple myeloma patients who were registered into a prospective randomised multicentre phase 3 study for intensified treatment (HOVON24). A total of 453 patients aged less than 66 years with stage II and III A/B disease were registered in the clinical study. Cytogenetic analysis was introduced as a standard diagnostic assay in 1998. It was performed at diagnosis in 160 patients and was successful in 137/160 patients (86%). An abnormal karyotype was observed in 53/137 (39%) of the patients. Abnormalities of chromosome 1p and 1q were found in 19 (36% of patients with an abnormal karyotype) and 21 patients (40%). There was a strong association between chromosome 1p and/or 1q abnormalities and deletion of chromosome 13 or 13q (n = 27, P < 0\u00b7001). 
Patients with karyotypic abnormalities had a significantly shorter overall survival (OS) than patients with normal karyotypes. Complex abnormalities, hypodiploidy, chromosome 1p abnormalities, chromosome 1q abnormalities, and chromosome 13 abnormalities were associated with inferior OS on univariate analysis, as well as after adjustment for other prognostic factors. In conclusion, chromosome 13 abnormalities and chromosome 1p and/or 1q abnormalities were highly associated, and are risk factors for poor outcome after intensive therapy in multiple myeloma." }, { "instance_id": "R33008xR32979", "comparison_id": "R33008", "paper_id": "R32979", "text": "Cytogenetic findings and their clinical relevance in myelofibrosis with myeloid metaplasia The prognostic significance of bone marrow cytogenetic lesions in myelofibrosis with myeloid metaplasia (MMM) was investigated in a retrospective series of 165 patients. An abnormal karyotype was demonstrated in 57% of patients. At diagnosis (n = 92), 48% of the patients had detectable cytogenetic abnormalities, and clonal evolution was frequently demonstrated in sequential studies. More than 90% of the anomalies were represented by 20q\u2013, 13q\u2013, +8, +9, 12p\u2013, and abnormalities of chromosomes 1 and 7. Of these, 20q\u2013, 13q\u2013 and +8 were the most frequent sole abnormalities, each occurring in 15\u201325% of the abnormal cases. Trisomy 9 and abnormalities of chromosomes 1 and 7 were equally prevalent but were usually associated with additional cytogenetic lesions. Chromosome 5 abnormalities were infrequent but were over\u2010represented in the group of patients exposed to genotoxic therapy. In a multivariate analysis that incorporated other clinical and laboratory variables, the presence of an abnormal karyotype did not carry an adverse prognosis. Instead, +8, 12p\u2013, advanced age and anaemia were independent prognostic determinants of inferior survival. 
In particular, survival was not adversely affected by the presence of either 20q\u2013 or 13q\u2013." }, { "instance_id": "R33008xR32995", "comparison_id": "R33008", "paper_id": "R32995", "text": "Chromosomal abnormalities in untreated patients with non-Hodgkin\u2019s lymphoma: associations with histology, clinical characteristics, and treatment outcome. The Nebraska Lymphoma Study Group Abstract We describe the chromosomal abnormalities found in 104 previously untreated patients with non-Hodgkin's lymphoma (NHL) and the correlations of these abnormalities with disease characteristics. The cytogenetic method used was a 24- to 48-hour culture, followed by G- banding. Several significant associations were discovered. A trisomy 3 was correlated with high-grade NHL. In the patients with an immunoblastic NHL, an abnormal chromosome no. 3 or 6 was found significantly more frequently. As previously described, a t(14;18) was significantly correlated with a follicular growth pattern. Abnormalities on chromosome no. 17 were correlated with a diffuse histology and a shorter survival. A shorter survival was also correlated with a +5, +6, +18, all abnormalities on chromosome no. 5, or involvement of breakpoint 14q11\u201312. In a multivariate analysis, these chromosomal abnormalities appeared to be independent prognostic factors and correlated with survival more strongly than any traditional prognostic variable. Patients with a t(11;14)(q13;q32) had an elevated lactate dehydrogenase (LDH). Skin infiltration was correlated with abnormalities on 2p. Abnormalities involving breakpoints 6q11\u201316 were correlated with B symptoms. Patients with abnormalities involving breakpoints 3q21\u201325 and 13q21\u201324 had more frequent bulky disease. The correlations of certain clinical findings with specific chromosomal abnormalities might help unveil the pathogenetic mechanisms of NHL and tailor treatment regimens." 
}, { "instance_id": "R33091xR32967", "comparison_id": "R33091", "paper_id": "R32967", "text": "Exploring polycythaemia vera with fluorescence in situ hybridization: additional cryptic 9p is the most frequent abnormality detected Summary. Between 1986 and 2001, 220 patients with polycythaemia vera (PV) were studied using conventional cytogenetics. Of 204 evaluable patients, 52 (25\u00b74%) had clonal abnormalities. The recurrent chromosomal rearrangements were those of chromosome 9 (21\u00b71%), del(20q) (19\u00b72%), trisomy 8 (19\u00b72%), rearrangements of 13q (13\u00b74%), abnormalities of 1q (11\u00b75%), and of chromosomes 5 and 7 (9\u00b76%). Subsequent analysis of 32 patients, performed at follow\u2010up of up to 14\u00b78 years, revealed new clonal abnormalities in five patients and the disappearance of an abnormal clone in four. Eleven patients remained normal up to 11\u00b75 years and seven patients maintained an abnormality for over 10 years. Fifty\u2010three patients were studied retrospectively using interphase fluorescence in situ hybridization (I\u2010FISH), utilizing probes for centromere enumeration of chromosomes 8 and 9, and for 13q14 and 20q12 loci. Conventional cytogenetics demonstrated clonal chromosome abnormalities in 23% of these 53 patients. The addition of I\u2010FISH increased the detection of abnormalities to 29% and permitted clarification of chromosome 9 rearrangements in an additional 5\u00b76% of patients. FISH uncovered rearrangements of chromosome 9 in 53% of patients with an abnormal FISH pattern, which represented the most frequent genomic alteration in this series." 
}, { "instance_id": "R33091xR33072", "comparison_id": "R33091", "paper_id": "R33072", "text": "Compari- son of peripheral blood interphase cytogenetics with bone marrow karyotype analysis in myelofibrosis with myeloid metaplasia In a prospective study of 42 patients with myelofibrosis with myeloid metaplasia (MMM), peripheral blood (PB) and bone marrow (BM) interphase cytogenetics and PB CD34 enumeration were performed concomitantly with BM karyotype analysis. Interphase cytogenetics was performed with a panel of fluorescence in situ hybridization (FISH) probes that were capable of detecting most of the known recurrent cytogenetic lesions in MMM. There was a close concordance in the results of interphase cytogenetics between PB and BM, regardless of the PB CD34 count. In general, FISH\u2010detectable abnormalities were also detected by BM karyotype. Although complementary, interphase cytogenetics may not always provide the necessary karyotypic information in MMM." }, { "instance_id": "R33091xR33033", "comparison_id": "R33091", "paper_id": "R33033", "text": "Cytogenetic studies and their prognostic significance in agnogenic myeloid metaplasia: a report on 47 cases Abstract Cytogenetic analysis was performed in 47 newly diagnosed patients with agnogenic myeloid metaplasia (AMM); 32 had a normal karyotype (68%, group I), whereas 15 had clonal abnormalities (32%, group II). The most frequent abnormal findings were a 20q- deletion in six cases (either alone or within complex anomalies), interstitial 13q- deletion in three cases (and monosomy 13 in one case), and acquired trisomy 21 or 21p+ in three cases. Four cases exhibited complex aberrations involving several chromosomes, sometimes with a mosaicism. In two patients with an initial abnormal karyotype, further cytogenetic analysis during the disease course showed the appearance of additional clonal anomalies, and particularly of a probable Philadelphia (Ph1) variant in one case. Treatment was essentially supportive. 
Survival was significantly shorter in group II (median, 30 months) compared with group I (median, not reached at 6 years; P = .015). In univariate analysis, other parameters significantly associated with a poor prognosis (P less than .05) were higher age, anemia, and increased percentage of circulating blasts. However, in a multivariate analysis, only cytogenetic abnormalities and age retained their independent prognostic value." }, { "instance_id": "R33091xR33082", "comparison_id": "R33091", "paper_id": "R33082", "text": "Der(6)t(1;6)(q21-23;p21.3): a specific cytogenetic abnormality in myelofibrosis with myeloid metaplasia Chromosome anomalies are detected in approximately half of patients with myelofibrosis with myeloid metaplasia (MMM) although none of the most prevalent lesions are specific to the disease. In a prospective cytogenetic study of 81 patients with MMM, we encountered three with an unbalanced translocation between chromosomes 1 and 6 with specific breakpoints; der(6)t(1;6)(q21\u201323;p21.3). A subsequent Mayo Clinic cytogenetic database search identified 12 patients with this chromosome anomaly among 17 791 consecutive patients. A similar database search from Royal Hallamshire Hospital in Sheffield, UK revealed two additional patients among 8000 cases. The clinical phenotype and survival for each of these 14 patients was typical of MMM. These findings suggested that der(6)t(1;6)(q21\u201323;p21.3) is a highly specific cytogenetic anomaly that may harbour gene(s) specifically associated with MMM. In a preliminary fluorescence in situ hybridization study, the breakpoints on chromosome 6 in two additional cases were found to be telomeric to the gene for 51 kDa FK506\u2010binding protein (FKBP51)." 
}, { "instance_id": "R33091xR33024", "comparison_id": "R33091", "paper_id": "R33024", "text": "Trisomy 1q in polycythe- mia vera and its relation to disease transition Clinical and cytogenetic details of 12 patients with polycythemia vera and complete or partial trisomy of the long arm of chromosome 1 are reported. All patients had trisomy for at least the segments 1q22 to 1qter. The 1q or material from 1q was translocated to another chromosome in eight patients. This was chromosome 9 in four patients, and those cases all had trisomy also for 9p. The trisomy 1q was found at the time of diagnosis in three patients, later during the polycythemic phase in five, and in four patients when they were first examined during a late stage of the disease. Acute leukemia or a myelodysplastic syndrome developed in eight of the 12 patients. Signs of advanced disease, eg, myeloid metaplasia or myelofibrosis, preceded the leukemia in four cases and was noted in one more patient." }, { "instance_id": "R33091xR33039", "comparison_id": "R33091", "paper_id": "R33039", "text": "Karyotypic and ras gene mutational analysis in idiopathic myelofibrosis Summary. Karyotypic analysis was performed in a total of 69 patients with well\u2010characterized idiopathic myelofibrosis. Karyotypic abnormalities were detected in 46% of cases examined during the chronic phase (29/63); with three abnormalities, del (13q), del(20q) and partial trisomy 1q, accounting for 75% of all abnormalities at diagnosis. The absence of del(5q), trisomy 8 and 21, as well as the rarity of monosomy 7, contrasts with pooled published data and may reflect our exclusion of closely related disorders, in particular MDS with fibrosis. Chromosomal aberrations increased to approximately 90% (8/9) in patients analysed during acute transformation. 
Mutational activation of codons 12, 13 and 61 of N\u2010, Ha\u2010 and Ki\u2010ras genes was assessed by polymerase chain reaction and hybridization with synthetic non\u2010radioactive digoxigenin\u2010labelled probes. Three mutations were detected in samples of peripheral blood DNA taken from 50 patients during the chronic phase of their disease: one N12 Asp (GGT GAT) and two N12 Ser (GGT AGT) mutations. The results from this study indicate that karyotypic abnormalities are present in at least 29% of cases at diagnosis and that del(13q), del(20q) and partial trisomy 1q are the most frequent findings. Ras mutations were relatively infrequent (6%) and appeared restricted to the N\u2010ras gene. Karyotypic analysis at diagnosis was found to be of prognostic significance." }, { "instance_id": "R33091xR33031", "comparison_id": "R33091", "paper_id": "R33031", "text": "A prospective long-term cytogenetic study in polycythemia vera in relation to treatment and clinical course Abstract This paper reports the results of cytogenetic studies in a consecutive series of 64 patients with polycythemia vera, 57 of whom could be followed prospectively. The median length of the cytogenetic observation time was 93 months (range, 24 to 224 months) after diagnosis. Clonal chromosome abnormalities were observed initially in 11 patients (17%) and later during the course of the disease in another 20 patients. An abnormal karyotype was found in 71% to 80% of the patients who were examined after the development of myeloid metaplasia, myelofibrosis, or leukemia. Patients treated with myelosuppressive agents showed a significantly greater risk of chromosome abnormalities developing than did patients who had been phlebotomized. Acute leukemia developed in eight patients, all of whom had been treated with myelosuppressive agents. A chromosome abnormality preceded the leukemia in only two of the patients. 
The initial presence of an abnormal karyotype did not predict a greater risk of development of leukemia. No consistent relationship was demonstrated between the occurrence of chromosome abnormalities and the development of myeloid metaplasia and/or myelofibrosis, which was observed in 42% of the patients. The chromosome abnormalities followed a nonrandom pattern, and those most frequently observed were trisomies for 1q, 8, 9, or 9p and deletion of 20q. Deletions seem to be common and were found in 14 patients." }, { "instance_id": "R33091xR33088", "comparison_id": "R33091", "paper_id": "R33088", "text": "The role of cytogenetic abnormalities as a prognostic marker in primary myelofibrosis: applicability at the time of diagnosis and later during disease course Abstract Although cytogenetic abnormalities are important prognostic factors in myeloid malignancies, they are not included in current prognostic scores for primary myelofibrosis (PMF). To determine their relevance in PMF, we retrospectively examined the impact of cytogenetic abnormalities and karyotypic evolution on the outcome of 256 patients. Baseline cytogenetic status impacted significantly on survival: patients with favorable abnormalities (sole deletions in 13q or 20q, or trisomy 9 \u00b1 one other abnormality) had survivals similar to those with normal diploid karyotypes (median, 63 and 46 months, respectively), whereas patients with unfavorable abnormalities (rearrangement of chromosome 5 or 7, or \u2265 3 abnormalities) had a poor median survival of 15 months. Patients with abnormalities of chromosome 17 had a median survival of only 5 months. A model containing karyotypic abnormalities, hemoglobin, platelet count, and performance status effectively risk-stratified patients at initial evaluation. 
Among 73 patients assessable for clonal evolution during stable chronic phase, those who developed unfavorable or chromosome 17 abnormalities had median survivals of 18 and 9 months, respectively, suggesting the potential role of cytogenetics as a risk factor applicable at any time in the disease course. Dynamic prognostic significance of cytogenetic abnormalities in PMF should be further prospectively evaluated." }, { "instance_id": "R33091xR33022", "comparison_id": "R33091", "paper_id": "R33022", "text": "Translocation 1;7 in hematologic disorders: a brief review of 22 cases A translocation t(1;7)(p11;p11), previously reported in patients with myelodysplasia or leukemia has been found in seven new cases. The present report briefly reviews the cytogenetic and clinical features of 22 patients with this translocation. The majority of these patients had a history of occupational or therapeutic exposure to toxic substances or radiation. Trisomy 8 or 21 were the most common additional abnormalities, especially in leukemic patients. The t(1;7) should be added to the group of specific cytogenetic abnormalities observed frequently in secondary myelodysplasia and leukemia." }, { "instance_id": "R33091xR33009", "comparison_id": "R33091", "paper_id": "R33009", "text": "Partial trisomy of the long arm of chromosome 1 in myelofibrosis and polycythemia vera We have identified partial trisomy 1q in 2 patients with different hematologic disorders. The first patient was a 55\u2010year\u2010old female with myelosclerosis and myeloid metaplasia diagnosed at age 38 years presenting with anemia, fatigue, bruising, fever, and splenomegaly. At age 56, she had 50\u201395% myeloblast cells and 95\u2013100 nucleated RBC precursors per 100 WBC. Chromosome analysis of unstimulated leukocytes with Q, G, and C banding showed 46,XX,\u20106,+t(1;6) (q25;p22) in all metaphase cells. In vitro incorporation of Fe55 was demonstrated in 90% of metaphases by autoradiography. 
The second patient, a 49\u2010year\u2010old male, was diagnosed as having polycythemia vera at age 30 during a regular checkup. He since developed hepatosplenomegaly. Chromosome analysis from a direct bone marrow preparation at age 44 and 45 showed grossly normal karyotypes. At age 49, his marrow by Q and G banding showed almost 100% of cells with 46,XY,\u201313,+t(1;13) (q12;p12). Eleven cases of trisomy of 1q have been reported in various hematologic disorders. It is apparent that partial trisomy 1q represents another nonrandom chromosomal abnormality, in addition to the most common nonrandom chromosomal aberrations, such as the Philadelphia chromosome, trisomy 8, trisomy 9, and monosomy 7 in hematologic disorders." }, { "instance_id": "R33091xR33067", "comparison_id": "R33091", "paper_id": "R33067", "text": "Acute myeloid leukemia (AML) having evolved from essential thrombocythemia (ET): distinctive chromosome abnormalities in patients treated with pipobroman or hydroxyurea ET is a chronic myeloproliferative disorder rarely evolving into AML, sometimes preceded by a myelodysplastic syndrome (MDS). Such transformations mostly occur in patients treated with radiophosphorus (32P) or alkylating agents, especially busulfan. Recently, concern has also arisen about the long-term safety of hydroxyurea (HU). Pipobroman (PI), a well tolerated and simple to use drug, constitutes a valid alternative to those cytoreductive treatments. The present study reports on 155 ET patients treated at our institution from 1985 to 1995, and monitored until December 2000. A good control of thrombocytosis was achieved with PI as the only treatment in 106 patients and with HU in 23 patients. Twenty-six patients received no treatment. After a median follow-up of 104 months, seven patients (four treated with HU, and three with PI) developed AML whereas one patient treated with PI developed MDS. 
A significant difference in progression-free survival was observed between HU- and PI-treated patients (P = 0.004). A short-arm deletion of chromosome 17 was most frequently detected in HU-treated patients, while a long-arm trisomy of chromosome 1 and a monosomy 7q were seen in PI-treated patients. No TP53 mutation was discovered in the six patients studied (two HU-treated and four PI-treated). We conclude that these cytogenetic abnormalities are not linked to the natural history of the disease, but rather that they might be induced by the cytoreductive treatment." }, { "instance_id": "R33091xR33035", "comparison_id": "R33091", "paper_id": "R33035", "text": "Cytogenetic and molecular studies in primary myelofibrosis Cytogenetic and molecular data of three patients affected by primary myelofibrosis with myeloid metaplasia (PMMM) evolving to blastic crisis are reported. The cytogenetic findings were uncommon. The first patient (female) showed an idic(X)(q13) as the sole alteration in chronic phase, with an additional r(7) in 67% of the cells of the blast crisis; the other two patients showed, in blast crisis, a partial trisomy of the long arm of chromosome 1, without translocation, as a unique structural abnormality. These findings confirm the presence of nonrandom, although nonspecific, alterations in PMMM that, in our cases, seem to be related to the multistep progression of the neoplastic process. Molecular investigations have been applied to study the genomic organization and the level of expression of genes such as bcr and calcyclin and c-fms protooncogene possibly involved in the molecular mechanisms underlying cell proliferation in hematopoietic cells. The data obtained are discussed with respect to the myeloproliferative disorder." 
}, { "instance_id": "R33091xR33080", "comparison_id": "R33091", "paper_id": "R33080", "text": "Prognostic diversity among cytogenetic abnormalities in myelofibrosis with myeloid metaplasia Approximately 30\u201350% of patients with myelofibrosis with myeloid metaplasia (MMM) demonstrate detectable cytogenetic abnormalities, the prognostic value of which has not been completely defined by previous retrospective studies. The current prospective study addresses this issue in the context of currently accepted independent prognostic variables." }, { "instance_id": "R33091xR33020", "comparison_id": "R33091", "paper_id": "R33020", "text": "The pattern and clinical significance of karyotypic abnormalities in patients with idiopathic and postpolycythemic myelofibrosis Six of eight (75%) patients with postpolycythemic myelofibrosis (PPMF) and 11 of 20 (55%) patients with idiopathic myelofibrosis (MF), seen at the University of Chicago, had abnormal karyotypes in cells of bone marrow origin. The specific chromosomal findings and their clinical significance in these patients were analyzed. A review of the literature added the findings from abnormal karyotype studies in 10 patients with PPMF and 36 patients with MF to this series. The demonstration of an increased frequency of cytogenetic abnormalities after cytotoxic therapy in polycythemia vera (PV) implies that such therapy may have a role in the development of chromosomal changes seen in treated PV and PPMF. The cytogenetic abnormalities in MF appear to be unrelated to therapy except possibly for an association with partial or complete losses of chromosome 5 or 7. Trisomy 8 is the only finding that is more common in MF than in PPMF. Other abnormalities were more common in PPMF, particularly 20q\u2010, loss of 7 or 7q\u2010, and trisomy 9, and to a lesser extent trisomy 1q and 5q\u2010. 
Cytogenetic abnormalities do not show a pattern that can be used to distinguish between PPMF and MF, nor are they useful in the prognosis of MF or in initial studies in PPMF. PPMF does appear to have a higher tendency toward leukemic transformation than does MF, and an evolution in karyotype appears to have serious prognostic implications in PPMF in regard to this transition." }, { "instance_id": "R33091xR32962", "comparison_id": "R33091", "paper_id": "R32962", "text": "Cytogenetic abnormalities and their prognostic significance in idiopathic myelofibrosis: a study of 106 cases The prognostic significance of cytogenetic abnormalities was determined in 106 patients with well\u2010characterized idiopathic myelofibrosis who were successfully karyotyped at diagnosis. 35% of the cases exhibited a clonal abnormality (37/106), whereas 65% (69/106) had a normal karyotype. Three characteristic defects, namely del(13q) (nine cases), del(20q) (eight cases) and partial trisomy 1q (seven cases), were present in 64.8% (24/37) of patients with clonal abnormalities. Kaplan\u2010Meier plots and log rank analysis demonstrated an abnormal karyotype to be an adverse prognostic variable (P < 0.001). Of the eight additional clinical and haematological parameters recorded at diagnosis, age (P < 0.01), anaemia (haemoglobin \u226410 g/dl; P < 0.001), platelet (\u2264100 \u00d7 109/l, P < 0.0001) and leucocyte count (>10.3 \u00d7 109/l; P = 0.06) were also associated with a shorter survival. In contrast, sex, spleen and liver size, and percentage blast cells were not found to be significant. Multivariate analysis, using Cox's regression, revealed karyotype, haemoglobin concentration, platelet and leucocyte counts to retain their unfavourable prognostic significance. 
A simple and useful schema for predicting survival in idiopathic myelofibrosis has been produced by combining age, haemoglobin concentration and karyotype with median survival times varying from 180 months (good\u2010risk group) to 16 months (poor\u2010risk group)." }, { "instance_id": "R33091xR33074", "comparison_id": "R33091", "paper_id": "R33074", "text": "Frequency of structural abnormalities of the long arm of chromosome 12 in myelofibrosis with myeloid metaplasia Among cytogenetic studies of 205 patients diagnosed as myelofibrosis with myeloid metaplasia, we found seven cases with structural abnormalities of the long arm of chromosome 12. The karyotype showed six balanced translocations, that is, t(4;12)(q33;q21), t(5;12)(p14;q21), t(1;12)(q22;q24), t(12;17)(q24;q11), t(7;12) (p11;q24), and t(1;12)(p12;q24), as well as other cytogenetic abnormalities such as del(12)(q21;q24) and inv(12) (p12q24). Some isolated cases involving the 12q21 region have also been described in the literature. Importance of rearrangement of chromosome 12 in 12q21 or 12q24 is underlined by the authors suggesting a proto-oncogene accountable mechanism of leukemogenesis." }, { "instance_id": "R33091xR33015", "comparison_id": "R33091", "paper_id": "R33015", "text": "Abnormalities of chromosome no. 1 related to blood dyscrasias: study of 10 cases Partial excess of chromosome 1 (q25-q32) was noted in malignant cells from all of 10 patients who had disorders such as non-African Burkitt's lymphoma, adult T-cell leukemia, myelofibrosis, malignant lymphoma, chronic lymphocytic leukemia or chronic myelocytic leukemia in blast crisis. The break points on chromosome 1 were at centromere, q12, q21, q23, q25 and q32. Variations in the specific region of the long arm of chromosome 1, q25-q32, were thought to be important in the evolution of malignant cell proliferation." 
}, { "instance_id": "R33091xR32977", "comparison_id": "R33091", "paper_id": "R32977", "text": "Karyotypic abnormalities in myelofibrosis following polycythemia vera Polycythemia vera (PV) is a chronic myeloproliferative disease characterized by an increase of total red cell volume; in 10% to 15% of cases, bone marrow fibrosis complicates the course of the disease after several years, resulting in a hematologic picture mimicking myelofibrosis with myelocytic metaplasia (MMM). This condition is known as post polycythemic myelofibrosis (PPMF). Among 30 patients with PPMF followed in Northern France, 27 (90%) expressed one or two abnormal clones in myelocytic cell cultures. Of these, 19 (70%) had partial or complete trisomy 1q. This common anomaly either resulted from unbalanced translocations with acrocentric chromosomes, that is, 13, 14, and 15, or other chromosomes, that is, 1, 6, 7, 9, 16, 19, and Y, or from partial or total duplication of long arm of chromosome 1. A single patient had an isochromosome 1q leading to tetrasomy 1q. In all cases, a common trisomic region spanning 1q21 to 1q32 has been identified. Given that most patients had previously received chemotherapy or radio-phosphorus to control the polycythemic phase of their disease, this study illustrates the increased frequency of cytogenetic abnormalities after such treatments: 90% versus 50% in de novo MMM. Moreover, karyotype can be used to distinguish PPMF-where trisomy 1q is the main anomaly-from primary MMM where trisomy 1q is rare and deletions 13q or 20q are far more common. Whether trisomy 1q is or is not a secondary event remains a matter of debate, as well as the role of cytotoxic treatments." 
}, { "instance_id": "R33091xR33041", "comparison_id": "R33091", "paper_id": "R33041", "text": "Prognostic factors in agnogenic myeloid metaplasia: a report on 195 cases with a new scoring system We studied the survival of 195 patients with agnogenic myeloid metaplasia (AMM) diagnosed between 1962 and 1992 in an attempt to stratify patients into risk groups. Median survival was 42 months. Adverse prognostic factors for survival were age > 60 years, hepatomegaly, weight loss, low hemoglobin level (Hb), low or very high leukocyte count (WBC), high percentage of circulating blasts, male sex, and low platelet count. A new scoring system based on two adverse prognostic factors, namely Hb < 10 g/dL and WBC < 4 or > 30 x 10(3)/L, was able to separate patients in three groups with low (0 factor), intermediate (1 factor), and high (2 factors) risks, associated with a median survival of 93, 26, and 13 months, respectively. An abnormal karyotype (32 cases of 94 tested patients) was associated with a short survival, especially in the low-risk group (median survival of 50 v 112 months in patients with normal karyotype). The prognostic factors for acute conversion were WBC > 30 x 10(3)/L and abnormal karyotype. Thus, hemoglobin level and leukocyte count provide a simple prognostic model for survival in AMM, and the adverse prognostic value of abnormal karyotype may be related to a higher rate of acute conversion." }, { "instance_id": "R33091xR33084", "comparison_id": "R33091", "paper_id": "R33084", "text": "Leukemic recombinations involving heterochromatin in myeloproliferative disorders with t(1;9) The unbalanced t(1;9) is a rare, recurrent rearrangement in polycythemia vera (PV) resulting in trisomy of both 1q and 9p arms, whereas a balanced t(1;9)(q12;q12), to our knowledge, has never been reported before. We studied two patients with PV and one with idiopathic myelofibrosis bearing an unbalanced t(1;9) and one patient with essential thrombocythemia with a balanced t(1;9). 
In all cases fluorescence in situ hybridization showed that the breakpoints were located within the satellite II family of heterochromatin of chromosome 1 and the satellite III of chromosome 9. Heterochromatin breakage and reunion produce the unbalanced t(1;9) and may contribute to a gene dosage effect due to gains of 1q and 9p. Case 4 with the balanced t(1;9), however, suggests that translocation of heterochromatin close to critical genes could interfere with their function. The molecular event underlying juxtaposition of satellite II of chromosome 1 and the satellite III of chromosome 9 remains to be elucidated." }, { "instance_id": "R33091xR32987", "comparison_id": "R33091", "paper_id": "R32987", "text": "New insights into the prognostic impact of the karyotype in MDS and correlation with subtypes: evidence from a core dataset of 2124 patients We have generated a large, unique database that includes morphologic, clinical, cytogenetic, and follow-up data from 2124 patients with myelodysplastic syndromes (MDSs) at 4 institutions in Austria and 4 in Germany. Cytogenetic analyses were successfully performed in 2072 (97.6%) patients, revealing clonal abnormalities in 1084 (52.3%) patients. Numeric and structural chromosomal abnormalities were documented for each patient and subdivided further according to the number of additional abnormalities. Thus, 684 different cytogenetic categories were identified. The impact of the karyotype on the natural course of the disease was studied in 1286 patients treated with supportive care only. Median survival was 53.4 months for patients with normal karyotypes (n = 612) and 8.7 months for those with complex anomalies (n = 166). A total of 13 rare abnormalities were identified with good (+1/+1q, t(1q), t(7q), del(9q), del(12p), chromosome 15 anomalies, t(17q), monosomy 21, trisomy 21, and \u2212X), intermediate (del(11q), chromosome 19 anomalies), or poor (t(5q)) prognostic impact, respectively. 
The prognostic relevance of additional abnormalities varied considerably depending on the chromosomes affected. For all World Health Organization (WHO) and French-American-British (FAB) classification system subtypes, the karyotype provided additional prognostic information. Our analyses offer new insights into the prognostic significance of rare chromosomal abnormalities and specific karyotypic combinations in MDS." }, { "instance_id": "R33581xR33468", "comparison_id": "R33581", "paper_id": "R33468", "text": "Success factors for information logistics strategy \u2014 An empirical investigation Providing analytical information to all stakeholders in a timely manner remains, in the face of current challenges, a key issue in organizations. Information logistics (IL) extends present concepts of decision support like business intelligence by focusing on enterprise-wide information supply and the exploitation of synergies. The article investigates which factors play critical roles in the success of IL strategies. An empirical study by means of a causal analysis provides evidence for significant relationships between those factors and organizational performance. The study identifies comprehensiveness, flexibility, support, communication, IT strategy orientation, business/IT partnership, and project collaboration as influencing factors for IL strategy success. Not all success factors, however, validated in related strategy research can be confirmed in the IL context." }, { "instance_id": "R33581xR33506", "comparison_id": "R33581", "paper_id": "R33506", "text": "Key success factor analysis for e\u2010SCM project implementation and a case study in semiconductor manufacturers Purpose \u2013 The semiconductor market exceeded US$250 billion worldwide in 2010 and has had a double\u2010digit compound annual growth rate (CAGR) in the last 20 years. 
As it is located far upstream of the electronic product market, the semiconductor industry has suffered severely from the \u201cbullwhip\u201d effect. Therefore, effective e\u2010based supply chain management (e\u2010SCM) has become imperative for the efficient operation of semiconductor manufacturing (SM) companies. The purpose of this research is to define and analyze the key success factors (KSF) for e\u2010SCM system implementation in the semiconductor industry.Design/methodology/approach \u2013 A hierarchy of KSFs is defined first by a combination of a literature review and a focus group discussion with experts who successfully implemented an inter\u2010organizational e\u2010SCM project. Fuzzy analytic hierarchy process (FAHP) is then employed to rank the importance of these identified KSFs. To confirm the research result and further explore the managerial implications, a second in..." }, { "instance_id": "R33581xR33251", "comparison_id": "R33581", "paper_id": "R33251", "text": "
Critical Factors of Attracting Supply Chain Network Members to Electronic Marketplaces: The Case of Sunbooks Ltd. and the Hungarian Book Trade Vertical electronic marketplaces often suffer from the low level of liquidity. Attracting members is critical, however, not even a sound and efficient IT and logistic background is enough to convince both the supplier and the customer side. In this paper the authors present the case study of Sunbooks Ltd. This venture has started to transform the Hungarian book trade market that suffers from serious deficiencies in field of information and material flow. Despite the vast investments and that the marketplace is prepared to serve the whole Hungarian book industry, the market share started to grow very slowly. The authors identify three contingency factors which can be accounted for the evolution dynamics of this virtual network. They explain how the business model is subjected to the evolution of market characteristics, and how the third factor, the \u201csoft issues\u201d determine the evolution opportunities even in a supporting market situation." }, { "instance_id": "R33581xR33489", "comparison_id": "R33581", "paper_id": "R33489", "text": "Identifying critical enablers and pathways to high performance supply chain quality management Purpose \u2013 The aim of this paper is threefold: first, to examine the content of supply chain quality management (SCQM); second, to identify the structure of SCQM; and third, to show ways for finding improvement opportunities and organizing individual institution's resources/actions into collective performance outcomes.Design/methodology/approach \u2013 To meet the goals of this work, the paper uses abductive reasoning and two qualitative methods: content analysis and formal concept analysis (FCA). Primary data were collected from both original design manufacturers (ODMs) and original equipment manufacturers (OEMs) in Taiwan.Findings \u2013 According to the qualitative empirical study, modern enterprises need to pay immediate attention to the following two pathways: a compliance approach and a voluntary approach. 
For the former, three strategic content variables are identified: training programs, ISO, and supplier quality audit programs. As for initiating a voluntary effort, modern lead firms need to instill \u201cmotivat..." }, { "instance_id": "R33581xR33418", "comparison_id": "R33581", "paper_id": "R33418", "text": "Drivers, barriers and critical success factors for ERPII implementation in supply chains: A critical analysis This paper reviews existing literature to determine the drivers of and barriers to Enterprise Resource Planning II (ERPII) implementation. The ERPII literature is then extended through interviews with potential players in ERPII implementations to identify the critical success factors (CSFs) or preconditions required for successful implementation throughout supply chains. These interviews were conducted with leading ERP vendors/consultants and organisations involved in the entire supply chain to gather evidence on the success, or lack thereof, of ERPII implementations. The results were compared and contrasted to existing literature on ERPII, collaborative networks, and the extended enterprise. We found more barriers to than drivers of successful ERPII implementation. This leads prospective implementers to have a pessimistic forecast for ERPII implementation success. Our research reveals that main reason for this negativity is a general lack of understanding and appreciation of the capabilities of the extended enterprise network. Second, the research presents two sets of CSFs: CSFs which apply to traditional ERP and carry forward to apply to ERPII, and CSFs that are tailored to the new needs for successful ERPII implementations. Finally, the research questions the suitability of ERPII in today's modern business environment, and suggests that technology may have overtaken management's capabilities to capture the full benefits of such an advanced enterprise system. 
Future trends in ERPII development are also considered in an attempt to find the next phase in the enterprise system life cycle. Beyond ERPII, the research suggests that infrastructure such as large-scale business intelligence (BI) systems must be heavily incorporated into modern enterprise systems to fully understand how information flows throughout an organisation and to make sense of that information." }, { "instance_id": "R33581xR33368", "comparison_id": "R33581", "paper_id": "R33368", "text": "Managing Supply Chain at High Technology Companies There is an expectation that high technology companies use unique and leading edge technology, and invest heavily in supply chain management. This research uses multiple case study methodology to determine factors affecting the supply chain management at high technology companies. The research benchmarks the supply chain performance of these high technology companies with supply chain of other supply chains at both strategic at tactical levels. The results indicate that at the strategic level the high technology companies and benchmark companies have a similar approach to supply chain management. However at the tactical, or critical, supply chain factor level, the analysis suggests that the benchmark companies (which happen to be companies dealing in commodity-type products) have a different approach to supply chain management." }, { "instance_id": "R33581xR33270", "comparison_id": "R33581", "paper_id": "R33270", "text": "Logistics and supply chain management in luxury fashion retail: Empirical investigation of Italian firms Abstract The Italian industry of fashion goods is a business worth 67.6\u20ac billion in 2006 (Il Sole 24ore, January 10, 2007), of which about 26\u20ac billion is due to the luxury segment. Marketing gurus state that \u201cconsumers everywhere at every income level want more luxury\u201d [Danziger, P.N., 2005. Let them Eat the Cake: Marketing Luxury to the Masses as well as the Classes. 
Dearborn Trade Publishing, Chicago]: therefore, companies should move brands towards a higher positioning and add more valuable features to products and services, but this cannot be obtained only by means of marketing efforts. Which is the role of operations and supply chain management in luxury fashion companies\u2019 success? This paper presents the results of the exploratory stage of a research project ongoing at Politecnico di Milano and dealing with supply chain management in the luxury fashion industry. In total, 12 Italian luxury fashion retailers have been studied in order to describe the main features of operations and supply chain strategies in the luxury fashion segment and to identify their role with respect to the relevant critical success factors." }, { "instance_id": "R33581xR33294", "comparison_id": "R33581", "paper_id": "R33294", "text": "Adoption of e-procurement in Hong Kong: An empirical research For the past 5 years, a large number of procurement articles have appeared in the literature. E-procurement solutions make purchasing activities more effective in terms of both time and cost. E-procurement is changing the way businesses purchase goods. Since most products and services are procured using electronic data interchange and the Internet, the application of e-procurement is inevitable in both manufacturing and services. There are limited empirical studies in the literature on the adoption of e-procurement in a country, that is, at the macro-level. Nevertheless, such a study will help companies in other countries to develop policies, strategies, and procedures to implement e-procurement. Understanding the importance of such a study, we have conducted a questionnaire-based survey about the adoption of e-procurement in Hong Kong. The main objective of this study is to identify the perceived critical success factors and perceived barriers regarding the implementation of e-procurement. 
A conceptual framework has been developed for the adoption of e-procurement, and this subsequently has been tested with data collected from companies in Hong Kong. Also, this study examines the current status of e-procurement adoption in Hong Kong. Finally, a framework is proposed based on the conceptual and empirical analysis for the adoption of e-procurement. The results indicate that educating companies in both long- and short-term benefits would encourage the application of e-procurement. Some critical success factors include adequate financial support, availability of interoperability and standards with traditional communication systems, top management support and commitment, understanding the priorities of the company, and having suitable security systems." }, { "instance_id": "R33581xR33127", "comparison_id": "R33581", "paper_id": "R33127", "text": "Outsourcing of logistics functions: a literature survey Recent times have witnessed a heightened global interest in outsourcing of logistics functions. This is indicated by the volume of writings on the subject in various scholarly journals, trade publications and popular magazines. However, efforts to organize them in an integrated body of knowledge appear to be very limited. Keeping this in view, this paper makes an attempt to develop a comprehensive literature on outsourcing based on more than 100 published articles, papers and books on the subject." }, { "instance_id": "R33581xR33544", "comparison_id": "R33581", "paper_id": "R33544", "text": "Critical factors for sub-supplier management: A sustainable food supply chains perspective The food industry and its supply chains have significant sustainability implications. Effective supply chain management requires careful consideration of multiple tiers of partners, especially with respect to sustainability issues. Firms increasingly approach their sub-suppliers to drive compliance with social and environmental efforts. 
A number of complexities and unique challenges make sub-supplier management more difficult than direct supplier management, e.g. a lack of contractual relationships to sub-suppliers, few opportunities to put direct pressure on sub-suppliers, or lack of transparency concerning sub-suppliers' involvement in a focal firm's supply chains. The literature has not investigated, either from sustainability or other perspectives, the critical success factors (CSFs) for firms' sub-supplier management. Therefore, this study seeks to explore and increase understanding of critical factors that help to overcome the complexities and unique challenges of sub-supplier management, with a focus on the food industry. Using data and information from a year-long field study in two food supply chains, the research identified 14 CSFs that influence the success of sub-suppliers' compliance with corporate sustainability standards (CSS). The identified CSFs can be classified into (1) focal firm-related, (2) relationship-related, (3) supply chain partner-related, and (4) context-related CSFs. The present research expands on the theory of critical success factors by applying the theory to the sustainability and sub-supplier management context. In support of critical success theory, it was found that CSFs do exist and their management will be necessary for effective sub-supplier management success as highlighted and exemplified by field study insights from practitioners. Multiple research avenues are necessary for further evaluation of sub-supplier management in the food industry and other industries who may find similar issues that arose from the food industry." 
}, { "instance_id": "R33581xR33476", "comparison_id": "R33581", "paper_id": "R33476", "text": "An empirical study on the impact of critical success factors on the balanced scorecard performance in Korean green supply chain management enterprises Rapid industrial modernisation and economic reform have been features of the Korean economy since the 1990s, and have brought with it substantial environmental problems. In response to these problems, the Korean government has been developing approaches to promote cleaner production technologies. Green supply chain management (GSCM) is emerging to be an important approach for Korean enterprises to improve performance. The purpose of this study is to examine the impact of GSCM CSFs (critical success factors) on the BSC (balanced scorecard) performance by the structural equation modelling, using empirical results from 249 enterprise respondents involved in national GSCM business in Korea. Planning and implementation was a dominant antecedent factor in this study, followed by collaboration with partners and integration of infrastructure. However, activation of support was a negative impact to the finance performance, raising the costs and burdens. It was found out that there were important implications in the implementation of GSCM." 
}, { "instance_id": "R33581xR33348", "comparison_id": "R33581", "paper_id": "R33348", "text": "Critical success factors for B2B e\u2010commerce use within the UK NHS pharmaceutical supply chain Purpose \u2013 The purpose of this paper is to determine those factors perceived by users to influence the successful on\u2010going use of e\u2010commerce systems in business\u2010to\u2010business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach \u2013 Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e\u2010commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings \u2013 The paper yields five composite factors that are perceived by users to influence successful e\u2010commerce use. \u201cSystem quality,\u201d \u201cinformation quality,\u201d \u201cmanagement and use,\u201d \u201cworld wide web \u2013 assurance and empathy,\u201d and \u201ctrust\u201d are proposed as potentia..." }, { "instance_id": "R33581xR33343", "comparison_id": "R33581", "paper_id": "R33343", "text": "Critical success factors for improving decision quality on collaborative design in the IC supply chain Because the design process of Integrated Circuit (IC) product is knowledge-intensive and time-consuming, the collaboration among IC designers and manufacturers is crucial for reducing time to market of designing product. To enhance the quality of manufacturing strategic decisions for collaborative IC design, manufacturing practices must be identified as the core elements of manufacturing strategy. 
However, little research has been done regarding the essentials of implementing collaboration among IC designers. This study aims to clarify terminology of decision quality in manufacturing strategy and define Critical Success Factors (CSFs) as manufacturing practices for improving decision quality on collaborative design in the IC supply chain through comprehensive literature review. Moreover, this study proposes a framework in which the CSFs can be identified for different parties in IC supply chain." }, { "instance_id": "R33581xR33183", "comparison_id": "R33581", "paper_id": "R33183", "text": "Extending the concept of supply chain: Abstract Supply chain management (SCM) is a major issue in many industries as organisations begin to appreciate the criticality of creating an integrated relationship with their suppliers and customers, as well as all other stakeholders. Managing the supply chain has become a way of improving competitiveness by reducing uncertainty and enhancing customer service. The concept of value chain management (VCM) is becoming quite prevalent in industry. Despite this popularity, there is little evidence of the development of accompanying theory in the literature. Without theory development, it is difficult to identify specific hypotheses and propositions, which can be tested, resulting in research that lacks focus and is perhaps irrelevant. This paper analyses the merits and limitations of SCM and provides broader awareness of VCM, its critical success factors and proposes a model, which covers four key elements supported by a drive on agility and speed." }, { "instance_id": "R33581xR33529", "comparison_id": "R33581", "paper_id": "R33529", "text": "Supply chain issues in SMEs: select insights from cases of Indian origin This article reports the supply chain issues in small and medium scale enterprises (SMEs) using insights from select cases of Indian origin (manufacturing SMEs). 
A broad range of qualitative and quantitative data were collected during interviews and plant visits in a multi-case study (of 10 SMEs) research design. Company documentation and business reports were also employed. Analysis is carried out using diagnostic tools like \u2018EBM-REP\u2019 (Thakkar, J., Kanda, A., and Deshmukh, S.G., 2008c. An enquiry-analysis framework \u201cEBM-REP\u201d for qualitative research. International Journal of Innovation and Learning (IJIL), 5 (5), 557\u2013580.) and \u2018Role Interaction Model\u2019 (Thakkar J., Kanda, A., and Deshmukh, S.G., 2008b. A conceptual role interaction model for supply chain management in SMEs. Journal of Small Business and Enterprise Development (JSBED), 15 (1), 74\u201395). This article reports a set of critical success factors and evaluates six critical research questions for the successful supply chain planning and management in SMEs. The results of this article will help SME managers to assess their supply chain function more rigorously. This article addresses the issue on supply chain management in SMEs using case study approach and diagnostic tools to add select new insights to the existing body of knowledge on supply chain issues in SMEs." }, { "instance_id": "R33581xR33102", "comparison_id": "R33581", "paper_id": "R33102", "text": "The integrated logistics management system: a framework and case study Presents a framework for distribution companies to establish and improve their logistics systems continuously. Recently, much attention has been given to automation in services, the use of new information technology and the integration of the supply chain. Discusses these areas, which have great potential to increase logistics productivity and provide customers with high level service. The exploration of each area is enriched with Taiwanese logistics management practices and experiences. 
Includes a case study of one prominent food processor and retailer in Taiwan in order to demonstrate the pragmatic operations of the integrated logistics management system. Also, a survey of 45 Taiwanese retailers was conducted to investigate the extent of logistics management in Taiwan. Concludes by suggesting how distribution companies can overcome noticeable logistics management barriers, build store automation systems, and follow the key steps to logistics success." }, { "instance_id": "R33581xR33328", "comparison_id": "R33581", "paper_id": "R33328", "text": "Critical success factors in the context of humanitarian aid supply chains \u2013 Critical success factors (CSFs) have been widely used in the context of commercial supply chains. However, in the context of humanitarian aid (HA) this is a poorly addressed area and this paper therefore aims to set out the key areas for research., \u2013 This paper is based on a conceptual discussion of CSFs as applied to the HA sector. A detailed literature review is undertaken to identify CSFs in a commercial context and to consider their applicability to the HA sector., \u2013 CSFs have not previously been identified for the HA sector, an issue addressed in this paper., \u2013 The main constraint on this paper is that CSFs have not been previously considered in the literature as applied to HA. The relevance of CSFs will therefore need to be tested in the HA environment and qualitative research is needed to inform further work., \u2013 This paper informs the HA community of key areas of activity which have not been fully addressed and offers., \u2013 This paper contributes to the understanding of supply chain management in an HA context." 
}, { "instance_id": "R33581xR33381", "comparison_id": "R33581", "paper_id": "R33381", "text": "Aggregated construction supply chains: success factors in implementation of strategic partnerships Purpose \u2013 The purpose of this paper is to address the management of supply chains within the construction industry. Supply chains in this sector evidence a marked tendency to waste and inefficiency. One approach to improving this situation, which is the subject of intense discussion by both scientists and practitioners, is the establishment of strategic partnerships integrated with the scientific observation of the processes involved. This paper aims to present a case study of such a strategic alliance among German building contractors whose goal it is to cover the entire life cycle of a building, from its planning to its ultimate facility management. The paper seeks to focus on the establishment and implementation of an aggregated strategic alliance and its success factors.Design/methodology/approach \u2013 The research methodology is based on a case study of a German network of builders and trade contracting companies. Data collection tools included observation of workshops and meetings, semi\u2010structured inte..." }, { "instance_id": "R33581xR33482", "comparison_id": "R33581", "paper_id": "R33482", "text": "Key success factors and their performance implications in the Indian third-party logistics (3PL) industry This paper uses the extant literature to identify the key success factors that are associated with performance in the Indian third-party logistics service providers (3PL) sector. We contribute to the sparse literature that has examined the relationship between key success factors and performance in the Indian 3PL context. This study offers new insights and isolates key success factors that vary in their impact on operations and financial performance measures. 
Specifically, we found that the key success factor of relationship with customers significantly influenced the operations measures of on-time delivery performance and customer satisfaction and the financial measure of profit growth. Similarly, the key success factor of skilled logistics professionals improved the operational measure of customer satisfaction and the financial measure of profit growth. The key success factor of breadth of service significantly affected the financial measure of revenue growth, but did not affect any operational measure. To further unravel the patterns of these results, a contingency analysis of these relationships according to firm size was also conducted. Relationship with 3PLs was significant irrespective of firm size. Our findings contribute to academic theory and managerial practice by offering context-specific suggestions on the usefulness of specific key success factors based on their potential influence on operational and financial performance in the Indian 3PL industry." }, { "instance_id": "R33581xR33436", "comparison_id": "R33581", "paper_id": "R33436", "text": "A Study of Key Success Factors for Supply Chain Management System in Semiconductor Industry Developing a supply chain management (SCM) system is costly, but important. However, because of its complicated nature, not many such projects are considered successful. Few research publications directly relate to key success factors (KSFs) for implementing and operating an SCM system. Motivated by the above, this research proposes two hierarchies of KSFs for the SCM system implementation and operation phases, respectively, in the semiconductor industry by using a two-step approach. First, a literature review indicates the initial hierarchy.
The second step includes a focus group approach to finalize the proposed KSF hierarchies by extracting valuable experiences from executives and managers who actively participated in a project, which successfully established a seamless SCM integration between the world's largest semiconductor foundry manufacturing company and the world's largest assembly and testing company. Finally, this research compared the KSFs between the two phases and drew conclusions. Future project executives may refer to the resulting KSF hierarchies as a checklist for SCM system implementation and operation in semiconductor or related industries." }, { "instance_id": "R33581xR33198", "comparison_id": "R33581", "paper_id": "R33198", "text": "Understanding supply chain management: critical research and a theoretical framework Increasing global cooperation, vertical disintegration and a focus on core activities have led to the notion that firms are links in a networked supply chain. This strategic viewpoint has created the challenge of coordinating effectively the entire supply chain, from upstream to downstream activities. While supply chains have existed ever since businesses have been organized to bring products and services to customers, the notion of their competitive advantage, and consequently supply chain management (SCM), is relatively recent thinking in the management literature. Although research interests in and the importance of SCM are growing, scholarly materials remain scattered and disjointed, and no research has been directed towards a systematic identification of the core initiatives and constructs involved in SCM. Thus, the purpose of this study is to develop a research framework that improves understanding of SCM and stimulates and facilitates researchers to undertake both theoretical and empirical investigation on the critical constructs of SCM, and the exploration of their impacts on supply chain performance.
To this end, we analyse over 400 articles and synthesize the large, fragmented body of work dispersed across many disciplines such as purchasing and supply, logistics and transportation, marketing, organizational dynamics, information management, strategic management, and operations management literature." }, { "instance_id": "R33581xR33564", "comparison_id": "R33581", "paper_id": "R33564", "text": "Identification of critical success factors to achieve high green supply chain management performances in Indian automobile industry Green supply chain management (GSCM) has been receiving the spotlight in the last few years. The study aims to identify critical success factors (CSFs) to achieve high GSCM performances from three perspectives i.e., environmental, social and economic performance. CSFs to achieve high GSCM performances relevant to Indian automobile industry have been identified and categorised according to three perspectives from the literature review and experts' opinions. Conceptual models also have been put forward. This paper may play a vital role in understanding CSFs to achieve GSCM performances in Indian automobile industry and help the supply chain managers to understand how they may improve environmental, social and economic performance." }, { "instance_id": "R33581xR33144", "comparison_id": "R33581", "paper_id": "R33144", "text": "Distinguishing the critical success factors between e-commerce, enterprise resource planning, and supply chain management The rapid deployment of e-business systems has surprised even the most futuristic management thinkers. Unfortunately very little empirical research has documented the many variations of e-business solutions as major software vendors release complex IT products into the marketplace. The literature holds simultaneous evidence of major success and major failure as implementations evolve.
It is not clear from the literature just what the difference is between e-commerce and its predecessor concepts of supply chain management and enterprise resource planning. In this paper we use existing case studies, industrial interviews, and survey data to describe how these systems are similar and how they differ. We develop a conceptual model to show how these systems are related and how they serve significantly different strategic objectives. Finally, we suggest the critical success factors that are the key issues to resolve in order to successfully implement these systems in practice." }, { "instance_id": "R33581xR33305", "comparison_id": "R33581", "paper_id": "R33305", "text": "Supply chain management in SMEs: development of constructs and propositions Purpose \u2013 The purpose of this paper is to review the literature on supply chain management (SCM) practices in small and medium scale enterprises (SMEs) and outlines the key insights.Design/methodology/approach \u2013 The paper describes a literature\u2010based research that has sought to understand the issues of SCM for SMEs. The methodology is based on critical review of 77 research papers from high\u2010quality, international refereed journals. Mainly, issues are explored under three categories \u2013 supply chain integration, strategy and planning and implementation. This has supported the development of key constructs and propositions.Findings \u2013 The research outcomes are threefold. Firstly, the paper summarizes the reported literature and classifies it based on their nature of work and contributions. Second, the paper demonstrates the overall approach towards the development of constructs, research questions, and investigative questions leading to key proposition for the further research. Lastly, the paper outlines the key findings an...
}, { "instance_id": "R33581xR33261", "comparison_id": "R33581", "paper_id": "R33261", "text": "Supply Base Reduction: An Empirical Study of Critical Success Factors One important factor in the design of an organization's supply chain is the number of suppliers used for a given product or service. Supply base reduction is one option useful in managing the supply base. The current paper reports the results of case studies in 10 organizations that recently implemented supply base reduction activities. Specifically, the paper identifies the key success factors in supply base reduction efforts and prescribes processes to capture the benefits of supply base reduction." }, { "instance_id": "R33581xR33287", "comparison_id": "R33581", "paper_id": "R33287", "text": "Implementing supply chain quality management This paper describes a strategic framework for the development of supply chain quality management (SCQM). The framework integrates both vision- and gap-driven change approaches to evaluate not only the implementation gaps but also their potential countermeasures. Based on literature review, drivers of supply chain quality are identified. They are: supply chain competence, critical success factors (CSF), strategic components, and SCQ practices/activities/programmes. Based on SCQM literature, five survey items are also presented in this study for each driver. The Analytic Hierarchy Process (AHP) is used to develop priority indices for these survey items. Knowledge of these critical dimensions and possible implementation discrepancies could help multinational enterprises and their supply chain partners lay out effective and efficient SCQM plans." }, { "instance_id": "R33581xR33388", "comparison_id": "R33581", "paper_id": "R33388", "text": "E-procurement, the golden key to optimizing the supply chains system Procurement is an important component in the field of operating resource management and e-procurement is the golden key to optimizing the supply chains system.
Global firms are optimistic about the level of savings that can be achieved through full implementation of e-procurement strategies. E-procurement is an Internet-based business process for obtaining materials and services and managing their inflow into the organization. In this paper, the subjects of supply chains and e-procurement and its benefits to organizations have been studied. Also, e-procurement in construction and its drivers and barriers have been discussed and a framework of supplier selection in an e-procurement environment has been demonstrated. This paper has also addressed critical success factors in adopting e-procurement in supply chains. Keywords\u2014E-Procurement, Supply Chain, Benefits, Construction, Drivers, Barriers, Supplier Selection, CSFs." }, { "instance_id": "R33581xR33136", "comparison_id": "R33581", "paper_id": "R33136", "text": "Success factors in the fresh produce supply chain: insights from the UK Presents recent evidence of supply chain developments in the UK fresh produce industry, based on interviews with chief executives from some of the country\u2019s most successful suppliers. A number of success factors were evident, to varying degrees, in all of the companies interviewed. These included: continuous investment (despite increasingly tight margins), good staff (to drive the process of innovation and develop good trading relationships with key customers), volume growth (to fund the necessary investments and provide a degree of confidence in the future), improvement of measurement and control of costs (in the pursuit of further gains in efficiency), and innovation (not just the product offer but also the level of service and the way of doing business with key customers)."
}, { "instance_id": "R33581xR33455", "comparison_id": "R33581", "paper_id": "R33455", "text": "Perceptions of service providers and customers of key success factors of third-party logistics relationships \u2013 an empirical study This paper provides a comparison of third-party logistics (3PL) service providers and 3PL customers with respect to the perception of key success factors (KSFs) for building and fostering relationships. The KSFs and their related sub-factors were derived from the extant literature and modified to reflect the nature of 3PL arrangements. The relevant data were collected from separate, but consistent, mail surveys that were sent to 3PL service providers and 3PL customers. The results indicate statistically significant differences in the perception of critical success factors between 3PL service providers and 3PL customers. The results show that customers see a focus on service-based solutions as being an important feature of 3PL provision providing a set of benefits beyond mere cost control." }, { "instance_id": "R33581xR33521", "comparison_id": "R33581", "paper_id": "R33521", "text": "Evaluating the critical success factors of supplier development: a case study Purpose \u2013 The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach \u2013 In total, 13 CSFs for SD are identified (i.e. long\u2010term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. 
A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings \u2013 The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long\u2010term strategic goal is found to be ..." }, { "instance_id": "R33581xR33189", "comparison_id": "R33581", "paper_id": "R33189", "text": "Critical success factors of web-based supply-chain management systems: an exploratory study This paper reports the results of a survey on the critical success factors (CSFs) of web-based supply-chain management systems (WSCMS). An empirical study was conducted and an exploratory factor analysis of the survey data revealed five major dimensions of the CSFs for WSCMS implementation, namely (1) communication, (2) top management commitment, (3) data security, (4) training and education, and (5) hardware and software reliability. The findings of the results provide insights for companies using or planning to use WSCMS." }, { "instance_id": "R33581xR33486", "comparison_id": "R33581", "paper_id": "R33486", "text": "Understanding the Success Factors of Sustainable Supply Chain Management: Empirical Evidence from the Electrics and Electronics Industry Recent studies have reported that organizations are often unable to identify the key success factors of Sustainable Supply Chain Management (SSCM) and to understand their implications for management practice. For this reason, the implementation of SSCM often does not result in noticeable benefits. So far, research has failed to offer any explanations for this discrepancy. In view of this fact, our study aims at identifying and analyzing the factors that underlie successful SSCM. Success factors are identified by means of a systematic literature review and are then integrated into an explanatory model. 
Consequently, the proposed success factor model is tested on the basis of an empirical study focusing on recycling networks of the electrics and electronics industry. We found that signaling, information provision and the adoption of standards are crucial preconditions for strategy commitment, mutual learning, the establishment of ecological cycles and hence for the overall success of SSCM. Copyright \u00a9 2011 John Wiley & Sons, Ltd and ERP Environment." }, { "instance_id": "R33581xR33171", "comparison_id": "R33581", "paper_id": "R33171", "text": "The successful management of a small logistics company In this paper, a case study conducted on a small third\u2010party logistics (3PL) company in Hong Kong is presented. This company is interesting in that it has been designated as the \u201cking\u201d of Hong Kong's 3PL (in\u2010bound) logistics companies. The company has been successful in its overall business performance and in satisfying customers. This company's strategic alliances with both clients and customers have helped to improve the utilization of its resources, such as warehouse space and transportation fleets. Also, the company is in the process of expanding its operations across greater China, with the objective of becoming a full\u2010fledged 3PL company. The analysis of this case focuses on the critical success factors (strategies and technologies) that have allowed a small company started only in 1996 to become so successful in its operations. Also, a framework has been provided for the company to develop its logistics operations as a full\u2010fledged 3PL company."
}, { "instance_id": "R33581xR33426", "comparison_id": "R33581", "paper_id": "R33426", "text": "Determination of the success factors in supply chain networks: a Hong Kong\u2010based manufacturer's perspective Purpose \u2013 The purpose of the paper is to investigate the factors that affect the decision\u2010making process of Hong Kong\u2010based manufacturers when they select a third\u2010party logistics (3PL) service provider and how 3PL service providers manage to retain customer loyalty in times of financial turbulence.Design/methodology/approach \u2013 The paper presents a survey\u2010based study targeting Hong Kong\u2010based manufacturers currently using 3PL companies. It investigates the relationship between the reasons for using 3PL services and the requirements for selecting a provider, and examines the relationship between customer satisfaction and loyalty. In addition, the relationships among various dimensions \u2013 in small to medium\u2010sized enterprises (SMEs), large enterprises and companies \u2013 of contracts of various lengths are investigated.Findings \u2013 In general, the reasons for using 3PL services and the requirements for selecting 3PL service providers are positively related. The dimension of \u201creputation\u201d of satisfaction influences \u201cpri..." }, { "instance_id": "R33581xR33375", "comparison_id": "R33581", "paper_id": "R33375", "text": "Critical factors for implementing green supply chain management practice Purpose \u2013 The purpose of this paper is to explore critical factors for implementing green supply chain management (GSCM) practice in the Taiwanese electrical and electronics industries relative to European Union directives.Design/methodology/approach \u2013 A tentative list of critical factors of GSCM was developed based on a thorough and detailed analysis of the pertinent literature.
The survey questionnaire contained 25 items, developed based on the literature and interviews with three industry experts, specifically quality and product assurance representatives. A total of 300 questionnaires were mailed out, and 87 were returned, of which 84 were valid, representing a response rate of 28 percent. Using the data collected, factor analysis was performed on the identified critical factors to establish reliability and validity.Findings \u2013 The results show that 20 critical factors were extracted into four dimensions, which were denominated supplier management, product recycling, organization involvement and life cycl..." }, { "instance_id": "R33581xR33237", "comparison_id": "R33581", "paper_id": "R33237", "text": "Supply chain software implementations: getting it right Purpose To highlight key success factors in supply chain projects. Design/methodology/approach The paper presents insights from a number of supply chain projects in which IT has played an important part in the business solution. Findings Successful supply chain projects have four things in common: the right leadership, the right focus, the right approach and effective communication of KPIs to all stakeholders engaged in the project. Research limitations/implications The focus of the paper is on supply chain projects with a significant IT component, but the key success factors identified are common to the majority of supply chain projects. Practical implications Companies must not assume that investment in IT is, by itself, a solution to their supply chain problems. A lack of leadership, focus and communication will invariably result in sub\u2010optimal outcomes which are all too frequently attributed to the complex nature of the project or the inflexibility of the software when in most cases the problems are internal to the businesses involved and the project management process.
Originality/value This paper provides practical tips for improving the likelihood of getting the most out of IT\u2010based supply chain projects." }, { "instance_id": "R33581xR33406", "comparison_id": "R33581", "paper_id": "R33406", "text": "A study of supplier selection factors for high-tech industries in the supply chain Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonistic to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such a relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers." }, { "instance_id": "R33581xR33571", "comparison_id": "R33581", "paper_id": "R33571", "text": "Critical success factors of green supply chain management for achieving sustainability in Indian automobile industry The aim of this study was to identify and analyse the key success factors behind successful achievement of environmental sustainability in Indian automobile industry supply chains.
Here, critical success factors (CSFs) and performance measures of green supply chain management (GSCM) have been identified through extensive literature review and discussions with experts from Indian automobile industry. Based on the literature review, a questionnaire was designed and 123 final responses were considered. Six CSFs to implement GSCM for achieving sustainability and four expected performance measures of GSCM practices implementation were extracted using factor analysis. An interpretive ranking process (IRP) modelling approach is employed to examine the contextual relationships among CSFs and to rank them with respect to performance measures. The developed IRP model shows that the CSF \u2018Competitiveness\u2019 is the most important CSF for achieving sustainability in Indian automobile industry through GSCM practices. This study is one of the few that have considered the environmental sustainability practices in the automobile industry in India and their implications on sectoral economy. The results of this study may help the managers/SC practitioners/Governments/Customers in making strategic and tactical decisions regarding successful implementation of GSCM practices in Indian automobile industry with a sustainability focus. The developed framework provides a comprehensive perspective for assessing the synergistic impact of CSFs on GSCM performances and can act as a ready reckoner for the practitioners. As there is very limited work presented in the literature using IRP, this piece of work would provide a better understanding of this relatively new ranking methodology." }, { "instance_id": "R33783xR33653", "comparison_id": "R33783", "paper_id": "R33653", "text": "Neural Network Control Techniques of Hybrid Active Power Filter A multi-objective optimization approach was developed for the design of hybrid active power filters (HAPF) to give better mitigation of the harmonics and better reactive power compensation.
The neural network technique was used with optimization theory to improve the algorithm precision and stability. The optimization is more effective since the performance goals and optimization parameters were optimized together. Secondly, this paper presents the design of a hierarchical fuzzy current control scheme for a shunt active power filter compared with a single fuzzy controller scheme. It provides superior current tracking capability, and the switching frequency signal is limited to the permitted range. Finally, simulation and experimental results demonstrate the validity of the theory." }, { "instance_id": "R33783xR33665", "comparison_id": "R33783", "paper_id": "R33665", "text": "A Power Harmonic Detection Method Based On Wavelet Neural Network Harmonic detection technology is one of the key technologies used for active power filter (APF), and its development has determined the development of APF technology. It is difficult to detect harmonics accurately because of the features of power network harmonics, such as their inherently nonlinear, random, distributed and non-stationary nature and the complexity of impact factors, so the study of power system harmonic detection methods is very important. This paper proposes a harmonic detection method based on Wavelet Neural Network combining Wavelet with Neural Network, and presents a design for the wavelet neural network. The simulation results show that this method can detect the power network harmonics accurately and in real time." }, { "instance_id": "R33783xR33727", "comparison_id": "R33783", "paper_id": "R33727", "text": "Artificial Neural Network Controlled Shunt Active Power Filter This paper presents neural-based proportional integral (PI) control applicable to active power filters for single-phase systems, which comprise multiple nonlinear loads. The system consists of an uncontrolled rectifier and an AC controller as the non-linear loads, with an active filter to compensate for the harmonic current injected by the load.
The active filter is based on a single-phase inverter with four controllable switches, a standard H-bridge inverter. The AC side of the inverter is connected in parallel with the other nonlinear loads through a filter inductance. The DC side of the inverter is connected to a filter capacitor. The neural PI controller is used to shape the current through the filter inductor such that the line current is in phase with and of the same shape as the input voltage. The spectral analysis of the supply current shows that the harmonics produced by the load have been successfully compensated by the active filter. The system is modeled in Matlab Simulink and simulation results prove that the injected harmonics are greatly reduced and system efficiency and power factor are improved." }, { "instance_id": "R33783xR33713", "comparison_id": "R33783", "paper_id": "R33713", "text": "Neural Network Controlled Shunt Active Filter For Non Linear Loads As industry power demand becomes increasingly sensitive, power quality distortion becomes a critical issue. The recent increase in nonlinear loads drawing non-sinusoidal currents has seen the introduction of measures to manage the clean delivery of power. In this paper, a three-phase shunt active power filter (APF) using an artificial neural network technique is proposed to mitigate harmonics, to compensate reactive power, to improve power factor and to remedy system unbalance. The simulation is done on a three-phase system and the results are used for comparison." }, { "instance_id": "R33783xR33655", "comparison_id": "R33783", "paper_id": "R33655", "text": "New Research on Harmonic Detection Based on Neural Network for Power System Analysis and control for power quality by neural network is a new research field in electrical power systems. Rapid and reliable extraction of the harmonic components determines the overall performance of the Active Power Filter (APF).
This paper presents a new three-layer feedforward neural network based on the error back-propagation algorithm, trained on samples without time delay, which can detect harmonics in power systems in real time. A simulation study using Matlab illustrates that the neural-network-based harmonic detection method is feasible and can quickly detect the harmonics of non-linear loads." }, { "instance_id": "R33783xR33659", "comparison_id": "R33783", "paper_id": "R33659", "text": "The Harmonic Currents Detecting Algorithm Based on Adaptive Neural Network In order to decrease the harmonics and improve the power factors in power systems, a detection algorithm for harmonics and reactive currents based on neural networks and adaptive noise canceling technology is proposed. The structure of the neural network and the adaptive weight adjustment algorithm are presented. The trade-off between detection speed and precision is resolved satisfactorily. The proposed algorithm is simulated for detecting the harmonics and the reactive currents of active power filters. Simulation results show good tracking speed and small steady-state error. The fundamental current can be detected within half a period, which benefits the detection of harmonics and reactive currents of active power filters in power systems." }, { "instance_id": "R33783xR33661", "comparison_id": "R33783", "paper_id": "R33661", "text": "The Study of the Electric Power Harmonics Detecting Method Based on the Immune RBF Neural Network Nonlinear loads of the power system pour a lot of harmonics into the power network, and the harmonics endanger the power system and the safety of the electric power equipment. The active power filter provides an effective method to restrain the harmonics. The key technique of the active power filter is detecting the harmonics.
This article studies the convergence of the immune optimization and the vaccine extraction of the immune radial basis function network. It proposes combining immune optimization with the RBF neural network, developing a new method called the electric power harmonic current detection method based on the immune RBF neural network. Simulation tests prove that this technique has the advantages of rapid convergence and high precision, so it can detect harmonic currents in the power network in a timely and precise manner." }, { "instance_id": "R33783xR33681", "comparison_id": "R33783", "paper_id": "R33681", "text": "A Neural Networks-Based Method for Single-Phase Harmonic Content Identification A neural method is presented in this paper to identify the harmonic components of an AC controller. The components are identified by analyzing the single-phase current waveform. The method's effectiveness is verified by applying it to an active power filter (APF) model dedicated to selective harmonic compensation. Simulation results using theoretical and experimental data are presented to validate the proposed approach." }, { "instance_id": "R33783xR33701", "comparison_id": "R33783", "paper_id": "R33701", "text": "Harmonic Components Identification through the Adaline with Fuzzy Learning Parameter Identification of different harmonic components of current/voltage signals is required in many power system applications, e.g. power quality monitoring, active power filtering, and digital system protection. In this paper, a method based on the adaptive linear combiner (Adaline) is presented for harmonic components identification. The convergence speed and the estimation error of the Adaline are governed by the learning parameter (LP) in the weight adaptation rule of this artificial neural network.
Thus, instead of the constant LP utilized in the conventional Adaline, this paper proposes the implementation of a fuzzy inference system (FIS) for suitable adjustment of the LP. Two simulation studies are conducted in MATLAB and PSCAD/EMTDC to show the validity and performance of the proposed method." }, { "instance_id": "R33783xR33693", "comparison_id": "R33783", "paper_id": "R33693", "text": "Learning and adaptive techniques for harmonics compensation in power supply networks This paper compares different variants of the least mean squares (LMS) algorithm. The objective consists in finding the best compromise between on-line learning and computational costs. Indeed, an algorithm with low computational complexity for updating Adaline weights is required for a real-time implementation of a modular neural Active Power Filter (APF). This filtering scheme is inserted in an electric distribution system to identify and compensate for harmonic distortions. Adaline learning schemes are used in two neural APF frameworks. The first one is a neural approach of the Instantaneous Power Theory (IPT) where the instantaneous powers are decomposed in a linear manner and are learned on-line with Adalines. The second one is a neural diphase currents method based on the DQ-currents which are linearly decomposed and learned with Adalines. The overall complexity of the neural frameworks is evaluated in terms of basic operators such as adders, multipliers, and signum functions. Simulation and experimental results demonstrate the applicability of neural approaches for the control of APF frameworks for power quality improvement. The complexity of the neural APF frameworks is equivalent to that of methods based on the conventional Instantaneous Power Theory (IPT), while their performance is superior."
}, { "instance_id": "R33783xR33781", "comparison_id": "R33783", "paper_id": "R33781", "text": "Comparison of PI and ANN Control Strategies of Unified Shunt Series Compensator This paper presents the compensation principle using PI and ANN control strategies of the USSC in detail. The USSC is an active filter (AF) and it compensates the reactive power, harmonics in both the voltage and current caused by loads. The USSC makes use of two back to back connected IGBT based voltage source inverters (VSIs) with a common DC bus. One inverter is connected in series and the other one is placed in shunt with the load. The shunt inverter works as a current source and it compensates the current harmonics. The series inverter works as a voltage source and it helps in compensating the voltage harmonics. Previous works presented a control strategy for shunt active filter with PI control. Then, this control strategy was extended to develop the two controllers for shunt and series active filters. The simulation results of these control strategies are listed for comparison and verification of results" }, { "instance_id": "R33783xR33649", "comparison_id": "R33783", "paper_id": "R33649", "text": "A Novel Hysteresis Current Control Strategy Based on Neural Network The relationship among principle of variable-band hysteresis current control, compensation capacity of hysteresis and switching frequency is dissertated in this paper. Aimed at changing the disadvantages of variable-band current control strategy resulted from the fuzzy controller, a control strategy based on command current and current error is proposed and realized by neural network. Power electronics model is built by MATLAB and PSIM and logical control circuit is built according to the fuzzy rules and neural network respectively. 
Co-simulation results indicate that the neural-network-based hysteresis controller has better compensation capacity than the fuzzy-based hysteresis controller and reduces the switching frequency." }, { "instance_id": "R33783xR33685", "comparison_id": "R33783", "paper_id": "R33685", "text": "A Neural Network Adaptive Detecting Approach of Harmonic Current A neural network adaptive detecting approach of harmonic current for APF (active power filter) is proposed in this paper. It is based on the adaptive noise canceling technology (ANCT). Regarding the fundamental current as a noise source, it can be cancelled from the load current, and then the harmonic current is obtained. An artificial neural network (ANN) with two layers is used to cancel the harmonics. The structure of this neural network and the adaptive adjustment algorithm are presented. The study has been carried out through detailed digital dynamic simulation using the MATLAB Simulink power system toolbox. The simulation results show that this approach can detect harmonic components in real time with high precision, little computation and strong adaptive ability." }, { "instance_id": "R33783xR33779", "comparison_id": "R33783", "paper_id": "R33779", "text": "Neural Network controlled three-phase three-wire shunt active Power Filter Three-phase shunt active power filters (shunt APFs) are widely used in industrial applications to compensate for harmonics generated by nonlinear loads. For the compensation to succeed, it is necessary to ensure current control on the AC side of the APF and voltage or current control on the DC side of the APF. In practice, PI controllers are the most common; as a novel and intelligent control technique, neural network controllers are under study and investigation in such applications.
In this paper, a neural network controller is used to control a three-phase three-wire voltage source shunt APF; its performance in terms of harmonic compensation and speed of response is compared to that of a classic PI controller." }, { "instance_id": "R33783xR33711", "comparison_id": "R33783", "paper_id": "R33711", "text": "A Unified Artificial Neural Network Architecture for Active Power Filters In this paper, an efficient and reliable neural active power filter (APF) to estimate and compensate for harmonic distortions from an AC line is proposed. The proposed filter is completely based on Adaline neural networks which are organized in different independent blocks. We introduce a neural method based on Adalines for the online extraction of the voltage components to recover a balanced and equilibrated voltage system, and three different methods for harmonic filtering. These three methods efficiently separate the fundamental harmonic from the distortion harmonics of the measured currents. According to either the Instantaneous Power Theory or to the Fourier series analysis of the currents, each of these methods is based on a specific decomposition. The original decomposition of the currents or of the powers then allows defining the architecture and the inputs of the Adaline neural networks. Different learning schemes are then used to control the inverter to inject elaborated reference currents in the power system. Results obtained by simulation and their real-time validation in experiments are presented to compare the compensation methods. By their learning capabilities, artificial neural networks are able to take into account time-varying parameters, and thus appreciably improve the performance of traditional compensating methods.
The effectiveness of the algorithms is demonstrated in their application to harmonics compensation in power systems." }, { "instance_id": "R33783xR33645", "comparison_id": "R33783", "paper_id": "R33645", "text": "Estimation and elimination of harmonics in power system using modified FFT with variable learning of Adaline This paper presents a new technique for harmonic detection, estimation and elimination in power systems. The approach expresses the input signal in the form of a Fourier Transformation and adjusts the Fourier coefficients using Linear Adaptive Filters (Adaline). Two of the existing approaches, using conventional Fast Fourier transformation (FFT) and FFT with modified W-H learning, have been discussed with their merits and demerits. A new approach has been proposed using a network comprising three different Adalines and involving a variable learning rate. This approach is able to mitigate the shortcomings of the existing techniques and is time efficient as well. The detailed architecture for the proposed approach has been discussed and the algorithm has then been tested on different test signals. The approach has been compared with the existing techniques and its superiority over these has thus been established." }, { "instance_id": "R33783xR33643", "comparison_id": "R33783", "paper_id": "R33643", "text": "Neural network based active power filter for power quality improvement In this paper, a neural network based reference current computation method for active power filter (APF) control under non-ideal mains voltage with different load conditions is proposed. The neural network controller has been designed to extract fundamental frequency components from non-sinusoidal and unbalanced currents. These fundamental frequency components will be used as unit templates. The APF is realized by current-controlled IGBT-based PWM-VSI bridges with a common dc bus capacitor.
A current controller based on pulse width modulation with a suitable carrier frequency is used to generate the firing pulses. The proposed neural network reference generation approach has been evaluated in simulations. The results show excellent performance, as well as robustness and usefulness of the system." }, { "instance_id": "R33783xR33703", "comparison_id": "R33783", "paper_id": "R33703", "text": "Power System Harmonic Estimation Using Neural Networks The increasing application of power electronic facilities in the industrial environment has led to serious concerns about source line pollution and the resulting impacts on system equipment and power distribution systems. Consequently, active power filters (APFs) have been used as an effective way to compensate harmonic components in nonlinear loads. Obviously, fast and precise harmonic detection is one of the key factors in designing APFs. Various digital signal analysis techniques are being used for the measurement and estimation of power system harmonics. Presently, neural networks have received special attention from researchers because of their simplicity, learning and generalization ability. This paper presents a neural network-based algorithm that can identify both the magnitude and phase of harmonics. Experimental results have testified to its performance with a variety of generated harmonics and interharmonics. Comparison with the conventional DFT method is also presented to demonstrate its very fast response and high accuracy." }, { "instance_id": "R33783xR33677", "comparison_id": "R33783", "paper_id": "R33677", "text": "Voltage source inverter control with Adaline approach for the compensation of harmonic currents in electrical power systems This paper presents a complete strategy for harmonics identification and neural control of a three-phase voltage inverter used for active power filtering. Based on the use of neural techniques, this compensation approach is carried out in two stages.
The first stage extracts the harmonic currents with the diphase currents method by using Adaline neural networks. The second stage injects the harmonic currents in the electrical supply network; it uses a control based on a PI-neural controller. Our approach is automatically able to adapt itself to any change of the non-linear load and thus to the harmonic currents generated. Furthermore, comparisons with hysteresis control and results of application of a conventional low-pass filter for harmonics identification are presented. The proposed neural compensation approach has been evaluated in simulations. The results show excellent behaviours and performances, as well as robustness and usefulness." }, { "instance_id": "R33783xR33719", "comparison_id": "R33783", "paper_id": "R33719", "text": "Synchronous Reference Frame Based Active Filter Current Reference Generation Using Neural Networks The increased use of nonlinear devices in industry has resulted in direct increase of harmonic distortion in the industrial power system in recent years. The significant harmonics are almost always 5th, 7th, 11th and the 13th with the 5th harmonic being the largest in most instances. Active filter systems have been proposed to mitigate harmonic currents of the industrial loads. The most important requirement for any active filter is the precise detection of the individual harmonic component's amplitude and phase. Fourier transform based techniques provide an excellent method for individual harmonic isolation, but it requires a minimum of two cycles of data for the analysis, does not perform well in the presence of subharmonics which are not integral multiples of the fundamental frequency and most importantly introduces phase shifts. To overcome these difficulties, this paper proposes a multilayer perceptron neural network trained with back-propagation training algorithm to identify the harmonic characteristics of the nonlinear load. 
The operation principle of the synchronous-reference-frame-based harmonic isolation is discussed. This proposed method is applied to a thyristor-controlled DC drive to obtain the accurate amplitude and phase of the dominant harmonics. This technique can be integrated with any active filter control algorithm for reference generation." }, { "instance_id": "R33783xR33683", "comparison_id": "R33783", "paper_id": "R33683", "text": "An ANN based Digital Controller for a Three-phase Active Power Filter Three-phase shunt active power filters are designed to effectively compensate for the current harmonics and reactive power requirements in a three-phase system with harmonic loads. An ANN (artificial neural network) based controller selects the amount of harmonic current injection based on the percentage of harmonic distortion present in the source current and also on the reactive power requirement of the load. The selection is done by the ANN with the help of a properly tuned knowledge base." }, { "instance_id": "R33783xR33691", "comparison_id": "R33783", "paper_id": "R33691", "text": "Intelligent Control and Application of All-Function Active Power Filter An all-function active power filter, which can compensate harmonics, inter-harmonics, asymmetries, fundamental sequence reactive powers and so on, is introduced in this paper. Its basic concept, main features and working process are briefly explained. To solve the difficult control problem of the all-function active power filter, its intelligent control strategy and working process based on a BP neural network are expounded. First, the basic theory of the BP neural network for controlling the all-function active power filter is analyzed. Then, the basic configuration and working process of an all-function active power filter based on the BP neural network control strategy are detailed. Finally, simulations and simulation results are given.
Theoretical analysis and the simulation results show that the intelligent control strategy based on the BP neural network gives the all-function active power filter full functionality and excellent working characteristics." }, { "instance_id": "R33783xR33679", "comparison_id": "R33783", "paper_id": "R33679", "text": "Artificial neural networks for harmonic currents identification in active power filtering schemes This paper presents a new harmonic current identification method, called the neural synchronous method, based on artificial neural networks. Its theoretical aspect relies on a new decomposition of the load current signals. Adaline neural networks are used in order to learn this decomposition on-line; the fundamental currents can therefore be estimated at each sampling time. The fundamental currents are then synchronized with the direct component of the voltage obtained by a PLL (phase locked loop). The harmonic currents are deduced and re-injected in phase opposition into the power distribution system through an active power filtering scheme. This harmonic current identification method is compared to other similar methods by simulation results." }, { "instance_id": "R33783xR33689", "comparison_id": "R33783", "paper_id": "R33689", "text": "Active Power Filter of Three-Phase Based on Neural Network This paper presents a novel control design of a shunt active power filter to compensate reactive power and reduce unwanted harmonics. A shunt active filter is realized employing a three-phase voltage source inverter (VSI) and a control circuit. The shunt active filter acts as a current source, which is connected in parallel with a nonlinear load and controlled to generate the required compensation currents. A control circuit using a neural network controller is proposed. One adaptive neural network (ANN) controller is designed to estimate the harmonic components of the distorted load current and supply voltage [1].
A power factor correction function is incorporated in the shunt active filter to achieve a power factor that is near unity. Different cases are considered and then simulated in order to show the validity of the active power filter with neural network control." }, { "instance_id": "R33783xR33695", "comparison_id": "R33783", "paper_id": "R33695", "text": "Harmonic and reactive power compensation with artificial neural network technology This paper presents a new method for harmonic and reactive power compensation with an artificial neural network (ANN) controller and a new control algorithm for an active power filter (APF) to eliminate harmonics and compensate the reactive power of a three-phase thyristor bridge rectifier. The artificial neural network (ANN) current controller is adapted to the active power filter (APF), and a current controller based on a modified hysteresis current controller is used to generate the firing pulses. All of the studies have been carried out using detailed digital dynamic simulation with the MATLAB Simulink Power System Toolbox. The results of the simulation study of the new APF control technique and algorithm presented in this paper are found quite satisfactory for eliminating harmonics and reactive power components from the utility current." }, { "instance_id": "R33783xR33639", "comparison_id": "R33783", "paper_id": "R33639", "text": "Harmonic estimation using Modified ADALINE algorithm with Time-Variant Widrow \u2014 Hoff (TVWH) learning rule Algorithms are well developed for adaptive estimation of selected harmonic components in Digital Signal Processing. In power electronic applications, objectives like fast system response are of primary importance. An effective active power filtering method for the estimation of instantaneous harmonic components is presented in this paper. A signal processing technique using a Modified Adaptive Neural Network (Modified ANN) algorithm has been proposed for harmonic estimation.
Its primary function is to estimate harmonic components from a selected signal (current or voltage) and it requires only the knowledge of the frequency of the component to be estimated. This method can be applied to a wide range of equipment. The validity of the proposed method for estimating voltage harmonics is proved with a dc/ac inverter as an example and the simulation results are compared with the ADALINE algorithm to illustrate its effectiveness." }, { "instance_id": "R33783xR33667", "comparison_id": "R33783", "paper_id": "R33667", "text": "Neuron adaptive control of a shunt active power filter and its realization of analog circuit There have been a number of harmonic current detecting methods for the active power filter (APF), including the filtration approach by fixed frequency filters, the composition method of the imaginary and the real power based on the instantaneous reactive power theory, and so on. In this paper, first, according to the adaptive noise canceling technology (ANCT) in signal processing, an adaptive detecting approach of harmonic current based on a neuron is presented. Next, on the basis of the configuration and learning algorithm for the developed system, the realization scheme of an analog circuit of the system is discussed. Third, using PSIM software, computer simulation studies of the circuit are carried out. Finally, the performance and feasibility of the approach are tested and verified by the simulation results." }, { "instance_id": "R33783xR33707", "comparison_id": "R33783", "paper_id": "R33707", "text": "Harmonic Detection Based on Artificial Neural Networks for Current Distortion Compensation In this paper a method for the determination of part of the current harmonic components, for selective harmonic compensation by a single-phase active power filter, is presented. The non-linear load is composed of an AC controller with a variable resistive load. The first six components are identified through an artificial neural network.
The effectiveness of the proposed method and its application in single-phase active power filters with selective harmonic compensation are verified. Simulation results are presented to validate the proposed approach." }, { "instance_id": "R33783xR33637", "comparison_id": "R33783", "paper_id": "R33637", "text": "Harmonic content extraction in converter waveforms using radial basis function neural networks (RBFNN) and p-q power theory In this paper a radial basis function neural network (RBFNN) is used to extract total harmonics in converter waveforms. The methodology is based on p-q (real power-imaginary power) theory. The converter waveforms are analyzed and the harmonics over a wide operating range are extracted. The proposed RBFNN filtering training algorithms are based on an efficient training method called the hybrid learning method \u2014 computation is systematic. The method requires a small network, is very robust, and the proposed algorithms are very effective. The analysis is verified using MATLAB/SIMULINK simulation." }, { "instance_id": "R33783xR33669", "comparison_id": "R33783", "paper_id": "R33669", "text": "Harmonic elimination and reactive power compensation through a shunt active power filter by twin neural networks with predictive and adaptive properties A method for controlling an active power filter using an Artificial Neural Network (ANN) is presented in this paper. This paper applies an ANN-based predictive and adaptive reference generation technique. The predictive scheme extracts the information of the fundamental component through an ANN that replaces a low-pass filter. This ANN-based low-pass filter is trained offline with a large number of training sets to predict the fundamental magnitude of the load current. This predictive reference generation technique works well for a clean source voltage. However, the performance deteriorates in case of distortion in the source voltage and also with noise.
To overcome this, an Adaline-based ANN is applied after the operation of the predictive algorithm. It has been shown that the combined predictive-adaptive approach offers better performance. Simulation results and experimental results are presented to confirm the usefulness of the proposed technique." }, { "instance_id": "R33783xR33641", "comparison_id": "R33783", "paper_id": "R33641", "text": "A Comparative Experimental Study of Neural and Conventional Controllers for an Active Power Filter This paper considers the benefits of neural controllers over model-based current regulators to supervise the current generation of a shunt active power filter. The task consists in generating appropriate compensation currents with a system composed of a voltage source inverter and a low-pass filter. These currents cancel the harmonic terms introduced by nonlinear loads in a power distribution grid. The performances of conventional controllers such as PI and resonant current regulators are compared with those of neural controllers. While conventional regulators present some advantages in terms of engineering specifications, their tuning remains difficult and their design relies on a rough linearization of the system. Their performances are acceptable without perturbations. However, fast changes of nonlinear loads lead to different operating points of the system. Furthermore, the inverter's nonlinearities and low-pass filter parameters have to be considered for generating precise currents. Two neural approaches have therefore been proposed, one which estimates the input-output relationship of the system, and one which relies on a state-space representation of the system. These approaches combine learning capabilities with a priori knowledge of the system. The benefits of the neural approaches are discussed and illustrated by simulations and by experimental tests with real-time implementations on a digital signal processing board.
}, { "instance_id": "R33783xR33687", "comparison_id": "R33783", "paper_id": "R33687", "text": "Artificial Neural Networks to Control an Inverter in a Harmonic Distortion Compensation Scheme In this paper, two efficient and reliable neural approaches to control an inverter are developed. The objective is to improve the compensation performance of a conventional active power filter (APF) with a homogeneous neural structure allowing an efficient hardware implementation. The first control approach is based on a neural PI regulator. This technique uses an Adaline to determine the PI parameters. The second control approach is a direct inverse control method. It uses two multilayer neural networks with the backpropagation learning in order to identify the Jacobian of the process and to control the inverter. The originality lies in the error signal used for the weight adaption in the first approach, and in the choice of the inputs of the neural networks in the second approach. The performance of the two methods is evaluated through simulation and experimental results and demonstrates the effectiveness of the proposed neural approaches." }, { "instance_id": "R33783xR33673", "comparison_id": "R33783", "paper_id": "R33673", "text": "Neural Network and Bandless Hysteresis Approach to Control Switched Capacitor Active Power Filter for Reduction of Harmonics This paper proposes a combination of neural network and a bandless hysteresis controller, for a switched capacitor active power filter (SCAPF), to improve line power factor and to reduce line current harmonics. The proposed active power filter controller forces the supply current to be sinusoidal, in phase with line voltage, and has low current harmonics. Two main controls are proposed for it: neural network detection of harmonics and bandless digital hysteresis switching algorithm. A mathematical algorithm and a suitable learning rate determine the filter's optimal operation. 
A digital signal controller (TMS320F2812) verifies the proposed SCAPF, implementing the neural network and bandless hysteresis algorithms. A laboratory SCAPF system is built to test its feasibility. Simulation and experimental results are provided to verify the performance of the proposed SCAPF system." }, { "instance_id": "R33783xR33635", "comparison_id": "R33783", "paper_id": "R33635", "text": "A Shunt Active Power Filter With Enhanced Performance Using ANN-Based Predictive and Adaptive Controllers This paper attempts to improve the dynamic performance of a shunt-type active power filter. The predictive and adaptive properties of artificial neural networks (ANNs) are used for fast estimation of the compensating current. The dynamics of the dc-link voltage is utilized in a predictive controller to generate the first estimate, followed by convergence of the algorithm by an adaptive ANN (Adaline) based network. Weights in the Adaline are tuned to minimize the total harmonic distortion of the source current. Extensive simulations and experiments confirm the validity of the proposed scheme for all kinds of load (balanced and unbalanced) for a three-phase three-wire system." }, { "instance_id": "R33783xR33723", "comparison_id": "R33783", "paper_id": "R33723", "text": "Artificial Neural Networks as Harmonic Detectors To achieve harmonic mitigation in electrical circuits, it is very important to estimate or extract the compensating harmonic references. Any failure in this last procedure will cause failure in the harmonic elimination.
In fact, many harmonic detection or estimation methods have been presented in the literature; in this paper we focus on a new idea: applying artificial intelligence methods, namely artificial neural networks, to harmonic detection." }, { "instance_id": "R33851xR33844", "comparison_id": "R33851", "paper_id": "R33844", "text": "Comparative analysis of copy number variation detection methods and database construction Background Array-based detection of copy number variations (CNVs) is widely used for identifying disease-specific genetic variations. However, the accuracy of CNV detection is not sufficient and results differ depending on the detection programs used and their parameters. In this study, we evaluated five widely used CNV detection programs, Birdsuite (mainly consisting of the Birdseye and Canary modules), Birdseye (part of Birdsuite), PennCNV, CGHseg, and DNAcopy from the viewpoint of performance on the Affymetrix platform using HapMap data and other experimental data. Furthermore, we identified CNVs of 180 healthy Japanese individuals using parameters that showed the best performance in the HapMap data and investigated their characteristics. Results The results indicate that Hidden Markov model-based programs PennCNV and Birdseye (part of Birdsuite), or Birdsuite show better detection performance than other programs when the high reproducibility rates of the same individuals and the low Mendelian inconsistencies are considered. Furthermore, when rates of overlap with other experimental results were taken into account, Birdsuite showed the best performance from the viewpoint of sensitivity but was expected to include many false negatives and some false positives.
The results of 180 healthy Japanese demonstrate that the ratio containing repeat sequences, not only segmental repeats but also long interspersed nuclear element (LINE) sequences both in the start and end regions of the CNVs, is higher in CNVs that are commonly detected among multiple individuals than that in randomly selected regions, and the conservation score based on primates is lower in these regions than in randomly selected regions. Similar tendencies were observed in HapMap data and other experimental data. Conclusions Our results suggest that not only segmental repeats but also interspersed repeats, especially LINE sequences, are deeply involved in CNVs, particularly in common CNV formations. The detected CNVs are stored in the CNV repository database newly constructed by the \"Japanese integrated database project\" for sharing data among researchers. http://gwas.lifesciencedb.jp/cgi-bin/cnvdb/cnv_top.cgi" }, { "instance_id": "R33851xR33819", "comparison_id": "R33851", "paper_id": "R33819", "text": "The Effect of Algorithms on Copy Number Variant Detection Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery.
Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed." }, { "instance_id": "R33851xR33808", "comparison_id": "R33851", "paper_id": "R33808", "text": "Comparing CNV detection methods for SNP arrays Data from whole genome association studies can now be used for dual purposes, genotyping and copy number detection. In this review we discuss some of the methods for using SNP data to detect copy number events. We examine a number of algorithms designed to detect copy number changes through the use of signal-intensity data and consider methods to evaluate the changes found. We describe the use of several statistical models in copy number detection in germline samples. We also present a comparison of data using these methods to assess accuracy of prediction and detection of changes in copy number." 
}, { "instance_id": "R33851xR33802", "comparison_id": "R33851", "paper_id": "R33802", "text": "Assessment of algorithms for high throughput detection of genomic copy number variation in oligonucleotide microarray data Abstract Background Genomic deletions and duplications are important in the pathogenesis of diseases, such as cancer and mental retardation, and have recently been shown to occur frequently in unaffected individuals as polymorphisms. Affymetrix GeneChip whole genome sampling analysis (WGSA) combined with 100 K single nucleotide polymorphism (SNP) genotyping arrays is one of several microarray-based approaches that are now being used to detect such structural genomic changes. The popularity of this technology and its associated open source data format have resulted in the development of an increasing number of software packages for the analysis of copy number changes using these SNP arrays. Results We evaluated four publicly available software packages for high throughput copy number analysis using synthetic and empirical 100 K SNP array data sets, the latter obtained from 107 mental retardation (MR) patients and their unaffected parents and siblings. We evaluated the software with regards to overall suitability for high-throughput 100 K SNP array data analysis, as well as effectiveness of normalization, scaling with various reference sets and feature extraction, as well as true and false positive rates of genomic copy number variant (CNV) detection. Conclusion We observed considerable variation among the numbers and types of candidate CNVs detected by different analysis approaches, and found that multiple programs were needed to find all real aberrations in our test set. The frequency of false positive deletions was substantial, but could be greatly reduced by using the SNP genotype information to confirm loss of heterozygosity." 
}, { "instance_id": "R33851xR33815", "comparison_id": "R33851", "paper_id": "R33815", "text": "Comparative analyses of seven algorithms for copy number variant identification from single nucleotide polymorphism arrays Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs." 
}, { "instance_id": "R33851xR33839", "comparison_id": "R33851", "paper_id": "R33839", "text": "Comprehensive assessment of array-based platforms and calling algorithms for detection of copy number variants We have systematically compared copy number variant (CNV) detection on eleven microarrays to evaluate data quality and CNV calling, reproducibility, concordance across array platforms and laboratory sites, breakpoint accuracy and analysis tool variability. Different analytic tools applied to the same raw data typically yield CNV calls with <50% concordance. Moreover, reproducibility in replicate experiments is <70% for most platforms. Nevertheless, these findings should not preclude detection of large CNVs for clinical diagnostic purposes because large CNVs with poor reproducibility are found primarily in complex genomic regions and would typically be removed by standard clinical data curation. The striking differences between CNV calls from different platforms and analytic tools highlight the importance of careful assessment of experimental design in discovery and association studies and of strict data curation and filtering in diagnostics. The CNV resource presented here allows independent data evaluation and provides a means to benchmark new algorithms." }, { "instance_id": "R33851xR33824", "comparison_id": "R33851", "paper_id": "R33824", "text": "Accuracy of CNV Detection from GWAS Data Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites\u2014Birdsuite, Partek, HelixTree, and PennCNV-Affy\u2014in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. 
The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two \u201cgold standards,\u201d the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs, especially common ones, need substantial improvement, and a \u201cgold standard\u201d for detection of CNVs remains to be established." }, { "instance_id": "R33953xR33933", "comparison_id": "R33953", "paper_id": "R33933", "text": "Automatic Test Data Generation Based on Ant Colony Optimization Software testing is a crucial measure used to assure the quality of software. Path testing can detect bugs earlier because it achieves higher error coverage.
This paper presents a model for generating test data based on an improved ant colony optimization and path coverage criteria. Experiments show that the algorithm performs better than two other algorithms and notably improves the efficiency of test data generation." }, { "instance_id": "R33953xR33877", "comparison_id": "R33953", "paper_id": "R33877", "text": "IntelligenTester - Software Test Sequence Optimization Using Graph Based Intelligent Search Agent Due to the abundantly available imaging technologies, manipulation of digital images has become a serious problem nowadays, in various fields like medical imaging, digital forensics, journalism, scientific publications, etc. In this paper, we concentrate on detection of a specific category of digital image forgery known as region duplication forgery or copy-move forgery, which is done by copying a block of an image and pasting it on to some other block of the same image. We present a novel approach based on the application of wavelet transform that detects and localizes such forgeries. Our technique works by first applying wavelet transform to the input image to yield a reduced dimension representation. We then perform exhaustive search to identify the similar blocks in the image by mapping them to log-polar coordinates and using phase correlation as the similarity criterion. This is done only once at the lowest resolution of the wavelet transform. Only the matched blocks are carried for comparison to the next level. This drastically reduces the time needed for the detection process. This approach works even if the pasted region has undergone transformations like translation and rotation." }, { "instance_id": "R33953xR33939", "comparison_id": "R33953", "paper_id": "R33939", "text": "An Optimization Method of Test Suite in Regression Test Model Original test cases should be reused and new ones should be supplemented in regression test.
For optimizing the test suite, a pair-wise combination test case generating method is adopted in this paper, which was realized by an ant colony algorithm with monolepsis diagnostic. When the original and newly generated test suites were united, an improved greedy arithmetic was adopted to reduce the test suite and a random off-trap strategy was introduced. The test case study result shows the method can reduce the scale of the test suite effectively and decrease the regression test cost." }, { "instance_id": "R33953xR33951", "comparison_id": "R33953", "paper_id": "R33951", "text": "Test Case Prioritization Using Ant Colony optimization,\u201d Association in Computing Machinery Regression testing is primarily a maintenance activity that is performed frequently to ensure the validity of the modified software. In such cases, due to time and cost constraints, the entire test suite cannot be run. Thus, it becomes essential to prioritize the tests in order to cover maximum faults in minimum time. In this paper, ant colony optimization is used, which is a new way to solve time constraint prioritization problem. This paper presents the regression test prioritization technique to reorder test suites in time constraint environment along with an algorithm that implements the technique." }, { "instance_id": "R33953xR33869", "comparison_id": "R33953", "paper_id": "R33869", "text": "An Ant Colony Optimization Approach to Test Sequence Generation for Statebased Software Testing Properly generated test suites may not only locate the defects in software systems, but also help in reducing the high cost associated with software testing. It is often desired that test sequences in a test suite can be automatically generated to achieve required test coverage. However, automatic test sequence generation remains a major problem in software testing. This paper proposes an ant colony optimization approach to automatic test sequence generation for state-based software testing.
The proposed approach can directly use UML artifacts to automatically generate test sequences to achieve required test coverage." }, { "instance_id": "R33953xR33899", "comparison_id": "R33953", "paper_id": "R33899", "text": "Building Prioritized Pairwise Interaction Test Suites with Ant Colony Optimization Interaction testing offers a stable cost-benefit ratio in identifying faults. But in many testing scenarios, the entire test suite cannot be fully executed due to limited time or cost. In these situations, it is essential to take the importance of interactions into account and prioritize these tests. To tackle this issue, the biased covering array is proposed and the Weighted Density Algorithm (WDA) is developed. To find a better solution, in this paper we adopt ant colony optimization (ACO) to build this prioritized pairwise interaction test suite (PITS). In our research, we propose four concrete test generation algorithms based on Ant System, Ant System with Elitist, Ant Colony System and Max-Min Ant System respectively. We also implement these algorithms and apply them to two typical inputs and report experimental results. The results show the effectiveness of these algorithms." }, { "instance_id": "R33953xR33888", "comparison_id": "R33953", "paper_id": "R33888", "text": "A Non-Pheromone based Intelligent Swarm Optimization Technique in Software Test Suite Optimization In our paper, we applied a non-pheromone based intelligent swarm optimization technique namely artificial bee colony optimization (ABC) for test suite optimization. Our approach is a population based algorithm, in which each test case represents a possible solution in the optimization problem, and the happiness value, a heuristic introduced for each test case, corresponds to the quality or fitness of the associated solution.
The functionalities of three groups of bees are extended to three agents, namely Search Agent, Selector Agent and Optimizer Agent, to select efficient test cases among a near infinite number of test cases. Because of the parallel behavior of these agents, the solution generation becomes faster and makes the approach an efficient one. Since the test adequacy criterion we used is path coverage, the quality of the test cases is improved during each iteration to cover the paths in the software. Finally, we compared our approach with Ant Colony Optimization (ACO), a pheromone based optimization technique in test suite optimization, and concluded that the ABC based approach has several advantages over ACO based optimization." }, { "instance_id": "R33953xR33873", "comparison_id": "R33953", "paper_id": "R33873", "text": "Automatic Mutation Test Input Data Generation via Ant Colony Fault-based testing is often advocated to overcome limitations of other testing approaches; however it is also recognized as being expensive. On the other hand, evolutionary algorithms have been proved suitable for reducing the cost of data generation in the context of coverage based testing. In this paper, we propose a new evolutionary approach based on ant colony optimization for automatic test input data generation in the context of mutation testing to reduce the cost of such a test strategy. In our approach the ant colony optimization algorithm is enhanced by a probability density estimation technique. We compare our proposal with other evolutionary algorithms, e.g., Genetic Algorithm. Our preliminary results on JAVA testbeds show that our approach performed significantly better than other alternatives." }, { "instance_id": "R33953xR33931", "comparison_id": "R33953", "paper_id": "R33931", "text": "Generation of test data using Meta heuristic approach Software testing is of huge importance to the development of any software. The prime focus is to minimize the expenses on the testing.
In software testing the major problem is generation of test data. Several metaheuristic approaches in this field have become very popular. The aim is to generate the optimum set of test data, which would still not compromise on exhaustive testing of software. Our objective is to generate such efficient test data using a genetic algorithm and ant colony optimization for a given software system. We have also compared the two approaches of software testing to determine which of these is effective for the generation of test data and constraints if any." }, { "instance_id": "R33953xR33912", "comparison_id": "R33953", "paper_id": "R33912", "text": "Optimized Test Sequence Generation from Usage Models using Ant Colony Optimization Software Testing is the process of testing the software in order to ensure that it is free of errors and produces the desired outputs in any given situation. Model based testing is an approach in which software is viewed as a set of states. A usage model describes software on the basis of its statistical usage data. One of the major problems faced in such an approach is the generation of optimal sets of test sequences. The model discussed in this paper is a Markov chain based usage model. The analytical operations and results associated with Markov chains make them an appropriate choice for checking the feasibility of test sequences while they are being generated. The statistical data about the estimated usage has been used to build a stochastic model of the software under test. This paper proposes a technique to generate optimized test sequences from a Markov chain based usage model. The proposed technique uses ant colony optimization as its basis and also incorporates factors like cost and criticality of various states in the model. It further takes into consideration the average number of visits to any state and the trade-off between cost considerations and optimality of the test coverage."
}, { "instance_id": "R33953xR33903", "comparison_id": "R33953", "paper_id": "R33903", "text": "An ant colony optimization approach to test sequence generation for control flow based software testing For locating the defects in software system and reducing the high cost, it\u2019s necessary to generate a proper test suite that gives desired automatically generated test sequence. However automatic test sequence generation remains a major problem in software testing. This paper proposes an Ant Colony Optimization approach to automatic test sequence generation for control flow based software testing. The proposed approach can directly use control flow graph to automatically generate test sequences to achieve required test coverage." }, { "instance_id": "R33953xR33894", "comparison_id": "R33953", "paper_id": "R33894", "text": "Variable Strength Interaction Testing with an Ant Colony System Approach Interaction testing (also called combinatorial testing) is an cost-effective test generation technique in software testing. Most research work focuses on finding effective approaches to build optimal t-way interaction test suites. However, the strength of different factor sets may not be consistent due to the practical test requirements. To solve this problem, a variable strength combinatorial object and several approaches based on it have been proposed. These approaches include simulated annealing (SA) and greedy algorithms. SA starts with a large randomly generated test suite and then uses a binary search process to find the optimal solution. Although this approach often generates the minimal test suites, it is time consuming. Greedy algorithms avoid this shortcoming but the size of generated test suites is usually not as small as SA. In this paper, we propose a novel approach to generate variable strength interaction test suites (VSITs). In our approach, we adopt a one-test-at-a-time strategy to build final test suites. 
To generate a single test, we adopt ant colony system (ACS) strategy, an effective variant of ant colony optimization (ACO). In order to successfully adopt ACS, we formulize the solution space, the cost function and several heuristic settings in this framework. We also apply our approach to some typical inputs. Experimental results show the effectiveness of our approach especially compared to greedy algorithms and several existing tools." }, { "instance_id": "R33953xR33907", "comparison_id": "R33953", "paper_id": "R33907", "text": "An approach of optimal path generation using ant colony optimization Software Testing is one of the indispensable parts of the software development lifecycle and structural testing is one of the most widely used testing paradigms to test various software. Structural testing relies on code path identification, which in turn leads to identification of effective paths. Aim of the current paper is to present a simple and novel algorithm with the help of an ant colony optimization, for the optimal path identification by using the basic property and behavior of the ants. This novel approach uses certain set of rules to find out all the effective/optimal paths via ant colony optimization (ACO) principle. The method concentrates on generation of paths, equal to the cyclomatic complexity. This algorithm guarantees full path coverage." }, { "instance_id": "R33953xR33884", "comparison_id": "R33953", "paper_id": "R33884", "text": "Generating Method of Pair-wise Covering Test Data Based on ACO Optimizing test suite can reduce the cost of time and resources, and improve the efficiency of regression test when test cases are generated. The generation of pair-wise covering test data is an NP question, which can be solved by heuristic method, greedy arithmetic and algebra method at present. In this paper, ant colony arithmetic is adopted, which is a new way to solve the pair-wise test data generating question. 
It can generate fewer test cases which can cover more pair combinations, and can solve problems quickly. The method achieves the goal of optimization in the regression test process. The result shows that the method is feasible." }, { "instance_id": "R34099xR34039", "comparison_id": "R34099", "paper_id": "R34039", "text": "Stable isotope (\u03b413C and \u03b418O) and Sr/Ca composition of otoliths as proxies for environmental salinity experienced by an estuarine fish The ability to identify past patterns of salinity habitat use in coastal fishes is viewed as a critical development in evaluating nursery habitats and their role in population dynamics. The utility of otolith tracers (\u03b413C, \u03b418O, and Sr/Ca) as proxies for environmental salinity was tested for the estuarine-dependent juvenile white perch Morone americana. Analysis of water samples revealed a positive relationship between the salinity gradient and \u03b418O, \u03b413C, and Sr/Ca values of water in the Patuxent River estuary. Similarly, analysis of otolith material from young-of-the-year white perch (2001, 2004, 2005) revealed a positive relationship between salinity and otolith \u03b413C, \u03b418O, and Sr/Ca values. In classifying fish to their known salinity habitat, \u03b418O and Sr/Ca were moderately accurate tracers (53 to 79% and 75% correct classification, respectively), and \u03b413C provided near complete discrimination between habitats (93 to 100% correct classification). Further, \u03b413C exhibited the lowest inter-annual variability and the largest range of response across salinity habitats. Thus, across estuaries, it is expected that resolution and reliability of salinity histories of juvenile white perch will be improved through the application of stable isotopes as tracers of salinity history."
}, { "instance_id": "R34099xR34001", "comparison_id": "R34099", "paper_id": "R34001", "text": "Use of otolith Sr:Ca ratios to study the riverine migratory behaviors of Japanese eel Anguilla japonica To understand the migratory behavior and habitat use of the Japanese eel Anguilla japonica in the Kaoping River, SW Taiwan, the temporal changes of strontium (Sr) and calcium (Ca) contents in otoliths of the eels in combination with age data were examined by wavelength dispersive X-ray spectrometry with an electron probe microanalyzer. Ages of the eel were determined by the annulus mark in their otolith. The pattern of the Sr:Ca ratios in the otoliths, before the elver stage, was similar among all specimens. Post-elver stage Sr:Ca ratios indicated that the eels experienced different salinity histories in their growth phase yellow stage. The mean (\u00b1SD) Sr:Ca ratios in otoliths beyond elver check of the 6 yellow eels from the freshwater middle reach were 1.8 \u00b1 0.2 x 10 -3 with a maximum value of 3.73 x 10 -3 . Sr:Ca ratios of less than 4 x 10-3 were used to discriminate the freshwater from seawater resident eels. Eels from the lower reach of the river were classified into 3 types: (1) freshwater contingents, Sr:Ca ratio <4 x 10 -3 , constituted 14 % of the eels examined; (2) seawater contingent, Sr:Ca ratio 5.1 \u00b1 1.1 x 10-3 (5%); and (3) estuarine contingent, Sr:Ca ratios ranged from 0 to 10 x 10 -3 , with migration between freshwater and seawater (81 %). The frequency distribution of the 3 contingents differed between yellow and silver eel stages (0.01 < p < 0.05 for each case) and changed with age of the eel, indicating that most of the eels stayed in the estuary for the first year then migrated to the freshwater until 6 yr old. 
The eel population in the river system was dominated by the estuarine contingent, probably because the estuarine environment was more stable and had a larger carrying capacity than the freshwater middle reach did, and also due to a preference for brackish water by the growth-phase, yellow eel." }, { "instance_id": "R34099xR33996", "comparison_id": "R34099", "paper_id": "R33996", "text": "Identification and growth rates comparison of divergent migratory contingents of Japanese eel (Anguilla japonica) The strontium (Sr) and calcium (Ca) concentrations in the otoliths of the Japanese eels Anguilla japonica collected from China, Japan and Taiwan were measured by electron probe micro-analyzer. The Sr/Ca ratios indicated that the eels beyond elver stage can be classified into three types of migratory contingents. Type 1 (seawater), the Sr/Ca ratios from approximately 150 Am from primordium to edge of the otolith maintained at the level of approximately 4\u201310x, indicating that the eel after elver stage stayed in sea water until the silver eel stage. Type 2 (freshwater), the ratios were lower than 4x, indicating that the eel stayed in freshwater from elver stage to the silver eel stags. Type 3 (estuarine), the ratios fluctuated between those of Types 1 and 2, indicating that eel migrated between freshwater and sea water before the silver stage. The estuarine contingents constituted the majority of the eel population and grew faster than the freshwater contingents. D 2003 Elsevier Science B.V. All rights reserved." }, { "instance_id": "R34099xR34097", "comparison_id": "R34099", "paper_id": "R34097", "text": "Estimating contemporary early life-history dispersal in an estuarine fish: integrating molecular and otolith elemental approaches Dispersal during the early life history of the anadromous rainbow smelt, Osmerus mordax, was examined using assignment testing and mixture analysis of multilocus genotypes and otolith elemental composition. 
Six spawning areas and associated estuarine nurseries were sampled throughout southeastern Newfoundland. Samples of adults and juveniles isolated by > 25 km displayed moderate genetic differentiation (FST ~ 0.05), whereas nearby (< 25 km) spawning and nursery samples displayed low differentiation (FST < 0.01). Self\u2010assignment and mixture analysis of adult spawning samples supported the hypothesis of independence of isolated spawning locations (> 80% self\u2010assignment) with nearby runs self\u2010assigning at rates between 50 % and 70%. Assignment and mixture analysis of juveniles using adult baselines indicated high local recruitment at several locations (70\u201390%). Nearby (< 25 km) estuaries at the head of St Mary's Bay showed mixtures of individuals (i.e. 20\u201340% assignment to adjacent spawning location). Laser ablation inductively coupled mass spectrometry transects across otoliths of spawning adults of unknown dispersal history were used to estimate dispersal among estuaries across the first year of life. Single\u2010element trends and multivariate discriminant function analysis (Sr:Ca and Ba:Ca) classified the majority of samples as estuarine suggesting limited movement between estuaries (< 0.5%). The mixtures of juveniles evident in the genetic data at nearby sites and a lack of evidence of straying in the otolith data support a hypothesis of selective mortality of immigrants. If indeed selective mortality of immigrants reduces the survivorship of dispersers, estimates of dispersal in marine environments that neglect survival may significantly overestimate gene flow." 
}, { "instance_id": "R34099xR34018", "comparison_id": "R34099", "paper_id": "R34018", "text": "Variation in otolith strontium and calcium ratios as an indicator of life-history strategies of freshwater fish species within a brackish water system A possible life-history strategy of freshwater fish in brackish waters would be one of migrations between higher salinity rearing/feeding areas and low salinity spawning areas. This would allow the more euryhaline adults and sub-adults a greater foraging range, while increasing the chance of survival of highly stenohaline eggs and larvae. We tested the feasibility of using the variation in otolith strontium/calcium ratios (Sr/Ca) as a method to investigate migrations of freshwater species living under oligo-mesohaline conditions. Zander (Stizostedion lucioperca L.) and common bream (Abramis brama L.) adults were collected in the brackish Kiel Canal and its largest free-flowing freshwater tributary, the Haaler Au. Furthermore, reference fish were obtained from a closed freshwater lake. Otoliths were thin sectioned and polished. Strontium and calcium concentrations were measured along transects between the otolith core and outside edge at approximately 10 \u03bcm intervals using a Cameca SX-50 wavelength dispersive electron microprobe. Strontium/calcium ratios found in the reference fish otoliths were fairly constant and low (common bream: x=0.0025, S.D.=0.0005 and zander: x=0.001, S.D.=0.0004). The Sr/Ca ratios in otoliths from brackish waters were more variable (common bream: x=0.0032, S.D.=0.0023 and zander: x=0.002, S.D.=0.001). Lowest Sr/Ca ratios were measured in core zones (<0.0003). Differences in patterns were observed, suggesting individual variations in migratory histories. Our results show that analysis of Sr/Ca has great potential for describing the migratory histories of freshwater fishes within brackish water systems." 
}, { "instance_id": "R34099xR34053", "comparison_id": "R34099", "paper_id": "R34053", "text": "Coexistence of anadromous and lacustrine life histories of the shirauo, Salangichthys microdon The environmental history of the shirauo, Salangichthys microdon, was examined in terms of strontium (Sr) and calcium (Ca) uptake in the otolith, by means of wavelength dispersive X-ray spectrometry on an electron microprobe. Anadromous and lacustrine types of the shirauo were found to occur sympatrically. Otolith Sr concentration or Sr : Ca ratios of anadromous shirauo fluctuated strongly along the life-history transect in accordance with the migration (habitat) pattern from sea to freshwater. In contrast, the Sr concentration or the Sr : Ca ratios of lacustrine shirauo remained at consistently low levels throughout the otolith. The higher ratios in anadromous shirauo, in the otolith region from the core to 90\u2013230 \u03bcm, corresponded to the initial sea-going period, probably reflecting the ambient salinity or the seawater\u2013freshwater gradient in Sr concentration. The findings clearly indicated that otolith Sr : Ca ratios reflected individual life histories, enabling these anadromous shirauo to be distinguished from lacustrine shirauo." }, { "instance_id": "R34099xR34060", "comparison_id": "R34099", "paper_id": "R34060", "text": "Population structure of sympatric anadromous and nonanadromous Oncorhynchus mykiss: evidence from spawning surveys and otolith microchemistry Reproductive isolation between steelhead and resident rainbow trout (Oncorhynchus mykiss) was examined in the Deschutes River, Oregon, through surveys of spawning timing and location. Otolith microchemistry was used to determine the occurrence of steelhead and resident rainbow trout progeny in the adult populations of steelhead and resident rainbow trout in the Deschutes River and in the Babine River, British Columbia.
In the 3 years studied, steelhead spawning occurred from mid March through May and resident rainbow trout spawning occurred from mid March through August. The timing of 50% spawning was 9-10 weeks earlier for steelhead than for resident rainbow trout. Spawning sites selected by steelhead were in deeper water and had larger substrate than those selected by resident rainbow trout. Maternal origin was identified by comparing Sr/Ca ratios in the primordia and freshwater growth regions of the otolith with a wavelength-dispersive electron microprobe. In the Deschutes River, only steelhead of steelhead maternal origin and resident rainbow trout of resident rainbow trout origin were observed. In the Babine River, steelhead of resident rainbow trout origin and resident rainbow trout of steelhead maternal origin were also observed. Based on these findings, we suggest that steelhead and resident rainbow trout in the Deschutes River may constitute reproductively isolated populations." }, { "instance_id": "R34099xR34050", "comparison_id": "R34099", "paper_id": "R34050", "text": "Evidence of multiple migrations between freshwater and marine habitats of Salvelinus leucomaenis. The migratory history of the white-spotted charr Salvelinus leucomaenis was examined using otolith microchemical analysis. The fish migrated between freshwater and marine environments multiple times during their life history. Some white-spotted charr used an estuarine habitat prior to smolting and repeated seaward migration within a year." }, { "instance_id": "R34099xR33982", "comparison_id": "R34099", "paper_id": "R33982", "text": "Is otolith strontium a useful scalar of life cycles in estuarine fishes? Abstract The efficiency with which estuarine habitats produce fish is poorly understood due to the complexity of life cycles. Spatial dynamics of estuarine fishes comprise retentive and dispersive behaviors which occur on seasonal and ontogenetic scales. 
Salinity is an important scalar in the spatial dynamics of estuarine fishes that may affect production and dispersal. In this paper, we review investigations that used otolith strontium (Sr) to chart estuarine movements of fishes. Based upon microprobe analysis of otolith Sr, variable patterns of estuarine ingress have been shown for bay anchovy Anchoa mitchilli, freshwater eel Anguilla spp., Japanese sea bass Lateolabrax japonicus, and Atlantic croaker Micropogonias undulatus. In anadromous fishes, striped bass Morone saxatilis, American shad Alosa sapidissima, and Arctic charr Salvelinus alpinus, otolith Sr has been used to record emigration of juveniles and adults from freshwater and oligohaline nurseries. These same species showed seasonal cycles in otolith Sr consistent with expectations on frequency of spawning migration. A critical yet seldom evaluated issue is the relative roles of salinity, temperature, ontogenetic stage, and physiological state on otolith Sr. In a review of the literature (1982\u20131997), we found that these effects were infrequently evaluated (10 of 27 species investigated). Rarer still were studies of the interaction of these effects on otolith Sr. Only a single study had calibrated a laboratory-based salinity vs. otolith Sr relationship using field data. Based upon values obtained through the literature review, we observed a positive relationship between otolith Sr and habitat salinities among freshwater, estuarine, and marine taxa." }, { "instance_id": "R34099xR34063", "comparison_id": "R34099", "paper_id": "R34063", "text": "Evidence of different habitat use by New Zealand freshwater eels Anguilla australis and A. dieffenbachii, as revealed by otolith microchemistry The apparent use of marine and freshwater habitats by Anguilla australis and A.
dieffenbachii was examined by analyzing the strontium (Sr) and calcium (Ca) concentrations in otoliths of silver eels collected from Lake Ellesmere, which is a shallow brackish-water coastal lagoon in New Zealand. The age and growth of these eels was also examined using their otolith annuli. Size and ages of females were greater than those of males for both species. Growth rates were similar among sex and species, but the highest growth rates were observed in eels that experienced saline environments. Line analyses of Sr:Ca ratios along a life-history transect in each otolith showed peaks (ca. 15 to 21 \u00d7 10\u22123 in A. australis, 14 to 20 \u00d7 10\u22123 in A. dieffenbachii) between the core and elver mark, which corresponded to the period of their leptocephalus and early glass eel stage in the ocean. Outside the elver mark, the Sr:Ca ratios indicated that eels had remained in different habitats that included freshwater (average Sr:Ca ratios, 1.8 to 2.4 \u00d7 10\u22123), areas with relatively high salinities (average Sr:Ca ratios, 3.0 to 7.4 \u00d7 10\u22123), and in some cases individuals showed clear evidence of shifts in the salinity of their environments. These shifts either indicated movements between different locations, or changes in the salinity of the lake. There were more individuals of A. australis that used areas with intermediate or high salinities, at least for a short time (85% of individuals), than A. dieffenbachii (30%). These findings suggest that these 2 southern temperate species may have the same behavioral plasticity regarding whether or not to enter freshwater or remain in marine environments, as has been recently documented in several northern temperate anguillid species." 
}, { "instance_id": "R34099xR33991", "comparison_id": "R34099", "paper_id": "R33991", "text": "Facultative catadromy of the eel Anguilla japonica between freshwater and seawater habitats To confirm the occurrence of marine residents of the Japanese eel, Anguilla japonica, which have never entered freshwater ('sea eels'), we measured Sr and Ca concentrations by X-ray electron microprobe analysis of the otoliths of 69 yellow and silver eels, collected from 10 localities in seawater and freshwater habitats around Japan, and classified their migratory histories. Two-dimensional images of the Sr concentration in the otoliths showed that all specimens generally had a high Sr core at the center of their otolith, which corresponded to a period of their leptocephalus and early glass eel stages in the ocean, but there were a variety of different patterns of Sr concentration and concentric rings outside the central core. Line analysis of Sr/Ca ratios along the radius of each otolith showed peaks (ca 15 \u00d7 10\u22123) between the core and out to about 150 \u00b5m (elver mark). The pattern change of the Sr/Ca ratio outside of 150 \u00b5m indicated 3 general categories of migratory history: 'river eels', 'estuarine eels' and 'sea eels'. These 3 categories corresponded to mean values of Sr/Ca ratios of \u2265 6.0 \u00d7 10\u22123 for sea eels, which spent most of their life in the sea and did not enter freshwater, of 2.5 to 6.0 \u00d7 10\u22123 for estuarine eels, which inhabited estuaries or switched between different habitats, and of <2.5 \u00d7 10\u22123 for river eels, which entered and remained in freshwater river habitats after arrival in the estuary. The occurrence of sea eels was 20% of all specimens examined and that of river eels, 23%, while estuarine eels were the most prevalent (57%). The occurrence of sea eels was confirmed at 4 localities in Japanese coastal waters, including offshore islands, a small bay and an estuary. 
The finding of estuarine eels as an intermediate type, which appear to frequently move between different habitats, and their presence at almost all localities, suggested that A. japonica has a flexible pattern of migration, with an ability to adapt to various habitats and salinities. Thus, anguillid eel migrations into freshwater are clearly not an obligatory migratory pathway, and this form of diadromy should be defined as facultative catadromy, with the sea eel as one of several ecophenotypes. Furthermore, this study indicates that eels which utilize the marine environment to various degrees during their juvenile growth phase may make a substantial contribution to the spawning stock each year." }, { "instance_id": "R34099xR34043", "comparison_id": "R34099", "paper_id": "R34043", "text": "Migratory history of the threespine stickleback Gasterosteus aculeatus Abstract The migratory history of two highly divergent forms (the Japan Sea and Pacific Ocean forms) of the threespine stickleback Gasterosteus aculeatus collected from Japanese brackish water (seawater) and freshwater was studied by examining strontium (Sr) and calcium (Ca) concentrations in their otoliths using wavelength dispersive X-ray spectrometry on an electron microprobe. The Sr : Ca ratios in the otoliths changed with salinity of the habitat. The otolith Sr : Ca ratios of the freshwater resident-type samples of the Pacific Ocean form showed consistently low Sr : Ca ratios, averaging 0.85\u20130.96 \u00d7 10\u22123 from the core to the edge. In contrast, the otolith Sr : Ca ratios of the anadromous type of both the Japan Sea and Pacific Ocean forms fluctuated strongly along the life history transects in accordance with their migration patterns from seawater to freshwater. 
The higher ratios in the anadromous type, averaging 5.4 \u00d7 10\u22123, in the otolith region from the core to 200 \u03bcm, corresponded to the seagoing period, suggesting that otolith Sr : Ca ratios are affected by ambient water salinity. These findings clearly indicate that otolith Sr : Ca ratios reflect individual life histories, and that these two highly divergent forms of stickleback have a flexible migration strategy." }, { "instance_id": "R34099xR33987", "comparison_id": "R34099", "paper_id": "R33987", "text": "Can otolith microchemistry chart patterns of migration and habitat utilization in anadromous fishes? Seasonal and ontogenetic patterns in estuarine and coastal migrations of anadromous fish species have important consequences to their survival, growth, recruitment, and reproduction. We tested the hypothesis that otolith (sagitta) microchemistry can document the environmental history of individual fish across an estuarine salinity gradient. Juvenile striped bass, Morone saxatilis (Walbaum), (80 days posthatch) were reared for 3 wk in aquaria at two temperatures and six salinities. The ratio of strontium/calcium (Sr/Ca) deposited in the sagittal otoliths of reared juveniles was positively related to salinity. Temperature and growth rate had relatively minor, but significant effects on the Sr/Ca ratio. In a second experiment, juveniles (80 days posthatch) were exposed to increasing salinity (0 ppt to 25 ppt) and then decreasing salinity (25 ppt to 0 ppt) over a 20-wk period. Electron microprobe examination of the otoliths from these juveniles showed a gradual rise and decline in Sr/Ca during the experimental period which corresponded directly with experimental changes in salinity. Field data on subadult and adult striped bass corroborated the laboratory analyses and indicated a logistic relationship between ambient salinity and otolith Sr/Ca ratio. 
Verification studies support the use of otolith microchemistry to measure migratory schedules and habitat utilization patterns in anadromous striped bass populations." }, { "instance_id": "R34099xR33994", "comparison_id": "R34099", "paper_id": "R33994", "text": "Evidence of downstream migration of Sakhalin taimen, Hucho perryi, as revealed by Sr:Ca ratios of otolith The migratory history of Sakhalin taimen, Hucho perryi, was examined in terms of strontium (Sr) and calcium (Ca) uptake in the otolith by using wavelength dispersive X-ray spectrometry on an electron microprobe. Otolith Sr : Ca ratios of freshwater-reared samples remained consistently at low levels throughout the otolith. The Sr : Ca ratios of samples from Lake Aynskoye of Sakhalin Island showed a low value from the core up to a point of 700\u20132140 \u00b5m. Thereafter, the ratios increased sharply and remained at higher levels up to the outermost regions. The difference in Sr : Ca ratio might be the result of the presence of individuals that underwent seawater and freshwater life history phases, probably reflecting the ambient salinity or the seawater\u2013freshwater gradient in Sr concentration. Otolith Sr : Ca ratio analysis revealed downstream migration history in H. perryi." }, { "instance_id": "R34099xR34057", "comparison_id": "R34099", "paper_id": "R34057", "text": "Migration and rearing histories of chinook salmon (Oncorhynchus tshawytscha) determined by ion microprobe Sr isotope and Sr/Ca transects of otoliths Strontium isotope and Sr/Ca ratios measured in situ by ion microprobe along radial transects of otoliths of juvenile chinook salmon (Oncorhynchus tshawytscha) vary between watersheds with contrasting geology. 
Otoliths from ocean-type chinook from Skagit River estuary, Washington, had prehatch regions with 87Sr/86Sr ratios of ~0.709, suggesting a maternally inherited marine signature, extensive fresh water growth zones with 87Sr/86Sr ratios similar to those of the Skagit River at ~0.705, and marine-like 87Sr/86Sr ratios near their edges. Otoliths from stream-type chinook from central Idaho had prehatch 87Sr/86Sr ratios \u22650.711, indicating that a maternal marine Sr isotopic signature is not preserved after the ~1000- to 1400-km migration from the Pacific Ocean. 87Sr/86Sr ratios in the outer portions of otoliths from these Idaho juveniles were similar to those of their respective streams (~0.708\u20130.722). For Skagit juveniles, fresh water growth was marked by small decreases in otolith Sr/Ca, with increases in Sr/Ca corresponding to increases in 87Sr/86Sr with migration into salt water. Otoliths of Idaho fish had Sr/Ca radial variation patterns that record seasonal fluctuation in ambient water Sr/Ca ratios. The ion microprobe's ability to measure both 87Sr/86Sr and Sr/Ca ratios of otoliths at high spatial resolution in situ provides a new tool for studies of fish rearing and migration." }, { "instance_id": "R34126xR34113", "comparison_id": "R34126", "paper_id": "R34113", "text": "Clinically isolated syndromes: a new oligoclonal band test accurately predicts conversion to MS Background: Patients with a clinically isolated demyelinating syndrome (CIS) are at risk of developing a second attack, thus converting into clinically definite multiple sclerosis (CDMS). Therefore, an accurate prognostic marker for that conversion might allow early treatment. Brain MRI and oligoclonal IgG band (OCGB) detection are the most frequent paraclinical tests used in MS diagnosis. A new OCGB test has shown high sensitivity and specificity in differential diagnosis of MS. 
Objective: To evaluate the accuracy of the new OCGB method and of current MRI criteria (MRI-C) to predict conversion of CIS to CDMS. Methods: Fifty-two patients with CIS were studied with OCGB detection and brain MRI, and followed up for 6 years. The sensitivity and specificity of both methods to predict conversion to CDMS were analyzed. Results: OCGB detection showed a sensitivity of 91.4% and specificity of 94.1%. MRI-C had a sensitivity of 74.23% and specificity of 88.2%. The presence of either OCGB or MRI-C studied simultaneously showed a sensitivity of 97.1% and specificity of 88.2%. Conclusions: The presence of oligoclonal IgG bands is highly specific and sensitive for early prediction of conversion to multiple sclerosis. MRI criteria have a high specificity but less sensitivity. The simultaneous use of both tests shows high sensitivity and specificity in predicting clinically isolated demyelinating syndrome conversion to clinically definite multiple sclerosis." }, { "instance_id": "R34126xR34117", "comparison_id": "R34126", "paper_id": "R34117", "text": "Correlation of clinical, magnetic resonance imaging, and cerebrospinal fluid findings in optic neuritis We found 42 of 74 patients (57%) with isolated monosymptomatic optic neuritis to have 1 to 20 brain lesions, by magnetic resonance imaging (MRI). All of the brain lesions were clinically silent and had characteristics consistent with multiple sclerosis (MS). None of the patients had ever experienced neurologic symptoms prior to the episode of optic neuritis. During 5.6 years of follow\u2010up, 21 patients (28%) developed definite MS on clinical grounds. Sixteen of the 21 converting patients (76%) had abnormal MRIs; the other 5 (24%) had MRIs that were normal initially (when they had optic neuritis only) and when repeated after they had developed clinical MS in 4 of the 5. Of the 53 patients who have not developed clinically definite MS, 26 (49%) have abnormal MRIs and 27 (51%) have normal MRIs. 
The finding of an abnormal MRI at the time of optic neuritis was significantly related to the subsequent development of MS on clinical grounds, but interpretation of the strength of that relationship must be tempered by the fact that some of the converting patients had normal MRIs and approximately half of the patients who did not develop clinical MS had abnormal MRIs. We found that abnormal IgG levels in the cerebrospinal fluid correlated more strongly than abnormal MRIs with the subsequent development of clinically definite MS." }, { "instance_id": "R34126xR34122", "comparison_id": "R34126", "paper_id": "R34122", "text": "Uncomplicated retrobulbar neuritis and the development of multiple sclerosis Abstract A retrospective study of 30 patients hospitalized with a diagnosis of uncomplicated retrobulbar neuritis was carried out. The follow\u2010up period was 2\u201311 years; 57% developed multiple sclerosis. When the initial examination revealed oligoclonal bands in the cerebrospinal fluid, the risk of developing multiple sclerosis increased to 79%. With normal cerebrospinal fluid the risk decreased to only 10%. In the majority of cases, the diagnosis of MS was made during the first 3 years after retrobulbar neuritis." }, { "instance_id": "R34126xR34102", "comparison_id": "R34126", "paper_id": "R34102", "text": "Optic neuritis: oligoclonal bands increase the risk of multiple sclerosis ABSTRACT\u2010 In 1974 we examined 30 patients 0.5\u201314 (mean 5) years after acute unilateral optic neuritis (ON), when no clinical signs of multiple sclerosis (MS) were discernable. 11 of the patients had oligoclonal bands in the cerebrospinal fluid (CSF). Re\u2010examination after an additional 6 years revealed that 9 of the 11 ON patients with oligoclonal bands (but only 1 of the 19 without this CSF abnormality) had developed MS. 
The occurrence of oligoclonal bands in CSF in a patient with ON is \u2010 within the limits of the present observation time \u2010 accompanied by a significantly increased risk of the future development of MS. Recurrent ON also occurred significantly more often in those ON patients who later developed MS." }, { "instance_id": "R34126xR34124", "comparison_id": "R34126", "paper_id": "R34124", "text": "Can CSF predict the course of optic neuritis? To discuss the implications of CSF abnormalities for the course of acute monosymptomatic optic neuritis (AMON), various CSF markers were analysed in patients being randomly selected from a population-based cohort. Paired serum and CSF were obtained within a few weeks from onset of AMON. CSF-restricted oligoclonal IgG bands, free kappa and free lambda chain bands were observed in 17, 15, and nine of 27 examined patients, respectively. Sixteen patients showed a polyspecific intrathecal synthesis of oligoclonal IgG antibodies against one or more viruses. At 1 year follow-up five patients had developed clinically definite multiple sclerosis (CDMS); all had CSF oligoclonal IgG bands and virus-specific oligoclonal IgG antibodies at onset. Due to the relatively small number studied at the short-term follow-up, no firm conclusion of the prognostic value of these analyses could be reached. CSF Myelin Basic Protein-like material was increased in only two of 29 patients with AMON, but may have potential value in reflecting disease activity, as the highest values were obtained among patients with CSF sampled soon after the worst visual acuity was reached, and among patients with severe visual impairment. In most previous studies of patients with AMON qualitative and quantitative analyses of CSF IgG had a predictive value for development of CDMS, but the results are conflicting." 
}, { "instance_id": "R34126xR34115", "comparison_id": "R34126", "paper_id": "R34115", "text": "Predicting multiple sclerosis at optic neuritis onset Using multivariate analyses, individual risk of clinically definite multiple sclerosis (CDMS) after monosymptomatic optic neuritis (MON) was quantified in a prospective study with clinical MON onset during 1990-95 in Stockholm, Sweden. During a mean follow-up time of 3.8 years, the presence of MS-like brain magnetic resonance imaging (MRI) lesions and oligoclonal immunoglobulin (Ig) G bands in cerebrospinal fluid (CSF) were strong prognostic markers of CDMS, with relative hazard ratios of 4.68 (95% confidence interval (CI) 2.21-9.91) and 5.39 (95% CI 1.56-18.61), respectively. Age and season of clinical onset were also significant predictors, with relative hazard ratios of 1.76 (95% CI 1.02-3.04) and 2.21 (95% CI 1.13-3.98), respectively. Based on the above two strong predictors, individual probability of CDMS development after MON was calculated in a three-quarter sample drawn from a cohort, with completion of follow-up at three years. The highest probability, 0.66 (95% CI 0.48-0.80), was obtained for individuals presenting with three or more brain MRI lesions and oligoclonal bands in the CSF, and the lowest, 0.09 (95% CI 0.02-0.32), for those not presenting with these traits. Medium values, 0.29 (95% CI 0.13-0.53) and 0.32 (95% CI 0.07-0.73), were obtained for individuals discordant for the presence of brain MRI lesions and oligoclonal bands in the CSF. These predictions were validated in an external one-quarter sample." }, { "instance_id": "R34183xR34171", "comparison_id": "R34183", "paper_id": "R34171", "text": "Roundup Ready soybeans and welfare effects in the soybean complex A three-region world model for the soybean complex is developed to evaluate the welfare effects of Roundup Ready (RR) soybean adoption. 
The structural modeling of the innovation accounts for farmers' adoption incentives and for the observed pricing of RR soybean seeds as a proprietary technology. The calibrated model is solved for various scenarios to evaluate the production, price, and welfare impacts of RR soybean adoption. The United States gains substantially from the innovation, with the innovator capturing the larger share of the welfare gains. US farmers benefit in the base scenario, but would be adversely affected if the RR innovation were to increase yields. Spillover of the new technology to foreign competitors erodes the competitive position of domestic soybean producers, and export of the technology per se may not improve the welfare position of the innovating country. Consumers in every region gain from the adoption of RR soybeans. [JEL Classification: F14, O33, Q16]" }, { "instance_id": "R34183xR34140", "comparison_id": "R34183", "paper_id": "R34140", "text": "A critical assessment of methods for analysis of social welfare impacts of genetically modified crops: A literature survey This paper is a review of existing literature on economic and environmental costs and benefits of genetically modified (GM) crops focusing on methodological issues arising from this literature. Particular attention is given to the production function framework commonly used to quantify costs and benefits of GM crops at the farm level and to equilibrium displacement models used to quantify impacts of GM crops on social welfare. Methods are discussed with respect to their sensitivity to specific parameter values and key areas are identified for further research." 
}, { "instance_id": "R34183xR34132", "comparison_id": "R34183", "paper_id": "R34132", "text": "Evaluation of Transgenic Corn Against European Corn Borer, Central Minnesota, 1996 Abstract This experiment was conducted to assess the performance of Bacillus thuringiensis (Bt) transgenic corn [crylA(b) gene] against a natural ECB infestation in Rosemount, MN. Plots measuring 50 ft by 8 rows (30 inch row spacing) were established in Dakota silty loam soil on 23 May at a rate of 26,100 seeds per acre. Plots were arranged in a RCB with four replications. First generation ECB measurements recorded Jul to Aug included % shot-holing, leaf injury ratings, and tunnel length and number. Measurements for second generation ECB recorded in Sept included cumulative tunnel length and number, fall larvae, and ear and shank damage. Yield data were corrected to 15.5% moisture." }, { "instance_id": "R34183xR34130", "comparison_id": "R34183", "paper_id": "R34130", "text": "First impact of biotechnology in the EU: Bt maize adoption in Spain Summary In the present paper we build a bio-economic model to estimate the impact of a biotechnology innovation in EU agriculture. Transgenic Bt maize offers the potential to efficiently control corn borers, that cause economically important losses in maize growing in Spain. Since 1998, Syngenta has commercialised the variety Compa CB, equivalent to an annual maize area of about 25,000 ha. During the six-year period 1998-2003 a total welfare gain of \u20ac15.5 million is estimated from the adoption of Bt maize, of which Spanish farmers captured two thirds, the rest accruing to the seed industry." }, { "instance_id": "R34183xR34145", "comparison_id": "R34183", "paper_id": "R34145", "text": "The distribution of benefits from the introduction of transgenic cotton varieties A handful of vertically coordinated \u201clife science\u201d firms have been the key players in ushering in the biotechnology revolution in the United States. 
These firms have been successful in linking useful genetic events with high quality germplasm to create genetically modified varieties (GMVs) with the ability to gain rapid market penetration and to capture value for the creators. These life science firms have fundamentally altered the structure of the seed industry, using mergers, acquisitions and licensing agreements to ally their financial, scientific, and organizational strengths with the genetic resources of traditional seed companies such as Pioneer, Delta and Pineland, Asgrow, DeKalb and dozens of smaller seed companies. Intellectual property right (IPR) laws have been used to provide incentives for inventors to invest in research since the founding of this country. IPR protection provides inventors with limited monopoly power, increasing their ability to appropriate the benefits created by their research effort. The firm producing the IPR-protected innovation is able to price their product above the marginal cost of producing the input, thereby appropriating profit that would otherwise be passed on to consumers through lower prices. Major changes have also taken place in the laws and enforcement of IPR for biological innovations so that protection is now similar to that afforded to discovery in other sectors, but it is really only the specifics of how IPR laws apply to biological innovations that have changed in recent years." }, { "instance_id": "R34183xR34138", "comparison_id": "R34183", "paper_id": "R34138", "text": "The impact of the introduction of transgenic crops in Argentinean agriculture Since the early 1990s, Argentinean grain production underwent a dramatic increase in grains production (from 26 million tons in 1988/89 to over 75 million tons in 2002/2003). Several factors contributed to this \"revolution,\" but probably one of the most important was the introduction of new genetic modification (GM) technologies, specifically herbicide-tolerant soybeans. 
This article analyses this process, reporting on the economic benefits accruing to producers and other participating actors as well as some of the environmental and social impacts that could be associated with the introduction of the new technologies. In doing so, it lends attention to the synergies between GM soybeans and reduced-tillage technologies and also explores some of the institutional factors that shed light on the success of this case, including aspects such as the early availability of a reliable biosafety mechanism and a special intellectual property rights (IPR) situation. In its concluding comments, this article also poses a number of questions about the replicability of the experience and some pending policy issues regarding the future exploitation of GM technologies in Argentina." }, { "instance_id": "R34183xR34177", "comparison_id": "R34183", "paper_id": "R34177", "text": "Biodiversity versus transgenic sugar beet: the one euro question The decision of whether to release transgenic crops in the EU is one subject to flexibility, uncertainty, and irreversibility. We analyse the case of herbicide tolerant sugar beet and reassess whether the 1998 de facto moratorium of the EU on transgenic crops for sugar beet was correct from a cost-benefit perspective using a real option approach. We show that the decision was correct, if households value possible annual irreversible costs of herbicide tolerant sugar beet with about 1 Euro or more on average. On the other hand, the total net private reversible benefits forgone if the de facto moratorium is not lifted are in the order of 169 million Euro per year." 
}, { "instance_id": "R34183xR34147", "comparison_id": "R34183", "paper_id": "R34147", "text": "Size and Distribution of Market Benefits From Adopting Biotech Crops This study estimates the total benefit arising from the adoption of agricultural biotechnology in one year (1997) and its distribution among key stakeholders along the production and marketing chain. The analysis focuses on three biotech crops: herbicide-tolerant soybeans, insect-resistant (Bt) cotton, and herbicide-tolerant cotton. Adoption of these crops resulted in estimated market benefits of $212.5-$300.7 million for Bt cotton, $231.8 million for herbicide-tolerant cotton, and $307.5 million for herbicide-tolerant soybeans. These benefits accounted for small shares of crop production value, ranging from 2 percent to 5 percent. U.S. farmers captured a much larger share (about a third) of the benefits for Bt cotton than with herbicide-tolerant soybeans (20 percent) and herbicide-tolerant cotton (4 percent). Innovators' share ranged from 30 percent for Bt cotton to 68 percent for herbicide-tolerant soybeans. For herbicide-tolerant cotton, U.S. consumers and the rest of the world (including both producers and consumers) received the bulk of the estimated benefits in 1997. Estimated benefits and their distribution depend on the specification of the analytical framework, supply and demand elasticity assumptions, the inclusion of market and nonmarket benefits, crops considered, and year-specific factors (such as weather and pest infestation levels)." }, { "instance_id": "R34183xR34157", "comparison_id": "R34183", "paper_id": "R34157", "text": "Benefits of Bt cotton use by smallholders farmers in South Africa This paper describes the results of research conducted in the Makhathini region, Kwazulu Natal, Republic of South Africa, designed to explore the economic benefits of the adoption of Bt cotton for smallholders. 
Results suggest that Bt cotton had higher yields than non-Bt varieties and generated greater revenue. Seed costs for Bt cotton were double those of non-Bt, although pesticide costs were lower. On balance, the gross margins (revenue \u2013 costs) of Bt growers were higher than those of non-Bt growers." }, { "instance_id": "R34183xR34163", "comparison_id": "R34183", "paper_id": "R34163", "text": "Genetically modified crops, corporate pricing strategies, and farmers' adoption: the case of Bt cotton in Argentina This article analyzes adoption and impacts of Bt cotton in Argentina against the background of monopoly pricing. Based on survey data, it is shown that the technology significantly reduces insecticide applications and increases yields; however, these advantages are curbed by the high price charged for genetically modified seeds. Using the contingent valuation method, it is shown that farmers' average willingness to pay is less than half the actual technology price. A lower price would not only increase benefits for growers, but could also multiply company profits, thus, resulting in a Pareto improvement. Implications of the sub-optimal pricing strategy are discussed." }, { "instance_id": "R34183xR34143", "comparison_id": "R34183", "paper_id": "R34143", "text": "Surplus distribution from the introduction of a biotechnology innovation We examine the distribution of welfare from the introduction of Bt cotton in the United States in 1996. The welfare framework explicitly recognizes that research protected by intellectual property rights generates monopoly profits, and makes it possible to partition these rents among consumers, farmers, and the innovating input firms. We calculate a total increase in world surplus of $240.3 million for 1996. Of this total, the largest share (59%) went to U.S. farmers. The gene developer, Monsanto, received the next largest share (21%), followed by U.S. 
consumers (9%), the rest of the world (6%), and the germplasm supplier, Delta and Pine Land Company (5%)." }, { "instance_id": "R34183xR34173", "comparison_id": "R34183", "paper_id": "R34173", "text": "Rent creation and distribution from biotechnology innovation: the case of Bt cotton and herbicide-tolerant soybeans in 1997 We examine the distribution of welfare from the second-year planting of Bt cotton in the United States in 1997. We also provide preliminary estimates of the planting of herbicide-tolerant soybeans in 1997. For Bt cotton, total increase in world surplus was $190.1 million and US farmer share of total surplus was 42%. The gene developer, Monsanto, received 35% and the rest of the world 6% of the total world surplus. Delta and Pine Land received 9%, whereas US consumers received 7%. For herbicide-tolerant soybeans, total world surplus was $1,061.7 million. US farmers' surplus was 76%, Monsanto's was 7%, US consumers received 4%, and seed companies captured 3% of total surplus. [EconLit: Q120, D600, O330] \u00a9 2000 John Wiley & Sons, Inc." }, { "instance_id": "R34183xR34136", "comparison_id": "R34183", "paper_id": "R34136", "text": "Case study in benefits and risks of agricultural biotechnology: Roundup Ready soybeans. Abstract

This case study describes the US regulatory process governing agricultural biotechnology and traces the approval of Roundup Ready soyabeans (with transgenic tolerance of the herbicide glyphosate), summarizing the information that was submitted to US regulatory agencies by Monsanto. Estimates of the impact that the adoption of Roundup Ready soyabeans has had on US agriculture are also provided. The US regulatory structure for agricultural biotechnology has evolved over the past 25 years, as technology allowing for genetic modification developed. The system continues to evolve as new and different applications of the technology emerge. In reviewing the studies that were conducted on the safety of Roundup Ready soyabeans, no indication of greater health or environmental risks was found compared with conventional varieties. The benefits of the introduction of Roundup Ready soyabeans include cost savings of US$216 million in annual weed control and 19 million fewer soyabean herbicide applications per year.

" }, { "instance_id": "R34183xR34155", "comparison_id": "R34183", "paper_id": "R34155", "text": "Transgenic Cotton in Mexico: A Case Study of the Comarca Lagunera In 1999, transgenic cotton was grown in six countries on a total of some 3.7 million hectares, making it the world\u2019s third most common transgenic crop (Table 10.1). Bt cotton has been grown in Mexico since 1996 and was planted on one third of the country\u2019s cotton area during the 2000 growing season. A number of papers have now been published on the impacts of transgenic crops in the United States, but few empirical studies of transgenic crops in developing countries have appeared. In this paper we describe Mexico\u2019s experience with Bt cotton, focusing on the \u201cComarca Lagunera\u201d region in the northern states of Coahuila and Durango, where Bt adoption reached 96% within three years of its introduction in 1997." }, { "instance_id": "R34251xR34225", "comparison_id": "R34251", "paper_id": "R34225", "text": "Policy Coordination Framework for the Proposed Monetary Union in ECOWAS There is no doubt that regional economic integration and eventual monetary union would be generally beneficial to the economies of West Africa. Each country in the sub-region conceptualizes and implements its own monetary, fiscal and exchange rate policies, among others. There have been attempts in recent years by some countries to design such policies in line with efforts to meet both primary and secondary criteria for convergence. However, these policies seem not to be properly coordinated. They remain country specific and focused thus defeating the essence of moving towards a monetary union." 
}, { "instance_id": "R34251xR34228", "comparison_id": "R34251", "paper_id": "R34228", "text": "Optimality of a monetary union: New evidence from exchange rate misalignments in West Africa This paper aims to study the optimality of a monetary union in West Africa by using a new methodology based on the analysis of convergence and co-movements between exchange rate misalignments. Two main advantages characterize this original framework. First, it brings together the information related to several optimum currency area criteria\u2014such as price convergence, terms of trade shocks, trade and fiscal policies\u2014going further than previous studies which are mainly based on only one criterion at a given time. Second, our study detects potential competitiveness differentials which play a key role in the debate on the optimality or not of a monetary union, as evidenced by the recent crisis in the Euro area. Relying on the recent panel cointegration techniques, cluster analyses and robustness tests, our results show that the WAEMU area is the most homogeneous area in Central and Western Africa and could be joined by Ghana, Gambia and, to a lesser extent, Sierra Leone, and that Ghana and Senegal appear to be the best reference countries for the creation of the whole West Africa monetary union." }, { "instance_id": "R34251xR34205", "comparison_id": "R34251", "paper_id": "R34205", "text": "Exchange rate volatility and optimum currency area: evidence from Africa In this paper we use a system of simultaneous equations and Generalized Method of Moment (GMM) to investigate the relation between bilateral exchange rate volatility and the relevant variables pointed out by the theory of optimum currency areas (OCA) for 21 selected African countries for the period 1990-2003. The evidence turns out to be strongly supported by the data. An OCA index for African countries is derived by adapting a method initially proposed by Bayoumi and Eichengreen (1997). 
The results have important policy implications for proposed monetary unions in Africa. Citation: Bangak\u00e9, Chrysost, (2008) \"Exchange Rate Volatility and Optimum Currency Area: Evidence from Africa.\" Economics Bulletin, Vol. 6, No. 12, pp. 1-10 Submitted: December 13, 2007. Accepted: March 26, 2008. URL: http://economicsbulletin.vanderbilt.edu/2008/volume6/EB-07F30021A.pdf" }, { "instance_id": "R34251xR34238", "comparison_id": "R34251", "paper_id": "R34238", "text": "REER Imbalances and Macroeconomic Adjustments in the Proposed West African Monetary Union With the spectre of the euro crisis haunting embryonic monetary unions, we use a dynamic model of a small open economy to analyse real effective exchange rate (REER) imbalances and examine whether the movements in the aggregate real exchange rates are consistent with the underlying macroeconomic fundamentals in the proposed West African Monetary Union (WAMU). Using both country-oriented and WAMU panel-based specifications, we show that the long-run behaviour of the REERs can be explained by fluctuations in the terms of trade, productivity, investment, debt and openness. While there is still significant evidence of cross-country differences in the relationship between underlying macroeconomic fundamentals and corresponding REERs, the embryonic WAMU has a stable error correction mechanism, with four of the five cointegration relations having signs that are consistent with the predictions from economic theory. Policy implications are discussed, and the conclusions of the analysis are a valuable contribution to the scholarly and policy debate over whether the creation of a sustainable monetary union should precede convergence in macroeconomic fundamentals that determine REER adjustments." }, { "instance_id": "R34251xR34242", "comparison_id": "R34251", "paper_id": "R34242", "text": "How Would Monetary Policy Matter in the Proposed African Monetary Unions? 
Evidence from Output and Prices We analyze the effects of monetary policy on economic activity in the proposed African monetary unions. Findings broadly show that: (1) but for financial efficiency in the EAMZ, monetary policy variables affect output neither in the short-run nor in the long-term and; (2) with the exception of financial size that impacts inflation in the EAMZ in the short-term, monetary policy variables generally have no effect on prices in the short-run. The WAMZ may not use policy instruments to offset adverse shocks to output by pursuing either an expansionary or a contractionary policy, while the EAMZ can do with the \u2018financial allocation efficiency\u2019 instrument. Policy implications are discussed." }, { "instance_id": "R34251xR34240", "comparison_id": "R34251", "paper_id": "R34240", "text": "Are proposed African monetary unions optimal currency areas? Real, monetary and fiscal policy convergence analysis Purpose \u2013 A spectre is haunting embryonic African monetary zones: the European Monetary Union crisis. The purpose of this paper is to assess real, monetary and fiscal policy convergence within the proposed WAM and EAM zones. The introduction of common currencies in West and East Africa is facing stiff challenges in the timing of monetary convergence, the imperative of central bankers to apply common modeling and forecasting methods of monetary policy transmission, as well as the requirements of common structural and institutional characteristics among candidate states. Design/methodology/approach \u2013 In the analysis: monetary policy targets inflation and financial dynamics of depth, efficiency, activity and size; real sector policy targets economic performance in terms of GDP growth at macro and micro levels; while, fiscal policy targets debt-to-GDP and deficit-to-GDP ratios. A dynamic panel GMM estimation with data from different non-overlapping intervals is employed. 
The implied rate of convergence and the time required to achieve full (100 percent) convergence are then computed from the estimations. Findings \u2013 Findings suggest overwhelming lack of convergence: initial conditions for financial development are different across countries; fundamental characteristics as common monetary policy initiatives and IMF-backed financial reform programs are implemented differently across countries; there is remarkable evidence of cross-country variations in structural characteristics of macroeconomic performance; institutional cross-country differences could also be responsible for the deficiency in convergence within the potential monetary zones; absence of fiscal policy convergence and no potential for eliminating idiosyncratic fiscal shocks due to business cycle incoherence. Practical implications \u2013 As a policy implication, heterogeneous structural and institutional characteristics across countries are giving rise to different levels and patterns of financial intermediary development. Thus, member states should work towards harmonizing cross-country differences in structural and institutional characteristics that hamper the effectiveness of convergence in monetary, real and fiscal policies. This could be done by stringently monitoring the implementation of existing common initiatives and/or the adoption of new reforms programs. Originality/value \u2013 It is one of the few attempts to investigate the issue of convergence within the proposed WAM and EAM unions." }, { "instance_id": "R34251xR34231", "comparison_id": "R34251", "paper_id": "R34231", "text": "West African Single Currency and Competitiveness This paper compares different nominal anchors to promote internal and external competitiveness in the case of a fixed exchange rate regime for the future single regional currency of the Economic Community of the West African States (ECOWAS). 
We use counterfactual analyses and estimate a model of dependent economy for small commodity exporting countries. We consider four foreign anchor currencies: the US dollar, the euro, the yen and the yuan. Our simulations show little support for a dominant peg in the ECOWAS area if they pursue several goals: maximizing the export revenues, minimizing their variability, stabilizing them and minimizing the real exchange rate misalignments from the fundamental value." }, { "instance_id": "R34251xR34213", "comparison_id": "R34251", "paper_id": "R34213", "text": "Does monetary integration lead to an increase in FDI flows? An empirical investigation from the West African Monetary Zone (WAMZ) This paper investigates the relationship between monetary integration, foreign direct investment (FDI) and trade in the West African Monetary Zone (WAMZ) using annual time series for the period 1980\u20132013. It also examines whether trade and FDI are complements or substitutes. Several econometric models are applied including Ordinary Least Squares (OLS) and fully-modified OLS (FMOLS). Our empirical results revealed that FDI flows into the WAMZ are influenced positively by monetary integration. The findings also suggest that while real GDP, large population size and greater distance positively influence FDI flows, a weak economic freedom index negatively impacts FDI flows into the zone. The results support the argument that monetary union positively affects trade. Our empirical findings support the hypothesis that FDI and trade flows are complementary. The results are in line with earlier research findings. Therefore, any policy that promotes trade such as monetary integration enhances FDI inflows as well. The findings offer perspectives and insight for a new policy in WAMZ economies in their drive to attain sustainable economic growth." 
}, { "instance_id": "R34251xR34207", "comparison_id": "R34251", "paper_id": "R34207", "text": "Monetary union in West Africa and asymmetric shocks: A dynamic structural factor model approach We analyse the costs of a monetary union in West Africa by means of asymmetric aggregate demand and aggregate supply shocks. Previous studies have estimated the shocks with the VAR model.We discuss the limits of this approach and apply a new technique based on the dynamic factor model.The results suggest the presence of economic costs for a monetary union in West Africa because aggregate supply shocks are poorly correlated or asymmetric across these countries. Aggregate demand shocks are more positively or less negatively correlated between West African countries. These conclusions imply some policy recommendations for the monetary union project in West Africa." }, { "instance_id": "R34251xR34245", "comparison_id": "R34251", "paper_id": "R34245", "text": "Analysis of convergence criteria in a proposed monetary union: a study of the economic community of West African States This study examines the processes of the monetary union of the Economic Community of West African States (ECOWAS). It takes a critical look at the convergence criteria and the various conditions under which they are to be met. Using the panel least square technique an estimate of the beta convergence was made for the period 2000-2008. The findings show that nearly all the explanatory variables have indirect effects on the income growth rate and that there tends to be convergence in income over time. The speed of adjustment estimated is 0.2% per year and the half-life is -346.92. Thus the economies can make up for half of the distance that separates them from their stationary state. From the findings, it was concluded that a well integrated economy could further the achievement of steady growth in these countries in the long run." 
}, { "instance_id": "R34251xR34201", "comparison_id": "R34251", "paper_id": "R34201", "text": "Monetary Union Membership in West Africa: A Cluster Analysis Summary Applying hard and soft clustering algorithms to a set of variables suggested by the convergence criteria and the theory of optimal currency areas, this paper examines the suitability of countries in the west African region to form the proposed monetary unions, the West African Monetary Zone (WAMZ) and the Economic Community of West African States (ECOWAS). Our analysis reveals considerable dissimilarities in the economic characteristics of member countries, particularly WAMZ countries. Furthermore, when west and central African countries are considered together, we find significant heterogeneities within the CFA franc zone, and some interesting similarities between the central African and WAMZ countries." }, { "instance_id": "R34251xR34220", "comparison_id": "R34251", "paper_id": "R34220", "text": "Inflationary shocks and common economic trends: Implications for West African monetary union membership This paper examines the inflation dynamics and common trends in the real gross domestic product (GDP) in the candidate countries of the embryonic West African Monetary Zone (WAMZ). Using fractional integration and cointegration methods, we establish that significant heterogeneity in behavior exists among the countries. Shocks to inflation in Sierra Leone are not mean-reverting; results for the Gambia and Ghana suggest some inflation persistence, despite being mean-reverting. Further, the cointegration results indicate the presence of only one common trend. With much attention currently being placed on convergence criteria and preparedness of the aspiring member states, less attention has been given to the extent to which the dynamics of inflation and economic trends in the individual countries are (dis)similar. We discuss some policy implications and highlight political implications." 
}, { "instance_id": "R34282xR34266", "comparison_id": "R34282", "paper_id": "R34266", "text": "Business Cycle Synchronization in the Proposed East African Monetary Union: An Unobserved Component Approach This paper uses the business cycle synchronization criteria of the theory of optimum currency area (OCA) to examine the feasibility of the East African Community (EAC) as a monetary union. We also investigate whether the degree of business cycle synchronization has increased after the 1999 EAC Treaty. We use an unobserved component model to measure business cycle synchronization as the proportion of structural shocks that are common across different countries, and a time-varying parameter model to examine the dynamics of synchronization over time. We find that although the degree of synchronization has increased since 2000 when the EAC Treaty came into force, the proportion of shocks that is common across different countries is still small implying weak synchronization. This evidence casts doubt on the feasibility of a monetary union for the EAC as scheduled by 2012." }, { "instance_id": "R34282xR34260", "comparison_id": "R34282", "paper_id": "R34260", "text": "Benefits from Mutual Restraint in a Multilateral Monetary Union Summary We show that monetary union can enhance price stability for its member countries even if none of them has a long history of stable prices and independent monetary policy, as is the case in a number of monetary union initiatives among developing countries. The positive effect obtains because the opportunistic objectives of one country's policy makers are kept in check at the union level by other members with disparate objectives. We calibrate the model to evaluate the proposed monetary union in the East African Community. The empirical results show that the mutual restraint on monetary policy is an important determinant of the expected benefit from an EAC monetary union." 
}, { "instance_id": "R34282xR34242", "comparison_id": "R34282", "paper_id": "R34242", "text": "How Would Monetary Policy Matter in the Proposed African Monetary Unions? Evidence from Output and Prices We analyze the effects of monetary policy on economic activity in the proposed African monetary unions. Findings broadly show that: (1) but for financial efficiency in the EAMZ, monetary policy variables affect output neither in the short-run nor in the long-term and; (2) with the exception of financial size that impacts inflation in the EAMZ in the short-term, monetary policy variables generally have no effect on prices in the short-run. The WAMZ may not use policy instruments to offset adverse shocks to output by pursuing either an expansionary or a contractionary policy, while the EAMZ can do with the \u2018financial allocation efficiency\u2019 instrument. Policy implications are discussed." }, { "instance_id": "R34282xR34205", "comparison_id": "R34282", "paper_id": "R34205", "text": "Exchange rate volatility and optimum currency area: evidence from Africa In this paper we use a system of simultaneous equations and Generalized Method of Moment (GMM) to investigate the relation between bilateral exchange rate volatility and the relevant variables pointed out by the theory of optimum currency areas (OCA) for 21 selected African countries for the period 1990-2003. The evidence turns out to be strongly supported by the data. An OCA index for African countries is derived by adapting a method initially proposed by Bayoumi and Eichengreen (1997). The results have important policy implications for proposed monetary unions in Africa. Citation: Bangak\u00e9, Chrysost, (2008) \"Exchange Rate Volatility and Optimum Currency Area: Evidence from Africa.\" Economics Bulletin, Vol. 6, No. 12 pp. 1-10 Submitted: December 13, 2007. Accepted: March 26, 2008. 
URL: http://economicsbulletin.vanderbilt.edu/2008/volume6/EB-07F30021A.pdf" }, { "instance_id": "R34282xR34276", "comparison_id": "R34282", "paper_id": "R34276", "text": "Macroeconomic Shock Synchronization in the East African Community The East African Community\u2019s (EAC) economic integration has gained momentum recently, with the EAC countries aiming to adopt a single currency in 2015. This article evaluates empirically the readiness of the EAC countries for monetary union. First, structural similarity in terms of similarity of production and exports of the EAC countries is measured. Second, the symmetry of shocks is examined with structural vector auto-regression analysis (SVAR). The lack of macroeconomic convergence gives evidence against a hurried transition to a monetary union. Given the divergent macroeconomic outcomes, structural reforms, including closing infrastructure gaps and harmonizing macroeconomic policies that would raise synchronization of business cycles, need to be in place before moving to monetary union." }, { "instance_id": "R34282xR34272", "comparison_id": "R34282", "paper_id": "R34272", "text": "Monetary Transmission Mechanism in the East African Community: An Empirical Investigation Do changes in monetary policy affect inflation and output in the East African Community (EAC)? We find that (i) Monetary Transmission Mechanism (MTM) tends to be generally weak when using standard statistical inferences, but somewhat strong when using non-standard inference methods; (ii) when MTM is present, the precise transmission channels and their importance differ across countries; and (iii) reserve money and the policy rate, two frequently used instruments of monetary policy, sometimes move in directions that exert offsetting expansionary and contractionary effects on inflation - posing challenges to harmonization of monetary policies across the EAC and transition to a future East African Monetary Union. 
The paper offers some suggestions for strengthening the MTM in the EAC." }, { "instance_id": "R34282xR34256", "comparison_id": "R34282", "paper_id": "R34256", "text": "Is the proposed East African Monetary Union an optimal currency area? a structural vector autoregression analysis The treaty of 1999 to revive the defunct East African Community (EAC) ratified by Kenya, Uganda, and Tanzania came into force on July 2000 with the objective of fostering a closer co-operation in political, economic, social, and cultural fields. To achieve this, an East Africa Customs Union protocol was signed in March 2004. A Common Market, a Monetary Union, and ultimately a Political Federation of East Africa states is planned. Though the question of a monetary union has been discussed in the political arena there has been no corresponding empirical study on the economic viability of such a union. This article fills the gap and assesses whether the political force driving the EAC towards a monetary union has economic basis. In particular, we focus on the symmetry of the underlying shocks across the East African economies as a precondition for forming an optimum currency area (OCA). As Mundell (1961) and McKinnon (1963) describe, the member countries of a monetary union do not have independent monetary policy, which differs from that of the union as a whole; governments cannot use monetary and exchange rate policies to react to a country-specific shock. How serious this limitation is for the union countries depends on the degree of asymmetry of shocks and the speed with which the economies adjust to these shocks. If disturbances are distributed symmetrically across union countries, a common response will suffice. If, however, the countries face mostly asymmetric shocks, the retention of policy autonomy is beneficial." 
}, { "instance_id": "R34282xR34278", "comparison_id": "R34282", "paper_id": "R34278", "text": "Monetary, Financial and Fiscal Stability in the East African Community: Ready for a Monetary Union? We examine prospects for a monetary union in the East African Community (EAC) by developing a stylized model of policymakers' decision problem that allows for uncertain benefits derived from monetary,financial and fiscal stability, and then calibrating the model for the EAC for the period 2003-2010. When policymakers properly allow for uncertainty, none of the countries wants to pursue a monetary union based on either monetary or financial stability grounds, and only Rwanda might favor it on fiscal stability grounds; we argue that robust institutional arrangements assuring substantial improvements in monetary, financial and fiscal stability are needed to compensate. (This abstract was borrowed from another version of this item.)" }, { "instance_id": "R34282xR34268", "comparison_id": "R34282", "paper_id": "R34268", "text": "Monetary union for the development process in the East African community: business cycle synchronization approach This paper empirically examines the suitability of monetary union in East African community members namely, Burundi, Kenya, Rwanda, Tanzania and Uganda, on the basis of business cycle synchronization. This research considers annual GDP (gross domestic product) data from IMF (international monetary fund) for the period of 1980 to 2010. In order to extract the business cycles and trends, the study uses HP (Hodrick-Prescott) and the BP (band pass) filters. After identifying the cycles and trends of the business cycle, the study considers cross country correlation analysis and analysis of variance technique to examine whether EAC (East African community) countries are characterized by synchronized business cycles or not. 
The results show that four of the five EAC countries (Burundi, Kenya, Tanzania and Uganda) have had a similar pattern of business cycle and trend over the ten years since the formation of the EAC. The research concludes that these countries, except Rwanda, do not differ significantly in transitory or cycle components but do differ in permanent components, especially in growth trend. Key words: Business cycle synchronization, optimum currency area, East African community, monetary union, development." }, { "instance_id": "R34316xR34310", "comparison_id": "R34316", "paper_id": "R34310", "text": "Modelling Monetary Union in Southern Africa: Welfare Evaluation for the CMA and SADC This paper proposes a quantitative assessment of the welfare effects arising from the Common Monetary Area (CMA) and an array of broader groupings among Southern African Development Community (SADC) countries. Model simulations suggest that (i) participating in the CMA benefits all members; (ii) joining the CMA individually is beneficial for all SADC members except Angola, Mauritius and Tanzania; (iii) creating a symmetric CMA-wide monetary union with a regional central bank carries some costs in terms of foregone anti-inflationary credibility; and (iv) SADC-wide symmetric monetary union continues to be beneficial for all except Mauritius, although the gains for existing CMA members are likely to be limited." }, { "instance_id": "R34316xR34205", "comparison_id": "R34316", "paper_id": "R34205", "text": "Exchange rate volatility and optimum currency area: evidence from Africa In this paper we use a system of simultaneous equations and Generalized Method of Moment (GMM) to investigate the relation between bilateral exchange rate volatility and the relevant variables pointed out by the theory of optimum currency areas (OCA) for 21 selected African countries for the period 1990-2003. The evidence turns out to be strongly supported by the data. 
An OCA index for African countries is derived by adapting a method initially proposed by Bayoumi and Eichengreen (1997). The results have important policy implications for proposed monetary unions in Africa." }, { "instance_id": "R34316xR34292", "comparison_id": "R34316", "paper_id": "R34292", "text": "The Southern African Development Community: suitable for a monetary union? This paper investigates whether a monetary union is desirable among the countries of the Southern African Development Community (SADC). We employ a Generalised Auto-Regressive Conditional Heteroscedasticity (GARCH) model to consider the share of the variation in real exchange rates (RERs; vis-a-vis South Africa) that can be explained by the divergence in monetary and fiscal policies. The results show that monetary integration would substantially eliminate real exchange rate variation due to different monetary policies for some members. The study concludes that a monetary union that embraces all SADC members would amass large costs relative to the benefits and hence would not be desirable." }, { "instance_id": "R34316xR34314", "comparison_id": "R34316", "paper_id": "R34314", "text": "Assessment of monetary union in SADC: evidence from cointegration and panel unit root tests In this paper we investigate the likelihood of a proposed monetary union in the Southern African Development Community (SADC) being successful from the viewpoint of the Generalised Purchasing Power Parity (GPPP) hypothesis and optimum currency area (OCA) theory. 
We apply Johansen\u2019s multivariate co-integration technique, panel unit root tests, Pedroni\u2019s residual cointegration test and error correction based panel cointegration tests. The findings from this study confirm that GPPP holds among the SADC member countries included in this study, on account of cointegration and stationarity in the real exchange rate series. The South African rand normalised long-run beta coefficients of all the real exchange rates are below one except in the case of the Mauritian rupee, and all bear negative signs except in the case of the Angolan New Kwanza and Mauritian rupee. This evidence supports monetary union in the region except for Angola and Mauritius. However, the absolute magnitudes of the short-run adjustment coefficients of SADC countries\u2019 real exchange rates are low and bear positive signs in some cases. This finding implies that the observed slow speed of adjustment for the (log) real exchange rate of SADC member states might constrain the effectiveness of stabilization policies in the wake of external shocks, rendering SADC countries vulnerable to macroeconomic instability in the region. This result has important policy implications for the proposed monetary union in SADC." }, { "instance_id": "R34316xR34308", "comparison_id": "R34316", "paper_id": "R34308", "text": "On the feasibility of a monetary union in the Southern Africa Development Community This paper investigates the feasibility of a monetary union in the Southern Africa Development Community (SADC) by looking at evidence of nominal exchange rate and inflation convergence. Using a methodology based on estimating time-varying parameters, the evidence suggests non-convergence. The non-convergence of nominal exchange rate and consumer price inflation suggests that presently the chances of SADC member countries satisfying some form of Maastricht-type criteria are quite low." 
}, { "instance_id": "R34316xR34288", "comparison_id": "R34316", "paper_id": "R34288", "text": "Macroeconomic Convergence in Southern Africa In this paper we aim to answer the following two questions: 1) has the Common Monetary Area in Southern Africa (henceforth CMA) ever been an optimal currency area (OCA)? 2) What are the costs and benefits of the CMA for its participating countries? In order to answer these questions, we carry out a two-step econometric exercise based on the theory of generalised purchasing power parity (G-PPP). The econometric evidence shows that the CMA (but also Botswana as a de facto member) form an OCA given the existence of common long-run trends in their bilateral real exchange rates. Second, we also test that in the case of the CMA and Botswana the smoothness of the operation of the common currency area \u2014 measured through the degree of relative price correlation \u2014 depends on a variety of factors. These factors signal both the advantages and disadvantages of joining a monetary union. On the one hand, the more open and more similarly diversified the economies are, the higher the benefits they ... Ce Document de travail s'efforce de repondre a deux questions : 1) la zone monetaire commune de l'Afrique australe (Common Monetary Area - CMA) a-t-elle vraiment reussi a devenir une zone monetaire optimale ? 2) quels sont les couts et les avantages de la CMA pour les pays participants ? Nous avons effectue un exercice econometrique en deux etapes base sur la theorie des parites de pouvoir d'achat generalisees. D'apres les resultats econometriques, la CMA (avec le Botswana comme membre de facto) est effectivement une zone monetaire optimale etant donne les evolutions communes sur le long terme de leurs taux de change bilateraux. Nous avons egalement mis en evidence que le bon fonctionnement de l'union monetaire \u2014 mesure par le degre de correlation des prix relatifs \u2014 depend de plusieurs facteurs. 
Ces derniers revelent a la fois les couts et les avantages de l'appartenance a une union monetaire. D'un cote, plus les economies sont ouvertes et diversifiees de facon comparable, plus ..." }, { "instance_id": "R34411xR34341", "comparison_id": "R34411", "paper_id": "R34341", "text": "Therapeutic implications of Clostridium difficile toxin during relapse of chronic inflammatory bowel disease Clostridium difficile toxin was present in the stools of six patients with chronic inflammatory bowel disease during symptomatic relapse. Only two of these individuals had received antibiotics known to cause pseudomembranous colitis, and on proctoscopy none had pseudomembranes. In all patients disappearance of toxin, either with vancomycin therapy (five patients) or spontaneously (one patient), was associated with symptomatic improvement. Cl. difficile toxin may complicate chronic inflammatory bowel disease, and contribute to relapse in some patients." }, { "instance_id": "R34411xR34350", "comparison_id": "R34411", "paper_id": "R34350", "text": "Clostridium difficile enteritis. A cause of intramural gas The patient is a 53-year-old male with a 19-year history of left-sided ulcerative colitis, who had been treated with sulfasalazine and brief courses of corticosteroids. He was admitted to a hospital with a five-week history of loose bloody bowel movements, abdominal cramps, and a 25pound weight loss. The patient's symptoms worsened prior to admission despite treatment with a clear liquid diet, oral sulfasalazine, and oral prednisone 60 mg every day for two weeks. On admission, the patient was placed NPO, and was treated with intravenous fluid, intravenous SoluMedrol, and intravenous Zantac. Intravenous Flagyl was added when his diarrhea and abdominal cramps did not resolve. The exacerbation of ulcerative colitis did not respond to medical treatment and on hospital day 50, the patient underwent a total abdominal colectomy, proctectomy, and ileostomy. 
The pathology specimen demonstrated severe acute ulcerative colitis of the entire colon with the ileal and rectal margins free of inflammation. The patient was treated with intravenous cefoxitin postoperatively and intravenous nafcillin for cellulitis of the lower extremity. He improved following surgery. On hospital day 63, the patient developed distended small bowel, suggestive clinically and radiographically of an obstruction of the distal ileum. A CT scan of the abdomen demonstrated intramural gas within the wall of the distal ileum (Figure 1). The patient underwent laparotomy the same day at which time 45 cm of distal ileum was resected. The pathology specimen demonstrated pseudomembra-" }, { "instance_id": "R34411xR34392", "comparison_id": "R34411", "paper_id": "R34392", "text": "Treatment of metronidazole-refractory Clostridium difficile enteritis with vancomycin BACKGROUND Clostridium difficile infection of the colon is a common and well-described clinical entity. Clostridium difficile enteritis of the small bowel is believed to be less common and has been described sparsely in the literature. METHODS Case report and literature review. RESULTS We describe a patient who had undergone total proctocolectomy with ileal pouch-anal anastomosis who was treated with broad-spectrum antibiotics and contracted C. difficile refractory to metronidazole. The enteritis resolved quickly after initiation of combined oral vancomycin and metronidazole. A literature review found that eight of the fifteen previously reported cases of C. difficile-associated small-bowel enteritis resulted in death. CONCLUSIONS It is important for physicians who treat acolonic patients to be aware of C. difficile enteritis of the small bowel so that it can be suspected, diagnosed, and treated." 
}, { "instance_id": "R34411xR34382", "comparison_id": "R34411", "paper_id": "R34382", "text": "Extracolonic manifestations of Clostridium difficile infections: presentation of 2 cases and review of the literature Clostridium difficile is most commonly associated with colonic infection. It may, however, also cause disease in a variety of other organ systems. Small bowel involvement is often associated with previous surgical procedures on the small intestine and is associated with a significant mortality rate (4 of 7 patients). When associated with bacteremia, the infection is, as expected, frequently polymicrobial in association with usual colonic flora. The mortality rate among patients with C. difficile bacteremia is 2 of 10 reported patients. Visceral abscess formation involves mainly the spleen, with 1 reported case of pancreatic abscess formation. Frequently these abscesses are only recognized weeks to months after the onset of diarrhea or other colonic symptoms. C. difficile-related reactive arthritis is frequently polyarticular in nature and is not related to the patient's underlying HLA-B27 status. Fever is not universally present. The most commonly involved joints are the knee and wrist (involved in 18 of 36 cases). Reactive arthritis begins an average of 11.3 days after the onset of diarrhea and is a prolonged illness, taking an average of 68 days to resolve. Other entities, such as cellulitis, necrotizing fasciitis, osteomyelitis, and prosthetic device infections, can also occur. Localized skin and bone infections frequently follow traumatic injury, implying the implantation of either environmental or the patient's own C. difficile spores with the subsequent development of clinical infection. It is noteworthy that except for cases involving the small intestine and reactive arthritis, most of the cases of extracolonic C. difficile disease do not appear to be strongly related to previous antibiotic exposure. The reason for this is unclear. 
We hope that clinicians will become more aware of these extracolonic manifestations of infection, so that they may be recognized and treated promptly and appropriately. Such early diagnosis may also serve to prevent extensive and perhaps unnecessary patient evaluations, thus improving resource utilization and shortening length of hospital stay." }, { "instance_id": "R34411xR34354", "comparison_id": "R34411", "paper_id": "R34354", "text": "Fatal Clostridium difficile enteritis after total abdominal colectomy A 71-year-old man who had undergone an ileorectal anastomosis some years earlier, developed fulminant fatal Clostridium difficile pseudomembranous enteritis and proctitis after a prostatectomy. This case and three reports of C. difficile involvement of the small bowel in adults emphasize that the small intestine can be affected. No case like ours, of enteritis after colectomy from C. difficile, has hitherto been reported." }, { "instance_id": "R34411xR34379", "comparison_id": "R34411", "paper_id": "R34379", "text": "Idiopathic pseudomembranous colitis limited to the right colon: a change from Clostridium difficile Dear Editor: It cannot be denied that pseudomembranous colitis has been pushed in recent years to the forefront of people\u2019s minds with the ever increasing media interest in \u2018hospital superbugs\u2019 and in particular, Clostridium difficile, which is by far the most common cause of this disease entity. Occasionally, however, pseudomembranous colitis does not result from C. difficile infection but can be a result of a multitude of other causes. We present a case of pseudomembranous colitis in which the underlying cause was never identified despite wide-ranging microbiological and histological searches for an underlying cause. A 29-year-old female patient presented as an emergency with a 14-h history of colicky right iliac fossa pain and feeling generally unwell. 
The patient admitted to a single episode of loose stool 24 h prior to admission, but had no other symptoms suggestive of gastroenteritis. There were no urinary or gynecological symptoms. Of note, the patient had not received any antibiotic therapy for over 3 years. Her only previous surgical procedure was a tubal ligation several years ago. Her past medical history was otherwise unremarkable, and she was taking no regular medications. General examination revealed a marked tachycardia and pyrexia. Abdominal examination revealed marked peritonitis particularly in the right side of the abdomen. There were no other abnormal findings of note. Initial blood tests revealed anemia with a Hb of 9.8 g/dl and a neutrophilia of 15.4\u00d710^9/L. The C-reactive protein was not elevated at 5 mg/l, and routine biochemistry including electrolytes and liver function tests were normal. Plain films of the chest and abdomen were unremarkable. At this stage, it was decided to perform computed tomography of the abdomen and pelvis with the use of intravenous and oral contrast. This demonstrated a thick-walled and edematous cecum, ascending and proximal transverse colon in keeping with a diagnosis of colitis. There was also a moderate amount of free fluid demonstrated in the pelvis. Given the clinical situation and the findings, a decision was made to proceed to emergency laparotomy. At laparotomy, the right colon up until the mid-transverse had features consistent with a diagnosis of toxic megacolon; the colon distal to this, however, appeared unremarkable, as was the small bowel. Proximally, there was also purulent fluid noted in the right paracolic gutter and pelvis, a sample of which was sent for microbiological examination. Standard right hemicolectomy was performed with a hand-sutured side-to-side anastomosis. Postoperatively, routine antibiotic prophylaxis was continued with cefuroxime and metronidazole for 48 h. 
Culture of the fluid obtained from the pelvis grew Staphylococcus aureus sensitive to flucloxacillin. Histological examination of the resected specimen was completed by the fourth postoperative day, and this demonstrated multifocal patchy mucosal ulcerations with crypt withering and overlying mucopurulent spray and pseudomembranes consistent with a diagnosis of pseudomembranous colitis. There were no features to suggest an underlying inflammatory bowel disease or vasculitis. On this basis, oral metronidazole was added to the patient\u2019s therapy, and stool was sent for culture including assay for C. difficile toxin. This failed to show any evidence of C. difficile infection or indeed the presence of any other enteric pathogen. Given the growth of S. aureus from the pelvic fluid, a diagnosis of pelvic inflammatory disease was considered; however, this was considered extremely unlikely in the setting of a previous tubal ligation." }, { "instance_id": "R34411xR34335", "comparison_id": "R34411", "paper_id": "R34335", "text": "Clostridium difficile Enteritis: An Early Postoperative Complication in Inflammatory Bowel Disease Patients After Colectomy Clostridium difficile, the leading cause of hospital-acquired diarrhea, is known to cause severe colitis. C. difficile small bowel enteritis is rare (14 case reports) with mortality rates ranging from 60 to 83%. C. difficile has increased in incidence particularly among patients with inflammatory bowel disease. This case series of six patients from 2004 to 2006 is the largest in the literature. All patients received antibiotics before colectomies for ulcerative colitis and developed severe enteritis that was C. difficile toxin positive. Three patients underwent ileal pouch anal anastomosis and loop ileostomy. Four of the six patients had C. difficile colitis before colectomy. 
Presenting symptoms were high volume watery ileostomy output followed by ileus in five of six patients. Four of the six patients presented with fever and elevated WBC. Five of the six developed complications requiring further surgery or prolonged hospitalization. Patients were treated with intravenous hydration and metronidazole then converted to oral metronidazole and/or vancomycin. None of the patients died. A high suspicion of C. difficile enteritis in patients with inflammatory bowel disease and history of C. difficile colitis may lead to more rapid diagnosis, aggressive treatment, and improved outcomes for patients with C. difficile enteritis." }, { "instance_id": "R34411xR34390", "comparison_id": "R34411", "paper_id": "R34390", "text": "Clostridium difficile small-bowel enteritis after total proctocolectomy: a rare but fatal, easily missed diagnosis Purpose Clostridium difficile enteritis is a rare infection, with less than a dozen cases reported in the literature. We present a case of a patient with total proctocolectomy and ileostomy, developing Clostridium difficile infection of small bowel. We discuss the role of Clostridium difficile toxins and review previously reported cases of Clostridium difficile enteritis after total colectomy. Methods A 65-year-old male with a history of total proctocolectomy and ileostomy 30 years previously had purulent ileostomy drainage and septic shock. The patient was recently treated with intravenous piperacillin, tazobactam, and levofloxacin for aspiration pneumonia in the previous admission. Ileostomy stool cultures tested positive for Clostridium difficile toxin A, and the patient was promptly treated with intravenous metronidazole. Results The patient was aggressively resuscitated and treated, recovered from the enteritis and shock, but died of pulmonary complications after a prolonged hospitalization. Conclusions Review of previously reported cases of Clostridium difficile enteritis showed a high mortality rate. 
We attribute this to delayed diagnosis secondary to the rarity of this illness. Some patients were diagnosed only after pseudomembranes in small-bowel segments were found at autopsy. This rare disease entity should be firmly established in the differential diagnosis for clinicians treating patients with total proctocolectomy." }, { "instance_id": "R34411xR34387", "comparison_id": "R34411", "paper_id": "R34387", "text": "Catastrophic Clostridium difficile enteritis in a pelvic pouch patient: report of a case Introduction In recent years, Clostridium difficile-associated infection has emerged as an increasingly problematic entity. More virulent strains have been isolated and new manifestations of the infection have been described. Purpose The primary aim of this manuscript is to describe what we believe to be the first reported case of devastating C. difficile enteritis in a patient with an ileal reservoir. Conclusion A high index of suspicion is required in the appropriate clinical setting in light of the apparently changing spectrum of C. difficile disease." }, { "instance_id": "R34411xR34344", "comparison_id": "R34411", "paper_id": "R34344", "text": "Pseudomembranous colitis associated with changes in an ileal conduit A case of antibiotic-associated pseudomembranous colitis following total cystectomy is reported, in which there was involvement of the ileal conduit. The small bowel remaining in situ was uninvolved. Bacteriological studies revealed Clostridium difficile and the toxin in both colon and ileal conduit. Relevant publications concerning pathogenesis are discussed, in relation to the unusual site described in this case. Epidemiological evidence is reviewed which suggests that isolation of patients with pseudomembranous colitis is a logical course of action." 
}, { "instance_id": "R34411xR34331", "comparison_id": "R34411", "paper_id": "R34331", "text": "Toxin production by an emerging strain of Clostridium difficile associated with outbreaks of severe disease in North America and Europe BACKGROUND Toxins A and B are the primary virulence factors of Clostridium difficile. Since 2002, an epidemic of C difficile-associated disease with increased morbidity and mortality has been present in Quebec province, Canada. We characterised the dominant strain of this epidemic to determine whether it produces higher amounts of toxins A and B than those produced by non-epidemic strains. METHODS We obtained isolates from 124 patients from Centre Hospitalier Universitaire de Sherbrooke in Quebec. Additional isolates from the USA, Canada, and the UK were included to increase the genetic diversity of the toxinotypes tested. Isolate characterisation included toxinotyping, pulsed-field gel electrophoresis (PFGE), PCR ribotyping, detection of a binary toxin gene, and detection of deletions in a putative negative regulator for toxins A and B (tcdC). By use of an enzyme-linked immunoassay, we measured the in-vitro production of toxins A and B by epidemic strain and non-dominant strain isolates. FINDINGS The epidemic strain was characterised as toxinotype III, North American PFGE type 1, and PCR-ribotype 027 (NAP1/027). This strain carried the binary toxin gene cdtB and an 18-bp deletion in tcdC. We isolated this strain from 72 patients with C difficile-associated disease (58 [67%] of 86 with health-care-associated disease; 14 [37%] of 38 with community-acquired disease). Peak median (IQR) toxin A and toxin B concentrations produced in vitro by NAP1/027 were 16 and 23 times higher, respectively, than those measured in isolates representing 12 different PFGE types, known as toxinotype 0 (toxin A, median 848 microg/L [IQR 504-1022] vs 54 microg/L [23-203]; toxin B, 180 microg/L [137-210] vs 8 microg/L [5-25]; p<0.0001 for both toxins). 
INTERPRETATION The severity of C difficile-associated disease caused by NAP1/027 could result from hyperproduction of toxins A and B. Dissemination of this strain in North America and Europe could lead to important changes in the epidemiology of C difficile-associated disease." }, { "instance_id": "R34411xR34366", "comparison_id": "R34411", "paper_id": "R34366", "text": "Perforation Complicating Rifampin-Associated Pseudomembranous Enteritis An 18-year-old man developed a perforated jejunum while receiving rifampin antituberculous chemotherapy. The perforations were located within longitudinal ulcers characteristic of pseudomembranous enterocolitis. Pseudomembranous inflammation was limited to the small intestine. The absence of colonic involvement delayed establishment of the diagnosis. Successful surgical intervention consisting of small-bowel resection with primary anastomosis was accomplished for this rare and potentially fatal complication of antituberculous chemotherapy." }, { "instance_id": "R34411xR34405", "comparison_id": "R34411", "paper_id": "R34405", "text": "Enteral Clostridium difficile, an emerging cause for high-output ileostomy The loss of fluid and electrolytes from a high-output ileostomy (>1200 ml/day) can quickly result in dehydration and if not properly managed may cause acute renal failure. The management of a high-output ileostomy is based upon three principles: correction of electrolyte disturbance and fluid balance, pharmacological reduction of ileostomy output, and treatment of any underlying identifiable cause. There is an increasing body of evidence to suggest that Clostridium difficile may behave pathologically in the small intestine producing a spectrum of enteritis that mirrors the well-recognised colonic disease manifestation. Clinically this can range from high-output ileostomy to fulminant enteritis. 
This report describes two cases of high-output ileostomy associated with enteric C difficile infection and proposes that the management algorithm of a high-output ileostomy should include exclusion of small bowel C difficile." }, { "instance_id": "R34411xR34374", "comparison_id": "R34411", "paper_id": "R34374", "text": "Clostridium difficile small bowel enteritis occurring after total colectomy Clostridium difficile infection is usually associated with antibiotic therapy and is almost always limited to the colonic mucosa. Small bowel enteritis is rare: only 9 cases have been previously cited in the literature. This report describes a case of C. difficile small bowel enteritis that occurred in a patient after total colectomy and reviews the 9 previously reported cases of C. difficile enteritis." }, { "instance_id": "R34411xR34328", "comparison_id": "R34411", "paper_id": "R34328", "text": "The role of pouch compliance measurement in the management of pouch dysfunction Purpose Ileal pouch anal anastomosis is an established option for patients who require total proctocolectomy and restoration of bowel continuity. However, the functional results are not always good and low pouch compliance has been suggested as one possible cause. We aimed to review the results of pouch compliance tests over 11 years to assess whether measuring pouch compliance is a useful diagnostic tool to guide management of pouch dysfunction. Methods The results of pouch compliance tests performed between 1996 and 2007 together with the details of symptoms, treatments and outcome were reviewed. Results One hundred and forty-one pouch compliance tests were performed. There was no difference in pouch compliance between those with overt pathology (pouchitis, pelvic sepsis or anastomotic stricture) and those with idiopathic pouch dysfunction. 
In this second group, there was no difference in pouch compliance between patients with and without each of the symptoms of increased defaecatory frequency, incontinence and evacuation difficulties. The results of the compliance testing did not influence the clinical decision making on idiopathic pouch dysfunction (p = 0.77) or on diverted pouches (p = 0.07). Conclusions Measuring pouch compliance does not offer new information accounting for idiopathic pouch dysfunction and has little influence on the clinical management." }, { "instance_id": "R34411xR34356", "comparison_id": "R34411", "paper_id": "R34356", "text": "Pseudomembranous colitis with associated fulminant ileitis in the defunctionalized limb of a jejunal-ileal bypass: report of a case Presented is what is believed to be the first reported case of a defunctionalized limb of small intestine serving as a reservoir for Clostridium difficile. Because of the altered intestinal continuity, the ensuing enteritis and colitis failed to respond to nonoperative management. Current treatment strategies are reviewed. Surgical intervention, including restoration of normal gastrointestinal continuity, should be considered early in the hospital course of this patient population." }, { "instance_id": "R34454xR34446", "comparison_id": "R34454", "paper_id": "R34446", "text": "Robust Facial Expression Recognition Using Local Binary Patterns A novel low-computation discriminative feature space is introduced for facial expression recognition capable of robust performance over a range of image resolutions. Our approach is based on the simple local binary patterns (LBP) for representing salient micro-patterns of face images. Compared to Gabor wavelets, the LBP features can be extracted faster in a single scan through the raw image and lie in a lower dimensional space, whilst still retaining facial information efficiently. 
Template matching with weighted Chi square statistic and support vector machine are adopted to classify facial expressions. Extensive experiments on the Cohn-Kanade Database illustrate that the LBP features are effective and efficient for facial expression discrimination. Additionally, experiments on face images with different resolutions show that the LBP features are robust to low-resolution images, which is critical in real-world applications where only low-resolution video input is available." }, { "instance_id": "R34454xR34452", "comparison_id": "R34454", "paper_id": "R34452", "text": "Facial event classification with task oriented dynamic bayesian network Facial events include all activities of face and facial features in spatial or temporal space, such as facial expressions, face gesture, gaze and furrow happening, etc. Developing an automated system for facial event classification is always a challenging task due to the richness, ambiguity and dynamic nature of facial expressions. This paper presents an efficient approach to real-world facial event classification. By integrating dynamic Bayesian network (DBN) with a general-purpose facial behavior description language, a task-oriented stochastic and temporal framework is constructed to systematically represent and classify facial events of interest. Based on the task oriented DBN, we can spatially and temporally incorporate results from previous times and prior knowledge of the application domain. With the top-down inference, the system can make active selection among multiple visual channels to identify the most effective sensory channels to use. With the bottom-up inference from observed evidences, the current facial event can be classified with a desired confident level via the belief propagation. We applied the task-oriented DBN framework to monitoring driver vigilance. Experimental results demonstrate the feasibility and efficiency of our approach." 
}, { "instance_id": "R34454xR34444", "comparison_id": "R34454", "paper_id": "R34444", "text": "Facial Expression Recognition from Line-Based Caricatures The automatic recognition of facial expression presents a significant challenge to the pattern analysis and man-machine interaction research community. Recognition from a single static image is particularly a difficult task. In this paper, we present a methodology for facial expression recognition from a single static image using line-based caricatures. The recognition process is completely automatic. It also addresses the computational expensive problem and is thus suitable for real-time applications. The proposed approach uses structural and geometrical features of a user sketched expression model to match the line edge map (LEM) descriptor of an input face image. A disparity measure that is robust to expression variations is defined. The effectiveness of the proposed technique has been evaluated and promising results are obtained. This work has proven the proposed idea that facial expressions can be characterized and recognized by caricatures." }, { "instance_id": "R34454xR34450", "comparison_id": "R34454", "paper_id": "R34450", "text": "Alberto del. Bimbo, \u201c3D facial expression recognition using SIFT descriptors of automatically detected keypoints Methods to recognize humans\u2019 facial expressions have been proposed mainly focusing on 2D still images and videos. In this paper, the problem of person-independent facial expression recognition is addressed using the 3D geometry information extracted from the 3D shape of the face. To this end, a completely automatic approach is proposed that relies on identifying a set of facial keypoints, computing SIFT feature descriptors of depth images of the face around sample points defined starting from the facial keypoints, and selecting the subset of features with maximum relevance. 
Training a Support Vector Machine (SVM) for each facial expression to be recognized, and combining them to form a multi-class classifier, an average recognition rate of 78.43% on the BU-3DFE database has been obtained. Comparison with competitor approaches using a common experimental setting on the BU-3DFE database shows that our solution is capable of obtaining state-of-the-art results. The same 3D face representation framework and testing database have also been used to perform 3D facial expression retrieval (i.e., retrieve 3D scans with the same facial expression as shown by a target subject), with results proving the viability of the proposed solution." }, { "instance_id": "R34454xR34448", "comparison_id": "R34454", "paper_id": "R34448", "text": "Facial expression recognition and synthesis based on an appearance model Facial expression interpretation, recognition and analysis is a key issue in visual communication and man-to-machine interaction. We address the issues of facial expression recognition and synthesis and compare the proposed bilinear factorization based representations with previously investigated methods such as linear discriminant analysis and linear regression. We conclude that bilinear factorization outperforms these techniques in terms of correct recognition rates and synthesis photorealism, especially when the number of training samples is restrained." }, { "instance_id": "R34454xR34436", "comparison_id": "R34454", "paper_id": "R34436", "text": "Facial Expression Recognition using PCA and Gabor with JAFFE Database In this paper I discuss a facial expression recognition system in two different ways and with two different databases. Principal Component Analysis is used here for feature extraction. I used the JAFFE (Japanese Female Facial Expression) database. Implementing the system with the JAFFE database, I obtained an accuracy of about 70-71%, which reflects quite poor efficiency of the system.
Then I implemented a facial expression recognition system with a Gabor filter and PCA. The Gabor filter was selected for its good feature-extraction properties. The output of the Gabor filter was used as the input to the PCA. PCA offers good dimensionality reduction, so it was chosen for that purpose." }, { "instance_id": "R34454xR34432", "comparison_id": "R34454", "paper_id": "R34432", "text": "Constants across cultures in the face and emotion. This study addresses the question of whether any facial expressions of emotion are universal. Recent studies showing that members of literate cultures associated the same emotion concepts with the same facial behaviors could not demonstrate that at least some facial expressions of emotion are universal; the cultures compared had all been exposed to some of the same mass media presentations of facial expression, and these may have taught the people in each culture to recognize the unique facial expressions of other cultures. To show that members of a preliterate culture who had minimal exposure to literate cultures would associate the same emotion concepts with the same facial behaviors as do members of Western and Eastern literate cultures, data were gathered in New Guinea by telling subjects a story, showing them a set of three faces, and asking them to select the face which showed the emotion appropriate to the story. The results provide evidence in support of the hypothesis that the association between particular facial muscular patterns and discrete emotions is universal." }, { "instance_id": "R34605xR34556", "comparison_id": "R34605", "paper_id": "R34556", "text": "A New Approach to Manage Security against Neighborhood Attacks in Social Networks Nowadays, more and more social network data are being published in one way or another, so preserving privacy in publishing social network data has become an important concern.
With some local knowledge about individuals in a social network, an adversary may attack the privacy of some victims easily. Most of the work done so far towards privacy preservation can deal with relational data only. However, Bin Zhou and Jian Pei [11] proposed a scheme for anonymization of social networks, which is an initiative in this direction and provides a partial solution to this problem. In fact, their algorithm cannot handle the situations in which an adversary has knowledge about vertices in the second or higher hops of a vertex, in addition to its immediate neighbors. In this paper, we propose a modification to their algorithm for the network anonymization which can handle such situations. In doing so, we use an algorithm for graph isomorphism based on adjacency matrix instead of their approach using DFS technique [11]. More importantly, the time complexity of our algorithm is less than that of Zhou and Pei." }, { "instance_id": "R34605xR34519", "comparison_id": "R34605", "paper_id": "R34519", "text": "Measuring Topological Anonymity in Social Networks While privacy preservation of data mining approaches has been an important topic for a number of years, privacy of social network data is a relatively new area of interest. Previous research has shown that anonymization alone may not be sufficient for hiding identity information on certain real world data sets. In this paper, we focus on understanding the impact of network topology and node substructure on the level of anonymity present in the network. We present a new measure, topological anonymity, that quantifies the amount of privacy preserved in different topological structures. The measure uses a combination of known social network metrics and attempts to identify when node and edge inference breeches arise in these graphs." 
}, { "instance_id": "R34605xR34526", "comparison_id": "R34605", "paper_id": "R34526", "text": "Preserving Privacy in Social Networks Against Neighborhood Attacks Recently, as more and more social network data has been published in one way or another, preserving privacy in publishing social network data becomes an important concern. With some local knowledge about individuals in a social network, an adversary may attack the privacy of some victims easily. Unfortunately, most of the previous studies on privacy preservation can deal with relational data only, and cannot be applied to social network data. In this paper, we take an initiative towards preserving privacy in social network data. We identify an essential type of privacy attacks: neighborhood attacks. If an adversary has some knowledge about the neighbors of a target victim and the relationship among the neighbors, the victim may be re-identified from a social network even if the victim's identity is preserved using the conventional anonymization techniques. We show that the problem is challenging, and present a practical solution to battle neighborhood attacks. The empirical study indicates that anonymized social networks generated by our method can still be used to answer aggregate network queries with high accuracy." }, { "instance_id": "R34605xR34574", "comparison_id": "R34605", "paper_id": "R34574", "text": "Randomizing Social Networks: a Spectrum Preserving Approach Understanding the general properties of real social networks has gained much attention due to the proliferation of networked data. The nodes in the network are the individuals and the links among them denote their relationships. Many applications of networks such as anonymous Web browsing require relationship anonymity due to the sensitive, stigmatizing, or confidential nature of the relationship. One general approach for this problem is to randomize the edges in true networks, and only disclose the randomized networks. 
In this paper, we investigate how various properties of networks may be affected due to randomization. Specifically, we focus on the spectrum since the eigenvalues of a network are intimately connected to many important topological features. We also conduct theoretical analysis on the extent to which edge anonymity can be achieved. A spectrum preserving graph randomization method, which can better preserve network properties while protecting edge anonymity, is then presented and empirically evaluated." }, { "instance_id": "R34605xR34529", "comparison_id": "R34605", "paper_id": "R34529", "text": "Preservation of Privacy in Publishing Social Network Data This paper consider the privacy disclosure in social network data publishing. We assume that adversaries know the degree of a target individual and the target's immediate neighbors, and identify an essential type of privacy attacks: background knowledge attacks. We propose a practical solution to defend against background knowledge attacks. The experimental results confirm that the anonymized social networks obtained by our method can still be used to answer aggregate network queries with high accuracy." }, { "instance_id": "R34605xR34958", "comparison_id": "R34605", "paper_id": "R34958", "text": "K-isomorphism: privacy preserving network publication against structural attacks Serious concerns on privacy protection in social networks have been raised in recent years; however, research in this area is still in its infancy. The problem is challenging due to the diversity and complexity of graph data, on which an adversary can use many types of background knowledge to conduct an attack. One popular type of attacks as studied by pioneer work [2] is the use of embedding subgraphs. We follow this line of work and identify two realistic targets of attacks, namely, NodeInfo and LinkInfo. 
Our investigations show that k-isomorphism, or anonymization by forming k pairwise isomorphic subgraphs, is both sufficient and necessary for the protection. The problem is shown to be NP-hard. We devise a number of techniques to enhance the anonymization efficiency while retaining the data utility. A compound vertex ID mechanism is also introduced for privacy preservation over multiple data releases. The satisfactory performance on a number of real datasets, including HEP-Th, EUemail and LiveJournal, illustrates that the high symmetry of social networks is very helpful in mitigating the difficulty of the problem." }, { "instance_id": "R34605xR34538", "comparison_id": "R34605", "paper_id": "R34538", "text": "Anonymizing graphs against weight-based attacks The increasing popularity of graph data, such as social and online communities, has initiated a prolific research area in knowledge discovery and data mining. As more real-world graphs are released publicly, there is growing concern about privacy breaching for the entities involved. An adversary may reveal identities of individuals in a published graph by having the topological structure and/or basic graph properties as background knowledge. Many previous studies addressing such attack as identity disclosure, however, concentrate on preserving privacy in simple graph data only. In this paper, we consider the identity disclosure problem in weighted graphs. The motivation is that, a weighted graph can introduce much more unique information than its simple version, which makes the disclosure easier. We first formalize a general anonymization model to deal with weight-based attacks. Then two concrete attacks are discussed based on weight properties of a graph, including the sum and the set of adjacent weights for each vertex. We also propose a complete solution for the weight anonymization problem to prevent a graph from both attacks. 
Our approaches are efficient and practical, and have been validated by extensive experiments on both synthetic and real-world datasets." }, { "instance_id": "R34605xR34506", "comparison_id": "R34605", "paper_id": "R34506", "text": "Preserving Privacy in Social Networks: A Structure-Aware Approach Graph structured data can be ubiquitously found in the real world. For example, social networks can easily be represented as graphs where the graph connotes the complex sets of relationships between members of social systems. While their analysis could be beneficial in many aspects, publishing certain types of social networks raises significant privacy concerns. This brings the problem of graph anonymization into sharp focus. Unlike relational data, the true information in graph structured data is encoded within the structure and graph properties. Motivated by this, we propose a structure aware anonymization approach that maximally preserves the structure of the original network as well as its structural properties while anonymizing it. Instead of anonymizing each node one by one independently, our approach treats each partitioned substructural component of the network as one single unit to be anonymized. This maximizes utility while enabling anonymization. We apply our method to both synthetic and real datasets and demonstrate its effectiveness and practical usefulness." }, { "instance_id": "R34605xR34599", "comparison_id": "R34605", "paper_id": "R34599", "text": "L-diversity: privacy beyond k-anonymity Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called \\kappa-anonymity has gained popularity. In a \\kappa-anonymized dataset, each record is indistinguishable from at least k\u20141 other records with respect to certain \"identifying\" attributes. 
In this paper we show with two simple attacks that a \\kappa-anonymized dataset has some subtle, but severe privacy problems. First, we show that an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. Second, attackers often have background knowledge, and we show that \\kappa-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks and we propose a novel and powerful privacy definition called \\ell-diversity. In addition to building a formal foundation for \\ell-diversity, we show in an experimental evaluation that \\ell-diversity is practical and can be implemented efficiently." }, { "instance_id": "R34605xR34533", "comparison_id": "R34605", "paper_id": "R34533", "text": "Comparisons of randomization and K-degree anonymization schemes for privacy preserving social network publishing Many applications of social networks require identity and/or relationship anonymity due to the sensitive, stigmatizing, or confidential nature of user identities and their behaviors. Recent work showed that the simple technique of anonymizing graphs by replacing the identifying information of the nodes with random ids does not guarantee privacy since the identification of the nodes can be seriously jeopardized by applying background based attacks. In this paper, we investigate how well an edge based graph randomization approach can protect node identities and sensitive links. We quantify both identity disclosure and link disclosure when adversaries have one specific type of background knowledge (i.e., knowing the degrees of target individuals). We also conduct empirical comparisons with the recently proposed K-degree anonymization schemes in terms of both utility and risks of privacy disclosures." 
}, { "instance_id": "R34605xR34564", "comparison_id": "R34605", "paper_id": "R34564", "text": "Data and Structural k-Anonymity in Social Networks The advent of social network sites in the last years seems to be a trend that will likely continue. What naive technology users may not realize is that the information they provide online is stored and may be used for various purposes. Researchers have pointed out for some time the privacy implications of massive data gathering, and effort has been made to protect the data from unauthorized disclosure. However, the data privacy research has mostly targeted traditional data models such as microdata. Recently, social network data has begun to be analyzed from a specific privacy perspective, one that considers, besides the attribute values that characterize the individual entities in the networks, their relationships with other entities. Our main contributions in this paper are a greedy algorithm for anonymizing a social network and a measure that quantifies the information loss in the anonymization process due to edge generalization." }, { "instance_id": "R34605xR34578", "comparison_id": "R34605", "paper_id": "R34578", "text": "On link privacy in randomizing social networks Many applications of social networks require relationship anonymity due to the sensitive, stigmatizing, or confidential nature of relationship. Recent work showed that the simple technique of anonymizing graphs by replacing the identifying information of the nodes with random IDs does not guarantee privacy since the identification of the nodes can be seriously jeopardized by applying subgraph queries. In this paper, we investigate how well an edge-based graph randomization approach can protect sensitive links. We show via theoretical studies and empirical evaluations that various similarity measures can be exploited by attackers to significantly improve their confidence and accuracy of predicted sensitive links between nodes with high similarity values. 
We also compare our similarity measure-based prediction methods with the low-rank approximation-based prediction in this paper." }, { "instance_id": "R34605xR34584", "comparison_id": "R34605", "paper_id": "R34584", "text": "Supervised random walks: predicting and recommending links in social networks Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open. We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function. Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction." }, { "instance_id": "R34605xR34561", "comparison_id": "R34605", "paper_id": "R34561", "text": "Preserving the Privacy of Sensitive Relationships in Graph Data In this paper, we focus on the problem of preserving the privacy of sensitive relationships in graph data. We refer to the problem of inferring sensitive relationships from anonymized graph data as link reidentification. 
We propose five different privacy preservation strategies, which vary in terms of the amount of data removed (and hence their utility) and the amount of privacy preserved. We assume the adversary has an accurate predictive model for links, and we show experimentally the success of different link re-identification strategies under varying structural characteristics of the data." }, { "instance_id": "R34605xR34550", "comparison_id": "R34605", "paper_id": "R34550", "text": "k-automorphism The growing popularity of social networks has generated interesting data management and data mining problems. An important concern in the release of these data for study is their privacy, since social networks usually contain personal information. Simply removing all identifiable personal information (such as names and social security number) before releasing the data is insufficient. It is easy for an attacker to identify the target by performing different structural queries. In this paper we propose k-automorphism to protect against multiple structural attacks and develop an algorithm (called KM) that ensures k-automorphism. We also discuss an extension of KM to handle \"dynamic\" releases of the data. Extensive experiments show that the algorithm performs well in terms of protection it provides." }, { "instance_id": "R34605xR34553", "comparison_id": "R34605", "paper_id": "R34553", "text": "The k-anonymity and l-diversity approaches for privacy preservation in social networks against neighborhood attacks Recently, more and more social network data have been published in one way or another. Preserving privacy in publishing social network data becomes an important concern. With some local knowledge about individuals in a social network, an adversary may attack the privacy of some victims easily. Unfortunately, most of the previous studies on privacy preservation data publishing can deal with relational data only, and cannot be applied to social network data. 
In this paper, we take an initiative toward preserving privacy in social network data. Specifically, we identify an essential type of privacy attacks: neighborhood attacks. If an adversary has some knowledge about the neighbors of a target victim and the relationship among the neighbors, the victim may be re-identified from a social network even if the victim\u2019s identity is preserved using the conventional anonymization techniques. To protect privacy against neighborhood attacks, we extend the conventional k-anonymity and l-diversity models from relational data to social network data. We show that the problems of computing optimal k-anonymous and l-diverse social networks are NP-hard. We develop practical solutions to the problems. The empirical study indicates that the anonymized social network data by our methods can still be used to answer aggregate network queries with high accuracy." }, { "instance_id": "R34605xR34511", "comparison_id": "R34605", "paper_id": "R34511", "text": "P-sensitive k-anonymity for social networks \u2014 The proliferation of social networks, where individuals share private information, has caused, in the last few years, a growth in the volume of sensitive data being stored in these networks. As users subscribe to more services and connect more with their friends, families, and colleagues, the desire to both protect the privacy of the network users and the temptation to extract, analyze, and use this information from the networks have increased. Previous research has looked at anonymizing social network graphs to ensure their k-anonymity in order to protect their nodes against identity disclosure. In this paper we introduce an extension to this k-anonymity model that adds the ability to protect against attribute disclosure. This new model has similar privacy features with the existing p-sensitive k-anonymity model for microdata. 
We also present a new algorithm for enforcing p-sensitive k-anonymity on social network data based on a greedy clustering approach. To our knowledge, no previous research has addressed preventing the disclosure of attribute information associated with social network nodes." }, { "instance_id": "R34663xR34656", "comparison_id": "R34663", "paper_id": "R34656", "text": "Construction of quality-assured infant feeding process of care data repositories: definition and design (Part 1) This study has been partially funded by the IBIME research group and by the EMCA Programme (Foundation for Training and Healthcare Research in the Region of Murcia and Regional Health and Consumer Authority of Murcia, Project PIEMCA08-13). The authors thank Montserrat Robles (Director of ITACA institute), the codification service of the Hospital Virgen del Castillo and the management team of Gerencia Area de Salud V-Altiplano for their valuable collaboration in general aspects of this work." }, { "instance_id": "R34663xR34627", "comparison_id": "R34663", "paper_id": "R34627", "text": "Barriers to using eHealth data for clinical performance feedback in Malawi: A case study INTRODUCTION Sub-optimal performance of healthcare providers in low-income countries is a critical and persistent global problem. The use of electronic health information technology (eHealth) in these settings is creating large-scale opportunities to automate performance measurement and provision of feedback to individual healthcare providers, to support clinical learning and behavior change. An electronic medical record system (EMR) deployed in 66 antiretroviral therapy clinics in Malawi collects data that supervisors use to provide quarterly, clinic-level performance feedback. Understanding barriers to provision of eHealth-based performance feedback for individual healthcare providers in this setting could present a relatively low-cost opportunity to significantly improve the quality of care.
OBJECTIVE The aims of this study were to identify and describe barriers to using EMR data for individualized audit and feedback for healthcare providers in Malawi and to consider how to design technology to overcome these barriers. METHODS We conducted a qualitative study using interviews, observations, and informant feedback in eight public hospitals in Malawi where an EMR system is used. We interviewed 32 healthcare providers and conducted seven hours of observation of system use. RESULTS We identified four key barriers to the use of EMR data for clinical performance feedback: provider rotations, disruptions to care processes, user acceptance of eHealth, and performance indicator lifespan. Each of these factors varied across sites and affected the quality of EMR data that could be used for the purpose of generating performance feedback for individual healthcare providers. CONCLUSION Using routinely collected eHealth data to generate individualized performance feedback shows potential at large scale for improving clinical performance in low-resource settings. However, technology used for this purpose must accommodate ongoing changes in barriers to eHealth data use. Understanding the clinical setting as a complex adaptive system (CAS) may enable designers of technology to effectively model change processes to mitigate these barriers." }, { "instance_id": "R34663xR34617", "comparison_id": "R34663", "paper_id": "R34617", "text": "Analysis of the quality of hospital information systems audit trails Background: Audit Trails (AT) are fundamental to information security in order to guarantee access traceability but can also be used to improve Health Information Systems\u2019 (HIS) quality, namely to assess how they are used or misused.
This paper aims at analysing the existence and quality of AT, describing scenarios in hospitals and making some recommendations to improve the quality of information. Methods: The people responsible for HIS at eight Portuguese hospitals were contacted in order to arrange an interview about the importance of AT and to collect audit trail data from their HIS. Five institutions agreed to participate in this study; four of them agreed to be interviewed, and four sent AT data. The interviews were performed in 2011 and audit trail data sent in 2011 and 2012. Each AT was evaluated and compared in relation to data quality standards, namely for completeness, comprehensibility and traceability, among others. Only one of the AT had enough information for us to apply a consistency evaluation by modelling user behaviour. Results: The interviewees in these hospitals only knew a few AT (an average of 1 AT per hospital in an estimate of 21 existing HIS), although they all recognize some advantages of analysing AT. Four hospitals sent a total of 7 AT \u2013 2 from Radiology Information System (RIS), 2 from Picture Archiving and Communication System (PACS), 3 from Patient Records. Three of the AT were understandable and three of the AT were complete. The AT from the patient records are better structured and more complete than those from the RIS/PACS. Conclusions: Existing AT do not have enough quality to guarantee traceability or be used in HIS improvement. Their quality reflects the importance given to them by the CIOs of healthcare institutions. Existing standards (e.g. ASTM:E2147, ISO/TS 18308:2004, ISO/IEC 27001:2006) are still not broadly used in Portugal."
}, { "instance_id": "R34663xR34652", "comparison_id": "R34663", "paper_id": "R34652", "text": "The Qu\u00e9bec BCG Vaccination Registry (1956\u20131992): assessing data quality and linkage with administrative health databases BackgroundVaccination registries have undoubtedly proven useful for estimating vaccination coverage as well as examining vaccine safety and effectiveness. However, their use for population health research is often limited. The Bacillus Calmette-Gu\u00e9rin (BCG) Vaccination Registry for the Canadian province of Qu\u00e9bec comprises some 4 million vaccination records (1926-1992). This registry represents a unique opportunity to study potential associations between BCG vaccination and various health outcomes. So far, such studies have been hampered by the absence of a computerized version of the registry. We determined the completeness and accuracy of the recently computerized BCG Vaccination Registry, as well as examined its linkability with demographic and administrative medical databases.MethodsTwo systematically selected verification samples, each representing ~0.1% of the registry, were used to ascertain accuracy and completeness of the electronic BCG Vaccination Registry. Agreement between the paper [listings (n = 4,987 records) and vaccination certificates (n = 4,709 records)] and electronic formats was determined along several nominal and BCG-related variables. Linkage feasibility with the Birth Registry (probabilistic approach) and provincial Healthcare Registration File (deterministic approach) was examined using nominal identifiers for a random sample of 3,500 individuals born from 1961 to 1974 and BCG vaccinated between 1970 and 1974.ResultsExact agreement was observed for 99.6% and 81.5% of records upon comparing, respectively, the paper listings and vaccination certificates to their corresponding computerized records. 
The proportion of successful linkage was 77% with the Birth Registry, 70% with the Healthcare Registration File, 57% with both, and varied by birth year. Conclusions: Computerization of this Registry yielded excellent results. The registry was complete and accurate, and linkage with administrative databases was highly feasible. This study represents the first step towards assembling large-scale population-based epidemiological studies which will enable filling important knowledge gaps on the potential health effects of early-life non-specific stimulation of the immune function, as resulting from BCG vaccination." }, { "instance_id": "R34663xR34615", "comparison_id": "R34663", "paper_id": "R34615", "text": "Defining and measuring completeness of electronic health records for secondary use We demonstrate the importance of explicit definitions of electronic health record (EHR) data completeness and how different conceptualizations of completeness may impact findings from EHR-derived datasets. This study has important repercussions for researchers and clinicians engaged in the secondary use of EHR data. We describe four prototypical definitions of EHR completeness: documentation, breadth, density, and predictive completeness. Each definition dictates a different approach to the measurement of completeness. These measures were applied to representative data from NewYork-Presbyterian Hospital's clinical data warehouse. We found that according to any definition, the number of complete records in our clinical database is far lower than the nominal total. The proportion that meets criteria for completeness is heavily dependent on the definition of completeness used, and the different definitions generate different subsets of records. We conclude that the concept of completeness in EHR is contextual. We urge data consumers to be explicit in how they define a complete record and transparent about the limitations of their data."
}, { "instance_id": "R34663xR34609", "comparison_id": "R34663", "paper_id": "R34609", "text": "Reporting systems, reporting rates and completeness of data reported from primary healthcare to a Swedish quality register \u2013 The National Diabetes Register OBJECTIVE The aims of this paper were to study the reporting rate and completeness of data reported from primary healthcare centres (PHCCs) in Sweden to the Swedish National Diabetes Register (NDR), with a special attention on the relation between these measures and the reporting system used by the PHCCs. METHOD A national survey conducted in Swedish primary healthcare covering the year 2006. A questionnaire was used to collect data from 523 PHCCs. Data on 87,099 adult diabetic patients attending these PHCCs and reported to the NDR were obtained from the register. In Sweden, participation in the NDR is voluntary. The data were reported through the Internet, either online using a web-based system or by direct transmission. The main outcome measures were reporting rate and completeness of reported data. RESULTS Of the 523 PHCCs, almost two-thirds had reported <75% of their diabetic patients to the NDR. The lowest reporting rate was found among the largest PHCCs, while the highest was found among small PHCCs (p<0.001). Reasons given for not reporting data to the NDR were lack of time and lack of personnel resources. Altogether, 73.1% of the PHCCs reported data to the NDR online using a web-based system, 20.5% used direct transmission and 6.3% used both systems. The PHCCs that reported data through direct transmission systems reported almost 70% of their diabetic patients to the NDR, while PHCCs using web-based systems reported 54% of their diabetic patients to the NDR. Adjusted for other factors, using direct transmission increased the reporting rate by 13.0 percentage points. However, the web-based system contributed to a higher completeness of data than the direct transmission system. 
CONCLUSIONS A direct transmission system facilitates a high reporting rate to the register at the expense of lower completeness of the reported data." }, { "instance_id": "R34663xR34641", "comparison_id": "R34663", "paper_id": "R34641", "text": "Corrigendum to \u201cTowards an ontology for data quality in integrated chronic disease management: A realist review of the literature\u201d [Int. J. Med. Inform. 82 (2013) 10\u201324] University of NSW School of Public Health & Community Medicine, Sydney, Australia Isfahan University of Medical Sciences, Faculty of Management and Medical Information Sciences, Health Information Technology esearch Center, Iran University of NSW Centre for Primary Health Care & Equity, Sydney, Australia General Practice Unit, South West Sydney Local Health District, Australia Asia Pacific ubiquitous Healthcare research Centre (APuHC), University of NSW, Sydney, Australia Department of Health Care Management and Policy, University of Surrey, Guildford, UK Population Health Unit, South West Sydney Local Health District, Australia Ingham Institute of Applied Medical Research, Australia" }, { "instance_id": "R34663xR34635", "comparison_id": "R34663", "paper_id": "R34635", "text": "Optimizing the user interface of a data entry module for an electronic patient record for cardiac rehabilitation: A mixed method usability approach INTRODUCTION Cumbersome electronic patient record (EPR) interfaces may complicate data-entry in clinical practice. Completeness of data entered in the EPR determines, among other things, the value of computerized clinical decision support (CCDS). Quantitative usability evaluations can provide insight into mismatches between the system design model of data entry and users' data entry behavior, but not into the underlying causes for these mismatches. Mixed method usability evaluation studies may provide these insights, and thus support generating redesign recommendations for improving an EPR system's data entry interface. 
AIM To improve the usability of the data entry interface of an EPR system with CCDS in the field of cardiac rehabilitation (CR), and additionally, to assess the value of a mixed method usability approach in this context. METHODS Seven CR professionals performed a think-aloud usability evaluation both before (beta-version) and after the redesign of the system. Observed usability problems from both evaluations were analyzed and categorized using Zhang et al.'s heuristic principles of good interface design. We combined the think-aloud usability evaluation of the system's beta-version with the measurement of a new usability construct: users' deviations in action sequence from the system's predefined data entry order sequence. Recommendations for redesign were implemented. We assessed whether the redesign improved CR professionals' (1) task efficacy (with respect to the completeness of data they collected), and (2) task efficiency (with respect to the average number of mouse clicks they needed to complete data entry subtasks). RESULTS With the system's beta version, 40% of health care professionals' navigation actions through the system deviated from the predefined next system action. The causes for these deviations as revealed by the think-aloud method mostly concerned mismatches between the system design model for data entry action sequences and users expectations of these action sequences, based on their paper-based daily routines. This caused non completion of data entry tasks (31% of main tasks completed), and more navigation actions than minimally required (146% of the minimum required). In the redesigned system the data entry navigational structure was organized in a flexible way around an overview screen to better mimic users' paper-based daily routines of collecting patient data. This redesign resulted in an increased number of completed main tasks (70%) and a decrease in navigation actions (133% of the minimum required). 
The think-aloud usability evaluation of the redesigned system showed that remaining problems concerned flexibility (e.g., lack of customization options) and consistency (mainly with layout and position of items on the screen). CONCLUSION The mixed method usability evaluation was supportive in revealing the magnitude and causes of mismatches between the system design model of data entry and users' data entry behavior. However, as both task efficacy and efficiency were still not optimal with the redesigned EPR, we advise to perform a cognitive analysis on end users' mental processes and behavior patterns in daily work processes specifically during the requirements analysis phase of development of interactive healthcare information systems." }, { "instance_id": "R34663xR34647", "comparison_id": "R34663", "paper_id": "R34647", "text": "Electronic immunization data collection systems: application of an evaluation framework Abstract Background Evaluating the features and performance of health information systems can serve to strengthen the systems themselves as well as to guide other organizations in the process of designing and implementing surveillance tools. We adapted an evaluation framework in order to assess electronic immunization data collection systems, and applied it in two Ontario public health units. Methods The Centers for Disease Control and Prevention\u2019s Guidelines for Evaluating Public Health Surveillance Systems are broad in nature and serve as an organizational tool to guide the development of comprehensive evaluation materials. Based on these Guidelines, and informed by other evaluation resources and input from stakeholders in the public health community, we applied an evaluation framework to two examples of immunization data collection and examined several system attributes: simplicity, flexibility, data quality, timeliness, and acceptability. 
Data collection approaches included key informant interviews, logic and completeness assessments, client surveys, and on-site observations. Results Both evaluated systems allow high-quality immunization data to be collected, analyzed, and applied in a rapid fashion. However, neither system is currently able to link to other providers\u2019 immunization data or provincial data sources, limiting the comprehensiveness of coverage assessments. We recommended that both organizations explore possibilities for external data linkage and collaborate with other jurisdictions to promote a provincial immunization repository or data sharing platform. Conclusions Electronic systems such as the ones described in this paper allow immunization data to be collected, analyzed, and applied in a rapid fashion, and represent the infostructure required to establish a population-based immunization registry, critical for comprehensively assessing vaccine coverage." }, { "instance_id": "R34663xR34654", "comparison_id": "R34663", "paper_id": "R34654", "text": "Are family physicians comprehensively using electronic medical records such that the data can be used for secondary purposes? A Canadian perspective BackgroundWith the introduction and implementation of a variety of government programs and policies to encourage adoption of electronic medical records (EMRs), EMRs are being increasingly adopted in North America. We sought to evaluate the completeness of a variety of EMR fields to determine if family physicians were comprehensively using their EMRs and the suitability of use of the data for secondary purposes in Ontario, Canada.MethodsWe examined EMR data from a convenience sample of family physicians distributed throughout Ontario within the Electronic Medical Record Administrative data Linked Database (EMRALD) as extracted in the summer of 2012. We identified all physicians with at least one year of EMR use. 
Measures were developed and rates of physician documentation of clinical encounters, electronic prescriptions, laboratory tests, blood pressure and weight, referrals, consultation letters, and all fields in the cumulative patient profile were calculated as a function of physician and patient time since starting on the EMR. Results Of the 167 physicians with at least one year of EMR use, we identified 186,237 patients. Overall, the fields with the highest level of completeness were for visit documentations and prescriptions (>70 %). Improvements were observed with increasing trends of completeness over time for almost all EMR fields according to increasing physician time on EMR. Assessment of the influence of patient time on EMR demonstrated an increasing likelihood of the population of EMR fields over time, with the largest improvements occurring between the first and second years. Conclusions All of the data fields examined appear to be reasonably complete within the first year of adoption with the biggest increase occurring from the first to the second year. Using all of the basic functions of the EMR appears to be occurring in the current environment of EMR adoption in Ontario. Thus the data appears to be suitable for secondary use." }, { "instance_id": "R34663xR34607", "comparison_id": "R34663", "paper_id": "R34607", "text": "An assessment of data quality in a multi-site electronic medical record system in Haiti OBJECTIVES Strong data quality (DQ) is a precursor to strong data use. In resource limited settings, routine DQ assessment (DQA) within electronic medical record (EMR) systems can be resource-intensive using manual methods such as audit and chart review; automated queries offer an efficient alternative. This DQA focused on Haiti's national EMR - iSant\u00e9 - and included longitudinal data for over 100,000 persons living with HIV (PLHIV) enrolled in HIV care and treatment services at 95 health care facilities (HCF). 
METHODS This mixed-methods evaluation used a qualitative Delphi process to identify DQ priorities among local stakeholders, followed by a quantitative DQA on these priority areas. The quantitative DQA examined 13 indicators of completeness, accuracy, and timeliness of retrospective data collected from 2005 to 2013. We described levels of DQ for each indicator over time, and examined the consistency of within-HCF performance and associations between DQ and HCF and EMR system characteristics. RESULTS Over all iSant\u00e9 data, age was incomplete in <1% of cases, while height, pregnancy status, TB status, and ART eligibility were more incomplete (approximately 20-40%). Suspicious data flags were present for <3% of cases of male sex, ART dispenses, CD4 values, and visit dates, but for 26% of cases of age. Discontinuation forms were available for about half of all patients without visits for 180 or more days, and >60% of encounter forms were entered late. For most indicators, DQ tended to improve over time. DQ was highly variable across HCF, and within HCFs DQ was variable across indicators. In adjusted analyses, HCF and system factors with generally favorable and statistically significant associations with DQ were University hospital category, private sector governance, presence of local iSante server, greater HCF experience with the EMR, greater maturity of the EMR itself, and having more system users but fewer new users. In qualitative feedback, local stakeholders emphasized lack of stable power supply as a key challenge to data quality and use of the iSant\u00e9 EMR. CONCLUSIONS Variable performance on key DQ indicators across HCF suggests that excellent DQ is achievable in Haiti, but further effort is needed to systematize and routinize DQ approaches within HCFs. A dynamic, interactive \"DQ dashboard\" within iSant\u00e9 could bring transparency and motivate improvement. 
While the results of the study are specific to Haiti's iSant\u00e9 data system, the study's methods and thematic lessons learned hold generalized relevance for other large-scale EMR systems in resource-limited countries." }, { "instance_id": "R34663xR34629", "comparison_id": "R34663", "paper_id": "R34629", "text": "Concept and implementation of a computer-based reminder system to increase completeness in clinical documentation PURPOSE Medical documentation is often incomplete. Missing information may impede or bias analysis of study data and can cause delays. In a single source information system, clinical routine documentation and electronic data capture (EDC) systems are connected in the hospital information system (HIS). In this setting, both clinical routine and research would benefit from a higher rate of complete documentation. METHODS We designed a HIS-based reminder system which identifies not yet finalized forms and sends reminder e-mails to responsible physicians depending on escalation level. The generic concept to create reminder e-mail messages consists in database queries on not-finalized forms and generation of e-mail messages based on this output via the communication server. We compared completeness of electronic HIS forms before and after introduction of the reminder system three months each. RESULTS Completeness increased highly significantly (p<0.0001) for each form type (medical history form 93% (145 of 156 forms) vs 100% (206 forms), stress injection protocol 90% (142 of 157 forms) vs 100% (198 forms) and rest injection protocol 31% (45 of 147 forms) vs 100% (208 forms)). Forty-six reminder e-mails to the responsible study physician and 53 reminder e-mails to the principal investigator were sent to finish 2 medical history forms, 8 stress and 20 rest injection protocols. These 2 medical history forms were completed after 1 and 56 days. 
The median processing time of the stress injection protocols in the post-implementation phase was 18 days (range from 1 to 60 days). The median processing time of the rest injection protocols was 26 days (range from 5 to 37 days). CONCLUSION A computer-based reminder system to identify incomplete documentation forms with a notification and escalation mechanism can improve completeness of finalized forms significantly. It is technically feasible and effective in the clinical setting." }, { "instance_id": "R34663xR34633", "comparison_id": "R34663", "paper_id": "R34633", "text": "Implementation of a cloud-based electronic medical record for maternal and child health in rural Kenya BACKGROUND Complete and timely health information is essential to inform public health decision-making for maternal and child health, but is often lacking in resource-constrained settings. Electronic medical record (EMR) systems are increasingly being adopted to support the delivery of health care, and are particularly amenable to maternal and child health services. An EMR system could enable the mother and child to be tracked and monitored throughout maternity shared care, improve quality and completeness of data collected and enhance sharing of health information between outpatient clinic and the hospital, and between clinical and public health services to inform decision-making. METHODS This study implemented a novel cloud-based electronic medical record system in a maternal and child health outpatient setting in Western Kenya between April and June 2013 and evaluated its impact on improving completeness of data collected by clinical and public health services. The impact of the system was assessed using a two-sample test of proportions pre- and post-implementation of EMR-based data verification. RESULTS Significant improvements in completeness of the antenatal record were recorded through implementation of EMR-based data verification. 
A difference of 42.9% in missing data (including screening for hypertension, tuberculosis, malaria, HIV status or ART status of HIV positive women) was recorded pre- and post-implementation. Despite significant impact of EMR-based data verification on data completeness, overall screening rates in antenatal care were low. CONCLUSION This study has shown that EMR-based data verification can improve the completeness of data collected in the patient record for maternal and child health. A number of issues, including data management and patient confidentiality, must be considered but significant improvements in data quality are recorded through implementation of this EMR model." }, { "instance_id": "R34663xR34658", "comparison_id": "R34663", "paper_id": "R34658", "text": "Data quality assessment in healthcare: a 365-day chart review of inpatients' health records at a Nigerian tertiary hospital BACKGROUND Health records are essential for good health care. Their quality depends on accurate and prompt documentation of the care provided and regular analysis of content. This study assessed the quantitative properties of inpatient health records at the Federal Medical Centre, Bida, Nigeria. METHOD A retrospective study was carried out to assess the documentation of 780 paper-based health records of inpatients discharged in 2009. RESULTS 732 patient records were reviewed from the departments of obstetrics (45.90%), pediatrics (24.32%), and other specialties (29.78%). Documentation performance was very good (98.49%) for promptness in recording care within the first 24 h of admission, fair (58.80%) for proper entry of patient unit number (unique identifier), and very poor (12.84%) for utilization of discharge summary forms. 
Overall, surgery records were nearly always (100%) prompt regarding care documentation, obstetrics records were consistent (80.65%) in entering patients' names in notes, and the principal diagnosis was properly documented in all (100%) completed discharge summary forms in medicine. 454 (62.02%) folders were chronologically arranged, 456 (62.29%) were properly held together with file tags, and most (80.60%) discharged folders were reviewed, analyzed, and assigned appropriate code numbers. CONCLUSIONS Inadequacies were found in clinical documentation, especially gross underutilization of discharge summary forms. However, some forms were properly documented, suggesting that hospital healthcare providers possess the necessary skills for quality clinical documentation but lack the will. There is a need to institute a clinical documentation improvement program and promote quality clinical documentation among staff." }, { "instance_id": "R34663xR34613", "comparison_id": "R34663", "paper_id": "R34613", "text": "Validating an ontology-based algorithm to identify patients with Type 2 Diabetes Mellitus in Electronic Health Records BACKGROUND Improving healthcare for people with chronic conditions requires clinical information systems that support integrated care and information exchange, emphasizing a semantic approach to support multiple and disparate Electronic Health Records (EHRs). Using a literature review, the Australian National Guidelines for Type 2 Diabetes Mellitus (T2DM), SNOMED-CT-AU and input from health professionals, we developed a Diabetes Mellitus Ontology (DMO) to diagnose and manage patients with diabetes. This paper describes the manual validation of the DMO-based approach using real world EHR data from a general practice (n=908 active patients) participating in the electronic Practice Based Research Network (ePBRN). 
METHOD The DMO-based algorithm to query, using Semantic Protocol and RDF Query Language (SPARQL), the structured fields in the ePBRN data repository were iteratively tested and refined. The accuracy of the final DMO-based algorithm was validated with a manual audit of the general practice EHR. Contingency tables were prepared and Sensitivity and Specificity (accuracy) of the algorithm to diagnose T2DM measured, using the T2DM cases found by manual EHR audit as the gold standard. Accuracy was determined with three attributes - reason for visit (RFV), medication (Rx) and pathology (path) - singly and in combination. RESULTS The Sensitivity and Specificity of the algorithm were 100% and 99.88% with RFV; 96.55% and 98.97% with Rx; and 15.6% and 98.92% with Path. This suggests that Rx and Path data were not as complete or correct as the RFV for this general practice, which kept its RFV information complete and current for diabetes. However, the completeness is good enough for this purpose as confirmed by the very small relative deterioration of the accuracy (Sensitivity and Specificity of 97.67% and 99.18%) when calculated for the combination of RFV, Rx and Path. The manual EHR audit suggested that the accuracy of the algorithm was influenced by data quality such as incorrect data due to mistaken units of measurement and unavailable data due to non-documentation or documented in the wrong place or progress notes, problems with data extraction, encryption and data management errors. CONCLUSION This DMO-based algorithm is sufficiently accurate to support a semantic approach, using the RFV, Rx and Path to define patients with T2DM from EHR data. However, the accuracy can be compromised by incomplete or incorrect data. The extent of compromise requires further study, using ontology-based and other approaches." 
}, { "instance_id": "R34663xR34619", "comparison_id": "R34663", "paper_id": "R34619", "text": "Evaluation of data completeness in the electronic health record for the purpose of patient recruitment into clinical trials: a retrospective analysis of element presence BackgroundComputerized clinical trial recruitment support is one promising field for the application of routine care data for clinical research. The primary task here is to compare the eligibility criteria defined in trial protocols with patient data contained in the electronic health record (EHR). To avoid the implementation of different patient definitions in multi-site trials, all participating research sites should use similar patient data from the EHR. Knowledge of the EHR data elements which are commonly available from most EHRs is required to be able to define a common set of criteria. The objective of this research is to determine for five tertiary care providers the extent of available data compared with the eligibility criteria of randomly selected clinical trials.MethodsEach participating study site selected three clinical trials at random. All eligibility criteria sentences were broken up into independent patient characteristics, which were then assigned to one of the 27 semantic categories for eligibility criteria developed by Luo et al. We report on the fraction of patient characteristics with corresponding structured data elements in the EHR and on the fraction of patients with available data for these elements. The completeness of EHR data for the purpose of patient recruitment is calculated for each semantic group.Results351 eligibility criteria from 15 clinical trials contained 706 patient characteristics. In average, 55% of these characteristics could be documented in the EHR. Clinical data was available for 64% of all patients, if corresponding data elements were available. The total completeness of EHR data for recruitment purposes is 35%. 
The best performing semantic groups were \u2018age\u2019 (89%), \u2018gender\u2019 (89%), \u2018addictive behaviour\u2019 (74%), \u2018disease, symptom and sign\u2019 (64%) and \u2018organ or tissue status\u2019 (61%). No data was available for 6 semantic groups. Conclusions There exists a significant gap in structure and content between data documented during patient care and data required for patient eligibility assessment. Nevertheless, EHR data on age and gender of the patient, as well as selected information on his disease can be complete enough to allow for an effective support of the manual screening process with an intelligent preselection of patients and patient data." }, { "instance_id": "R34706xR34678", "comparison_id": "R34706", "paper_id": "R34678", "text": "A Lock-Free Solution for Load Balancing in Multi-Core Environment Load balancing device is an important part of cloud platform. One of the most common applications of load balancing is to provide a single powerful virtual machine from multiple servers. In multi-core environment, the load balancing device can run multiple physically parallel load-balancing processes to increase overall performance. An important issue when operating a load-balanced service is how to send all requests in a user session consistently to the same backend server, i.e. session maintaining. Most multiprocessing load balancing solutions use shared memory and locks when managing sessions. By modifying Linux kernel, we avoid using shared memory and implement a lock-free multiprocessing load balancing solution." }, { "instance_id": "R34706xR34699", "comparison_id": "R34706", "paper_id": "R34699", "text": "The analytic hierarchy process: task scheduling and resource allocation in cloud computing environment Resource allocation is a complicated task in cloud computing environment because there are many alternative computers with varying capacities. 
The goal of this paper is to propose a model for task-oriented resource allocation in a cloud computing environment. Resource allocation task is ranked by the pairwise comparison matrix technique and the Analytic Hierarchy Process given the available resources and user preferences. The computing resources can be allocated according to the rank of tasks. Furthermore, an induced bias matrix is used to identify the inconsistent elements and improve the consistency ratio when conflicting weights in various tasks are assigned. Two illustrative examples are introduced to validate the proposed method." }, { "instance_id": "R34706xR34701", "comparison_id": "R34706", "paper_id": "R34701", "text": "Performance evaluation of web servers using central load balancing policy over virtual machines on cloud Cloud Computing adds more power to the existing Internet technologies. Virtualization harnesses the power of the existing infrastructure and resources. With virtualization we can simultaneously run multiple instances of different commodity operating systems. Since we have limited processors and jobs work in concurrent fashion, overload situations can occur. Things become even more challenging in distributed environment. We propose Central Load Balancing Policy for Virtual Machines (CLBVM) to balance the load evenly in a distributed virtual machine/cloud computing environment. This work tries to compare the performance of web servers based on our CLBVM policy and independent virtual machine (VM) running on a single physical server using Xen Virtualization. The paper discusses the efficacy and feasibility of using this kind of policy for overall performance improvement." }, { "instance_id": "R34706xR34693", "comparison_id": "R34706", "paper_id": "R34693", "text": "Profit-driven scheduling for cloud services with data access awareness Resource sharing between multiple tenants is a key rationale behind the cost effectiveness in the cloud. 
While this resource sharing greatly helps service providers improve resource utilization and increase profit, it impacts on the service quality (e.g., the performance of consumer applications). In this paper, we address the reconciliation of these conflicting objectives by scheduling service requests with the dynamic creation of service instances. Specifically, our scheduling algorithms attempt to maximize profit within the satisfactory level of service quality specified by the service consumer. Our contributions include (1) the development of a pricing model using processor-sharing for clouds (i.e., queuing delay is embedded in processing time), (2) the application of this pricing model to composite services with dependency consideration, (3) the development of two sets of service request scheduling algorithms, and (4) the development of a prioritization policy for data service aiming to maximize the profit of data service." }, { "instance_id": "R34706xR34670", "comparison_id": "R34706", "paper_id": "R34670", "text": "Load Balancing for Internet Distributed Services Using Limited Redirection Rates The Internet has become the universal support for computer applications. This increases the need for solutions that provide dependability and QoS for web applications. The replication of web servers on geographically distributed data centers allows the service provider to tolerate disastrous failures and to improve the response times perceived by clients. A key issue for good performance of worldwide distributed web services is the efficiency of the load balancing mechanism used to distribute client requests among the replicated servers. Load balancing can reduce the need for over-provision of resources, and help tolerate abrupt load peaks and/or partial failures through load conditioning. In this paper, we propose a new load balancing solution that reduces service response times by redirecting requests to the closest remote servers without overloading them. 
We also describe a middleware that implements this protocol and present the results of a set of simulations that show its usefulness." }, { "instance_id": "R34706xR34686", "comparison_id": "R34706", "paper_id": "R34686", "text": "HEFT based workflow scheduling algorithm for cost optimization within deadline in hybrid clouds Cloud computing nowadays is playing major role in storage and processing huge tasks with scalability options. Deadline based scheduling is the main focus when we process the tasks using available resources. Private cloud is owned by an organization and resources are free for user whereas public clouds charge users using pay-as-you-go model. When the private cloud is not enough for processing user tasks, resources can be acquired from public cloud. The combination of a public cloud and a private cloud gives rise to hybrid cloud. In hybrid clouds, task scheduling is a complex process as tasks can be allocated resources of either the private cloud or the public cloud. This paper presents an algorithm that decides which resources should be taken on lease from public cloud to complete the workflow execution within deadline and with minimum monetary cost for user. A hybrid scheduling algorithm has been proposed which uses a new concept of sub-deadline for rescheduling and allocation of resources in public cloud. The algorithm helps in finding best resources on public cloud for cost saving and complete workflow execution within deadlines. Three rescheduling policies have been evaluated in this paper. For performance analysis, we have compared the HEFT (Heterogeneous Earliest Finish Time) based hybrid scheduling algorithm with greedy approach and min-min approach. Results have shown that the proposed algorithm optimizes a large amount of cost compared to greedy and min-min approaches and completes all tasks within deadline." 
}, { "instance_id": "R34706xR34666", "comparison_id": "R34706", "paper_id": "R34666", "text": "User-Priority guided min min scheduling algorithm for load balancing in cloud computing Cloud computing is emerging as a new paradigm of large-scale distributed computing. In order to utilize the power of cloud computing completely, we need an efficient task scheduling algorithm. The traditional Min-Min algorithm is a simple, efficient algorithm that produces a better schedule that minimizes the total completion time of tasks than other algorithms in the literature [7]. However the biggest drawback of it is load imbalanced, which is one of the central issues for cloud providers. In this paper, an improved load balanced algorithm is introduced on the ground of Min-Min algorithm in order to reduce the makespan and increase the resource utilization (LBIMM). At the same time, Cloud providers offer computer resources to users on a pay-per-use base. In order to accommodate the demands of different users, they may offer different levels of quality for services. Then the cost per resource unit depends on the services selected by the user. In return, the user receives guarantees regarding the provided resources. To observe the promised guarantees, user-priority was considered in our proposed PA-LBIMM so that user's demand could be satisfied more completely. At last, the introduced algorithm is simulated using Matlab toolbox. The simulation results show that the improved algorithm can lead to significant performance gain and achieve over 20% improvement on both VIP user satisfaction and resource utilization ratio." }, { "instance_id": "R34845xR34761", "comparison_id": "R34845", "paper_id": "R34761", "text": "Effects of some egg characteristics on the mass loss and hatchability of ostrich (Struthio camelus) eggs 1. 
This study was conducted to examine some egg characteristics and determine the effects of eggshell thickness and eggshell porosity on water loss and hatchability of eggs in ostriches. 2. Shell thickness did not correlate significantly with hatchability. However, eggs of low shell thickness lost more mass (13\u00b703%) than those with intermediate (11\u00b722%) and high (10\u00b736%) shell thickness. Mass loss during incubation was higher in hatched (11\u00b798%) than unhatched eggs (11\u00b709%). Shell thickness was negatively correlated to egg mass loss (r = \u22120\u00b765). 3. The pore density was correlated with hatchability. Hatchability was 50% lower in eggs with low pore densities (40\u00b793%) than with high densities (80\u00b794%). Pore density was positively correlated with egg mass loss (r = 0\u00b763). Incubation mass losses of hatched and unhatched eggs were not significantly different. 4. Mean eggshell water vapour conductance (G) value and shell conductance constant (k) were 87\u00b777 \u00b1 4\u00b721 mg H2O/d/Torr and 2\u00b744 respectively (n = 15). 5. Because of eggshell functional properties and resulting low egg mass loss hatchability is low when ostrich eggs are artificially incubated. The mass of eggs used in the experiment was relatively high and their eggshell water vapour conductance was low. As a result, egg incubation mass loss was lower than it should be. It is concluded that incubator humidity should be low (25%) to allow enough mass loss during incubation from the eggs." }, { "instance_id": "R34845xR34796", "comparison_id": "R34845", "paper_id": "R34796", "text": "Avian embryonic development does not change the stable isotope composition of the calcite eggshell The avian embryo resorbs most of the calcium for bone formation from the calcite eggshell but the exact mechanisms of the resorption are unknown. 
The present study tested whether this process results in variable fractionation of the oxygen and carbon isotopes in shell calcium carbonate, which could provide a detailed insight into the temporal and spatial use of the eggshell by the developing embryo. Despite the uncertainty regarding changes in stable isotope composition of the eggshell across developmental stages or regions of the shell, eggshells are a popular resource for the analysis of historic and extant trophic relationships. To clarify how the stable isotope composition varies with embryonic development, the \u03b413C and \u03b418O content of the carbonate fraction in shells of black-headed gull (Larus ridibundus) eggs were sampled at four different stages of embryonic development and at five eggshell regions. No consistent relationship between the stable isotope composition of the eggshell and embryonic development, shell region or maculation was observed, although shell thickness decreased with development in all shell regions. By contrast, individual eggs differed significantly in isotope composition. These results establish that eggshells can be used to investigate a species\u2019 carbon and oxygen sources, regardless of the egg\u2019s developmental stage." }, { "instance_id": "R34845xR34814", "comparison_id": "R34845", "paper_id": "R34814", "text": "The effect of developmental stage on eggshell thickness variation in endangered falcons We compared eggshell thickness of hatched eggs with that of non-developed eggs in endangered falcon taxa to explore the effect of embryo development on eggshell thinning. To our knowledge, this has never been examined before in falcons, despite the fact that eggshell thinning due to pollutants and environmental contamination is often considered the most common cause of egg failure in falcons. 
Because of the endangered nature of these birds, and the difficulty in gaining access to the nests and their eggs, there is a large gap in our knowledge regarding eggshell thickness variation and the factors affecting it. We used a linear mixed-effects (LME) model to explore the variation in eggshell thickness (n=335 eggs) in relation to the developmental stage of the eggs, but also in relation to the falcon taxa, the laying sequence and the study zone. Female identity (n=69) and clutch identity (n=98) were also included in the LME model. Our results are consistent with the prediction that eggshell thickness decreases during incubation because of the important effect of calcium uptake by the embryo during development. Our results also show that eggs laid later in the sequence had significantly thinner eggshells. In this study, we provide the first quantitative data on eggshell thickness variation of hatched eggs in different falcon taxa that were not subjected to contamination or food limitation (i.e., bred under captive conditions). Because eggshell thickness strongly influences survival and because the species examined in this study are endangered, our data represent a valuable control for future studies on the effects of pollution on eggshells from wild populations and thus are an important contribution to the conservation of falcons." }, { "instance_id": "R34845xR34773", "comparison_id": "R34845", "paper_id": "R34773", "text": "The Effect of Embryonic Development on the Thickness of the Egg Shells of Coturnix Quail Abstract The average thickness of the shells from 75 unincubated coturnix quail eggs was found to be 0.193 mm. This was 7.3 percent greater than the average thickness (0.179 mm.) of the shells from 60 fully incubated eggs from the same hens. The two sets of eggs were collected simultaneously. 
This thickness difference was statistically significant (t-test:p" }, { "instance_id": "R34845xR34799", "comparison_id": "R34845", "paper_id": "R34799", "text": "Eggshells of arctic terns from Finland: effects of incubation and geography. Seventy-four eggs from seven colonies of Arctic Terns (Sterna paradisaea) in the Quark and the Bothnian Bay of Finland were collected in 1981 shortly after laying and immediately before hatching. Shell thickness, weight, thickness index, and egg weight index were determined and compared with the same characteristics of 200 eggs collected between 1874 and 1935. 
We found no significant differences in these measures of egg thickness between recent and museum shells from the same geographical areas. Shells of museum specimens from different geographical regions did show significant variations. The weight and the wing and tarsus length of the embryos correlated negatively and significantly with all measured characteristics of the shell except its thickness when the shell membranes were present. During the incubation period, the shell's thickness (without membranes) decreased 8%; thickness index and weight decreased 4%; and the shell's thickness with shell membranes present decreased 12%. In this paper, we discuss reasons for these changes. Pesticide-related reproductive failures have been reported in both American and European terns (Sterna spp.; Switzer and Lewin 1971, Koeman and van Genderen 1972, Switzer et al. 1973, Gochfield 1975, Fox 1976). For example, high levels of chlorinated hydrocarbons were found in the tissues of marine animals from the Baltic Sea (Jensen et al. 1969, Koivusaari et al. 1972, Anderson and Hickey 1974), an area where eggshell thinning of 11-17% was reported in White-tailed Eagles (Haliaeetus albicilla) and Ospreys (Pandion haliaetus; Koivusaari et al. 1980, Odsj\u00f6 1982). In contrast, Lemmetyinen and Rantam\u00e4ki (1980) reported low pesticide contamination in the eggs of Arctic Terns (S. paradisaea) from the archipelago of southwestern Finland. The thickness of eggshells from these terns has recently increased significantly (5.2%, P < 0.05) in Finland (Gulf of Bothnia; Pulliainen and Marjakangas 1980). Several studies concerning geographic variations in eggshells have been published (e.g., Anderson and Hickey 1970, 1972; Sutcliffe 1978; Olsen 1982), but there are almost no studies of this kind from Europe (e.g., Svensson 1978). Besides effects of pesticides and geography, the mobilization of eggshell calcium for the developing embryo is known to affect eggshell thickness (e.g., Kreitzer 1972). 
Our objectives, therefore, were (1) to describe recent changes in the thickness and size of Arctic Tern eggshells in comparison to museum material; (2) to check the reliability of available museum material for use as a standard; and (3) to find a method for measuring shell variables that is independent of the shell thinning that occurs naturally during embryonic development. MATERIALS AND METHODS We collected Arctic Tern eggs in 1981 from three colonies in Quark (63\u00b010'N, 21\u00b025'E) and from four colonies in Bothnian Bay (65\u00b003'N, 25\u00b010'E) in the Gulf of Bothnia (Fig. 1). At each nest, one egg was chosen randomly at an early stage of incubation (little or no embryonic development). Nine of these nests in the Quark colony were marked and two more eggs were taken from each shortly before hatching was expected. Where Arctic and Common terns (S. hirundo) bred in the same colonies, we confirmed identification of Arctic Tern nests by observation from a blind or by flushing a parent from its nest before taking an egg. Eggs were kept in a refrigerator until prepared. Their length and breadth were measured with a vernier caliper to the nearest 0.1 mm. A piece of shell, 16-18 mm in diameter, was cut out from the equator of each egg. The contents of the egg were then removed, the shells were rinsed with water, and the shell" }, { "instance_id": "R34845xR34777", "comparison_id": "R34845", "paper_id": "R34777", "text": "An assessment of embryonic mortality stages in Chukar partridge (Alectoris chukar) by means of classification tree method The Chukar partridge (Alectoris chukar) is a wild bird in poultry. Its natural populations significantly diminished in recent years due to excessive hunting and destruction of natural habitats. However, partridge breeding for hunting and egg and meat production is becoming more and more common (\u00d6ZBEY and ESEN, 2007). 
The egg quality characteristics (NARUSHIN and ROMANOV, 2002; KHURSHID et al., 2004) and embryonic mortality (SCOTT and MACKENZIE, 1993; MROZ et al., 2007) have been well documented for domestic fowl. Egg fertility and embryonic mortality were found to affect hatchability (FAIRCHILD et al., 2002). Egg weight, shell weight, shell thickness and fertility, which are physical characteristics of eggs, also play an important role in the processes of embryo development and hatching success in poultry (NARUSHIN and ROMANOV, 2002; and KHURSHID et al., 2004). However, there is little information in the literature regarding egg characteristic parameters and embryo mortality for partridge (\u00d6ZBEY and ESEN, 2007; KIRIK\u00c7I et al., 2007). Many possible (genetic and environmental) factors and complex interactions between them can affect egg characteristics and embryo mortality in partridge. Traditional statistical methods can be cumbersome for analyzing this kind of data set. The classification tree method (CTM) is a potentially powerful tool to predict membership of cases in the classes of a categorical dependent variable from their measurements on one or more predictor variables. CTM is a good choice especially when the data set is large, relations between variables are non-linear, and independent variables are mixed (both continuous and categorical). CTM is also structurally very simple and easy to visualize. CTM is a binary decision tree. The tree is constructed by splitting the whole data set into nodes or sub-groups based on yes/no answers about the values of the predictors. Each split is based on a single predictor variable. On the other hand, some of the predictors may be used more than once while others may not be used at all. The rule generated at each step maximizes the class purity within each of the two resulting subsets. Each subset is split further based on entirely different relationships." 
}, { "instance_id": "R34845xR34791", "comparison_id": "R34845", "paper_id": "R34791", "text": "Clutch Size, Hatching Success, and Eggshell-Thinning in Western Gulls Author(s): Hunt, GL; Hunt, MW | Abstract: Average clutch size for large Larus gulls is close to three eggs, and the production of a clutch of four is uncommon (Keith 1966; Paludan 1951; Vermeer 1963). We report here on a colony of Western Gulls (Larms occidentalis) in which many clutches containing four and five eggs were found. It is of particular interest that in these large clutches not only was hatching success low but also eggshell thickness was reduced." }, { "instance_id": "R34845xR34769", "comparison_id": "R34845", "paper_id": "R34769", "text": "Eggshell of the domestic guinea fowl Abstract 1. Physical characteristics of eggs of the domestic guinea fowl, Numida meleagris galeata, were measured and compared with those of its wild counterpart and with other birds using allometric relationships. 2. The shell thickness increased and the area density of pores decreased from the blunt to the pointed end of the egg. During incubation, shell thickness decreased, but the shell diffusive conductance to water vapour (GH2O) remained constant. 3. Fresh egg mass (m0), length and breadth of the egg, GH2O and specific water vapour conductance, spGH2O (GH2O per g of m0 ), were affected by the age of the laying flock. 4. Eggs of the domestic guinea fowl were bigger and heavier than eggs of the wild one. 5. Allometry showed that guinea fowl eggs differ from those of the other birds by their greater shell thickness and density of pores. However spGH2O was normal, the thickness of the shell being compensated for by a greater density of pores for gas exchanges." 
}, { "instance_id": "R36153xR36118", "comparison_id": "R36153", "paper_id": "R36118", "text": "The Novel Coronavirus, 2019-nCoV, is Highly Contagious and More Infectious Than Initially Estimated Abstract The novel coronavirus (2019-nCoV) is a recently emerged human pathogen that has spread widely since January 2020. Initially, the basic reproductive number, R 0 , was estimated to be 2.2 to 2.7. Here we provide a new estimate of this quantity. We collected extensive individual case reports and estimated key epidemiology parameters, including the incubation period. Integrating these estimates and high-resolution real-time human travel and infection data with mathematical models, we estimated that the number of infected individuals during early epidemic double every 2.4 days, and the R 0 value is likely to be between 4.7 and 6.6. We further show that quarantine and contact tracing of symptomatic individuals alone may not be effective and early, strong control measures are needed to stop transmission of the virus. One-sentence summary By collecting and analyzing spatiotemporal data, we estimated the transmission potential for 2019-nCoV." }, { "instance_id": "R36153xR36109", "comparison_id": "R36153", "paper_id": "R36109", "text": "Transmission interval estimates suggest pre-symptomatic spread of COVID-19 Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. 
Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggests that half of all secondary infections should be prevented to control spread." }, { "instance_id": "R36153xR36106", "comparison_id": "R36153", "paper_id": "R36106", "text": "Characterizing the transmission and identifying the control strategy for COVID-19 through epidemiological modeling ABSTRACT The outbreak of the novel coronavirus disease, COVID-19, originating from Wuhan, China in early December, has infected more than 70,000 people in China and other countries and has caused more than 2,000 deaths. As the disease continues to spread, the biomedical community urgently began identifying effective approaches to prevent further outbreaks. Through rigorous epidemiological analysis, we characterized the fast transmission of COVID-19 with a basic reproductive number of 5.6 and proved a sole zoonotic source originating in Wuhan. No changes in transmission have been noted across generations. By evaluating different control strategies through predictive modeling and Monte Carlo simulations, a comprehensive quarantine in hospitals and quarantine stations has been found to be the most effective approach. 
Government action to immediately enforce this quarantine is highly recommended." }, { "instance_id": "R36153xR36132", "comparison_id": "R36153", "paper_id": "R36132", "text": "Lessons drawn from China and South Korea for managing COVID-19 epidemic: insights from a comparative modeling study Abstract We conducted a comparative study of the COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate the effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5, 4.2)), respectively, given a serial interval with a mean of 5 days and a standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and especially its provinces outside the epicenter, and we show that this approach can also be effective in mitigating the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world, including South Korea." 
}, { "instance_id": "R36153xR36138", "comparison_id": "R36153", "paper_id": "R36138", "text": "Estimating the generation interval for COVID-19 based on symptom onset data Abstract Background Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities." 
}, { "instance_id": "R36153xR36128", "comparison_id": "R36153", "paper_id": "R36128", "text": "Risk estimation and prediction by modeling the transmission of the novel coronavirus (COVID-19) in mainland China excluding Hubei province Abstract Background In December 2019, an outbreak of coronavirus disease (COVID-19) was identified in Wuhan, China and, later on, detected in other parts of China. Our aim is to evaluate the effectiveness of the evolution of interventions and self-protection measures, estimate the risk of partial lifting control measures and predict the epidemic trend of the virus in mainland China excluding Hubei province based on the published data and a novel mathematical model. Methods A novel COVID-19 transmission dynamic model incorporating the intervention measures implemented in China is proposed. COVID-19 daily data of mainland China excluding Hubei province, including the cumulative confirmed cases, the cumulative deaths, newly confirmed cases and the cumulative recovered cases for the period January 20th-March 3rd, 2020, were archived from the National Health Commission of China (NHCC). We parameterize the model by using the Markov Chain Monte Carlo (MCMC) method and estimate the control reproduction number R c , as well as the effective daily reproduction ratio R e ( t ), of the disease transmission in mainland China excluding Hubei province. Results The estimation outcomes indicate that R c is 3.36 (95% CI 3.20-3.64) and R e ( t ) has dropped below 1 since January 31st, 2020, which implies that the containment strategies implemented by the Chinese government in mainland China excluding Hubei province are indeed effective and magnificently suppressed COVID-19 transmission. Moreover, our results show that relieving personal protection too early may lead to the spread of disease for a longer time and more people would be infected, and may even cause epidemic or outbreak again. 
By calculating the effective reproduction ratio, we prove that the contact rate should be kept below 30% of the normal level by April, 2020. Conclusions To ensure that the epidemic ends rapidly, it is necessary to maintain the current integrated restrictive interventions and self-protection measures, including travel restriction, quarantine of entry, contact tracing followed by quarantine and isolation, and reduction of contact, like wearing masks, etc. People should be fully aware of the real-time epidemic situation and keep sufficient personal protection until April. If all the above conditions are met, the outbreak is expected to end by April in mainland China apart from Hubei province." }, { "instance_id": "R36153xR36143", "comparison_id": "R36153", "paper_id": "R36143", "text": "A Cybernetics-based Dynamic Infection Model for Analyzing SARS-COV-2 Infection Stability and Predicting Uncontrollable Risks Abstract Since December 2019, COVID-19 has raged in Wuhan and subsequently all over China and the world. We propose a Cybernetics-based Dynamic Infection Model (CDIM) to describe the dynamic infection process with a probability-distributed incubation delay and a feedback principle. Reproductive trends and the stability of the SARS-COV-2 infection in a city can then be analyzed, and the uncontrollable risks can be forecasted before they really happen. The infection mechanism of a city is depicted using the philosophy of cybernetics and approaches of control engineering. Distinguished from other epidemiological models, such as SIR, SEIR, etc., that compute the theoretical number of infected people in a closed population, CDIM considers the immigration and emigration population as system inputs, and administrative and medical resources as dynamic control variables. The epidemic regulation can be simulated in the model to support the decision-making for containing the outbreak. City case studies are demonstrated for verification and validation." 
}, { "instance_id": "R36153xR36130", "comparison_id": "R36153", "paper_id": "R36130", "text": "Assessing the plausibility of subcritical transmission of 2019-nCoV in the United States Abstract The 2019-nCoV outbreak has raised concern of global spread. While person-to-person transmission within the Wuhan district has led to a large outbreak, the transmission potential outside of the region remains unclear. Here we present a simple approach for determining whether the upper limit of the confidence interval for the reproduction number exceeds one for transmission in the United States, which would allow endemic transmission. As of February 7, 2020, the number of cases in the United states support subcritical transmission, rather than ongoing transmission. However, this conclusion can change if pre-symptomatic cases resulting from human-to-human transmission have not yet been identified." }, { "instance_id": "R36153xR36123", "comparison_id": "R36153", "paper_id": "R36123", "text": "Transmission potential of COVID-19 in Iran Abstract We estimated the reproduction number of 2020 Iranian COVID-19 epidemic using two different methods: R 0 was estimated at 4.4 (95% CI, 3.9, 4.9) (generalized growth model) and 3.50 (1.28, 8.14) (epidemic doubling time) (February 19 - March 1) while the effective R was estimated at 1.55 (1.06, 2.57) (March 6-19)." }, { "instance_id": "R38484xR23436", "comparison_id": "R38484", "paper_id": "R23436", "text": "Climate Simulations Using MRI-AGCM3.2 with 20-km Grid A new version of the atmospheric general circulation model of the Meteorological Research Institute (MRI), with a horizontal grid size of about 20 km, has been developed. The previous version of the 20-km model, MRIAGCM3.1, which was developed from an operational numerical weather-prediction model, provided information on possible climate change induced by global warming, including future changes in tropical cyclones, the East Asian monsoon, extreme events, and blockings. 
For the new version, MRI-AGCM3.2, we have introduced various new parameterization schemes that improve the model climate. Using the new model, we performed a present-day climate experiment using observed sea surface temperature. The model shows improvements in simulating heavy monthly-mean precipitation around the tropical Western Pacific, the global distribution of tropical cyclones, the seasonal march of East Asian summer monsoon, and blockings in the Pacific. Improvements in the model climatologies were confirmed numerically using skill scores (e.g., Taylor\u2019s skill score)." }, { "instance_id": "R38484xR9094", "comparison_id": "R38484", "paper_id": "R9094", "text": "Development and evaluation of an Earth-System model \u2013 HadGEM2 Abstract. We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that addition of the extra complexity does not detract substantially from its climate performance. 
Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight. This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks." }, { "instance_id": "R38484xR9221", "comparison_id": "R38484", "paper_id": "R9221", "text": "The ACCESS coupled model: description, control climate and evaluation 4OASIS3.2\u20135 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20 th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 includ ing modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. 
However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed." }, { "instance_id": "R38484xR23260", "comparison_id": "R38484", "paper_id": "R23260", "text": "The NCEP Climate Forecast System Reanalysis The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere\u2013ocean\u2013land surface\u2013sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25\u00b0 at the equator, extending to a global 0.5\u00b0 beyond the tropics, with 40 levels to a depth of 4737 m. The global land surface model has four soil levels and the global sea ice m..." }, { "instance_id": "R38484xR23312", "comparison_id": "R38484", "paper_id": "R23312", "text": "GFDL's CM2 Global Coupled Climate Models. Part I: Formulation and Simulation Characteristics Abstract The formulation and simulation characteristics of two new global coupled climate models developed at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) are described. The models were designed to simulate atmospheric and oceanic climate and variability from the diurnal time scale through multicentury climate change, given our computational constraints. 
In particular, an important goal was to use the same model for both experimental seasonal to interannual forecasting and the study of multicentury global climate change, and this goal has been achieved. Two versions of the coupled model are described, called CM2.0 and CM2.1. The versions differ primarily in the dynamical core used in the atmospheric component, along with the cloud tuning and some details of the land and ocean components. For both coupled models, the resolution of the land and atmospheric components is 2\u00b0 latitude \u00d7 2.5\u00b0 longitude; the atmospheric model has 24 vertical levels. The ocean resolution is 1\u00b0 in latitude and longitude, with meridional resolution equatorward of 30\u00b0 becoming progressively finer, such that the meridional resolution is 1/3\u00b0 at the equator. There are 50 vertical levels in the ocean, with 22 evenly spaced levels within the top 220 m. The ocean component has poles over North America and Eurasia to avoid polar filtering. Neither coupled model employs flux adjustments. The control simulations have stable, realistic climates when integrated over multiple centuries. Both models have simulations of ENSO that are substantially improved relative to previous GFDL coupled models. The CM2.0 model has been further evaluated as an ENSO forecast model and has good skill (CM2.1 has not been evaluated as an ENSO forecast model). Generally reduced temperature and salinity biases exist in CM2.1 relative to CM2.0. These reductions are associated with 1) improved simulations of surface wind stress in CM2.1 and associated changes in oceanic gyre circulations; 2) changes in cloud tuning and the land model, both of which act to increase the net surface shortwave radiation in CM2.1, thereby reducing an overall cold bias present in CM2.0; and 3) a reduction of ocean lateral viscosity in the extratropics in CM2.1, which reduces sea ice biases in the North Atlantic. 
Both models have been used to conduct a suite of climate change simulations for the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment report and are able to simulate the main features of the observed warming of the twentieth century. The climate sensitivities of the CM2.0 and CM2.1 models are 2.9 and 3.4 K, respectively. These sensitivities are defined by coupling the atmospheric components of CM2.0 and CM2.1 to a slab ocean model and allowing the model to come into equilibrium with a doubling of atmospheric CO2. The output from a suite of integrations conducted with these models is freely available online (see http://nomads.gfdl.noaa.gov/)." }, { "instance_id": "R38484xR23443", "comparison_id": "R38484", "paper_id": "R23443", "text": "The Norwegian Earth System Model, NorESM1-M \u2013 Part 1: Description and basic evaluation of the physical climate Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models are based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry\u2013aerosol\u2013cloud\u2013radiation interaction schemes. NorESM1-M has a horizontal resolution of approximately 2\u00b0 for the atmosphere and land components and 1\u00b0 for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 are presented. 
Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M." }, { "instance_id": "R38484xR23326", "comparison_id": "R38484", "paper_id": "R23326", "text": "GFDL\u2019s ESM2 Global Coupled Climate\u2013Carbon Earth System Models. Part I: Physical Formulation and Baseline Simulation Characteristics Abstract The physical climate formulation and simulation characteristics of two new global coupled carbon\u2013climate Earth System Models, ESM2M and ESM2G, are described. These models demonstrate similar climate fidelity as the Geophysical Fluid Dynamics Laboratory\u2019s previous Climate Model version 2.1 (CM2.1) while incorporating explicit and consistent carbon dynamics. The two models differ exclusively in the physical ocean component; ESM2M uses Modular Ocean Model version 4p1 with vertical pressure layers while ESM2G uses Generalized Ocean Layer Dynamics with a bulk mixed layer and interior isopycnal layers. Differences in the ocean mean state include the thermocline depth being relatively deep in ESM2M and relatively shallow in ESM2G compared to observations. The crucial role of ocean dynamics on climate variability is highlighted in El Ni\u00f1o\u2013Southern Oscillation being overly strong in ESM2M and overly weak in ESM2G relative to observations. Thus, while ESM2G might better represent climate changes relating to total heat content variability given its lack of long-term drift, gyre circulation, and ventilation in the North Pacific, tropical Atlantic, and Indian Oceans, and depth structure in the overturning and abyssal flows, ESM2M might better represent climate changes relating to surface circulation given its superior surface temperature, salinity, and height patterns, tropical Pacific circulation and variability, and Southern Ocean dynamics. 
The overall assessment is that neither model is fundamentally superior to the other, and that both models achieve sufficient fidelity to allow meaningful climate and earth system modeling applications. This affords the ability to assess the role of ocean configuration on earth system interactions in the context of two state-of-the-art coupled carbon\u2013climate models." }, { "instance_id": "R38484xR23457", "comparison_id": "R38484", "paper_id": "R23457", "text": "Evaluation of the carbon cycle components in the Norwegian Earth System Model (NorESM) Abstract. The recently developed Norwegian Earth System Model (NorESM) is employed for simulations contributing to the CMIP5 (Coupled Model Intercomparison Project phase 5) experiments and the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC-AR5). In this manuscript, we focus on evaluating the ocean and land carbon cycle components of the NorESM, based on the control and historical simulations. Many of the observed large scale ocean biogeochemical features are reproduced satisfactorily by the NorESM. When compared to the climatological estimates from the World Ocean Atlas (WOA), the model simulated temperature, salinity, oxygen, and phosphate distributions agree reasonably well in both the surface layer and deep water structure. However, the model simulates a relatively strong overturning circulation strength that leads to noticeable model-data bias, especially within the North Atlantic Deep Water (NADW). This strong overturning circulation slightly distorts the structure of the biogeochemical tracers at depth. Advancements in simulating the oceanic mixed layer depth with respect to the previous generation model particularly improve the surface tracer distribution as well as the upper ocean biogeochemical processes, particularly in the Southern Ocean. 
Consequently, near surface ocean processes, such as biological production and air-sea gas exchange, are in good agreement with climatological observations. NorESM reproduces the general pattern of land-vegetation gross primary productivity (GPP) when compared to the observationally-based values derived from the FLUXNET network of eddy covariance towers. Globally, the NorESM simulated annual mean GPP and terrestrial respiration are 129.8 and 106.6 Pg C yr\u22121, slightly larger than the observed values of 119.4 \u00b1 5.9 and 96.4 \u00b1 6.0 Pg C yr\u22121. The latitudinal distribution of GPP fluxes simulated by NorESM shows a GPP overestimation of 10% in the tropics and a substantial underestimation of GPP at high latitudes." }, { "instance_id": "R38484xR23383", "comparison_id": "R38484", "paper_id": "R23383", "text": "Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. 
Overall, significant improvements over previous models are seen, particularly in upper-atmosphere temperatures and winds, cloud heights, precipitation, and sea level pressure. Data\u2013model comparisons continue, however, to highlight persistent problems in the marine stratocumulus regions." }, { "instance_id": "R38484xR8713", "comparison_id": "R38484", "paper_id": "R8713", "text": "The CNRM-CM5.1 global climate model: description and basic evaluation A new version of the general circulation model CNRM-CM has been developed jointly by CNRM-GAME (Centre National de Recherches M\u00e9t\u00e9orologiques\u2014Groupe d\u2019\u00e9tudes de l\u2019Atmosph\u00e8re M\u00e9t\u00e9orologique) and Cerfacs (Centre Europ\u00e9en de Recherche et de Formation Avanc\u00e9e) in order to contribute to phase 5 of the Coupled Model Intercomparison Project (CMIP5). The purpose of the study is to describe its main features and to provide a preliminary assessment of its mean climatology. CNRM-CM5.1 includes the atmospheric model ARPEGE-Climat (v5.2), the ocean model NEMO (v3.2), the land surface scheme ISBA and the sea ice model GELATO (v5) coupled through the OASIS (v3) system. The main improvements since CMIP3 are the following. Horizontal resolution has been increased both in the atmosphere (from 2.8\u00b0 to 1.4\u00b0) and in the ocean (from 2\u00b0 to 1\u00b0). The dynamical core of the atmospheric component has been revised. A new radiation scheme has been introduced and the treatments of tropospheric and stratospheric aerosols have been improved. Particular care has been devoted to ensure mass/water conservation in the atmospheric component. The land surface scheme ISBA has been externalised from the atmospheric model through the SURFEX platform and includes new developments such as a parameterization of sub-grid hydrology, a new freezing scheme and a new bulk parameterisation for ocean surface fluxes. 
The ocean model is based on the state-of-the-art version of NEMO, which has greatly progressed since the OPA8.0 version used in the CMIP3 version of CNRM-CM. Finally, the coupling between the different components through OASIS has also received particular attention to avoid energy loss and spurious drifts. These developments generally lead to a more realistic representation of the mean recent climate and to a reduction of drifts in a preindustrial integration. The large-scale dynamics is generally improved both in the atmosphere and in the ocean, and the bias in mean surface temperature is clearly reduced. However, some flaws remain, such as significant precipitation and radiative biases in many regions, or a pronounced drift in three dimensional salinity." }, { "instance_id": "R41148xR41117", "comparison_id": "R41148", "paper_id": "R41117", "text": "Rationalisation and optimization of solid state electro-reduction of SiO 2 to Si in molten CaCl 2 in accordance with dynamic three-phase interlines based voltammetry Abstract The cyclic voltammograms of a silica sheathed tungsten disc (W-SiO2) electrode in molten CaCl2 at 900 \u00b0C exhibited an unusually increasing reduction current with decreasing potential scan rate. When the cathodic limit was less negative than \u22121.00 V (vs. a quartz sealed Ag/AgCl reference electrode), the reduction current was also smaller in the forward (negative) potential scan than that in the reversed (positive) scan. However, at a given reduction charge, the reduction current increased with the scan rate, following approximately a logarithmic law. These unique features have been elaborated according to the dynamic model of the conductor (silicon)/insulator (silica)/electrolyte (molten salt) three-phase interlines (3PIs). 
Combining the voltammetric observations with the composition analysis of the products from potentiostatic electrolysis of porous silica pellets, the optimal potential window was identified to be from \u22120.65 V to \u22120.95 V. In this potential range, silica was converted to pure silicon with the oxygen content being less than 0.5 wt.%. At potentials more negative than \u22120.95 V, the reduction of Ca2+ ions in the reduction-generated porous silicon layer led to the formation of various calcium silicides. These findings can help the development of an electrolytic process for clean, efficient and inexpensive production of high purity silicon." }, { "instance_id": "R41148xR41140", "comparison_id": "R41148", "paper_id": "R41140", "text": "Silicon surface texturing by electro-deoxidation of a thin silica layer in molten salt Abstract A new method of silicon surface texturing is reported, which is based on thin silica layer electrochemical reduction in molten salts. A thermal silica layer grown on p-type silicon was potentiostatically reduced in molten calcium chloride at 850 \u00b0C. Typical nano\u2013micro-formations obtained at different stages of electrolysis were demonstrated by SEM. X-ray diffraction measurements confirmed conversion of the amorphous thermal silica layer into crystalline silicon. The proposed approach shows promise in photovoltaic applications, for instance, for production of antireflection coatings in silicon solar cells." }, { "instance_id": "R41148xR41126", "comparison_id": "R41148", "paper_id": "R41126", "text": "Electrochemical decomposition of SiO 2 pellets to form silicon in molten salts Abstract Direct electrochemical reduction of porous SiO 2 pellets in molten CaCl 2 salt and CaCl 2 \u2013NaCl salt mixture was investigated by applying 2.8 V potential. 
The study focused on the effects of temperature, particle size of the SiO 2 powder starting material and the behavior of cathode contacting materials during the electrochemical reduction process. The starting materials and the electrolysis products were characterized mainly by X-ray diffraction analysis and scanning electron microscopy. The studies showed that smaller particle sizes and higher temperatures had slightly positive effects in increasing the reduction rate within the ranges covered in this study. The results were interpreted from variations of current and accumulative electrical charge that passed through the cell as a function of duration of electrochemical reduction under different conditions. Microstructures and compositions of the reduced pellets were used to infer that electrochemical reduction of SiO 2 in molten salts may become a method to produce silicon that could be used in solar energy utilization. Furthermore, X-ray diffraction analysis results indicated that the silicon produced at the cathode reacts with the contacting materials, nickel and iron in stainless steel, to form Ni\u2013Si and Fe\u2013Si compounds due to the very reactive nature of silicon, especially at high temperatures." }, { "instance_id": "R41148xR41115", "comparison_id": "R41148", "paper_id": "R41115", "text": "Pinpoint and bulk electrochemical reduction of insulating silicon dioxide to silicon Silicon dioxide (SiO2) is conventionally reduced to silicon by carbothermal reduction, in which the oxygen is removed by a heterogeneous\u2013homogeneous reaction sequence at approximately 1,700 \u00b0C. Here we report pinpoint and bulk electrochemical methods for removing oxygen from solid SiO2 in a molten CaCl2 electrolyte at 850 \u00b0C. This approach involves a 'contacting electrode', in which a metal wire supplies electrons to a selected region of the insulating SiO2. Bulk reduction of SiO2 is possible by increasing the number of contacting points. 
The same method was also demonstrated with molten LiCl-KCl-CaCl2 at 500 \u00b0C. The novelty and relative simplicity of this method might lead to new processes in silicon semiconductor technology, as well as in high-purity silicon production. The methodology may be applicable to electrochemical processing of a wide variety of insulating materials, provided that the electrolyte dissolves the appropriate constituent ion(s) of the material." }, { "instance_id": "R41148xR41146", "comparison_id": "R41148", "paper_id": "R41146", "text": "Facile electrosynthesis of silicon carbide nanowires from silica/carbon precursors in molten salt Silicon carbide nanowires (SiC NWs) have attracted intensive attention in recent years due to their outstanding performances in many applications. A large-scale and facile production of SiC NWs is critical to its successful application. Here, we report a simple method for the production of SiC NWs from inexpensive and abundantly available silica/carbon (SiO2/C) precursors in molten calcium chloride. The solid-to-solid electroreduction and dissolution-electrodeposition mechanisms can easily lead to the formation of homogenous SiC NWs. This template/catalyst-free approach greatly simplifies the synthesis procedure compared to conventional methods. This general strategy opens a direct electrochemical route for the conversion of SiO2/C into SiC NWs, and may also have implications for the electrosynthesis of other micro/nanostructured metal carbides/composites from metal oxides/carbon precursors." }, { "instance_id": "R41148xR41144", "comparison_id": "R41148", "paper_id": "R41144", "text": "Up-scalable and controllable electrolytic production of photo-responsive nanostructured silicon The electrochemical reduction of solid silica has been investigated in molten CaCl2 at 900 \u00b0C for the one-step, up-scalable, controllable and affordable production of nanostructured silicon with promising photo-responsive properties. 
Cyclic voltammetry of the metallic cavity electrode loaded with fine silica powder was performed to elaborate the electrochemical reduction mechanism. Potentiostatic electrolysis of porous and dense silica pellets was carried out at different potentials, focusing on the influences of the electrolysis potential and the microstructure of the precursory silica on the product purity and microstructure. The findings suggest a potential range between \u22120.60 and \u22120.95 V (vs. Ag/AgCl) for the production of nanostructured silicon with high purity (>99 wt%). According to the elucidated mechanism on the electro-growth of the silicon nanostructures, optimal process parameters for the controllable preparation of high-purity silicon nanoparticles and nanowires were identified. Scaling-up the optimal electrolysis was successful at the gram-scale for the preparation of high-purity silicon nanowires which exhibited promising photo-responsive properties." }, { "instance_id": "R41148xR41120", "comparison_id": "R41148", "paper_id": "R41120", "text": "Improving purity and process volume during direct electrolytic reduction of solid SiO 2 in molten CaCl 2 for the production of solar-grade silicon The direct electrolytic reduction of solid SiO2 is investigated in molten CaCl2 at 1123 K to produce solar-grade silicon. The target concentrations of impurities for the primary Si are calculated from the acceptable concentrations of impurities in solar-grade silicon (SOG-Si) and the segregation coefficients for the impurity elements. The concentrations of most metal impurities are significantly decreased below their target concentrations by using a quartz vessel and new types of SiO2-contacting electrodes. The electrolytic reduction rate is increased by improving an electron pathway from the lead material to the SiO2, which demonstrates that the characteristics of the electric contact are important factors affecting the reduction rate. 
Pellet- and basket-type electrodes are tested to improve the process volume for powdery and granular SiO2. Based on the purity of the Si product after melting, refining, and solidifying, the potential of the technology is discussed." }, { "instance_id": "R41148xR41142", "comparison_id": "R41148", "paper_id": "R41142", "text": "Electrochemical formation of a p\u2212n junction of thin film silicon deposited in molten salt Herein we report the demonstration of electrochemical deposition of silicon p-n junctions all in molten salt. The results show that a dense robust silicon thin film with embedded junction formation can be produced directly from inexpensive silicates/silicon oxide precursors by a two-step electrodeposition process. The fabricated silicon p-n junction exhibits clear diode rectification behavior and photovoltaic effects, indicating promise for application in low-cost silicon thin film solar cells." }, { "instance_id": "R41148xR41130", "comparison_id": "R41148", "paper_id": "R41130", "text": "The role of granule size on the kinetics of electrochemical reduction of SiO 2 granules in molten CaCl 2 As a fundamental study to develop a new process for producing solar-grade silicon, the effect of granule size on the kinetics of the electrochemical reduction of SiO2 granules in molten CaCl2 was investigated. SiO2 granules with different size ranges were electrolyzed in molten CaCl2 at 1123 K (850 \u00b0C). The reduction kinetics was evaluated on the basis of the growth rate of the reduced Si layer and the behavior of the current during electrolysis. The results indicated that finer SiO2 granules are more favorable for a high reduction rate because the contact resistance between the bottom Si plate and the reduced Si particles is small and the diffusion of O2\u2212 ions in CaCl2 inside the porous Si shell is easy. 
Electrolysis using SiO2 granules less than 0.1 mm in size maintained a current density of no less than 0.4 A cm\u22122 within 20 minutes, indicating that the electrochemical reduction of fine SiO2 granules in molten CaCl2 has the potential of becoming a high-yield production process for solar-grade silicon." }, { "instance_id": "R41148xR41134", "comparison_id": "R41148", "paper_id": "R41134", "text": "Oscillatory behavior in electrochemical deposition reaction of polycrystalline silicon thin films through reduction of silicon tetrachloride in a molten salt electrolyte A new electrochemical oscillation is found for reduction reaction of silicon tetrachloride on a partially immersed single crystal n-Si electrode in a lithium chloride-potassium chloride eutectic melt electrolyte. The reduction of SiCl4, which is almost insoluble in the electrolyte, occurs mainly near the upper edge of an electrolyte meniscus on the electrode, and it is discussed that the oscillation is caused by a change in the height of the meniscus due to a change in the chemical structure (and hence the interfacial tension) of the electrode surface with progress of the silicon deposition reaction." }, { "instance_id": "R41466xR37003", "comparison_id": "R41466", "paper_id": "R37003", "text": "Real-Time Estimation of the Risk of Death from Novel Coronavirus (COVID-19) Infection: Inference Using Exported Cases The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. 
Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number\u2014the average number of secondary cases generated by a single primary case in a na\u00efve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. The proposed approach provides insights in early risk assessment using publicly available data." }, { "instance_id": "R41466xR41013", "comparison_id": "R41466", "paper_id": "R41013", "text": "Transmission potential and severity of COVID-19 in South Korea Abstract Objectives Since the first case of 2019 novel coronavirus (COVID-19) identified on Jan 20, 2020 in South Korea, the number of cases rapidly increased, resulting in 6,284 cases including 42 deaths as of March 6, 2020. To examine the growth rate of the outbreak, we aimed to present the first study to report the reproduction number of COVID-19 in South Korea. Methods The daily confirmed cases of COVID-19 in South Korea were extracted from publicly available sources. 
By using the empirical reporting delay distribution and simulating the generalized growth model, we estimated the effective reproduction number based on the discretized probability distribution of the generation interval. Results We identified four major clusters and estimated the reproduction number at 1.5 (95% CI: 1.4-1.6). In addition, the intrinsic growth rate was estimated at 0.6 (95% CI: 0.6, 0.7) and the scaling of growth parameter was estimated at 0.8 (95% CI: 0.7, 0.8), indicating sub-exponential growth dynamics of COVID-19. The crude case fatality rate is higher among males (1.1%) compared to females (0.4%) and increases with older age. Conclusions Our results indicate early sustained transmission of COVID-19 in South Korea and support the implementation of social distancing measures to rapidly control the outbreak." }, { "instance_id": "R44930xR44806", "comparison_id": "R44930", "paper_id": "R44806", "text": "Estimation of the Transmission Risk of 2019-nCov and Its Implication for Public Health Interventions English Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). 
Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of the travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by a 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd, 2020) with a significantly lower peak value. With the travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction." }, { "instance_id": "R44930xR44865", "comparison_id": "R44930", "paper_id": "R44865", "text": "Modelling the epidemic trend of the 2019 novel coronavirus outbreak in China We present a timely evaluation of the Chinese 2019-nCov epidemic in its initial phase, where 2019-nCov demonstrates comparable transmissibility but lower fatality rates than SARS and MERS. A quick diagnosis that leads to case isolation and integrated interventions will have a major impact on its future trend. 
Nevertheless, as China is facing its Spring Festival travel rush and the epidemic has spread beyond its borders, further investigation of its potential spatiotemporal transmission pattern and novel intervention strategies is warranted." }, { "instance_id": "R44930xR44825", "comparison_id": "R44930", "paper_id": "R44825", "text": "Preliminary estimation of the basic reproduction number of novel coronavirus (2019-nCoV) in China, from 2019 to 2020: A data-driven analysis in the early phase of the outbreak Abstract Background An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city in China, Wuhan, in December 2019 and subsequently reached other provinces/regions of China and other countries. We present estimates of the basic reproduction number, R0, of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the time series of 2019-nCoV cases in mainland China from January 10 to January 24, 2020, through exponential growth. With the estimated intrinsic growth rate (\u03b3), we estimated R0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follow exponential growth. We estimated that the mean R0 ranges from 2.24 (95%CI: 1.96\u20132.55) to 3.58 (95%CI: 2.89\u20134.39), associated with an 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R0. Conclusion The mean estimate of R0 for 2019-nCoV ranges from 2.24 to 3.58 and is significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks."
}, { "instance_id": "R44930xR44731", "comparison_id": "R44930", "paper_id": "R44731", "text": "Transmission interval estimates suggest pre-symptomatic spread of COVID-19 Abstract Background As the COVID-19 epidemic is spreading, incoming data allow us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggests that half of all secondary infections should be prevented to control spread." }, { "instance_id": "R44930xR44856", "comparison_id": "R44930", "paper_id": "R44856", "text": "Time-varying transmission dynamics of Novel Coronavirus Pneumonia in China ABSTRACT Rationale Several studies have estimated the basic reproduction number of novel coronavirus pneumonia (NCP).
However, the time-varying transmission dynamics of NCP during the outbreak remain unclear. Objectives We aimed to estimate the basic and time-varying transmission dynamics of NCP across China, and compared them with SARS. Methods Data on NCP cases by February 7, 2020 were collected from epidemiological investigations or official websites. Data on severe acute respiratory syndrome (SARS) cases in Guangdong Province, Beijing and Hong Kong during 2002-2003 were also obtained. We estimated the doubling time, basic reproduction number (R0) and time-varying reproduction number (Rt) of NCP and SARS. Measurements and main results As of February 7, 2020, 34,598 NCP cases were identified in China, and daily confirmed cases decreased after February 4. The doubling time of NCP nationwide was 2.4 days, which was shorter than that of SARS in Guangdong (14.3 days), Hong Kong (5.7 days) and Beijing (12.4 days). The R0 of NCP cases nationwide and in Wuhan were 4.5 and 4.4, respectively, which were higher than the R0 of SARS in Guangdong (R0 = 2.3), Hong Kong (R0 = 2.3), and Beijing (R0 = 2.6). The Rt for NCP continuously decreased, especially after January 16, nationwide and in Wuhan. The R0 for secondary NCP cases in Guangdong was 0.6, and the Rt values were less than 1 during the epidemic. Conclusions NCP may have a higher transmissibility than SARS, and the efforts to contain the outbreak are effective. However, persistent efforts are needed to keep the time-varying reproduction number below one. At a Glance Commentary Scientific Knowledge on the Subject Since December 29, 2019, pneumonia infection with 2019-nCoV, now named Novel Coronavirus Pneumonia (NCP), has occurred in Wuhan, Hubei Province, China. The disease has rapidly spread from Wuhan to other areas. Because 2019-nCoV is a novel virus, the time-varying transmission dynamics of NCP remain unclear, and it is also important to compare them with those of SARS.
What This Study Adds to the Field We compared the transmission dynamics of NCP with SARS, and found that NCP has a higher transmissibility than SARS. The time-varying reproduction number indicates that rigorous control measures taken by governments are effective across China, and persistent efforts are needed to keep the instantaneous reproduction number below one." }, { "instance_id": "R44930xR44847", "comparison_id": "R44930", "paper_id": "R44847", "text": "Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions Abstract Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. Using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39\u20134.13); 58\u201376% of transmissions must be prevented to stop the epidemic from increasing; Wuhan case ascertainment of 5.0% (3.6\u20137.4); 21022 (11090\u201333490) total infections in Wuhan from 1 to 22 January. Changes to previous version: case data updated to include 22 Jan 2020; we did not use cases reported after this period as cases were reported at the province level hereafter, and large-scale control interventions were initiated on 23 Jan 2020; improved likelihood function, better accounting for the first 41 confirmed cases, and now using all infections (rather than just cases detected) in Wuhan for prediction of infection in international travellers; improved characterization of uncertainty in parameters, and calculation of epidemic trajectory confidence intervals using a more statistically rigorous method; extended range of latent period in sensitivity analysis to reflect reports of up to a 6-day incubation period in household clusters; removed travel restriction analysis, as different modelling approaches (e.g. stochastic transmission, rather than deterministic transmission) are more appropriate to such analyses.
" }, { "instance_id": "R44930xR44873", "comparison_id": "R44930", "paper_id": "R44873", "text": "Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study Summary Background Since Dec 31, 2019, the Chinese city of Wuhan has reported an outbreak of atypical pneumonia caused by the 2019 novel coronavirus (2019-nCoV). Cases have been exported to other Chinese cities, as well as internationally, threatening to trigger a global outbreak. Here, we provide an estimate of the size of the epidemic in Wuhan on the basis of the number of cases exported from Wuhan to cities outside mainland China and forecast the extent of the domestic and global public health risks of epidemics, accounting for social and non-pharmaceutical prevention interventions. Methods We used data from Dec 31, 2019, to Jan 28, 2020, on the number of cases exported from Wuhan internationally (known days of symptom onset from Dec 25, 2019, to Jan 19, 2020) to infer the number of infections in Wuhan from Dec 1, 2019, to Jan 25, 2020. Cases exported domestically were then estimated. We forecasted the national and global spread of 2019-nCoV, accounting for the effect of the metropolitan-wide quarantine of Wuhan and surrounding cities, which began Jan 23\u201324, 2020. We used data on monthly flight bookings from the Official Aviation Guide and data on human mobility across more than 300 prefecture-level cities in mainland China from the Tencent database. Data on confirmed cases were obtained from the reports published by the Chinese Center for Disease Control and Prevention. Serial interval estimates were based on previous studies of severe acute respiratory syndrome coronavirus (SARS-CoV). A susceptible-exposed-infectious-recovered metapopulation model was used to simulate the epidemics across all major cities in China. 
The basic reproductive number was estimated using Markov Chain Monte Carlo methods and presented using the resulting posterior mean and 95% credible interval (CrI). Findings In our baseline scenario, we estimated that the basic reproductive number for 2019-nCoV was 2.68 (95% CrI 2.47\u20132.86) and that 75 815 individuals (95% CrI 37 304\u2013130 330) have been infected in Wuhan as of Jan 25, 2020. The epidemic doubling time was 6.4 days (95% CrI 5.8\u20137.1). We estimated that in the baseline scenario, Chongqing, Beijing, Shanghai, Guangzhou, and Shenzhen had imported 461 (95% CrI 227\u2013805), 113 (57\u2013193), 98 (49\u2013168), 111 (56\u2013191), and 80 (40\u2013139) infections from Wuhan, respectively. If the transmissibility of 2019-nCoV were similar everywhere domestically and over time, we inferred that epidemics are already growing exponentially in multiple major cities of China with a lag time behind the Wuhan outbreak of about 1\u20132 weeks. Interpretation Given that 2019-nCoV is no longer contained within Wuhan, other major Chinese cities are probably sustaining localised outbreaks. Large cities overseas with close transport links to China could also become outbreak epicentres, unless substantial public health interventions at both the population and personal levels are implemented immediately. Independent self-sustaining outbreaks in major cities globally could become inevitable because of substantial exportation of presymptomatic cases and in the absence of large-scale public health interventions. Preparedness plans and mitigation interventions should be readied for quick deployment globally. Funding Health and Medical Research Fund (Hong Kong, China)."
}, { "instance_id": "R44930xR44910", "comparison_id": "R44930", "paper_id": "R44910", "text": "Estimating the Unreported Number of Novel Coronavirus (2019-nCoV) Cases in China in the First Half of January 2020: A Data-Driven Modelling Analysis of the Early Outbreak Background: In December 2019, an outbreak of respiratory illness caused by a novel coronavirus (2019-nCoV) emerged in Wuhan, China, and has swiftly spread to other parts of China and a number of foreign countries. The 2019-nCoV cases might have been under-reported roughly from 1 to 15 January 2020, and thus we estimated the number of unreported cases and the basic reproduction number, R0, of 2019-nCoV. Methods: We modelled the epidemic curve of 2019-nCoV cases in mainland China from 1 December 2019 to 24 January 2020 through exponential growth. The number of unreported cases was determined by maximum likelihood estimation. We used the serial intervals (SI) of infection caused by two other well-known coronaviruses (CoV), Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) CoVs, as approximations of the unknown SI for 2019-nCoV to estimate R0. Results: We confirmed that the initial growth phase followed an exponential growth pattern. The under-reporting was likely to have resulted in 469 (95% CI: 403\u2013540) unreported cases from 1 to 15 January 2020. The reporting rate after 17 January 2020 was likely to have increased 21-fold (95% CI: 18\u201325) in comparison to the situation from 1 to 17 January 2020 on average. We estimated the R0 of 2019-nCoV at 2.56 (95% CI: 2.49\u20132.63). Conclusion: The under-reporting was likely to have occurred during the first half of January 2020 and should be considered in future investigation."
}, { "instance_id": "R44930xR44901", "comparison_id": "R44930", "paper_id": "R44901", "text": "Real-Time Estimation of the Risk of Death from Novel Coronavirus (COVID-19) Infection: Inference Using Exported Cases The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number\u2014the average number of secondary cases generated by a single primary case in a na\u00efve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. The proposed approach provides insights into early risk assessment using publicly available data."
}, { "instance_id": "R44930xR44776", "comparison_id": "R44930", "paper_id": "R44776", "text": "Estimating the generation interval for COVID-19 based on symptom onset data Abstract Background Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions of pre-symptomatic transmission and the reproduction numbers. Results The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China, when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities." }, { "instance_id": "R44930xR44836", "comparison_id": "R44930", "paper_id": "R44836", "text": "Estimating the effective reproduction number of the 2019-nCoV in China Abstract We estimate the effective reproduction number for 2019-nCoV based on the daily reported cases from China CDC.
The results indicate that 2019-nCoV has a higher effective reproduction number than SARS with a comparable fatality rate. Article Summary Line This modeling study indicates that 2019-nCoV has a higher effective reproduction number than SARS with a comparable fatality rate." }, { "instance_id": "R44930xR44726", "comparison_id": "R44930", "paper_id": "R44726", "text": "The early phase of the COVID-19 outbreak in Lombardy, Italy On the night of February 20, 2020, the first case of novel coronavirus disease (COVID-19) was confirmed in the Lombardy Region, Italy. In the week that followed, Lombardy experienced a very rapid increase in the number of cases. We analyzed the first 5,830 laboratory-confirmed cases to provide the first epidemiological characterization of a COVID-19 outbreak in a Western country. Epidemiological data were collected through standardized interviews of confirmed cases and their close contacts. We collected demographic backgrounds, dates of symptom onset, clinical features, respiratory tract specimen results, hospitalization, and contact tracing. We provide estimates of the reproduction number and serial interval. The epidemic in Italy started much earlier than February 20, 2020. At the time of detection of the first COVID-19 case, the epidemic had already spread in most municipalities of southern Lombardy. The median age of cases is 69 years (range, 1 month to 101 years). 47% of positive subjects were hospitalized. Among these, 18% required intensive care. The mean serial interval is estimated to be 6.6 days (95% CI, 0.7 to 19). We estimate the basic reproduction number at 3.1 (95% CI, 2.9 to 3.2). We estimated a decreasing trend in the net reproduction number starting around February 20, 2020. We did not observe significantly different viral loads in nasal swabs between symptomatic and asymptomatic subjects.
The transmission potential of COVID-19 is very high and the number of critical cases may become largely unsustainable for the healthcare system over a very short time horizon. We observed a slight decrease in the reproduction number, possibly connected with increased population awareness and an early effect of interventions. Aggressive containment strategies are required to control COVID-19 spread and avoid catastrophic outcomes for the healthcare system." }, { "instance_id": "R44930xR44918", "comparison_id": "R44930", "paper_id": "R44918", "text": "Estimation of the Transmission Risk of the 2019-nCoV and Its Implication for Public Health Interventions Since the emergence of the first cases in Wuhan, China, the novel coronavirus (2019-nCoV) infection has been quickly spreading out to other provinces and neighboring countries. Estimation of the basic reproduction number by means of mathematical modeling can be helpful for determining the potential and severity of an outbreak and providing critical information for identifying the type of disease interventions and intensity. A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and intervention measures. The estimations based on likelihood and model analysis show that the control reproduction number may be as high as 6.47 (95% CI 5.71\u20137.23). Sensitivity analyses show that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction adopted by Wuhan on 2019-nCoV infection in Beijing being almost equivalent to increasing quarantine by a 100 thousand baseline value. It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCoV infection, and how long they should be maintained.
Under the most restrictive measures, the outbreak is expected to peak within two weeks (since 23 January 2020) with a significantly low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in seven days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction." }, { "instance_id": "R44930xR44759", "comparison_id": "R44930", "paper_id": "R44759", "text": "Transmission potential of COVID-19 in Iran Abstract We estimated the reproduction number of the 2020 Iranian COVID-19 epidemic using two different methods: R0 was estimated at 4.4 (95% CI, 3.9, 4.9) (generalized growth model) and 3.50 (1.28, 8.14) (epidemic doubling time) (February 19 - March 1), while the effective R was estimated at 1.55 (1.06, 2.57) (March 6-19)."
All groups improved significantly over time. At four months, patients randomised to non-directive counselling or cognitive-behaviour therapy improved more in terms of the Beck depression inventory (mean (SD) scores 12.9 (9.3) and 14.3 (10.8) respectively) than those randomised to usual general practitioner care (18.3 (12.4)). However, there was no significant difference between the two therapies. There were no significant differences between the three treatment groups at 12 months (Beck depression scores 11.8 (9.6), 11.4 (10.8), and 12.1 (10.3) for non-directive counselling, cognitive-behaviour therapy, and general practitioner care). Conclusions: Psychological therapy was a more effective treatment for depression than usual general practitioner care in the short term, but after one year there was no difference in outcome." }, { "instance_id": "R44978xR44697", "comparison_id": "R44978", "paper_id": "R44697", "text": "Telephone counseling for patients with minor depression: preliminary findings in a family practice setting BACKGROUND Depression is a frequently occurring condition in family practice patients, but time limitations may hamper the physician's ability to treat it effectively. Referrals to mental health professionals are frequently resisted by patients. The need for more effective treatment strategies led to the development and evaluation of a telephone-based, problem-solving intervention. METHODS Patients in a family practice residency practice were evaluated through the Medical Outcomes Study Depression Screening Scale and the Diagnostic Interview Schedule to identify those with subthreshold or minor depression. Twenty-nine subjects were randomly assigned to either a treatment or comparison group. Initial scores on the Hamilton Depression Rating Scale were equivalent for the groups and were in the mildly depressed range. Six problem-solving therapy sessions were conducted over the telephone by graduate student therapists supervised by a psychiatrist.
RESULTS Treatment group subjects had significantly lower post-intervention scores on the Hamilton Depression Rating Scale compared with their pre-intervention scores (P < .05). Scores did not differ significantly over time in the comparison group. Post-intervention, treatment group subjects also had lower Beck Depression Inventory scores than did the comparison group (P < .02), as well as more positive scores for social health (P < .002), mental health (P < .05), and self-esteem (P < .05) on the Duke Health Profile. CONCLUSIONS The findings indicate that brief, telephone-based treatment for minor depression in family practice settings may be an efficient and effective method to decrease symptoms of depression and improve functioning. Nurses in these settings with appropriate training and supervision may also be able to provide this treatment." }, { "instance_id": "R44978xR44702", "comparison_id": "R44978", "paper_id": "R44702", "text": "Randomised controlled trial comparing problem solving treatment with amitriptyline and placebo for major depression in primary care Abstract Objective: To determine whether, in the treatment of major depression in primary care, a brief psychological treatment (problem solving) was (a) as effective as antidepressant drugs and more effective than placebo; (b) feasible in practice; and (c) acceptable to patients. Design: Randomised controlled trial of problem solving treatment, amitriptyline plus standard clinical management, and drug placebo plus standard clinical management. Each treatment was delivered in six sessions over 12 weeks. Setting: Primary care in Oxfordshire. Subjects: 91 patients in primary care who had major depression. Main outcome measures: Observer and self reported measures of severity of depression, self reported measure of social outcome, and observer measure of psychological symptoms at six and 12 weeks; self reported measure of patient satisfaction at 12 weeks. Numbers of patients recovered at six and 12 weeks. 
Results: At six and 12 weeks the difference in score on the Hamilton rating scale for depression between problem solving and placebo treatments was significant (5.3 (95% confidence interval 1.6 to 9.0) and 4.7 (0.4 to 9.0) respectively), but the difference between problem solving and amitriptyline was not significant (1.8 (\u22121.8 to 5.5) and 0.9 (\u22123.3 to 5.2) respectively). At 12 weeks 60% (18/30) of patients given problem solving treatment had recovered on the Hamilton scale compared with 52% (16/31) given amitriptyline and 27% (8/30) given placebo. Patients were satisfied with problem solving treatment; all patients who completed treatment (28/30) rated the treatment as helpful or very helpful. The six sessions of problem solving treatment totalled a mean therapy time of 3 1/2 hours. Conclusions: As a treatment for major depression in primary care, problem solving treatment is effective, feasible, and acceptable to patients. Key messages: Patient compliance with antidepressant treatment is often poor, so there is a need for a psychological treatment. This study found that problem solving is an effective psychological treatment for major depression in primary care, as effective as amitriptyline and more effective than placebo. Problem solving is a feasible treatment in primary care, being effective when given over six sessions by a general practitioner. Problem solving treatment is acceptable to patients." }, { "instance_id": "R44978xR44713", "comparison_id": "R44978", "paper_id": "R44713", "text": "Telephone psychotherapy and telephone care management for primary care patients starting antidepressant treatment: a randomized controlled trial CONTEXT Both antidepressant medication and structured psychotherapy have been proven efficacious, but less than one third of people with depressive disorders receive effective levels of either treatment.
OBJECTIVE To compare usual primary care for depression with 2 intervention programs: telephone care management and telephone care management plus telephone psychotherapy. DESIGN Three-group randomized controlled trial with allocation concealment and blinded outcome assessment conducted between November 2000 and May 2002. SETTING AND PARTICIPANTS A total of 600 patients beginning antidepressant treatment for depression were systematically sampled from 7 group-model primary care clinics; patients already receiving psychotherapy were excluded. INTERVENTIONS Usual primary care; usual care plus a telephone care management program including at least 3 outreach calls, feedback to the treating physician, and care coordination; usual care plus care management integrated with a structured 8-session cognitive-behavioral psychotherapy program delivered by telephone. MAIN OUTCOME MEASURES Blinded telephone interviews at 6 weeks, 3 months, and 6 months assessed depression severity (Hopkins Symptom Checklist Depression Scale and the Patient Health Questionnaire), patient-rated improvement, and satisfaction with treatment. Computerized administrative data examined use of antidepressant medication and outpatient visits. RESULTS Treatment participation rates were 97% for telephone care management and 93% for telephone care management plus psychotherapy. Compared with usual care, the telephone psychotherapy intervention led to lower mean Hopkins Symptom Checklist Depression Scale depression scores (P =.02), a higher proportion of patients reporting that depression was \"much improved\" (80% vs 55%, P<.001), and a higher proportion of patients \"very satisfied\" with depression treatment (59% vs 29%, P<.001). The telephone care management program had smaller effects on patient-rated improvement (66% vs 55%, P =.04) and satisfaction (47% vs 29%, P =.001); effects on mean depression scores were not statistically significant. 
CONCLUSIONS For primary care patients beginning antidepressant treatment, a telephone program integrating care management and structured cognitive-behavioral psychotherapy can significantly improve satisfaction and clinical outcomes. These findings suggest a new public health model of psychotherapy for depression including active outreach and vigorous efforts to improve access to and motivation for treatment." }, { "instance_id": "R44978xR44722", "comparison_id": "R44978", "paper_id": "R44722", "text": "Combined pharmacological and behavioural treatment of depression Abstract Sixty-four depressed subjects received one of three psychological therapies (task assignment, relaxation training, minimal contact) in combination with either amitriptyline or placebo medication for a 2-month period. Depression and anxiety were assessed prior to treatment, at midtreatment, immediately following treatment, and during a 6-month follow-up period. Sleep disturbance, phobic symptoms and receipt of reinforcement were assessed less frequently. Marked improvement was observed on most measures during treatment independently of the type of treatment received. Treatment effects were maintained at the follow-up assessment. More rapid improvement was found for subjects who received amitriptyline as compared with placebo medication. No other advantages were found for the use of amitriptyline in combination with psychological treatments. Subjects who received either task assignment or relaxation training sought less additional treatment during the follow-up period than those who received minimal contact." }, { "instance_id": "R44978xR44689", "comparison_id": "R44978", "paper_id": "R44689", "text": "Problem solving treatment and group psychoeducation for depression: multicentre randomised controlled trial. 
Outcomes of Depression International Network (ODIN) Group Abstract Objectives: To determine the acceptability of two psychological interventions for depressed adults in the community and their effect on caseness, symptoms, and subjective function. Design: A pragmatic multicentre randomised controlled trial, stratified by centre. Setting: Nine urban and rural communities in Finland, Republic of Ireland, Norway, Spain, and the United Kingdom. Participants: 452 participants aged 18 to 65, identified through a community survey with depressive or adjustment disorders according to the international classification of diseases, 10th revision or Diagnostic and Statistical Manual of Mental Disorders, fourth edition. Interventions: Six individual sessions of problem solving treatment (n=128), eight group sessions of the course on prevention of depression (n=108), and controls (n=189). Main outcome measures: Completion rates for each intervention, diagnosis of depression, and depressive symptoms and subjective function. Results: 63% of participants assigned to problem solving and 44% assigned to prevention of depression completed their intervention. The proportion of problem solving participants depressed at six months was 17% less than that for controls, giving a number needed to treat of 6; the mean difference in Beck depression inventory score was \u22122.63 (95% confidence interval \u22124.95 to \u22120.32), and there were significant improvements in SF-36 scores. For depression prevention, the difference in proportions of depressed participants was 14% (number needed to treat of 7); the mean difference in Beck depression inventory score was \u22121.50 (\u22124.16 to 1.17), and there were significant improvements in SF-36 scores. Such differences were not observed at 12 months. Neither specific diagnosis nor treatment with antidepressants affected outcome. 
Conclusions: When offered to adults with depressive disorders in the community, problem solving treatment was more acceptable than the course on prevention of depression. Both interventions reduced caseness and improved subjective function." }, { "instance_id": "R44978xR44709", "comparison_id": "R44978", "paper_id": "R44709", "text": "Acute and one-year outcome of a randomised controlled trial of brief cognitive therapy for major depressive disorder in primary care Background The consensus statement on the treatment of depression (Paykel & Priest, 1992) advocates the use of cognitive therapy techniques as an adjunct to medication. Method This paper describes a randomised controlled trial of brief cognitive therapy (BCT) plus \u2018treatment as usual\u2019 versus treatment as usual in the management of 48 patients with major depressive disorder presenting in primary care. Results At the end of the acute phase, significantly more subjects (P < 0.05) met recovery criteria in the intervention group (n = 15) compared with the control group (n = 8). When initial neuroticism scores were controlled for, reductions in Beck Depression Inventory and Hamilton Rating Scale for Depression scores favoured the BCT group throughout the 12 months of follow-up. Conclusions BCT may be beneficial, but given the time constraints, therapists need to be more rather than less skilled in cognitive therapy. This, plus methodological limitations, leads us to advise caution before applying this approach more widely in primary care." }, { "instance_id": "R44978xR44680", "comparison_id": "R44978", "paper_id": "R44680", "text": "Mindfulness-based cognitive therapy as a treatment for chronic depression: A preliminary study This pilot study investigated the effectiveness of Mindfulness-Based Cognitive Therapy (MBCT), a treatment combining mindfulness meditation and interventions taken from cognitive therapy, in patients suffering from chronic-recurrent depression. 
Currently symptomatic patients with at least three previous episodes of depression and a history of suicidal ideation were randomly allocated to receive either MBCT delivered in addition to treatment-as-usual (TAU; N = 14 completers) or TAU alone (N = 14 completers). Depressive symptoms and diagnostic status were assessed before and after the treatment phase. Self-reported symptoms of depression decreased from severe to mild levels in the MBCT group while there was no significant change in the TAU group. Similarly, numbers of patients meeting full criteria for depression decreased significantly more in the MBCT group than in the TAU group. Results are consistent with previous uncontrolled studies. Although based on a small sample and, therefore, limited in their generalizability, they provide further preliminary evidence that MBCT can be used to successfully reduce current symptoms in patients suffering from a protracted course of the disorder." }, { "instance_id": "R44978xR44724", "comparison_id": "R44978", "paper_id": "R44724", "text": "Comparative efficacy of behavioral and cognitive therapies of depression Twenty-five depressed subjects were allocated to either behavioral treatment, cognitive treatment, or no-treatment conditions for an 8-week period. Measures of depressive-related symptomatology and treatment-related target areas were administered prior to treatment, at midtreatment, and immediately following treatment. Depression was also assessed at a 5-month follow-up. Marked improvement was observed on most measures across the treatment period in both treatment conditions but not in the no-treatment condition. Cognitive and behavioral treatments were found to be equally effective in alleviating depression. Treatment effects were maintained at the follow-up. At all but the midtreatment assessment, the two treatments were found to have an equivalent impact on treatment-related target areas. 
Various explanations of the results are offered, including the role of nonspecific treatment factors." }, { "instance_id": "R46295xR45122", "comparison_id": "R46295", "paper_id": "R45122", "text": "Dynamics of photogenerated charges in the phosphate modified TiO2 and the enhanced activity for photoelectrochemical water splitting Phosphate modified nanocrystalline TiO2 (nc-TiO2) films were prepared by a doctor blade method, followed by post-treatment with monometallic sodium orthophosphate solution. The dynamic processes of the photogenerated charges from the resulting nc-TiO2 films were thoroughly investigated by means of transient absorption spectroscopy (TAS). It is shown that photogenerated holes in the un-modified TiO2 film exhibit the same dynamic decay process as its photogenerated electrons, in oxygen-free water of pH 7. However, photogenerated holes in the phosphate modified film display a slightly faster dynamic decay process than its photogenerated electrons, and photogenerated charges of the modified film have a much longer lifetime than those of the un-modified film. These differences are attributed to the surface-carried negative charges of nc-TiO2 resulting from the phosphate groups (\u2013Ti\u2013O\u2013P\u2013O\u2212). Interestingly, the photoelectrochemical (PEC) experiments show that modification with an appropriate amount of phosphate could improve the photocurrent density of the nc-TiO2 film electrode by about 2 times, at a voltage of 0 V in the neutral electrolyte. Based on the TAS and PEC measurements of un-modified and phosphate modified nc-TiO2 films, with different conditions, it is suggested that the prolonged lifetime of photogenerated charges can be attributed to the negative electrostatic field formed in the surface layers. It is also responsible for the increase in activity for PEC water splitting and for the reported photocatalytic degradation of pollutants. 
The suggested mechanism would be applicable to other oxide semiconductor photocatalysts and to modification with other inorganic anions." }, { "instance_id": "R46295xR45098", "comparison_id": "R46295", "paper_id": "R45098", "text": "Flash photolysis observation of the absorption spectra of trapped positive holes and electrons in colloidal titanium dioxide Laser flash photolysis at 347 nm of a TiO2 sol containing an adsorbed electron scavenger (Pt or MV2+). Study of the trapped species by their absorption spectra. At \u03bbmax = 475 nm, observation of trapped holes h+. Decay rates of h+ in acidic and alkaline solutions. Excess holes h+. With a TiO2 sol containing a hole scavenger (polyvinyl alcohol or thiocyanate), observation of a spectrum at \u03bbmax = 650 nm attributed to excess trapped electrons close to the surface of the colloidal particles" }, { "instance_id": "R46295xR45118", "comparison_id": "R46295", "paper_id": "R45118", "text": "Transient absorption spectra of nanocrystalline TiO2 films at high excitation density Abstract We found that transient absorption spectra of nanocrystalline TiO2 films changed under high excitation density conditions. The spectra could be reproduced by accounting for three spectral components: holes, trapped electrons, and conducting electrons. On the basis of the observed spectral features, we concluded that the absorption coefficient due to trapped electrons was affected by generated holes at high excitation density." }, { "instance_id": "R46295xR45112", "comparison_id": "R46295", "paper_id": "R45112", "text": "Photochemical Reduction of Oxygen Adsorbed to Nanocrystalline TiO2 Films:\u2009 A Transient Absorption and Oxygen Scavenging Study of Different TiO2 Preparations Transient absorption spectroscopy (TAS) has been used to study the interfacial electron-transfer reaction between photogenerated electrons in nanocrystalline titanium dioxide (TiO2) films and molecular oxygen. 
TiO2 films from three different starting materials (TiO2 anatase colloidal paste and commercial anatase/rutile powders Degussa TiO2 P25 and VP TiO2 P90) have been investigated in the presence of ethanol as a hole scavenger. Separate investigations on the photocatalytic oxygen consumption by the films have also been performed with an oxygen membrane polarographic detector. Results show that a correlation exists between the electron dynamics of oxygen consumption observed by TAS and the rate of oxygen consumption through the photocatalytic process. The highest activity and the fastest oxygen reduction dynamics were observed with films fabricated from anatase TiO2 colloidal paste. The use of TAS as a tool for the prediction of the photocatalytic activities of the materials is discussed. TAS studies indicate that the rate of reduction of molecular oxygen is limited by interfacial electron-transfer kinetics rather than by the electron trapping/detrapping dynamics within the TiO2 particles." }, { "instance_id": "R46295xR45106", "comparison_id": "R46295", "paper_id": "R45106", "text": "How fast is interfacial hole transfer? In situ monitoring of carrier dynamics in anatase TiO2 nanoparticles by femtosecond laser spectroscopy By comparing the transient absorption spectra of nanosized anatase TiO2 colloidal systems with and without SCN\u2212, the broad absorption band around 520 nm observed immediately after band-gap excitation for the system without SCN\u2212 has been assigned to shallowly trapped holes. In the presence of SCN\u2212, the absorption from the trapped holes at 520 nm cannot be observed because of the ultrafast interfacial hole transfer between TiO2 nanoparticles and SCN\u2212. The hole and electron trapping times were estimated to be <50 and 260 fs, respectively, by the analysis of rise and decay dynamics of transient absorption spectra. 
The rate of the hole transfer from nanosized TiO2 colloid to SCN\u2212 is comparable to that of the hole trapping and the time of formation of a weakly coupled (SCN\u00b7\u00b7\u00b7SCN)\u2022\u2212 is estimated to be \u223c2.3 ps with 0.3 M KSCN. A further structural change to form a stable (SCN)2\u2022\u2212 is observed in a timescale of 100\u2013150 ps, which is almost independent of the concentration of SCN\u2212." }, { "instance_id": "R46295xR45120", "comparison_id": "R46295", "paper_id": "R45120", "text": "Mechanism of O2 Production from Water Splitting: Nature of Charge Carriers in Nitrogen Doped Nanocrystalline TiO2 Films and Factors Limiting O2 Production The low efficiency of the extensively investigated visible light photocatalyst N-TiO2 has been widely assumed to be determined by the dynamics of the charge carriers. The nature of the photoelectrons and photoholes produced on the nanostructured (nc) N-TiO2 film has been systematically investigated in this work by the use of time-resolved absorption spectroscopy. Here the fingerprints of the two distinct photohole populations on nc-N-TiO2 films are reported and the reaction between these photoholes and water has been examined. The origin of the low efficiency of the visible-driven material for water oxidation was explored and rapid electron hole decay following visible excitation is believed to be a key factor. Pt deposition on nc-N-TiO2 resulted in an 80% enhancement of the quantum yield for O2 production under UV light. Finally, it has been summarized that the oxygen production on the nc-N-TiO2 film requires photoholes with lifetimes of \u223c0.4 s." 
}, { "instance_id": "R46295xR45116", "comparison_id": "R46295", "paper_id": "R45116", "text": "Dynamics of efficient electron\u2013hole separation in TiO2 nanoparticles revealed by femtosecond transient absorption spectroscopy under the weak-excitation condition The transient absorption of nanocrystalline TiO2 films in the visible and IR wavelength regions was measured under the weak-excitation condition, where the second-order electron-hole recombination process can be ignored. The intrinsic dynamics of the electron-hole pairs in the femtosecond to picosecond time range was elucidated. Surface-trapped electrons and surface-trapped holes were generated within approximately 200 fs (time resolution). Surface-trapped electrons, which gave an absorption peak at around 800 nm, and bulk electrons, which absorbed in the IR wavelength region, decayed with a 500-ps time constant due to relaxation into deep bulk trapping sites. It is already known that, after this relaxation, electrons and holes survive for microseconds. We interpreted these long lifetimes in terms of the prompt spatial charge separation of electrons in the bulk and holes at the surface." }, { "instance_id": "R46295xR45100", "comparison_id": "R46295", "paper_id": "R45100", "text": "Charge carrier trapping and recombination dynamics in small semiconductor particles Reference: LPI-ARTICLE-1985-033, doi:10.1021/ja00312a043" }, { "instance_id": "R46295xR45110", "comparison_id": "R46295", "paper_id": "R45110", "text": "Trapping dynamics of electrons and holes in a nanocrystalline TiO2 film revealed by femtosecond visible/near-infrared transient absorption spectroscopy Abstract The trapping dynamics of electrons and holes in TiO2 nanocrystalline films excited by ultraviolet laser pulses (266-nm wavelength) were studied with femtosecond visible/near-infrared transient absorption spectroscopy. 
UV irradiation of the TiO2 film generated hot carriers in the conduction and valence bands. The formation rate of deeply trapped holes was estimated to be 200 \u00b1 50 fs. The rate was limited by intraband relaxation (cooling) of hot holes. The spectral shift of the transient absorption indicated that trapped holes relax from shallow sites to deep ones. This relaxation was occurring more than 100 ps after photoexcitation. To cite this article: Y. Tamaki et al., C. R. Chimie 9 (2006)." }, { "instance_id": "R46295xR45104", "comparison_id": "R46295", "paper_id": "R45104", "text": "Charge Carrier Dynamics at TiO2 Particles:\u2009 Reactivity of Free and Trapped Holes Details of the mechanism of the photocatalytic oxidation of the model compounds dichloroacetate, DCA-, and thiocyanate, SCN-, have been investigated employing time-resolved laser flash photolysis. Nanosized colloidal titanium dioxide (TiO2, anatase) particles with a mean diameter of 24 \u00c5 were used as photocatalysts in optically transparent aqueous suspensions. Detailed spectroscopic investigations of the processes occurring upon band gap irradiation in these colloidal aqueous TiO2 suspensions in the absence of any hole scavengers showed that while electrons are trapped instantaneously, i.e., within the duration of the laser flash (20 ns), at least two different types of traps have to be considered for the remaining holes. Deeply trapped holes, h+tr, are rather long-lived and unreactive, i.e., they are transferred neither to DCA- nor to SCN- ions. Shallowly trapped holes, h+tr*, on the other hand, are in a thermally activated equilibrium with free holes which exhibit a very high oxidation potential. The ov..." 
}, { "instance_id": "R46296xR46099", "comparison_id": "R46296", "paper_id": "R46099", "text": "Effects of F\u2212 Doping on the Photocatalytic Activity and Microstructures of Nanocrystalline TiO2 Powders A novel and simple method for preparing highly photoactive nanocrystalline F\u2212-doped TiO2 photocatalyst with anatase and brookite phase was developed by hydrolysis of titanium tetraisopropoxide in a mixed NH4F\u2212H2O solution. The prepared F\u2212-doped TiO2 powders were characterized by differential thermal analysis-thermogravimetry (DTA-TG), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), UV\u2212vis absorption spectroscopy, photoluminescence spectra (PL), transmission electron microscopy (TEM), and BET surface areas. The photocatalytic activity was evaluated by the photocatalytic oxidation of acetone in air. The results showed that the crystallinity of anatase was improved upon F\u2212 doping. Moreover, fluoride ions not only suppressed the formation of brookite phase but also prevented phase transition of anatase to rutile. The F\u2212-doped TiO2 samples exhibited stronger absorption in the UV\u2212visible range with a red shift in the band gap transition. The photocatalytic activity of F\u2212-doped TiO2 powders prep..." }, { "instance_id": "R46296xR46129", "comparison_id": "R46296", "paper_id": "R46129", "text": "H\u2010doped black titania with very high solar absorption and excellent photocatalysis enhanced by localized surface plasmon resonance Black TiO2 attracts enormous attention due to its large solar absorption and induced excellent photocatalytic activity. Herein, a new approach assisted by hydrogen plasma to synthesize unique H-doped black titania with a core/shell structure (TiO2@TiO2-xHx) is presented, superior to the high H2-pressure process (under 20 bar for five days). The black titania possesses the largest solar absorption (\u224883%), far more than any other reported black titania (the record (high-pressure): \u224830%). 
H doping is favorable to eliminate the recombination centers of light-induced electrons and holes. High absorption and low recombination ensure the excellent photocatalytic activity for the black titania in the photo-oxidation of organic molecules in water and the production of hydrogen. The H-doped amorphous shell is proposed to play the same role as Ag or Pt loading on TiO2 nanocrystals, which induces the localized surface plasmon resonance and black coloration. Photocatalytic water splitting and cleaning using TiO2-xHx is believed to have a bright future for sustainable energy sources and cleaning environment." }, { "instance_id": "R46296xR46127", "comparison_id": "R46296", "paper_id": "R46127", "text": "Super-hydrophobic fluorination mesoporous MCF/TiO2 composite as a high-performance photocatalyst NH4F was used instead of a conventional organic silylation agent as the hydrophobic modifier to synthesize the super-hydrophobic mesocellular foams (MCF) loaded with nano-sized TiO2 photocatalysts in its pore channels, which could be considered as an extractant for organics. Compared to organosilane modified catalysts, NH4F-modified MCF/TiO2 has a more stable super-hydrophobic property and much higher photocatalytic activity. It was found that only using isopropanol as the solvent, the NH4F-modified catalyst showed super-hydrophobic property. It is believed that the solvent plays a role in controlling the exchange between surface OH groups and F ions. The special structure of supported mesoporous catalyst greatly facilitated the surface fluorination, which together with the Ti3+ generation led to its excellent adsorption capacity and UV/visible light photocatalytic activity. This novel super-hydrophobic mesoporous photocatalyst has a large application potential in the field of photocatalysis, shipbuilding, and other industries." 
}, { "instance_id": "R46296xR46105", "comparison_id": "R46296", "paper_id": "R46105", "text": "Improved photocatalytic activity of Sn4+ doped TiO2 nanoparticulate films prepared by plasma-enhanced chemical vapor deposition Sn4+ ion doped TiO2 (TiO2\u2013Sn4+) nanoparticulate films with a doping ratio of about 7\u2236100 [(Sn)\u2236(Ti)] were prepared by the plasma-enhanced chemical vapor deposition (PCVD) method. The doping mode (lattice Ti substituted by Sn4+ ions) and the doping energy level of Sn4+ were determined by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), surface photovoltage spectroscopy (SPS) and electric field induced surface photovoltage spectroscopy (EFISPS). It is found that the introduction of a doping energy level of Sn4+ ions is profitable to the separation of photogenerated carriers under both UV and visible light excitation. Characterization of the films with XRD and SPS indicates that after doping by Sn, more surface defects are present on the surface. Consequently, the photocatalytic activity for photodegradation of phenol in the presence of the TiO2\u2013Sn4+ film is higher than that of the pure TiO2 film under both UV and visible light irradiation." }, { "instance_id": "R46296xR46117", "comparison_id": "R46296", "paper_id": "R46117", "text": "Self-Doped Ti3+ Enhanced Photocatalyst for Hydrogen Production under Visible Light TiO2-x is prepared by combustion of an EtOH solution containing HCl, Ti(OiPr)4, and 2-ethylimidazole at 500 \u00b0C in air followed by annealing for 5 h." }, { "instance_id": "R46296xR46107", "comparison_id": "R46296", "paper_id": "R46107", "text": "Chemical vapor deposition of doped TiO2 thin films Abstract Niobium-, tantalum- and fluorine-doped TiO2 films were made by atmospheric pressure chemical vapor deposition from titanium alkoxides mixed with niobium ethoxide, tantalum ethoxide and t-butyl fluoride respectively. 
4% H2 in N2 was used as the carrier gas and the deposition temperatures were in the range 400\u2013600\u00b0C. The resistivities of the films increased dramatically with film thickness. For highly doped films 1 \u03bcm thick, resistivities as low as 0.01 \u03a9 cm were achieved." }, { "instance_id": "R46296xR46087", "comparison_id": "R46296", "paper_id": "R46087", "text": "Preparation, Photocatalytic Activity, and Mechanism of Nano-TiO2 Co-Doped with Nitrogen and Iron (III) Nanoparticles of titanium dioxide co-doped with nitrogen and iron (III) were first prepared using the homogeneous precipitation-hydrothermal method. The structure and properties of the co-doped were studied by XRD, XPS, Raman, FL, and UV-diffuse reflectance spectra. By analyzing the structures and photocatalytic activities of the undoped and nitrogen and/or Fe3+-doped TiO2 under ultraviolet and visible light irradiation, the probable mechanism of co-doped particles was investigated. It is presumed that the nitrogen and Fe3+ ion doping induced the formation of new states close to the valence band and conduction band, respectively. The co-operation of the nitrogen and Fe3+ ion leads to the much narrowing of the band gap and greatly improves the photocatalytic activity in the visible light region. Meanwhile, the co-doping can also promote the separation of the photogenerated electrons and holes to accelerate the transmission of photocurrent carrier. The photocatalyst co-doped with nitrogen and 0.5% Fe3+ sho..." }, { "instance_id": "R46296xR46093", "comparison_id": "R46296", "paper_id": "R46093", "text": "Silver-doped TiO2 prepared by microemulsion method: Surface properties, bio- and photoactivity Abstract A series of Ag-TiO2 photocatalysts were obtained in microemulsion system (water/AOT/cyclohexane), using several Ag precursor amounts ranging from 1.5 to 8.5 mol.%. 
The photocatalysts\u2019 characteristics by X-ray diffraction, STEM microscopy, UV\u2013vis spectroscopy, X-ray photoelectron spectroscopy, BET methods showed that a sample with the highest photo- and bioactivity had anatase structure, about 90 m2/g specific surface area, absorbed light over 400 nm and contained 1.64 at.% of silver (0.30 at.% of Ag0 and 1.34 at.% of Ag2O) and about 13 at.% of carbon in the surface layer. The photocatalytic activity of the catalysts was estimated by measuring the decomposition rate of phenol in 0.21 mM aqueous solution under visible and ultraviolet light irradiation. The bioactivity of silver-doped titanium dioxide nanocomposites was estimated using bacteria Escherichia coli and Staphylococcus aureus, yeast Saccharomyces cerevisiae and pathogenic fungi belonging to Candida family. All modified powders showed localized surface plasmon resonance (LSPR) in visible region with almost the same position of LSPR peaks indicating that similar sizes of silver, regardless of used amount of Ag, is deposited on titania particles during microemulsion method. STEM microscopy revealed that almost 50% of observed silver nanoparticles deposited at the TiO2 surface are in the range from 5 to 10 nm." }, { "instance_id": "R46296xR46097", "comparison_id": "R46296", "paper_id": "R46097", "text": "Band structure and visible light photocatalytic activity of multi-type nitrogen doped TiO2 nanoparticles prepared by thermal decomposition Multi-type nitrogen doped TiO2 nanoparticles were prepared by thermal decomposition of the mixture of titanium hydroxide and urea at 400 \u00b0C for 2 h. The as-prepared photocatalysts were characterized by X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), X-ray photoelectron spectroscopy (XPS), UV-vis diffuse reflectance spectra (UV-vis DRS), and photoluminescence (PL). 
The results showed that the as-prepared samples exhibited strong visible light absorption due to multi-type nitrogen doped in the form of substitutional (N-Ti-O and Ti-O-N) and interstitial (\u03c0* character NO) states, which were 0.14 and 0.73 eV above the top of the valence band, respectively. A physical model of band structure was established to clarify the visible light photocatalytic process over the as-prepared samples. The photocatalytic activity was evaluated for the photodegradation of gaseous toluene under visible light irradiation. The activity of the sample prepared from wet titanium hydroxide and urea (TiO2-Nw, apparent reaction rate constant k = 0.045 min\u22121) was much higher than other samples including P25 (k = 0.0013 min\u22121). The high activity can be attributed to the results of the synergetic effects of strong visible light absorption, good crystallization, large surface hydroxyl groups, and enhanced separation of photoinduced carriers." }, { "instance_id": "R46296xR46080", "comparison_id": "R46296", "paper_id": "R46080", "text": "One-step solvothermal synthesis of a carbon@TiO2 dyade structure effectively promoting visible-light photocatalysis The development of sunlight harvesting chemical systems to catalyze relevant reactions, i.e., water splitting, CO2 fixation, and organic mineralization, is the key target in artificial photosynthesis but remains a difficult challenge. Titanium dioxide (TiO2) has been widely used as a photocatalyst for solar energy conversion and environmental applications because of its low toxicity, abundance, high photostability, and high efficiency. [1\u20134] However, the application of pure TiO2 is limited, because it requires ultraviolet (UV) light, which makes up only a small fraction (<4%) of the total solar spectrum reaching the surface of the earth. 
Therefore, over the past few years, considerable efforts have been directed towards the improvement of the photocatalytic efficiency of TiO2 in the visible (vis)-light region. [5\u20137] This has been mainly achieved by introducing various dopants into the TiO2 structure which can narrow the bandgap. The initial approach to dope TiO2 materials was achieved using transition metal ions such as V, Cr, or Fe. [6, 8\u201310] However, such metal doped materials lack the necessary thermal stability, exhibit atom diffusion and a remarkably increased electron/hole recombination of defect sites, which results in a low photocatalytic efficiency. [11] Non-metal doping has since proved to be far more successful and has been extensively investigated. Thus, numerous reports on TiO2 doped with B, F, N, C, S, or I have demonstrated a significant improvement of the visible-light photocatalytic efficiency. [4, 12\u201316]" }, { "instance_id": "R46296xR46121", "comparison_id": "R46296", "paper_id": "R46121", "text": "Efficient Photochemical Water Splitting by a Chemically Modified n-TiO2 Although n-type titanium dioxide (TiO2) is a promising substrate for photogeneration of hydrogen from water, most attempts at doping this material so that it absorbs light in the visible region of the solar spectrum have met with limited success. We synthesized a chemically modified n-type TiO2 by controlled combustion of Ti metal in a natural gas flame. This material, in which carbon substitutes for some of the lattice oxygen atoms, absorbs light at wavelengths below 535 nanometers and has a lower band-gap energy than rutile (2.32 versus 3.00 electron volts). At an applied potential of 0.3 volt, chemically modified n-type TiO2 performs water splitting with a total conversion efficiency of 11% and a maximum photoconversion efficiency of 8.35% when illuminated at 40 milliwatts per square centimeter. 
The latter value compares favorably with a maximum photoconversion efficiency of 1% for n-type TiO2 biased at 0.6 volt." }, { "instance_id": "R46296xR46074", "comparison_id": "R46296", "paper_id": "R46074", "text": "Chemical State and Environment of Boron Dopant in B,N-Codoped Anatase TiO2 Nanoparticles: An Avenue for Probing Diamagnetic Dopants in TiO2 by Electron Paramagnetic Resonance Spectroscopy Boron is diamagnetic in B-doped anatase TiO2 nanoparticles which exhibit photocatalytic activities in the visible light range. Using N as a paramagnetic probe for the formal oxidation state of boron in N/B-codoped TiO2, with more than 90% unpaired spin density in the N2p orbital, we infer that boron enters the oxygen vacancy substitutionally in the form of B1-. Combination of spin-Hamiltonian analysis and interpretation of light dependent EPR spectra in terms of a charge compensating mechanism supports a model of [N2-B1-]+1 for the new EPR active center which acts as a trap for electrons liberated from [N1-] centers under blue light irradiation. Definite assignment of the boron oxidation state will contribute to the preparation, characterization, and understanding of B-TiO2 photoactivity under visible light which has been the subject of extensive work in the past few years." }, { "instance_id": "R46296xR46113", "comparison_id": "R46296", "paper_id": "R46113", "text": "Preparation of Polycrystalline TiO2 Photocatalysts Impregnated with Various Transition Metal Ions: Characterization and Photocatalytic Activity for the Degradation of 4-Nitrophenol A set of polycrystalline TiO2 photocatalysts loaded with various ions of transition metals (Co, Cr, Cu, Fe, Mo, V, and W) were prepared by using the wet impregnation method. 
The samples were characterized by using some bulk and surface techniques, namely X-ray diffraction, BET specific surface area determination, scanning electron microscopy, point of zero charge determination, and femtosecond pump\u2212probe diffuse reflectance spectroscopy (PP-DRS). The samples were employed as catalysts for 4-nitrophenol photodegradation in aqueous suspension, used as a probe reaction. The characterization results have confirmed the difficulty of finding a straightforward correlation between photoactivity and single specific properties of the powders. Diffuse reflectance measurements showed a slight shift in the band gap transition to longer wavelengths and an extension of the absorption in the visible region for almost all the doped samples. SEM observation and EDX measurements indicated a similar morphology for all the parti..." }, { "instance_id": "R46296xR46072", "comparison_id": "R46296", "paper_id": "R46072", "text": "Synthesis of Fe3+ doped ordered mesoporous TiO2 with enhanced visible light photocatalytic activity and highly crystallized anatase wall Fe3+ doped mesoporous TiO2 with an ordered mesoporous structure was successfully prepared by the solvent evaporation-induced self-assembly process using P123 as a soft template. The properties and structure of Fe3+ doped mesoporous TiO2 were characterized by means of XRD, EPR, BET, TEM, and UV\u2013vis absorption spectra.
The characterization results clearly show that the amount of Fe3+ dopant affects the mesoporous structure as well as the visible light absorption of the catalysts. The photocatalytic activity of the prepared mesoporous TiO2 was evaluated from an analysis of the photodegradation of methyl orange under visible light irradiation. The results indicate that the 0.50%Fe\u2013MTiO2 sample exhibits the highest visible light photocatalytic activity compared with the other catalysts." }, { "instance_id": "R46296xR46091", "comparison_id": "R46296", "paper_id": "R46091", "text": "Synthesis and Characterization of Nitrogen-Doped TiO2 Nanophotocatalyst with High Visible Light Activity Nitrogen-doped TiO2 nanocatalysts with a homogeneous anatase structure were successfully synthesized through a microemulsion\u2212hydrothermal method by using some organic compounds such as triethylamine, urea, thiourea, and hydrazine hydrate. Analysis by Raman and X-ray photoemission spectroscopy indicated that nitrogen was doped effectively and most nitrogen dopants might be present in the chemical environment of Ti\u2212O\u2212N and O\u2212Ti\u2212N. A shift of the absorption edge to a lower energy and a stronger absorption in the visible light region were observed. The results of photodegradation of the organic pollutant rhodamine B under visible light irradiation (\u03bb > 420 nm) suggested that the TiO2 photocatalysts after nitrogen doping were greatly improved compared with the undoped TiO2 photocatalysts and Degussa P-25; especially the nitrogen-doped TiO2 using triethylamine as the nitrogen source showed the highest photocatalytic activity, which also showed a higher efficiency for photodecomposition of 2,4-dichlorophenol. T...
}, { "instance_id": "R46296xR46109", "comparison_id": "R46296", "paper_id": "R46109", "text": "Preparation, characterization and visible-light-driven photocatalytic activity of Fe-doped titania nanorods and first-principles study for electronic structures Abstract Fe-doped TiO2 (Fe-TiO2) nanorods were prepared by an impregnating-calcination method using hydrothermally prepared titanate nanotubes as precursors and Fe(NO3)3 as dopant. The as-prepared samples were characterized by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, N2 adsorption\u2013desorption isotherms and UV\u2013vis spectroscopy. The photocatalytic activity was evaluated by the photocatalytic oxidation of acetone in air under visible-light irradiation. The results show that Fe-doping greatly enhances the visible-light photocatalytic activity of mesoporous TiO2 nanorods, and when the atomic ratio of Fe/Ti (RFe) is in the range of 0.1\u20131.0%, the photocatalytic activity of the samples is higher than that of Degussa P25 and pure TiO2 nanorods. At RFe = 0.5%, the photocatalytic activity of Fe-TiO2 nanorods exceeds that of Degussa P25 by a factor of more than two. This is ascribed to the fact that the one-dimensional nanostructure can enhance the transfer and transport of charge carriers, and the Fe-doping induces a shift of the absorption edge into the visible-light range with a narrowing of the band gap and reduces the recombination of photo-generated electrons and holes. Furthermore, a first-principles density functional theory (DFT) calculation further confirms the red shift of the absorption edges and the narrowing of the band gap of Fe-TiO2 nanorods.
}, { "instance_id": "R46296xR46084", "comparison_id": "R46296", "paper_id": "R46084", "text": "Electrical Properties of Nb\u2010, Ga\u2010, and Y\u2010Substituted Nanocrystalline Anatase TiO2 Prepared by Hydrothermal Synthesis Nanocrystalline anatase titanium dioxide powders were produced by a hydrothermal synthesis route in pure form and substituted with trivalent Ga3+ and Y3+ or pentavalent Nb5+ with the intention of creating acceptor or donor states, respectively. The electrical conductivity of each powder was measured using the powder-solution-composite (PSC) method. The conductivity increased with the addition of Nb5+ from 3 \u00d7 10\u22123 S/cm to 10 \u00d7 10\u22123 S/cm in as-prepared powders, and from 0.3 \u00d7 10\u22123 S/cm to 0.9 \u00d7 10\u22123 S/cm in heat-treated powders (520 \u00b0C, 1 h). In contrast, substitution with Ga3+ and Y3+ had no measurable effect on the material's conductivity. The lack of change with the addition of Ga3+ and Y3+, and the relatively small increase upon Nb5+ addition, is attributed to ionic compensation owing to the highly oxidizing nature of hydrothermal synthesis." }, { "instance_id": "R46296xR46070", "comparison_id": "R46296", "paper_id": "R46070", "text": "Preparation of porous carbon-doped TiO2 film by sol\u2013gel method and its application for the removal of gaseous toluene in the optical fiber reactor Abstract In this paper, we prepared a carbon-doped TiO2 (C-TiO2) film to enhance the photodegradation efficiency (PE) of gaseous toluene under UV light. PE was affected by the thickness of the TiO2 film, the specific surface area of TiO2, and the carbon content doped on TiO2. The highest value of PE was 76% with a 3.2-\u03bcm-thick film. By adding 0.435 g HPC, the maximum specific surface area of TiO2 was 230 m2 g\u22121 and PE with the porous TiO2 film was 85\u201387% in 30 min.
The specific surface area of conventional TiO2 was 55 m2 g\u22121 and PE with the normal TiO2 film was 77\u201379%. To further increase the photodegradation capability of TiO2 under UV light, carbon was doped on TiO2 by sol\u2013gel and combustion methods. PE of gaseous toluene with the porous carbon-doped TiO2 was 91\u201394% in 30 min, 18\u201319% higher than that with conventional TiO2. By-products derived from the photocatalytic oxidation of toluene were mostly amyl formate and ethyl formate, which are non-toxic." }, { "instance_id": "R46297xR46158", "comparison_id": "R46297", "paper_id": "R46158", "text": "Photocatalytic activity of nanostructured composites based on layered niobates and C3N4 in the hydrogen evolution reaction from electron donor solutions under visible light Abstract Nanostructured composites based on the layered niobates KNb3O8, K3H3Nb10.8O30 and HNb3O8 as well as C3N4 were obtained. It was established that the rate of hydrogen evolution from aqueous solutions of electron donors under visible light using niobate/C3N4 composites as photocatalysts significantly exceeds the rate for individual C3N4. This phenomenon was explained by an effective separation of photogenerated charges between the components of the composite and thus a decrease of the electron-hole recombination. The transition from individual C3N4 to the niobate/C3N4 composites allows one to significantly extend the list of effective electron donor compounds for the photocatalytic hydrogen evolution process." }, { "instance_id": "R46297xR46164", "comparison_id": "R46297", "paper_id": "R46164", "text": "Boosting interfacial charge separation of Ba5Nb4O15/g-C3N4 photocatalysts by 2D/2D nanojunction towards efficient visible-light driven H2 generation Abstract Efficiently facilitating charge transfer by constructing heterojunction photocatalysts is a promising strategy for improving solar-driven hydrogen generation.
Herein, a novel 2D/2D nanojunction architecture of Ba5Nb4O15/g-C3N4 photocatalysts with powerful interfacial charge transfer is rationally designed. Advanced electron microscopy analysis elucidates that the layered hexagonal nanosheets were coupled on the surface of ultrathin g-C3N4, forming a 2D/2D nanojunction. More importantly, such characterizations and theoretical calculations together illustrate that a strong interfacial charge transfer existed between the g-C3N4 layer and the Ba-O layer of the Ba5Nb4O15 nanosheets, which fostered efficient transfer and provided more abundant reactive centers for photocatalytic hydrogen evolution. The unique 2D/2D structure in the Ba5Nb4O15/g-C3N4 heterojunction generates numerous charge-transfer nanochannels, which could accelerate the interfacial charge separation efficiency to a great extent. The Ba5Nb4O15/g-C3N4 (1:20) sample displayed a remarkable photocatalytic H2 evolution rate (2.67 mmol h\u22121 g\u22121) in oxalic acid solution, nearly 2.35 times higher than that of single g-C3N4 under visible light, and exhibited outstanding photostability even after four cycles. This work provides new insight into the design of 2D/2D heterojunction photocatalysts with efficient interfacial charge transfer and separation for solar-to-H2 conversion." }, { "instance_id": "R46297xR46154", "comparison_id": "R46297", "paper_id": "R46154", "text": "Giant enhancement of photocatalytic H2 production over KNbO3 photocatalyst obtained via carbon doping and MoS2 decoration Abstract This paper is designed, for the first time, to improve the photocatalytic activity of KNbO3 via carbon doping and MoS2 decoration simultaneously. Efficient photocatalytic hydrogen production was realized on the MoS2/C-KNbO3 composite under simulated sunlight irradiation in the presence of methanol and chloroplatinic acid.
The optimal composite presents a H2 production rate of 1300 \u03bcmol\u00b7g\u22121\u00b7h\u22121, which reaches 260 times that of pure KNbO3. Characterization results of the synthesized composite indicate that the introduction of a small amount of carbon into the KNbO3 lattice greatly hinders the recombination of electron-hole pairs. The decoration of MoS2 further induces the separation of charge carriers by trapping electrons in the conduction band of C-KNbO3, which is proven by the EIS and transient photocurrent response analyses. The remarkably enhanced separation efficiency of electron-hole pairs is believed to be the origin of the excellent photocatalytic performance, though other changes in surface area and optical properties may also contribute to the photocatalytic process. This study provides a feasible way for the design and preparation of novel photocatalysts with high efficiency." }, { "instance_id": "R46297xR46148", "comparison_id": "R46297", "paper_id": "R46148", "text": "Self-assembled nanohybrid of cadmium sulfide and calcium niobate: Photocatalyst with enhanced charge separation for efficient visible light induced hydrogen generation Abstract A nanohybrid of CdS and HCa2Nb3O10 with enhanced separation of charge carriers was conveniently prepared by self-assembly of negatively charged HCa2Nb3O10 nanosheets in the presence of Cd2+ followed by sulfuration. With CdS working as a visible light absorber, HCa2Nb3O10, with a wide bandgap, showed H2 evolution activity without cocatalysts upon visible light excitation at neutral pH (450 \u03bcmol h\u22121 gcat\u22121), and showed a high quantum efficiency of 7.2% at 425 nm because of enhanced charge separation. The self-assembly method rapidly realizes the introduction of guest materials into the interlayer space of layered perovskites via a layer-by-layer strategy, showing great potential for synthesizing novel 2D material based hybrid photocatalysts.
A type II composite band structure was obtained from the combination of CdS and niobates, making spontaneous charge separation possible. The ultra-fine nature of the guest CdS, the ultra-thin nature of the host niobate nanosheets, and the close junction between the host and guest components facilitated charge transfer between the components. Enhanced charge separation in the hybrid photocatalyst is the key factor that leads to its superior photocatalytic performance in comparison with its components. This work extends the photocatalytic application of wide-gap niobates into the visible light region. Since the unique nanostructure and charged nature of 2D materials can be fully utilized, it is believed that more efficient photocatalytic water splitting can be realized by constructing nanosheet-based hybrids in the future." }, { "instance_id": "R46297xR46152", "comparison_id": "R46297", "paper_id": "R46152", "text": "Ultrasmall NiS decorated HNb3O8 nanosheets as highly efficient photocatalyst for H2 evolution reaction Abstract The construction of high-performance and stable nanocomposite photocatalysts remains a great challenge for the photocatalytic hydrogen evolution reaction, mainly due to the mediocre interfacial contact between the cocatalyst and the host. In this work, ultrasmall NiS is highly dispersed on HNb3O8 2D nanosheets via a facile electrostatic adsorption/self-assembly process. Interestingly, the growth of NiS is greatly suppressed by the interlayered spatial steric inhibition effect, and a strong interaction between NiS and the HNb3O8 nanosheet is formed. In contrast, the modification of NiS by traditional methods, such as coprecipitation and mechanical mixing, could not generate sub-nanometer NiS with tight contact between NiS and the HNb3O8 nanosheet. As a result, the recombination of the photogenerated carriers is greatly suppressed in the sample prepared by our method.
Furthermore, the overpotential of the hydrogen evolution reaction could also be reduced significantly. Thus, the as-prepared composite exhibits markedly improved photocatalytic H2 evolution activity as well as considerable stability. The optimal sample shows an H2 evolution rate of 1519.4 \u03bcmol g\u22121 h\u22121, which is about 17.4 times higher than that of the bare HNb3O8 nanosheets. The activity is also an order of magnitude higher than that of the sample prepared by a mechanical mixing method. Additionally, NiS/HNb3O8 prepared by the developed method shows activity and overpotential comparable to Pt/HNb3O8, indicating an alternative to platinum." }, { "instance_id": "R46297xR46166", "comparison_id": "R46297", "paper_id": "R46166", "text": "In-situ synthesis of AgNbO3/g-C3N4 photocatalyst via microwave heating method for efficiently photocatalytic H2 generation This paper aims to elevate the photocatalytic H2-evolution performance of g-C3N4 through modification with AgNbO3 nanocubes. Via the microwave heating method, g-C3N4 was formed in situ on the AgNbO3 surface to fabricate a close contact between the two semiconductors in forty minutes. X-ray diffraction (XRD), Fourier transform-infrared (FT-IR), and X-ray photoelectron spectroscopy (XPS) experiments were performed to confirm the binary structure of the synthesized AgNbO3/g-C3N4 composite. N2-adsorption and visible diffuse reflection spectroscopy (DRS) analyses indicated that the addition of AgNbO3 to g-C3N4 had nearly negligible influence on the specific surface area and the optical properties. Photoluminescence (PL) spectroscopy experiments suggested that AgNbO3/g-C3N4 displayed reduced PL emission and a longer lifetime of photoexcited charge carriers than g-C3N4, which could be ascribed to the suitable band potentials and the intimate contact of g-C3N4 and AgNbO3. This result was also confirmed by the transient photocurrent response experiment.
The influence of the enhanced charge separation was reflected in the photocatalytic reaction. The AgNbO3/g-C3N4 sample showed enhanced performance in photocatalytic H2-generation under visible light illumination. The H2-evolution rate was determined to be 88 \u03bcmol\u00b7g\u22121\u00b7h\u22121, which reaches 2.0 times that of g-C3N4. This study provides a feasible and rapid approach to fabricate g-C3N4 based composites." }, { "instance_id": "R46297xR46162", "comparison_id": "R46297", "paper_id": "R46162", "text": "Two-dimensional g-C3N4/Ca2Nb2TaO10 nanosheet composites for efficient visible light photocatalytic hydrogen evolution Abstract A scalable g-C3N4 nanosheet powder catalyst was prepared by pyrolysis of dicyandiamide and ammonium chloride followed by ultra-sonication and freeze-drying. Nanosheet composites that combine g-C3N4 nanosheets and Ca2Nb2TaO10 nanosheets in various ratios were developed and applied as photocatalysts for solar hydrogen generation. Systematic studies reveal that the g-C3N4/Ca2Nb2TaO10 nanosheet composite with a mass ratio of 80:20 shows the best performance in photocatalytic H2 evolution under visible-light irradiation, outperforming bare bulk g-C3N4 by more than 2.8 times. The resulting nanosheets possess a high surface area of 96 m2/g, which provides abundant active sites for the photocatalytic activity. More importantly, the g-C3N4/Ca2Nb2TaO10 nanosheet composite shows efficient charge transfer kinetics at its interface, as evidenced by the photoluminescence measurement. The intimate interfacial connections and the synergistic effect between g-C3N4 nanosheets and Ca2Nb2TaO10 nanosheets with cascading electrons are efficient in suppressing charge recombination and improving photocatalytic H2 evolution performance.
}, { "instance_id": "R46299xR46225", "comparison_id": "R46299", "paper_id": "R46225", "text": "Photocatalytic reduction of CO2 to methane over HNb3O8 nanobelts Abstract KNb3O8 and HNb3O8 nanobelts were prepared by hydrothermal synthesis. The characteristics of the samples were investigated by XRD, SEM, and UV\u2013vis diffuse reflectance spectroscopy. The KNb3O8 and HNb3O8 nanobelts exhibited much higher activities for CO2 photoreduction to methane than commercial TiO2 (Degussa P25) and the KNb3O8 and HNb3O8 particles prepared by conventional solid state reaction. It was also found that either the HNb3O8 nanobelts or the HNb3O8 particles performed better than the corresponding KNb3O8 counterpart. It is proposed that the nanobelt-like morphology and the protonic acidity contribute to the higher photocatalytic activity of the HNb3O8 nanobelts." }, { "instance_id": "R46299xR46221", "comparison_id": "R46299", "paper_id": "R46221", "text": "CO2 reduction over NaNbO3 and NaTaO3 perovskite photocatalysts Both NaNbO3 and NaTaO3 exhibit interesting intrinsic photocatalytic activities for CO2 reduction in terms of conversion and selectivity." }, { "instance_id": "R46299xR46217", "comparison_id": "R46299", "paper_id": "R46217", "text": "Photoreduction of carbon dioxide over NaNbO3 nanostructured photocatalysts NaNbO3 has been successfully developed as a new photocatalyst for CO2 reduction. The catalysts were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and ultraviolet\u2013visible spectroscopy (UV\u2013Vis). The DFT calculations revealed that the top of the VB consisted of hybridized O 2p orbitals, while the bottom of the CB was constructed from Nb 3d orbitals. In addition, the photocatalytic activities of the NaNbO3 samples for reduction of CO2 into methanol under UV light irradiation were investigated systematically. Compared with the bulk NaNbO3 prepared by a solid state reaction method, the present NaNbO3 nanowires exhibited a much higher photocatalytic activity for CH4 production. This is the first example of CO2 conversion into CH4 proceeding on a semiconductor nanowire photocatalyst. Graphical Abstract: NaNbO3 has been successfully developed as a new photocatalyst for CO2 reduction. It was noted that NaNbO3 nanowires showed a much higher activity for CH4 production compared with the bulk counterpart (SSR NNO)." }, { "instance_id": "R46299xR46227", "comparison_id": "R46299", "paper_id": "R46227", "text": "Photocatalytic Reduction of Carbon Dioxide to Methane over SiO2-Pillared HNb3O8 Carbon dioxide (CO2) photoreduction by gaseous water over silica-pillared lamellar niobic acid, viz. HNb3O8, was studied in this work. The physicochemical characteristics of the samples were examined by techniques such as XRD, FT-IR, SEM, TEM, and UV\u2013visible diffuse reflectance spectroscopy. Aspects that influence CO2 photoreduction, such as the layered structure, the protonic acidity, silica pillaring, and cocatalyst loading, were investigated in detail. Pt loading obviously promoted the activity for CO2 photoreduction to methane.
The loading of Pt also promoted the formation of methane from catalyst-associated carbon residues, although this contributes insignificantly to the overall amount of methane produced. The layered structure and the protonic acidity of the lamellar niobic acid have significant influences on CO2 photoreduction by water in the gas phase. With its layered structure, expanded interlayer distance, and stronger intercalation ability toward water molecules, the silica-pillared niobic acid showed much hig..." }, { "instance_id": "R46299xR46229", "comparison_id": "R46299", "paper_id": "R46229", "text": "Acidic surface niobium pentoxide is catalytic active for CO2 photoreduction Abstract In this paper, we report for the first time the significant photocatalytic activity of Nb-based materials for CO2 reduction. Nb2O5 catalysts were prepared through a modified peroxide sol-gel method using different annealing temperatures, showing activity for CO2 photoreduction in all conditions. The activity and selectivity of the Nb2O5 samples were directly related to their surface acidity: high surface acidity prompted conversion of CO2 to CO, HCOOH, and CH3COOH; low surface acidity prompted conversion of CO2 to CH4. The results also indicated that CO is the main intermediate species of the CO2 photoreduction in all conditions. We have uncovered the role played by the surface acidity of Nb2O5 and the mechanism behind its performance for CO2 photoreduction." }, { "instance_id": "R46299xR46233", "comparison_id": "R46299", "paper_id": "R46233", "text": "Conversion of CO2 into renewable fuel over Pt-g-C3N4/KNbO3 composite photocatalyst A g-C3N4/KNbO3 composite photocatalyst was prepared and developed for reduction of CO2 into CH4." }, { "instance_id": "R48103xR46658", "comparison_id": "R48103", "paper_id": "R46658", "text": "A multiobjective simulated annealing approach for classifier ensemble: Named entity recognition in Indian languages as case studies In this paper, we propose a simulated annealing (SA) based multiobjective optimization (MOO) approach for classifier ensemble. Several different versions of the objective functions are exploited. We hypothesize that the reliability of prediction of each classifier differs among the various output classes. Thus, in an ensemble system, it is necessary to find out the appropriate weight of vote for each output class in each classifier. Diverse classification methods such as Maximum Entropy (ME), Conditional Random Field (CRF) and Support Vector Machine (SVM) are used to build different models depending upon the various representations of the available features. One of the most important characteristics of our system is that the features are selected and developed mostly without using any deep domain knowledge and/or language dependent resources. The proposed technique is evaluated for Named Entity Recognition (NER) in three resource-poor Indian languages, namely Bengali, Hindi and Telugu. Evaluation results yield recall, precision and F-measure values of 93.95%, 95.15% and 94.55%, respectively for Bengali, 93.35%, 92.25% and 92.80%, respectively for Hindi and 84.02%, 96.56% and 89.85%, respectively for Telugu. Experiments also suggest that the classifier ensemble identified by the proposed MOO based approach, optimizing the F-measure values of named entity (NE) boundary detection, outperforms all the individual models, two conventional baseline models and three other MOO based ensembles.
}, { "instance_id": "R48103xR46670", "comparison_id": "R48103", "paper_id": "R46670", "text": "A Two-Phase Bio-NER System Based on Integrated Classifiers and Multiagent Strategy Biomedical named entity recognition (Bio-NER) is a fundamental step in biomedical text mining. This paper presents a two-phase Bio-NER model targeting the JNLPBA task. Our two-phase method divides the task into two subtasks: named entity detection (NED) and named entity classification (NEC). The NED subtask is accomplished based on the two-layer stacking method in the first phase, where named entities (NEs) are distinguished from non-named entities (NNEs) in the biomedical literature without identifying their types. Then six classifiers are constructed by four toolkits (CRF++, YamCha, maximum entropy, Mallet) with different training methods and integrated based on the two-layer stacking method. In the second phase, for the NEC subtask, the multiagent strategy is introduced to determine the correct entity type for entities identified in the first phase. The experimental results show that the presented approach can achieve an F-score of 76.06 percent, which outperforms most of the state-of-the-art systems." }, { "instance_id": "R48103xR46668", "comparison_id": "R48103", "paper_id": "R46668", "text": "Combining multiple classifiers using vote based classifier ensemble technique for named entity recognition In this paper, we pose the classifier ensemble problem under single and multiobjective optimization frameworks, and evaluate it for Named Entity Recognition (NER), an important step in almost all Natural Language Processing (NLP) application areas. We propose solutions to two different versions of the ensemble problem for each of the optimization frameworks. We hypothesize that the reliability of predictions of each classifier differs among the various output classes.
Thus, in an ensemble system it is necessary either to find out the eligible classes for which a classifier is most suitable to vote (i.e., a binary vote based ensemble) or to quantify the amount of voting for each class in a particular classifier (i.e., a real vote based ensemble). We use seven diverse classifiers, namely Naive Bayes, Decision Tree (DT), Memory Based Learner (MBL), Hidden Markov Model (HMM), Maximum Entropy (ME), Conditional Random Field (CRF) and Support Vector Machine (SVM), to build a number of models depending upon the various representations of the available features, which are identified and selected mostly without using any domain knowledge and/or language specific resources. The proposed technique is evaluated for three resource-constrained languages, namely Bengali, Hindi and Telugu. Results using the multiobjective optimization (MOO) based technique yield overall recall, precision and F-measure values of 94.21%, 94.72% and 94.74%, respectively for Bengali, 99.07%, 90.63% and 94.66%, respectively for Hindi and 82.79%, 95.18% and 88.55%, respectively for Telugu. Results for all the languages show that the proposed MOO based classifier ensemble with real voting attains a performance level superior to all the individual classifiers, three baseline ensembles and the corresponding single objective based ensemble." }, { "instance_id": "R48103xR46656", "comparison_id": "R48103", "paper_id": "R46656", "text": "CRFS-based Chinese named entity recognition with improved tag set Chinese named entity recognition is one of the most important tasks in NLP. This paper mainly describes our work on NER tasks. We built a system under the framework of the Conditional Random Fields (CRFs) model. With an improved tag set, the system achieves an F-value of 93.49 on the SIGHAN 2007 MSRA corpus.
}, { "instance_id": "R48103xR46672", "comparison_id": "R48103", "paper_id": "R46672", "text": "Named entity recognition for Mongolian language This paper presents pioneering work on building a Named Entity Recognition system for the Mongolian language, which has an agglutinative morphology and a subject-object-verb word order. Our work explores the fittest feature set from a wide range of features and a method that refines the machine learning approach using gazetteers with approximate string matching, in an effort at robust handling of out-of-vocabulary words. We also applied various existing machine learning methods and found an optimal ensemble of classifiers based on a genetic algorithm. The classifiers use different feature representations. The resulting system constitutes the first-ever usable software package for Mongolian NER, while our experimental evaluation will also serve as a much-needed basis of comparison for further research." }, { "instance_id": "R48103xR46660", "comparison_id": "R48103", "paper_id": "R46660", "text": "A multi-strategy approach to biological named entity recognition Recognizing and disambiguating bio-entity names (genes, proteins, cells, etc.) are very challenging tasks, as some biological databases can be outdated, names may not be normalized, abbreviations are used, syntactic and word order is modified, etc. Thus, the same bio-entity might be written in different ways, making searching a key obstacle, as much candidate relevant literature containing those entities might not be found. As a consequence, the same protein mention under different names must be looked for, or the same discovered protein name may be used to name a new protein with completely different features; hence, named-entity recognition methods are required.
In this paper, a bio-entity recognition model which combines different classification methods and incorporates simple pre-processing tasks for bio-entity (gene and protein) recognition is presented. Linguistic pre-processing and feature representation for training and testing are observed to positively affect the overall performance of the method, showing promising results. Unlike some state-of-the-art methods, the approach does not require additional knowledge bases or specific-purpose tasks for post-processing, which makes it more appealing. Experiments showing the promise of the model compared to other state-of-the-art methods are discussed." }, { "instance_id": "R48103xR46654", "comparison_id": "R48103", "paper_id": "R46654", "text": "Two-phase biomedical named entity recognition using CRFs As a fundamental step of biomedical text mining, Biomedical Named Entity Recognition (Bio-NER) remains a challenging task. This paper explores a so-called two-phase approach to identify biomedical entities, in which the recognition task is divided into two subtasks: Named Entity Detection (NED) and Named Entity Classification (NEC). The two subtasks are completed in two phases. In the first phase, we try to identify each named entity with a Conditional Random Fields (CRFs) model without identifying its type; in the second phase, another CRFs model is used to determine the correct entity type for each identified entity. This treatment can reduce the training time significantly and, furthermore, more relevant features can be selected for each subtask. In order to achieve a better performance, post-processing algorithms are employed before the NEC subtask. Experiments conducted on the JNLPBA 2004 dataset show that our two-phase approach can achieve an F-score of 74.31%, which outperforms most of the state-of-the-art systems."
}, { "instance_id": "R48392xR48265", "comparison_id": "R48392", "paper_id": "R48265", "text": "Probabilistic 21st and 22nd century sea-level projections at a global network of tide-gauge sites. Sea-level rise due to both climate change and non-climatic factors threatens coastal settlements, infrastructure, and ecosystems. Projections of mean global sea-level (GSL) rise provide insufficient information to plan adaptive responses; local decisions require local projections that accommodate different risk tolerances and time frames and that can be linked to storm surge projections. Here we present a global set of local sea-level (LSL) projections to inform decisions on timescales ranging from the coming decades through the 22nd century. We provide complete probability distributions, informed by a combination of expert community assessment, expert elicitation, and process modeling. Between the years 2000 and 2100, we project a very likely (90% probability) GSL rise of 0.5\u20131.2 m under representative concentration pathway (RCP) 8.5, 0.4\u20130.9 m under RCP 4.5, and 0.3\u20130.8 m under RCP 2.6. Site-to-site differences in LSL projections are due to varying non-climatic background uplift or subsidence, oceanographic effects, and spatially variable responses of the geoid and the lithosphere to shrinking land ice. The Antarctic ice sheet (AIS) constitutes a growing share of variance in GSL and LSL projections. In the global average and at many locations, it is the dominant source of variance in late 21st century projections, though at some sites oceanographic processes contribute the largest share throughout the century. LSL rise dramatically reshapes flood risk, greatly increasing the expected number of \u201c1-in-10\u201d and \u201c1-in-100\u201d year events." 
}, { "instance_id": "R48392xR48253", "comparison_id": "R48392", "paper_id": "R48253", "text": "Sea level rise projections for northern Europe under RCP8.5 Sea level rise poses a significant threat to coastal communities, infrastructure, and ecosystems. Sea level rise is not uniform globally but is affected by a range of regional factors. In this study, we calculate regional projections of 21st century sea level rise in northern Europe, focusing on the British Isles, the Baltic Sea, and the North Sea. The input to the regional sea level projection is a probabilistic projection of the major components of the global sea level budget. Local sea level rise is partly compensated by vertical land movement from glacial isostatic adjustment. We explore the uncertainties beyond the likely range provided by the IPCC, including the risk and potential rate of marine ice sheet collapse. Our median 21st century relative sea level rise projection is 0.8 m near London and Hamburg, with a relative sea level drop of 0.1 m in the Bay of Bothnia (near Oulu, Finland). Considerable uncertainties remain in both the sea level budget and in the regional expression of sea level rise. The greatest uncertainties are associated with Antarctic ice loss, and uncertainties are skewed towards higher values, with the 95th percentile being characterized by an additional 0.9 m sea level rise above median projections." }, { "instance_id": "R48392xR48303", "comparison_id": "R48392", "paper_id": "R48303", "text": "A probabilistic approach to 21st century regional sea-level projections using RCP and High-end scenarios Sea-level change is an integrated climate system response due to changes in radiative forcing, anthropogenic land-water use and land-motion. Projecting sea-level at a global and regional scale requires a subset of projections - one for each sea-level component given a particular climate-change scenario. 
We construct relative sea-level projections through the 21st century for RCP 4.5, RCP 8.5 and High-end (RCP 8.5 with increased ice-sheet contribution) scenarios by aggregating spatial projections of individual sea-level components in a probabilistic manner. Most of the global oceans adhere to the projected global average sea level change within 5 cm throughout the century for all scenarios; however coastal regions experience localised effects due to the non-uniform spatial patterns of individual components. This can result in local projections that are 10\u2032s of centimetres different from the global average by 2100. Early in the century, RSL projections are consistent across all scenarios, however from the middle of the century the patterns of RSL for RCP scenarios deviate from the High-end where the contribution from Antarctica dominates. Similarly, the uncertainty in projected sea-level is dominated by an uncertain Antarctic fate. We also explore the effect upon projections of, treating CMIP5 model ensembles as normally distributed when they might not be, correcting CMIP5 model output for internal variability using different polynomials and using different unloading patterns of ice for the Greenland and Antarctic ice sheets." }, { "instance_id": "R48392xR48309", "comparison_id": "R48392", "paper_id": "R48309", "text": "Uncertainty in Sea Level Rise Projections Due to the Dependence Between Contributors Using two process-based models to project sea level for the 21st century, it is shown that taking into account the correlation between sea level contributors is important to better quantify the uncertainty of future sea level. In these models the correlation primarily arises from global mean surface temperature that simultaneously leads to more or less ice melt and thermal expansion. Assuming that sea level contributors are independent of each other underestimates the uncertainty in sea level projections. 
As a result, high-end low probability events that are important for decision making are underestimated. For a probabilistic model it is shown that the 95th percentile of the total sea level rise distribution at the end of the 21st century is underestimated by 5 cm for the RCP4.5 scenario under the independent assumption. This underestimation is up to 16 cm for the 99.9th percentile of the RCP8.5 scenario. On the other hand, assuming perfect correlation overestimates the uncertainty. The strength of the dependence between contributors is difficult to constrain from observations so its uncertainty is also explored. New dependence relations between the uncertainty of dynamical processes and surface mass balance in glaciers and ice caps and in the Antarctic and Greenland ice sheets are introduced in our model. Total sea level uncertainty is found to be as sensitive to the dependence between contributors as to uncertainty in individual contributors like thermal expansion and the Greenland ice sheet." }, { "instance_id": "R48392xR48233", "comparison_id": "R48392", "paper_id": "R48233", "text": "A scaling approach to project regional sea level rise and its uncertainties Abstract. Climate change causes global mean sea level to rise due to thermal expansion of seawater and loss of land ice from mountain glaciers, ice caps and ice sheets. Locally, sea level can strongly deviate from the global mean rise due to changes in wind and ocean currents. In addition, gravitational adjustments redistribute seawater away from shrinking ice masses. However, the land ice contribution to sea level rise (SLR) remains very challenging to model, and comprehensive regional sea level projections, which include appropriate gravitational adjustments, are still a nascent field (Katsman et al., 2011; Slangen et al., 2011). 
Here, we present an alternative approach to derive regional sea level changes for a range of emission and land ice melt scenarios, combining probabilistic forecasts of a simple climate model (MAGICC6) with the new CMIP5 general circulation models. The contribution from ice sheets varies considerably depending on the assumptions for the ice sheet projections, and thus represents sizeable uncertainties for future sea level rise. However, several consistent and robust patterns emerge from our analysis: at low latitudes, especially in the Indian Ocean and Western Pacific, sea level will likely rise more than the global mean (mostly by 10\u201320%). Around the northeastern Atlantic and the northeastern Pacific coasts, sea level will rise less than the global average or, in some rare cases, even fall. In the northwestern Atlantic, along the American coast, a strong dynamic sea level rise is counteracted by gravitational depression due to Greenland ice melt; whether sea level will be above- or below-average will depend on the relative contribution of these two factors. Our regional sea level projections and the diagnosed uncertainties provide an improved basis for coastal impact analysis and infrastructure planning for adaptation to climate change." }, { "instance_id": "R48392xR48315", "comparison_id": "R48392", "paper_id": "R48315", "text": "Sea-level projections representing the deeply uncertain contribution of the West Antarctic ice sheet There is a growing awareness that uncertainties surrounding future sea-level projections may be much larger than typically perceived. Recently published projections appear widely divergent and highly sensitive to non-trivial model choices. Moreover, the West Antarctic ice sheet (WAIS) may be much less stable than previously believed, enabling a rapid disintegration. Here, we present a set of probabilistic sea-level projections that approximates the deeply uncertain WAIS contributions. 
The projections aim to inform robust decisions by clarifying the sensitivity to non-trivial or controversial assumptions. We show that the deeply uncertain WAIS contribution can dominate other uncertainties within decades. These deep uncertainties call for the development of robust adaptive strategies. These decision-making needs, in turn, require mission-oriented basic science, for example about potential signposts and the maximum rate of WAIS-induced sea-level changes." }, { "instance_id": "R48392xR48353", "comparison_id": "R48392", "paper_id": "R48353", "text": "Future sea level rise constrained by observations and long-term commitment Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28\u201356 cm, 37\u201377 cm, and 57\u2013131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. 
The \u201cconstrained extrapolation\u201d approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections." }, { "instance_id": "R48392xR48381", "comparison_id": "R48392", "paper_id": "R48381", "text": "Long-term sea-level rise implied by 1.5\u2009\u00b0C and 2\u2009\u00b0C warming levels Sea-level rise is one of the key consequences of climate change. Its impact is long-term owing to the multi-century response timescales involved. This study addresses how much sea-level rise will result in coming centuries from climate-policy decisions taken today." }, { "instance_id": "R48401xR48303", "comparison_id": "R48401", "paper_id": "R48303", "text": "A probabilistic approach to 21st century regional sea-level projections using RCP and High-end scenarios Sea-level change is an integrated climate system response due to changes in radiative forcing, anthropogenic land-water use and land-motion. Projecting sea-level at a global and regional scale requires a subset of projections - one for each sea-level component given a particular climate-change scenario. We construct relative sea-level projections through the 21st century for RCP 4.5, RCP 8.5 and High-end (RCP 8.5 with increased ice-sheet contribution) scenarios by aggregating spatial projections of individual sea-level components in a probabilistic manner. Most of the global oceans adhere to the projected global average sea level change within 5 cm throughout the century for all scenarios; however coastal regions experience localised effects due to the non-uniform spatial patterns of individual components. This can result in local projections that are 10\u2032s of centimetres different from the global average by 2100. 
Early in the century, RSL projections are consistent across all scenarios, however from the middle of the century the patterns of RSL for RCP scenarios deviate from the High-end where the contribution from Antarctica dominates. Similarly, the uncertainty in projected sea-level is dominated by an uncertain Antarctic fate. We also explore the effect upon projections of, treating CMIP5 model ensembles as normally distributed when they might not be, correcting CMIP5 model output for internal variability using different polynomials and using different unloading patterns of ice for the Greenland and Antarctic ice sheets." }, { "instance_id": "R48401xR48265", "comparison_id": "R48401", "paper_id": "R48265", "text": "Probabilistic 21st and 22nd century sea-level projections at a global network of tide-gauge sites. Sea-level rise due to both climate change and non-climatic factors threatens coastal settlements, infrastructure, and ecosystems. Projections of mean global sea-level (GSL) rise provide insufficient information to plan adaptive responses; local decisions require local projections that accommodate different risk tolerances and time frames and that can be linked to storm surge projections. Here we present a global set of local sea-level (LSL) projections to inform decisions on timescales ranging from the coming decades through the 22nd century. We provide complete probability distributions, informed by a combination of expert community assessment, expert elicitation, and process modeling. Between the years 2000 and 2100, we project a very likely (90% probability) GSL rise of 0.5\u20131.2 m under representative concentration pathway (RCP) 8.5, 0.4\u20130.9 m under RCP 4.5, and 0.3\u20130.8 m under RCP 2.6. Site-to-site differences in LSL projections are due to varying non-climatic background uplift or subsidence, oceanographic effects, and spatially variable responses of the geoid and the lithosphere to shrinking land ice. 
The Antarctic ice sheet (AIS) constitutes a growing share of variance in GSL and LSL projections. In the global average and at many locations, it is the dominant source of variance in late 21st century projections, though at some sites oceanographic processes contribute the largest share throughout the century. LSL rise dramatically reshapes flood risk, greatly increasing the expected number of \u201c1-in-10\u201d and \u201c1-in-100\u201d year events." }, { "instance_id": "R48401xR48353", "comparison_id": "R48401", "paper_id": "R48353", "text": "Future sea level rise constrained by observations and long-term commitment Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28\u201356 cm, 37\u201377 cm, and 57\u2013131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The \u201cconstrained extrapolation\u201d approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections." 
}, { "instance_id": "R48401xR48309", "comparison_id": "R48401", "paper_id": "R48309", "text": "Uncertainty in Sea Level Rise Projections Due to the Dependence Between Contributors Using two process-based models to project sea level for the 21st century, it is shown that taking into account the correlation between sea level contributors is important to better quantify the uncertainty of future sea level. In these models the correlation primarily arises from global mean surface temperature that simultaneously leads to more or less ice melt and thermal expansion. Assuming that sea level contributors are independent of each other underestimates the uncertainty in sea level projections. As a result, high-end low probability events that are important for decision making are underestimated. For a probabilistic model it is shown that the 95th percentile of the total sea level rise distribution at the end of the 21st century is underestimated by 5 cm for the RCP4.5 scenario under the independent assumption. This underestimation is up to 16 cm for the 99.9th percentile of the RCP8.5 scenario. On the other hand, assuming perfect correlation overestimates the uncertainty. The strength of the dependence between contributors is difficult to constrain from observations so its uncertainty is also explored. New dependence relations between the uncertainty of dynamical processes and surface mass balance in glaciers and ice caps and in the Antarctic and Greenland ice sheets are introduced in our model. Total sea level uncertainty is found to be as sensitive to the dependence between contributors as to uncertainty in individual contributors like thermal expansion and the Greenland ice sheet. 
" }, { "instance_id": "R48401xR48337", "comparison_id": "R48401", "paper_id": "R48337", "text": "Impacts of Antarctic fast dynamics on sea-level projections and coastal flood defense Strategies to manage the risks posed by future sea-level rise hinge on a sound characterization of the inherent uncertainties. One of the major uncertainties is the possible rapid disintegration of large fractions of the Antarctic ice sheet in response to rising global temperatures. This could potentially lead to several meters of sea-level rise during the next few centuries. Previous studies have typically been silent on two coupled questions: (i) What are probabilistic estimates of this \u201cfast dynamic\u201d contribution to sea-level rise? (ii) What are the implications for strategies to manage coastal flooding risks? Here, we present probabilistic hindcasts and projections of sea-level rise to 2100. The fast dynamic mechanism is approximated by a simple parameterization, designed to allow for a careful quantification of the uncertainty in its contribution to sea-level rise. We estimate that global temperature increases ranging from 1.9 to 3.1 \u00b0C coincide with fast Antarctic disintegration, and these contributions account for sea-level rise of 21\u201374 cm this century (5\u201395% range, Representative Concentration Pathway 8.5). We use a simple cost-benefit analysis of coastal defense to demonstrate in a didactic exercise how neglecting this mechanism and associated uncertainty can (i) lead to strategies which fall sizably short of protection targets and (ii) increase the expected net costs." }, { "instance_id": "R48401xR48367", "comparison_id": "R48401", "paper_id": "R48367", "text": "Linking sea level rise and socioeconomic indicators under the Shared Socioeconomic Pathways In order to assess future sea level rise and its societal impacts, we need to study climate change pathways combined with different scenarios of socioeconomic development. 
Here, we present Sea Level Rise (SLR) projections for the Shared Socioeconomic Pathway (SSP) storylines and different year-2100 radiative Forcing Targets (FTs). Future SLR is estimated with a comprehensive SLR emulator that accounts for Antarctic rapid discharge from hydrofracturing and ice cliff instability. Across all baseline scenario realizations (no dedicated climate mitigation), we find 2100 median SLR relative to 1986-2005 of 89 cm (likely range: 57 to 130 cm) for SSP1, 105 cm (73 to 150 cm) for SSP2, 105 cm (75 to 147 cm) for SSP3, 93 cm (63 to 133 cm) for SSP4, and 132 cm (95 to 189 cm) for SSP5. The 2100 sea level responses for combined SSP-FT scenarios are dominated by the mitigation targets and yield median estimates of 52 cm (34 to 75 cm) for FT 2.6 Wm-2, 62 cm (40 to 96 cm) for FT 3.4 Wm-2, 75 cm (47 to 113 cm) for FT 4.5 Wm-2, and 91 cm (61 to 132 cm) for FT 6.0 Wm-2. Average 2081-2100 annual SLR rates are 5 mm yr-1 and 19 mm yr-1 for FT 2.6 Wm-2 and the baseline scenarios, respectively. Our model setup allows linking scenario-specific emission and socioeconomic indicators to projected SLR. We find that 2100 median SSP SLR projections could be limited to around 50 cm if 2050 cumulative CO2 emissions since pre-industrial stay below 850 GtC, with a global coal phase-out nearly completed by that time. For SSP mitigation scenarios, a 2050 carbon price of 100 US$2005 tCO2-1 would correspond to a median 2100 SLR of around 65 cm. Our results confirm that rapid and early emission reductions are essential for limiting 2100 SLR." }, { "instance_id": "R48401xR48279", "comparison_id": "R48401", "paper_id": "R48279", "text": "Evolving Understanding of Antarctic Ice\u2010Sheet Physics and Ambiguity in Probabilistic Sea\u2010Level Projections Mechanisms such as ice-shelf hydrofracturing and ice-cliff collapse may rapidly increase discharge from marine-based ice sheets. 
Here, we link a probabilistic framework for sea-level projections to a small ensemble of Antarctic ice-sheet (AIS) simulations incorporating these physical processes to explore their influence on global-mean sea-level (GMSL) and relative sea-level (RSL). We compare the new projections to past results using expert assessment and structured expert elicitation about AIS changes. Under high greenhouse gas emissions (Representative Concentration Pathway [RCP] 8.5), median projected 21st century GMSL rise increases from 79 to 146 cm. Without protective measures, revised median RSL projections would by 2100 submerge land currently home to 153 million people, an increase of 44 million. The use of a physical model, rather than simple parameterizations assuming constant acceleration of ice loss, increases forcing sensitivity: overlap between the central 90% of simulations for 2100 for RCP 8.5 (93\u2013243 cm) and RCP 2.6 (26\u201398 cm) is minimal. By 2300, the gap between median GMSL estimates for RCP 8.5 and RCP 2.6 reaches >10 m, with median RSL projections for RCP 8.5 jeopardizing land now occupied by 950 million people (versus 167 million for RCP 2.6). The minimal correlation between the contribution of AIS to GMSL by 2050 and that in 2100 and beyond implies current sea-level observations cannot exclude future extreme outcomes. The sensitivity of post-2050 projections to deeply uncertain physics highlights the need for robust decision and adaptive management frameworks." }, { "instance_id": "R52143xR52120", "comparison_id": "R52143", "paper_id": "R52120", "text": "Plant functional group diversity as a mechanism for invasion resistance A commonly cited mechanism for invasion resistance is more complete resource use by diverse plant assemblages with maximum niche complementarity. We investigated the invasion resistance of several plant functional groups against the nonindigenous forb Spotted knapweed (Centaurea maculosa). 
The study consisted of a factorial combination of seven functional group removals (groups singularly or in combination) and two C. maculosa treatments (addition vs. no addition) applied in a randomized complete block design replicated four times at each of two sites. We quantified aboveground plant material nutrient concentration and uptake (concentration \u00d7 biomass) by indigenous functional groups: grasses, shallow-rooted forbs, deep-rooted forbs, spikemoss, and the nonindigenous invader C. maculosa. In 2001, C. maculosa density depended upon which functional groups were removed. The highest C. maculosa densities occurred where all vegetation or all forbs were removed. Centaurea maculosa densities were the lowest in plots where nothing, shallow-rooted forbs, deep-rooted forbs, grasses, or spikemoss were removed. Functional group biomass was also collected and analyzed for nitrogen, phosphorus, potassium, and sulphur. Based on covariate analyses, post-removal indigenous plot biomass did not relate to invasion by C. maculosa. Analysis of variance indicated that C. maculosa tissue nutrient percentage and net nutrient uptake were most similar to indigenous forb functional groups. Our study suggests that establishing and maintaining a diversity of plant functional groups within the plant community enhances resistance to invasion. Indigenous plants of functionally similar groups as an invader may be particularly important in invasion resistance." }, { "instance_id": "R52143xR52122", "comparison_id": "R52143", "paper_id": "R52122", "text": "Grassland invader responses to realistic changes in native species richness The importance of species richness for repelling exotic plant invasions varies from ecosystem to ecosystem. Thus, in order to prioritize conservation objectives, it is critical to identify those ecosystems where decreasing richness will most greatly magnify invasion risks. 
Our goal was to determine if invasion risks greatly increase in response to common reductions in grassland species richness. We imposed treatments that mimic management-induced reductions in grassland species richness (i.e., removal of shallow- and/or deep-rooted forbs and/or grasses and/or cryptogam layers). Then we introduced and monitored the performance of a notorious invasive species (i.e., Centaurea maculosa). We found that, on a per-gram-of-biomass basis, each resident plant group similarly suppressed invader growth. Hence, with respect to preventing C. maculosa invasions, maintaining overall productivity is probably more important than maintaining the productivity of particular plant groups or species. But at the sites we studied, all plant groups may be needed to maintain overall productivity because removing forbs decreased overall productivity in two of three years. Alternatively, removing forbs increased productivity in another year, and this led us to posit that removing forbs may inflate the temporal productivity variance as opposed to greatly affecting time-averaged productivity. In either case, overall productivity responses to single plant group removals were inconsistent and fairly modest, and only when all plant groups were removed did C. maculosa growth increase substantially over a no-removal treatment. As such, it seems that intense disturbances (e.g., prolonged drought, overgrazing) that deplete multiple plant groups may often be a prerequisite for C. maculosa invasion." }, { "instance_id": "R52143xR52075", "comparison_id": "R52143", "paper_id": "R52075", "text": "Overlapping resource use in three Great Basin species: implications for community invasibility and vegetation dynamics Summary 1 In the Great Basin of the western United States of America, the invasive annual grass Bromus tectorum has extensively replaced native shrub and bunchgrass communities, but the native bunchgrass Elymus elymoides has been reported to suppress Bromus. 
Curlew Valley, a site in Northern Utah, provides a model community to test the effects of particular species on invasion by examining competitive relationships among Elymus, Bromus and the native shrub Artemisia tridentata. 2 The site contains Bromus/Elymus, Elymus/Artemisia and monodominant Elymus stands. Transect data indicate that Elymus suppresses Bromus disproportionately relative to its above-ground cover. Artemisia seedlings recruit in Elymus stands but rarely in the presence of Bromus. This relationship might be explained by competition between the two grasses involving a different resource or occurring in a different season to that between each grass and Artemisia. 3 Time reflectometry data collected in monodominant patches indicated that in spring, soil moisture use by Bromus is rapid, whereas depletion under Elymus and Artemisia is more moderate. Artemisia seedlings may therefore encounter a similar moisture environment in monodominant or mixed perennial stands. However, efficient autumn soil moisture use by Elymus may help suppress Bromus. 4 In competition plots, target Artemisia grown with Bromus were stunted relative to those grown with Elymus, despite equivalent above-ground biomass of the two grasses. Competition for nitrogen in spring and autumn, assessed with 15N tracer, appears to be secondary to moisture availability in determining competitive outcomes. 5 Elymus physiology and function appear to play an important role in determining the composition of communities in Curlew Valley, by maintaining zones free of Bromus where Artemisia can recruit. Key-words: 15N tracer, Artemisia tridentata, Bromus tectorum, cheatgrass, Elymus elymoides, Great Basin, N uptake, resource competition, semiarid, Sitanion hystrix." }, { "instance_id": "R52143xR52133", "comparison_id": "R52143", "paper_id": "R52133", "text": "Is phylogenetic relatedness to native species important for the establishment of reptiles introduced to California and Florida? 
Aim Charles Darwin posited that introduced species with close relatives were less likely to succeed because of fiercer competition resulting from their similarity to residents. There is much debate about the generality of this rule, and recent studies on plant and fish introductions have been inconclusive. Information on phylogenetic relatedness is potentially valuable for explaining invasion outcomes and could form part of screening protocols for minimizing future invasions. We provide the first test of this hypothesis for terrestrial vertebrates using two new molecular phylogenies for native and introduced reptiles for two regions with the best data on introduction histories. Location California and Florida, USA. Methods We performed an ordination of ecological traits to confirm that ecologically similar species are indeed closely related phylogenetically. We then inferred molecular phylogenies for introduced and native reptiles using sequence data for two nuclear and three mitochondrial genes. Using these phylogenies, we computed two distance metrics: the mean phylogenetic distance (MPD) between each introduced species and all native species in each region (which indicates the potential interactions between introduced species and all native species in the community) and the distance of each introduced species to its nearest native relative \u2013 NN (indicating the degree of similarity and associated likelihood of competition between each introduced species and its closest evolutionary analogue). These metrics were compared for introduced species that established and those that failed. Results We demonstrate that phylogenetically related species do share similar ecological functions. Furthermore, successfully introduced species are more distantly related to natives (for NN and MPD) than failed species, although variation is high. 
Main conclusions The evolutionary history of a region has value for explaining and predicting the outcome of human-driven introductions of reptiles. Phylogenetic metrics are thus useful inputs to multi-factor risk assessments, which are increasingly required for screening introduced species." }, { "instance_id": "R52143xR52098", "comparison_id": "R52143", "paper_id": "R52098", "text": "Community assembly and invasion: An experimental test of neutral versus niche processes A species-addition experiment showed that prairie grasslands have a structured, nonneutral assembly process in which resident species inhibit, via resource consumption, the establishment and growth of species with similar resource use patterns and in which the success of invaders decreases as diversity increases. In our experiment, species in each of four functional guilds were introduced, as seed, into 147 prairie\u2013grassland plots that previously had been established and maintained to have different compositions and diversities. Established species most strongly inhibited introduced species from their own functional guild. Introduced species attained lower abundances when functionally similar species were abundant and when established species left lower levels of resources unconsumed, which occurred at lower species richness. Residents of the C4 grass functional guild, the dominant guild in nearby native grasslands, reduced the major limiting resource, soil nitrate, to the lowest levels in midsummer and exhibited the greatest inhibitory effect on introduced species. This simple mechanism of greater competitive inhibition of invaders that are similar to established abundant species could, in theory, explain many of the patterns observed in plant communities." 
}, { "instance_id": "R52143xR52096", "comparison_id": "R52143", "paper_id": "R52096", "text": "Species-rich Scandinavian grasslands are inherently open to invasion Invasion of native habitats by alien or generalist species is recognized worldwide as one of the major causes behind species decline and extinction. One mechanism determining community invasibility, i.e. the susceptibility of a community to invasion, which has been supported by recent experimental studies, is species richness and functional diversity acting as barriers to invasion. We used Scandinavian semi-natural grasslands, exceptionally species-rich at small spatial scales, to examine this mechanism, using three grassland generalists and one alien species as experimental invaders. Removal of two putative functional groups, legumes and dominant non-legume forbs, had no effect on invasibility except a marginally insignificant effect of non-legume forb removal. The amount of removed biomass and original plot species richness had no effect on invasibility. Actually, invasibility was high already in the unmanipulated community, leading us to further examine the relationship between invasion and propagule pressure, i.e. the inflow of seeds into the community. Results from an additional experiment suggested that these species-rich grasslands are effectively open to invasion and that diversity may be immigration driven. Thus, species richness is no barrier to invasion. The high species diversity is probably in itself a result of the community being highly invasible, and species have accumulated at small scales during centuries of grassland management." }, { "instance_id": "R52143xR52100", "comparison_id": "R52143", "paper_id": "R52100", "text": "Early emergence and resource availability can competitively favour natives over a functionally similar invader Invasive plant species can form dense populations across large tracts of land. 
Based on these observations of dominance, invaders are often described as competitively superior, despite little direct evidence of competitive interactions with natives. The few studies that have measured competitive interactions have tended to compare an invader to natives that are unlikely to be strong competitors because they are functionally different. In this study, we measured competitive interactions among an invasive grass and two Australian native grasses that are functionally similar and widely distributed. We conducted a pair-wise glasshouse experiment, where we manipulated both biotic factors (timing of establishment, neighbour identity and density) and abiotic factors (nutrients and timing of water supply). We found that the invader significantly suppressed the performance of the natives; but its suppression ability was contingent on resource levels, with pulsed water/low nutrients or continuous watering reducing its competitive effects. The native grasses were able to suppress the performance of the invader when given a 3-week head-start, suggesting the invader may be incapable of establishing unless it emerges first, including in its own understorey. These findings provide insight for restoration, as the competitive effect of a functionally similar invader may be reduced by altering abiotic and biotic conditions in favour of natives." }, { "instance_id": "R52143xR52088", "comparison_id": "R52143", "paper_id": "R52088", "text": "Evidence of deterministic assembly according to flowering time in an old-field plant community Summary 1. Theory has produced contrasting predictions related to flowering time overlap among coexisting plant species largely because of the diversity of potential influences on flowering time. In this study, we use a trait-based null modelling approach to test for evidence of deterministic assembly of species according to flowering time in an old-field plant community. 2. 
Plant species coexisting in one-metre-square plots overlapped in flowering time significantly more than expected. This flowering synchrony was more pronounced when analyses focused on bee-pollinated species. Flowering synchrony was also observed for wind-pollinated species, although for only one of our two null model tests, highlighting the sensitivity of some results to different randomization methods. In general, these patterns suggest that relationships between pollinators and plants can influence community assembly processes. 3. Because our study community is composed of approximately 43% native plant species and 57% exotic species, and because the arrival of new species may complicate plant\u2013pollinator interactions, we tested whether flowering time overlap was altered by introduced species. Flowering synchrony was greater in plots with a higher proportion of introduced species. This pattern held for both null model tests, but was slightly stronger when analyses focused on bee-pollinated species. These results indicate that introduced species alter community flowering distributions and in so doing will inevitably affect pollinator\u2013plant interactions. 4. Finally, we tested whether our results were influenced by variation among study plots in above-ground biomass production, which some theory predicts will be related to the importance of competition. Our results were not influenced by this variation, suggesting that resource variation among our plots did not contribute to observed patterns. 5. Synthesis: Our results provide support for predictions that coexisting species should display flowering synchrony, and provide no support for species coexistence via temporal niche partitioning at this scale in this study community. Our results also indicate that introduced species significantly alter the community assembly process such that flowering synchrony is more pronounced in plots with a greater proportion of introduced plant species." 
}, { "instance_id": "R52143xR52109", "comparison_id": "R52143", "paper_id": "R52109", "text": "Establishment and Management of Native Functional Groups in Restoration The limiting similarity hypothesis predicts that communities should be more resistant to invasion by non-natives when they include natives with a diversity of traits from more than one functional group. In restoration, planting natives with a diversity of traits may result in competition between natives of different functional groups and may influence the efficacy of different seeding and maintenance methods, potentially impacting native establishment. We compare initial establishment and first-year performance of natives and the effectiveness of maintenance techniques in uniform versus mixed functional group plantings. We seeded ruderal herbaceous natives, longer-lived shrubby natives, or a mixture of the two functional groups using drill- and hand-seeding methods. Non-natives were left undisturbed, removed by hand-weeding and mowing, or treated with herbicide to test maintenance methods in a factorial design. Native functional groups had highest establishment, growth, and reproduction when planted alone, and hand-seeding resulted in more natives as well as more of the most common invasive, Brassica nigra. Wick herbicide removed more non-natives and resulted in greater reproduction of natives, while hand-weeding and mowing increased native density. Our results point to the importance of considering competition among native functional groups as well as between natives and invasives in restoration. Interactions among functional groups, seeding methods, and maintenance techniques indicate restoration will be easier to implement when natives with different traits are planted separately." 
}, { "instance_id": "R52143xR52083", "comparison_id": "R52143", "paper_id": "R52083", "text": "Plant traits across different habitats of the Italian Alps: a comparative analysis between native and alien species While it is well known that the success of alien plants in new environments greatly depends on their functional traits, to date only a few other studies have tested whether coexisting alien and native species show converging or diverging functional attributes. To our knowledge, no comparative analysis between native and alien species has been carried out in the same mountain habitats. We characterized the main habitats of the Italian Alps on the basis of plant species traits and we then tested for evidence of functional axes of variation among the habitats for native and alien plants. Finally, we tested the \u2018try-harder\u2019 and the \u2018join-the-locals\u2019 hypotheses to understand whether coexisting native and alien plant species showed converging or diverging functional attributes. Ordination analysis showed a distribution of the habitats according to the Grime\u2019s CSR strategies, and associated to plant growth form and resource acquisition. Co-inertia analysis showed a significant association between native and alien plant traits at habitat level (RV = 0.73; Monte-Carlo test, p = 0.035). Across all species and habitats, the comparative analysis of individual traits showed that alien species have 25% higher plant height, 250% higher leaf mass, 19% lower leaf dry matter content, 10% higher SLA, and 17% longer flowering duration than native species. Overall, our findings demonstrated that aliens differ in many traits from native species in the Italian Alps, but that many of these differences disappear when one compares aliens and natives that co-occur in the same types of habitats." 
}, { "instance_id": "R52143xR52102", "comparison_id": "R52143", "paper_id": "R52102", "text": "Functional differences between alien and native species: do biotic interactions determine the functional structure of highly invaded grasslands? Summary 1. Although observed functional differences between alien and native plant species support the idea that invasions are favoured by niche differentiation (ND), when considering invasions along large ecological gradients, habitat filtering (HF) has been proposed to constrain alien species such that they exhibit similar trait values to natives. 2. To reconcile these contrasting observations, we used a multiscale approach using plant functional traits to evaluate how biotic interactions with native species and grazing might determine the functional structure of highly invaded grasslands along an elevation gradient in New Zealand. 3. At a regional scale, functional differences between alien and native plant species translated into nonrandom community assembly and high ND. Alien and native species showed contrasting responses to elevation and the degree of ND between them decreased as elevation increased, suggesting a role for HF. At the plant-neighbourhood scale, species with contrasting traits were generally spatially segregated, highlighting the impact of biotic interactions in structuring local plant communities. A confirmatory multilevel path analysis showed that the effect of elevation and grazing was moderated by the presence of native species, which in turn influenced the local abundance of alien species. 4. Our study showed that functional differences between aliens and natives are fundamental to understand the interplay between multiple mechanisms driving alien species success and their coexistence with natives. 
In particular, the success of alien species is driven by the presence of native species which can have a negative (biotic resistance) or a positive (facilitation) effect depending on the functional identity of alien species." }, { "instance_id": "R52143xR52131", "comparison_id": "R52143", "paper_id": "R52131", "text": "Experimental invasion by legumes reveals non-random assembly rules in grassland communities 1. Although experimental studies usually reveal that resistance to invasion increases with species diversity, observational studies sometimes show the opposite trend. The higher resistance of diverse plots to invasion may be partly due to the increased probability of a plot containing a species with similar resource requirements to the invader. 2. We conducted a study of the invasibility of monocultures belonging to three different functional groups by seven sown species of legume. By only using experimentally established monocultures, rather than manipulating the abundance of particular functional groups, we removed both species diversity and differences in underlying abiotic conditions as potentially confounding variables. 3. We found that legume monocultures were more resistant than monocultures of grasses or non-leguminous forbs to invasion by sown legumes but not to invasion by other unsown species. The functional group effect remained after controlling for differences in total biomass and the average height of the above-ground biomass. 4. The relative success of legume species and types also varied with monoculture characteristics. The proportional biomass of climbing legumes increased strongly with biomass height in non-leguminous forb monocultures, while it declined with biomass height in grass monocultures. Trifolium pratense was the most successful invader in grass monocultures, while Vicia cracca was the most successful in non-leguminous forb monocultures. 5. 
Our results suggest that non-random assembly rules operate in grassland communities both between and within functional groups. Legume invaders found it much more difficult to invade legume plots, while grass and non-leguminous forb plots favoured non-climbing and climbing legumes, respectively. If plots mimic monospecific patches, the effect of these assembly rules in diverse communities might depend upon the patch structure of diverse communities. This dependency on patch structure may contribute to differences in results of research from experimental vs. natural communities." }, { "instance_id": "R52143xR52135", "comparison_id": "R52143", "paper_id": "R52135", "text": "Testing Fox's assembly rule: does plant invasion depend on recipient community structure? Fox's assembly rule, that relative dearth of certain functional groups in a community will facilitate invasion of that particular functional group, serves as the basis for investigation into the functional group effects of invasion resistance. We explored resistance to plant invaders by eliminating or decreasing the number of understory plant species in particular functional groups from plots at a riparian site in southwestern Virginia, USA. Our functional groups comprise combinations of aboveground biomass and rooting structure type. Manipulated plots were planted with 10 randomly chosen species from widespread native and introduced plants commonly found throughout the floodplains of Big Stony Creek. We assessed success of an invasion by plant survivorship and growth. We analyzed survivorship of functional groups with loglinear models for the analysis of categorical data in a 4-way table. There was a significant interaction between functional groups removed in a plot and survivorship in the functional groups added to that plot. 
However, survivorship of species in functional groups introduced into plots with their respective functional group removed did not differ from survivorship when any other functional group was removed. Additionally, growth of each of the most abundant species did not differ significantly among plots with different functional groups manipulated. Specifically, species did not fare better in those plots that had representatives of their own functional group removed. Fox's assembly rule does not hold for these functional groups in this plant community; however, composition of the recipient community is a significant factor in community assembly." }, { "instance_id": "R52143xR52124", "comparison_id": "R52143", "paper_id": "R52124", "text": "Resistance of Native Plant Functional Groups to Invasion by Medusahead (Taeniatherum caput-medusae) Abstract Understanding the relative importance of various functional groups in minimizing invasion by medusahead is central to increasing the resistance of native plant communities. The objective of this study was to determine the relative importance of key functional groups within an intact Wyoming big sagebrush\u2013bluebunch wheatgrass community type on minimizing medusahead invasion. Treatments consisted of removal of seven functional groups at each of two sites, one with shrubs and one without shrubs. Removal treatments included (1) everything, (2) shrubs, (3) perennial grasses, (4) taprooted forbs, (5) rhizomatous forbs, (6) annual forbs, and (7) mosses. A control where nothing was removed was also established. Plots were arranged in a randomized complete block with 4 replications (blocks) at each site. Functional groups were removed beginning in the spring of 2004 and maintained monthly throughout each growing season through 2009. Medusahead was seeded at a rate of 2,000 seeds m\u22122 (186 seeds ft\u22122) in fall 2005. 
Removing perennial grasses nearly doubled medusahead density and biomass compared with any other removal treatment. The second highest density and biomass of medusahead occurred from removing rhizomatous forbs (phlox). We found perennial grasses played a relatively more significant role than other species in minimizing invasion by medusahead. We suggest that the most effective basis for establishing medusahead-resistant plant communities is to establish 2 or 3 highly productive grasses that are complementary in niche and that overlap that of the invading species." }, { "instance_id": "R52143xR52138", "comparison_id": "R52143", "paper_id": "R52138", "text": "The role of diversity and functional traits of species in community invasibility The invasion of exotic species into assemblages of native plants is a pervasive and widespread phenomenon. Many theoretical and observational studies suggest that diverse communities are more resistant to invasion by exotic species than less diverse ones. However, experimental results do not always support such a relationship. Therefore, the hypothesis of diversity-community invasibility is still a focus of controversy in the field of invasion ecology. In this study, we established and manipulated communities with different species diversity and different species functional groups (16 species belong to C3, C4, forbs and legumes, respectively) to test Elton's hypothesis and other relevant hypotheses by studying the process of invasion. Alligator weed (Alternanthera philoxeroides) was chosen as the invader. We found that the correlation between the decrement of extractable soil nitrogen and biomass of alligator weed was not significant, and that species diversity, independent of functional groups diversity, did not show a significant correlation with invasibility. However, the communities with higher functional groups diversity significantly reduced the biomass of alligator weed by decreasing its resource opportunity. 
Functional traits of species also influenced the success of the invasion. Alternanthera sessilis, in the same morphological and functional group as alligator weed, was significantly resistant to alligator weed invasion. Because community invasibility is influenced by many factors and interactions among them, the pattern and mechanisms of community invasibility are likely to be far subtler than we found in this study. More careful manipulated experiments coupled with theoretical modeling studies are essential steps to a more profound understanding of community invasibility." }, { "instance_id": "R52143xR52129", "comparison_id": "R52143", "paper_id": "R52129", "text": "A test of the effects of functional group richness and composition on grassland invasibility Although many theoretical and observational studies suggest that diverse systems are more resistant to invasion by novel species than are less diverse systems, experimental data are uncommon. In this experiment, I manipulated the functional group richness and composition of a grassland community to test two related hypotheses: (1) Diversity and invasion resistance are positively related through diversity's effects on the resources necessary for invading plants' growth. (2) Plant communities resist invasion by species in functional groups already present in the community. To test these hypotheses, I removed plant functional groups (forbs, C3 graminoids, and C4 graminoids) from existing grassland vegetation to create communities that contained all possible combinations of one, two, or three functional groups. After three years of growth, I added seeds of 16 different native prairie species (legumes, nonleguminous forbs, C3 graminoids, and C4 graminoids) to a 1 \u00d7 1 m portion of each 4 \u00d7 8 m plot. Overall invasion success was negatively related to resident functional group richness, but there was only weak evidence that resident species repelled functionally similar invaders. 
A weak effect of functional group richness on some resources did not explain the significant diversity-invasibility relationship. Other factors, particularly the different responses of resident functional groups to the initial disturbance of the experimental manipulation, seem to have been more important to community invasibility." }, { "instance_id": "R52143xR52077", "comparison_id": "R52143", "paper_id": "R52077", "text": "Plant functional group identity and diversity determine biotic resistance to invasion by an exotic grass Summary 1. Biotic resistance, the ability of species in a community to limit invasion, is central to our understanding of how communities at risk of invasion assemble after disturbances, but it has yet to translate into guiding principles for the restoration of invasion-resistant plant communities. We combined experimental, functional, and modelling approaches to investigate processes of community assembly contributing to biotic resistance to an introduced lineage of Phragmites australis, a model invasive species in North America. We hypothesized that (i) functional group identity would be a good predictor of biotic resistance to P. australis, while species identity effect would be redundant within functional group (ii) mixtures of species would be more invasion resistant than monocultures. 2. We classified 36 resident wetland plants into four functional groups based on eight functional traits. We conducted two competition experiments based on the additive competition design with P. australis and monocultures or mixtures of wetland plants. As an indicator of biotic resistance, we calculated a relative competition index (RCIavg) based on the average performance of P. australis in competition treatment compared with control. To explain diversity effect further, we partitioned it into selection effect and complementarity effect and tested several diversity\u2010interaction models. 3. 
In monoculture treatments, RCIavg of wetland plants was significantly different among functional groups, but not within each functional group. We found the highest RCIavg for fast-growing annuals, suggesting priority effect. 4. RCIavg of wetland plants was significantly greater in mixture than in monoculture mainly due to complementarity\u2010diversity effect among functional groups. In diversity\u2010interaction models, species interaction patterns in mixtures were described best by interactions between functional groups when fitted to RCIavg or biomass, implying niche partitioning. 5. Synthesis. Functional group identity and diversity of resident plant communities are good indicators of biotic resistance to invasion by introduced Phragmites australis, suggesting niche preemption (priority effect) and niche partitioning (diversity effect) as underlying mechanisms. Guiding principles to understand and/or manage biological invasion could emerge from advances in community theory and the use of a functional framework. Targeting widely distributed invasive plants in different contexts and scaling up to field situations will facilitate generalization." }, { "instance_id": "R52143xR52081", "comparison_id": "R52143", "paper_id": "R52081", "text": "Strengthening Invasion Filters to Reassemble Native Plant Communities: Soil Resources and Phenological Overlap Preventing invasion by exotic species is one of the key goals of restoration, and community assembly theory provides testable predictions about native community attributes that will best resist invasion. For instance, resource availability and biotic interactions may represent \u201cfilters\u201d that limit the success of potential invaders. Communities are predicted to resist invasion when they contain native species that are functionally similar to potential invaders; where phenology may be a key functional trait. 
Nutrient reduction is another common strategy for reducing invasion following native species restoration, because soil nitrogen (N) enrichment often facilitates invasion. Here, we focus on restoring the herbaceous community associated with coastal sage scrub vegetation in Southern California; these communities are often highly invaded, especially by exotic annual grasses that are notoriously challenging for restoration. We created experimental plant communities composed of the same 20 native species, but manipulated functional group abundance (according to growth form, phenology, and N-fixation capacity) and soil N availability. We fertilized to increase N, and added carbon to reduce N via microbial N immobilization. We found that N reduction decreased exotic cover, and the most successful seed mix for reducing exotic abundance varied depending on the invader functional type. For instance, exotic annual grasses were least abundant when the native community was dominated by early active forbs, which matched the phenology of the exotic annual grasses. Our findings show that nutrient availability and the timing of biotic interactions are key filters that can be manipulated in restoration to prevent invasion and maximize native species recovery." }, { "instance_id": "R52143xR52104", "comparison_id": "R52143", "paper_id": "R52104", "text": "Assembly rules operating along a primary riverbed-grassland successional sequence Summary 1 Assembly rules are broadly defined as any filter imposed on a regional species pool that acts to determine the local community structure and composition. Environmental filtering is thought to result in the formation of groups of species with similar traits that tend to co-occur more often than expected by chance alone, known as Beta guilds. 
At a smaller scale, within a single Beta guild, species may be partitioned into Alpha guilds \u2013 groups of species that have similar resource use and hence should tend not to co-occur at small scales due to the principle of limiting similarity. 2 This research investigates the effects of successional age and the presence of an invasive exotic species on Alpha and Beta guild structuring within plant communities along two successional river terrace sequences in the Waimakariri braided river system in New Zealand. 3 Fifteen sites were sampled, six with and nine without the Russell lupin (Lupinus polyphyllus), an invasive exotic species. At each site, species presence/absence was recorded in 100 circular quadrats (5 cm in diameter) at 30-cm intervals along a 30-m transect. Guild proportionality (Alpha guild structuring) was tested for using two a priori guild classifications each containing three guilds, and cluster analysis was used to test for environmental structuring between sites. 4 Significant assembly rules based on Alpha guild structuring were found, particularly for the monocot and dicot guild. Guild proportionality increased with increasing ecological age, which indicated an increase in the relative importance of competitive structuring at later stages of succession. This provides empirical support for Weiher and Keddy's theoretical model of community assembly. 5 Lupins were associated with altered Alpha and Beta guild structuring at early mid successional sites. Lupin-containing sites had higher silt content than sites without lupins, and this could have altered the strength and scale of competitive structuring within the communities present. 6 This research adds to the increasing evidence for the existence of assembly rules based on limiting similarity within plant communities, and demonstrates the need to incorporate gradients of environmental and competitive adversity when investigating the rules that govern community assembly." 
}, { "instance_id": "R52143xR52112", "comparison_id": "R52143", "paper_id": "R52112", "text": "Do alien plants on Mediterranean islands tend to invade different niches from native species? In order to understand invasions, it is important to know how alien species exploit opportunities in unfamiliar ecosystems. For example, are aliens concentrated in niches under-exploited by native communities, or widely distributed across the ecological spectrum? To explore this question, we compared the niches occupied by 394 naturalized alien plants with a representative sample from the native flora of Mediterranean islands. When niche structure was described by a functional group categorization, the distribution of native and alien species was remarkably similar, although \u201csucculent shrubs\u201d and \u201ctrees with specialized animal pollination mechanisms\u201d were under-represented in the native species pool. When niche structure was described by Grime\u2019s CSR strategy, the positioning of aliens and natives differed more strongly. Stress-tolerance was much rarer amongst the aliens, and a competitive strategy was more prevalent at the habitat level. This pattern is similar to previous findings in temperate Europe, although in those regions it closely reflects patterns of native diversity. Stressed environments are much more dominant in the Mediterranean. We discuss a number of factors which may contribute to this difference, e.g., competitive and ruderal niches are often associated with anthropogenic habitats, and their high invasibility may be due partly to introduction patterns rather than to a greater efficiency of aliens at exploiting them. Thus far, the reasons for invasion success amongst introduced species have proved difficult to unravel. Despite some differences, our evidence suggests that alien species naturalize across a wide range of niches. 
Given that their ecologies therefore vary greatly, one may ask why such species should be expected to share predictable traits at all?" }, { "instance_id": "R52143xR52114", "comparison_id": "R52143", "paper_id": "R52114", "text": "Using prairie restoration to curtail invasion of Canada thistle: the importance of limiting similarity and seed mix richness Theory has predicted, and many experimental studies have confirmed, that resident plant species richness is inversely related to invasibility. Likewise, potential invaders that are functionally similar to resident plant species are less likely to invade than are those from different functional groups. Neither of these ideas has been tested in the context of an operational prairie restoration. Here, we tested the hypotheses that within tallgrass prairie restorations (1) as seed mix species richness increased, cover of the invasive perennial forb, Canada thistle (Cirsium arvense) would decline; and (2) guilds (both planted and arising from the seedbank) most similar to Canada thistle would have a larger negative effect on it than less similar guilds. Each hypothesis was tested on six former agricultural fields restored to tallgrass prairie in 2005; all were within the tallgrass prairie biome in Minnesota, USA. A mixed-model with repeated measures (years) in a randomized block (fields) design indicated that seed mix richness had no effect on cover of Canada thistle. Structural equation models assessing effects of cover of each planted and non-planted guild on cover of Canada thistle in 2006, 2007, and 2010 revealed that planted Asteraceae never had a negative effect on Canada thistle. In contrast, planted cool-season grasses and non-Asteraceae forbs, and many non-planted guilds had negative effects on Canada thistle cover. 
We conclude that early, robust establishment of native species, regardless of guild, is of greater importance in resistance to Canada thistle than is similarity of guilds in new prairie restorations." }, { "instance_id": "R53407xR53360", "comparison_id": "R53407", "paper_id": "R53360", "text": "Introduction pathway and climate trump ecology and life history as predictors of establishment success in alien frogs and toads A major goal for ecology and evolution is to understand how abiotic and biotic factors shape patterns of biological diversity. Here, we show that variation in establishment success of nonnative frogs and toads is primarily explained by variation in introduction pathways and climatic similarity between the native range and introduction locality, with minor contributions from phylogeny, species ecology, and life history. This finding contrasts with recent evidence that particular species characteristics promote evolutionary range expansion and reduce the probability of extinction in native populations of amphibians, emphasizing how different mechanisms may shape species distributions on different temporal and spatial scales. We suggest that contemporary changes in the distribution of amphibians will be primarily determined by human-mediated extinctions and movement of species within climatic envelopes, and less by species-typical traits." }, { "instance_id": "R53407xR53363", "comparison_id": "R53407", "paper_id": "R53363", "text": "A theory of seed plant invasiveness: The first sketch Abstract Although biological invasions are clearly one of the most important impacts humans have had on the Earth's ecosystems, we still do not have reliable tools which can help us to predict which species are potential invaders. 
At present, several limited generalizations are available for seed plants: (1) invasiveness of woody species in disturbed landscapes is significantly associated with small seed mass, short juvenile period, and short mean interval between large seed crops; (2) vertebrate dispersal is responsible for the success of many woody invaders in disturbed as well as \u2018undisturbed\u2019 habitats; (3) primary (native) latitudinal range of herbaceous Gramineae, Compositae, and Fabaceae seems to be the best predictor of their invasiveness, at least for species introduced from Eurasia to North America; (4) low nuclear DNA content (genome size) seems to be a result of selection for short minimum generation time and, therefore, may be associated with plant invasiveness in disturbed landscapes; (5) analysis of exotic Gramineae and Compositae introduced from Europe to California supports Darwin's suggestion that alien species belonging to exotic genera are more likely to be invasive than alien species from genera represented in the native flora. Fortunately, these seemingly disparate stories can be brought together and provide a foundation for building a general theory of seed plant invasiveness." }, { "instance_id": "R53407xR53322", "comparison_id": "R53407", "paper_id": "R53322", "text": "Is invasiveness a legacy of evolution? Phylogenetic patterns in the alien flora of Mediterranean islands Summary 1 The Mediterranean region has been invaded by a wide range of introduced plant species which differ greatly in their ecology, morphology and human utilization. In order to identify a suite of traits which characterize invasiveness, recent studies have advocated the use of evolutionary relationships to unravel highly confounded influences. 2 This study attempts to identify an evolutionary component to invasiveness and other complex invasion-related traits in the Mediterranean alien flora using an autocorrelation technique, the \u2018phylogenetic association test\u2019. 
I compared a traditional hierarchical taxonomy with the recent phylogeny of the Angiosperm Phylogeny Group. 3 Invasiveness did not have a significant phylogenetic component. Any weak clustering was generally at the genus level. 4 Several associated \u2018meta-traits\u2019 (high introduction frequency, adaptation to several habitat types and favourability for different modes of introduction), exhibited stronger phylogenetic components. Although each of these conveys some of the attributes of invasiveness, their clustering patterns differed considerably, suggesting that they arise from independent evolutionary pressures. Furthermore, within each meta-trait, different clusters may have been selected for different reasons. 5 Other reasons for the lack of a detectable evolutionary component to invasiveness are discussed. Firstly, the results of our test simulations suggested that incorrect phylogeny could result in a moderate degree of error. Secondly, over evolutionary time, complex or stochastic events such as ecosystem change could radically alter the adaptive advantages of particular traits. 6 Synthesis. Since invasiveness has little phylogenetic component, I argue that it is less likely to be predictable from as yet unidentified traits in any simple way. Although trait syndromes could develop without leaving a phylogenetic pattern, its absence probably indicates that the dominant selective forces are responses to short-term ecological shifts, and a greater mechanistic understanding of these is needed." 
}, { "instance_id": "R53407xR53295", "comparison_id": "R53407", "paper_id": "R53295", "text": "Establishment of introduced reptiles increases with the presence and richness of native congeners Darwin proposed two contradictory hypotheses to explain the influence of congeners on the outcomes of invasion: the naturalization hypothesis, which predicts a negative relationship between the presence of congeners and invasion success, and the pre-adaptation hypothesis, which predicts a positive relationship between the presence of congeners and invasion success. Studies testing these hypotheses have shown mixed support. We tested these hypotheses using the establishment success of non-native reptiles and congener presence/absence and richness across the globe. Our results demonstrated support for the pre-adaptation hypothesis. We found that globally, both on islands and continents, establishment success was higher in the presence than in the absence of congeners and that establishment success increased with increasing congener richness. At the life form level, establishment success was higher for lizards, marginally higher for snakes, and not different for turtles in the presence of congeners; data were insufficient to test the hypotheses for crocodiles. There was no relationship between establishment success and congener richness for any life form. We suggest that we found support for the pre-adaptation hypothesis because, at the scale of our analysis, native congeners represent environmental conditions appropriate for the species rather than competition for niche space. Our results imply that areas to target for early detection of non-native reptiles are those that host closely related species." }, { "instance_id": "R53407xR53331", "comparison_id": "R53407", "paper_id": "R53331", "text": "How many- and which- plants will invade natural areas? 
Of established nonindigenous plant species in California, Florida, and Tennessee, 5.8%, 9.7%, and 13.4%, respectively, invade natural areas according to designations tabulated by state Exotic Pest Plant Councils. Only Florida accords strictly with the tens rule, though California and Tennessee fall within the range loosely viewed as obeying the rule. The species that invaded natural areas in each state were likely, if they invaded either of the other states at all, to have invaded natural areas there. There was a detectable but inconsistent tendency for species that invade natural areas to come from particular families. At the genus level in California and Florida, and the family level in California, there was also a tendency for natural area invaders to come from taxa that were not represented in the native flora. All three of the above patterns deserve further studies to determine management implications. Only the first (that natural area invaders of one state are likely to invade natural areas if they invade another state) seems firm enough from our data to suggest actions on the part of managers." }, { "instance_id": "R53407xR53325", "comparison_id": "R53407", "paper_id": "R53325", "text": "How strongly do interactions with closely-related native species influence plant invasions? Darwin's naturalization hypothesis assessed on Mediterranean islands Recent works have found the presence of native congeners to have a small effect on the naturalization rates of introduced plants, some suggesting a negative interaction (as proposed by Charles Darwin in The Origin of Species), and others a positive association. We assessed this question for a new biogeographic region, and discuss some of the problems associated with data base analyses of this type. Location Islands of the Mediterranean basin. Presence or absence of congeners was assessed for all naturalized alien plants species at regional, local and habitat scales. 
Using general linear models, we attempted to explain the abundance of the species (as measured by the number of islands where recorded) from their congeneric status, and assessed whether the patterns could be alternatively accounted for by a range of biological, geographical and anthropogenic factors. A simulation model was also used to investigate the impact of a simple bias on a comparable but hypothetical data set. Data base analyses addressing Darwin's hypothesis are prone to bias from a number of sources. Interaction between invaders and congenerics may be overestimated, as they often do not co-occur in the same habitats. Furthermore, intercorrelations between naturalization success and associated factors such as introduction frequency, which are also not independent from relatedness with the native flora, may generate an apparent influence of congenerics without implying a biological interaction. We detected no true influence from related natives on the successful establishment of alien species of the Mediterranean. Rarely-introduced species tended to fare better in the presence of congeners, but it appears that this effect was generated because species introduced accidentally into highly invasible agricultural and ruderal habitats have many relatives in the region, due to common evolutionary origins. Relatedness to the native flora has no more than a marginal influence on the invasion success of alien plants in the Mediterranean, although apparent trends can easily be generated through artefacts of the data base" }, { "instance_id": "R53407xR53266", "comparison_id": "R53407", "paper_id": "R53266", "text": "Darwin's naturalization hypothesis: scale matters in coastal plant communities Darwin proposed two seemingly contradictory hypotheses for a better understanding of biological invasions. 
Strong relatedness of invaders to native communities as an indication of niche overlap could promote naturalization because of appropriate niche adaptation, but could also hamper naturalization because of negative interactions with native species ('Darwin's naturalization hypothesis'). Although these hypotheses provide clear and opposing predictions for expected patterns of species relatedness in invaded communities, so far no study has been able to clearly disentangle the underlying mechanisms. We hypothesize that conflicting past results are mainly due to the neglected role of spatial resolution of the community sampling. In this study, we corroborate both of Darwin's expectations by using phylogenetic relatedness as a measure of niche overlap and by testing the effects of sampling resolution in highly invaded coastal plant communities. At spatial resolutions fine enough to detect signatures of biotic interactions, we find that most invaders are less related to their nearest relative in invaded plant communities than expected by chance (phylogenetic overdispersion). Yet at coarser spatial resolutions, native assemblages become more invasible for closely-related species as a consequence of habitat filtering (phylogenetic clustering). Recognition of the importance of the spatial resolution at which communities are studied allows apparently contrasting theoretical and empirical results to be reconciled. Our study opens new perspectives on how to better detect, differentiate and understand the impact of negative biotic interactions and habitat filtering on the ability of invaders to establish in native communities." }, { "instance_id": "R53407xR53382", "comparison_id": "R53407", "paper_id": "R53382", "text": "Exotic taxa less related to native species are more invasive Some species introduced into new geographical areas from their native ranges wreak ecological and economic havoc in their new environment. 
Although many studies have searched for either species or habitat characteristics that predict invasiveness of exotic species, the match between characteristics of the invader and those of members of the existing native community may be essential to understanding invasiveness. Here, we find that one metric, the phylogenetic relatedness of an invader to the native community, provides a predictive tool for invasiveness. Using a phylogenetic supertree of all grass species in California, we show that highly invasive grass species are, on average, significantly less related to native grasses than are introduced but noninvasive grasses. The match between the invader and the existing native community may explain why exotic pest species are not uniformly noxious in all novel habitats. Relatedness of invaders to the native biota may be one useful criterion for prioritizing management efforts of exotic species." }, { "instance_id": "R53407xR53387", "comparison_id": "R53407", "paper_id": "R53387", "text": "Fish species introductions provide novel insights into the patterns and drivers of phylogenetic structure in freshwaters Despite long-standing interest of terrestrial ecologists, freshwater ecosystems are a fertile, yet unappreciated, testing ground for applying community phylogenetics to uncover mechanisms of species assembly. We quantify phylogenetic clustering and overdispersion of native and non-native fishes of a large river basin in the American Southwest to test for the mechanisms (environmental filtering versus competitive exclusion) and spatial scales influencing community structure. Contrary to expectations, non-native species were phylogenetically clustered and related to natural environmental conditions, whereas native species were not phylogenetically structured, likely reflecting human-related changes to the basin. 
The species that are most invasive (in terms of ecological impacts) tended to be the most phylogenetically divergent from natives across watersheds, but not within watersheds, supporting the hypothesis that Darwin's naturalization conundrum is driven by the spatial scale. Phylogenetic distinctiveness may facilitate non-native establishment at regional scales, but environmental filtering restricts local membership to closely related species with physiological tolerances for current environments. By contrast, native species may have been phylogenetically clustered in historical times, but species loss from contemporary populations by anthropogenic activities has likely shaped the phylogenetic signal. Our study implies that fundamental mechanisms of community assembly have changed, with fundamental consequences for the biogeography of both native and non-native species." }, { "instance_id": "R53407xR53372", "comparison_id": "R53407", "paper_id": "R53372", "text": "Invasiveness of alien plants in Brussels is related to their phylogenetic similarity to native species Aim: Understanding the processes that drive invasion success of alien species has received considerable attention in current ecological research. From an evolutionary point of view, many studies have shown that the phylogenetic similarity between the invader species and the members of the native community may be an important aspect of invasiveness. In this study, using a coarse-scale systematic sampling grid of 1 km2, we explore whether the occupancy frequency of two groups of alien species, archaeophytes and neophytes, in the urban angiosperm flora of Brussels is influenced by their phylogenetic relatedness to native species. Location: The city of Brussels (Belgium). Methods: We used ordinary least-squares regressions and quantile regressions for analysing the relationship between the occupancy frequency of alien species in the sampled grid and their phylogenetic distance to the native species pool. 
Results: Alien species with high occupancy frequency in the sampled grid are, on average, more phylogenetically related to native species than are less frequent aliens, although this relationship is significant only for archaeophytes. In addition, as shown by the quantile regressions, the relationship between phylogenetic relatedness to the native flora and occupancy frequency is much stronger for the most frequent aliens than for rare aliens. Main conclusions: Our data suggest that it is unlikely that species with very low phylogenetic relatedness to natives will become successful invaders with very high distribution in the area studied. To the contrary, under future climate warming scenarios, present-day urban aliens of high occupancy frequency are likely to become successful invaders even outside urban areas. \u00a9 2010 Blackwell Publishing Ltd." }, { "instance_id": "R53407xR53377", "comparison_id": "R53407", "paper_id": "R53377", "text": "Testing Darwin's naturalization hypothesis in the Azores Invasive species are a threat for ecosystems worldwide, especially oceanic islands. Predicting the invasive potential of introduced species remains difficult, and only a few studies have found traits correlated to invasiveness. We produced a molecular phylogenetic dataset and an ecological trait database for the entire Azorean flora and find that the phylogenetic nearest neighbour distance (PNND), a measure of evolutionary relatedness, is significantly correlated with invasiveness. We show that introduced plant species are more likely to become invasive in the absence of closely related species in the native flora of the Azores, verifying Darwin's 'naturalization hypothesis'. In addition, we find that some ecological traits (especially life form and seed size) also have predictive power on invasive success in the Azores. 
Therefore, we suggest a combination of PNND with ecological trait values as a universal predictor of invasiveness that takes into account characteristics of both introduced species and receiving ecosystem." }, { "instance_id": "R53407xR53306", "comparison_id": "R53407", "paper_id": "R53306", "text": "Phylogenetic structure predicts capitular damage to Asteraceae better than origin or phylogenetic distance to natives Exotic species more closely related to native species may be more susceptible to attack by native natural enemies, if host use is phylogenetically conserved. Where this is the case, the use of phylogenies that include co-occurring native and exotic species may help to explain interspecific variation in damage. In this study, we measured damage caused by pre-dispersal seed predators to common native and exotic plants in the family Asteraceae. Damage was then mapped onto a community phylogeny of this family. We tested the predictions that damage is phylogenetically structured, that exotic plants experience lower damage than native species after controlling for this structure, and that phylogenetically novel exotic species would experience lower damage. Consistent with our first prediction, 63% of the variability in damage was phylogenetically structured. When this structure was accounted for, exotic plants experienced significantly lower damage than native plants, but species origin only accounted for 3% of the variability of capitular damage. Finally, there was no support for the phylogenetic novelty prediction. These results suggest that interactions between exotic plants and their seed predators may be strongly influenced by their phylogenetic position, but not by their relationship to locally co-occurring native species. In addition, the influence of a species\u2019 origin on the damage it experiences often may be small relative to phylogenetically conserved traits." 
}, { "instance_id": "R53407xR53319", "comparison_id": "R53407", "paper_id": "R53319", "text": "Colonization plasticity of the boring bivalve Lithophaga aristata (Dillwyn, 1817) on the Southeastern Brazilian coast: considerations on its invasiveness potential Lithophaga aristata is a boring bivalve native to the Caribbean Sea, first recorded in 2005 as an introduced species on the Southeastern Brazilian coast. The geographic distribution and density of L. aristata and of its native congeneric L. bisulcata were assessed in four areas of Brazil (24 sites), additionally considering their relationship with types of substrate, depth and wave exposure. This study records the first occurrence of L. aristata in the Sepetiba Bay and also reports the species at five new localities in the Arraial do Cabo Bay. Lithophaga aristata is established in the four surveyed regions. At intertidal habitats, the exotic species only colonizes the infralittoral fringe but its density was not related to wave action. At subtidal habitats, the species colonizes natural and artificial substrates, from shallow (0.5 m) to deep (5.0-7.0 m) zones but no relationship between density and these evaluated factors was detected. Broad geographical and ecological distributions and higher densities of this introduced species in relation to its native congeneric are suggested as contrary to Darwin\u2019s naturalization hypothesis and instead indicate a high invasiveness potential." }, { "instance_id": "R53407xR53276", "comparison_id": "R53407", "paper_id": "R53276", "text": "Learning from failures: testing broad taxonomic hypotheses about plant naturalization Our understanding of broad taxonomic patterns of plant naturalizations is based entirely on observations of successful naturalizations. Omission of the failures, however, can introduce bias by conflating the probabilities of introduction and naturalization. 
Here, we use two comprehensive datasets of successful and failed plant naturalizations in New Zealand and Australia for a unique, flora-wide comparative test of several major invasion hypotheses. First, we show that some taxa are consistently more successful at naturalizing in these two countries, despite their environmental differences. Broad climatic origins helped to explain some of the differences in success rates in the two countries. We further show that species with native relatives were generally more successful in both countries, contrary to Darwin's naturalization hypothesis, but this effect was inconsistent among families across the two countries. Finally, we show that contrary to studies based on successful naturalizations only, islands need not be inherently more invasible than continents." }, { "instance_id": "R53407xR53393", "comparison_id": "R53407", "paper_id": "R53393", "text": "Establishment success of introduced amphibians increases in the presence of congeneric species Darwin\u2019s naturalization hypothesis predicts that the success of alien invaders will decrease with increasing taxonomic similarity to the native community. Alternatively, shared traits between aliens and the native assemblage may preadapt aliens to their novel surroundings, thereby facilitating establishment (the preadaptation hypothesis). Here we examine successful and failed introductions of amphibian species across the globe and find that the probability of successful establishment is higher when congeneric species are present at introduction locations and increases with increasing congener species richness. After accounting for positive effects of congeners, residence time, and propagule pressure, we also find that invader establishment success is higher on islands than on mainland areas and is higher in areas with abiotic conditions similar to the native range. 
These findings represent the first example in which the preadaptation hypothesis is supported in organisms other than plants and suggest that preadaptation has played a critical role in enabling introduced species to succeed in novel environments." }, { "instance_id": "R53407xR53405", "comparison_id": "R53407", "paper_id": "R53405", "text": "Plant invasions in Taiwan: Insights from the flora of casual and naturalized alien species Data on floristic status, biological attributes, chronology and distribution of naturalized species have been shown to be a very powerful tool for discerning the patterns of plant invasions and species invasiveness. We analysed the newly compiled list of casual and naturalized plant species in Taiwan (probably the only complete data set of this kind in East Asia) and found that Taiwan is relatively lightly invaded with only 8% of the flora being casual or naturalized. Moreover, the index of casual and naturalized species per log area is also moderate, in striking contrast with many other island floras where contributions of naturalized species are much higher. Casual and naturalized species have accumulated steadily and almost linearly over the past decades. Fabaceae, Asteraceae, and Poaceae are the families with the most species. However, Amaranthaceae, Convolvulaceae, and Onagraceae have the largest ratios of casual and naturalized species to their global numbers. Ipomoea, Solanum and Crotalaria have the highest numbers of casual and naturalized species. About 60% of all genera with exotic species are new to Taiwan. Perennial herbs represent one third of the casual and naturalized flora, followed by annual herbs. About 60% of exotic species were probably introduced unintentionally onto the island; many species imported intentionally have ornamental, medicinal, or forage values. 
The field status of 50% of these species is unknown, but ornamentals represent noticeable proportions of naturalized species, while forage species represent a relatively larger proportion of casual species. Species introduced for medicinal purposes seem to be less invasive. Most of the casual and naturalized species of Taiwan originated from the Tropical Americas, followed by Asia and Europe." }, { "instance_id": "R53407xR53345", "comparison_id": "R53407", "paper_id": "R53345", "text": "A test of Darwin's naturalization hypothesis in the thistle tribe shows that close relatives make bad neighbors Significance Invasive species negatively impact both natural ecosystems and human society and are notoriously difficult to control once established. Thus, identifying potentially invasive taxa and preventing their dislocation is the most efficient management method. Darwin\u2019s naturalization hypothesis, which predicts that the less closely related to native flora species are, the more likely they are to succeed as invaders, is tested here with an unprecedentedly thorough molecular phylogenetic approach, examining >100,000 phylogenies of the weed-rich thistle tribe Cardueae. Branch lengths between taxa were used as measures of evolutionary relatedness. Results show that invasive thistles are more closely related to natives than noninvasive introduced thistles, suggesting they share preadaptive traits with the natives that make them more likely to succeed as invaders. Invasive species have great ecological and economic impacts and are difficult to control once established, making the ability to understand and predict invasive behavior highly desirable. Preemptive measures to prevent potential invasive species from reaching new habitats are the most economically and environmentally efficient form of management. 
Darwin\u2019s naturalization hypothesis predicts that invaders less related to native flora are more likely to be successful than those that are closely related to natives. Here we test this hypothesis, using the weed-rich thistle tribe, Cardueae, in the California Floristic Province, a biodiversity hotspot, as our study system. An exhaustive molecular phylogenetic approach was used, generating and examining more than 100,000 likely phylogenies of the tribe based on nuclear and chloroplast DNA markers, representing the most in-depth reconstruction of the clade to date. Branch lengths separating invasive and noninvasive introduced taxa from native California taxa were used to represent phylogenetic distances between these groups and were compared at multiple biogeographical scales to ascertain whether invasive thistles are more or less closely related to natives than noninvasive introduced thistles are. Patterns within this highly supported clade show that not only are introduced thistles more closely related to natives more likely to be invasive, but these invasive species are also evolutionarily closer to native flora than by chance. This suggests that preadaptive traits are important in determining an invader\u2019s success. Such rigorous molecular phylogenetic analyses may prove a fruitful means for furthering our understanding of biological invasions and developing predictive frameworks for screening potential invasive taxa." }, { "instance_id": "R53407xR53255", "comparison_id": "R53407", "paper_id": "R53255", "text": "Predictors of regional establishment success and spread of introduced non-indigenous vertebrates Aim To provide the first analysis of predictors of both establishment and spread, both within and across taxa, for all vertebrate taxa within a region. We used Florida, USA, as our study system because it has a well-documented history of introduction and invasion, and is a hotspot for biological invasions. Location Florida, USA. 
Methods We analysed non-indigenous species (NIS) data from peninsular Florida \u2013 which included both successful and unsuccessful introductions from all vertebrate classes \u2013 to determine the best predictors of both establishment and spread for fish (65 species), herpetofauna (63 species), birds (71 species) and mammals (25 species). We used 10 variables proposed to be associated with the establishment and spread of NIS: body mass, geographic origin, reproductive rate, diet generalism, native-range size, latitude of native range, number of NIS present at date of introduction, presence of NIS congeners, morphological proximity to other NIS (in terms of body mass) and propagule pressure. A multimodel selection process was used with an information-theoretic approach to determine the best fit models for predicting establishment and spread of NIS. We selected a priori plausible predictive models for establishment and spread. Results Large native-range size and small body mass best predicted establishment of non-indigenous herpetofauna. The presence of NIS congeners had the largest positive effect on the establishment of non-indigenous fish. For mammals, the number of NIS present at the time of introduction best explained establishment. No single model best explained bird establishment. For all taxa but birds, the number of NIS present at time of introduction was included in at least one of the best-supported models for explaining spread." }, { "instance_id": "R53407xR53314", "comparison_id": "R53407", "paper_id": "R53314", "text": "An experimental test of Darwin's naturalization hypothesis One of the oldest ideas in invasion biology, known as Darwin\u2019s naturalization hypothesis, suggests that introduced species are more successful in communities in which their close relatives are absent. 
We conducted the first experimental test of this hypothesis in laboratory bacterial communities varying in phylogenetic relatedness between resident and invading species with and without a protist bacterivore. As predicted, invasion success increased with phylogenetic distance between the invading and the resident bacterial species in both the presence and the absence of protistan bacterivory. The frequency of successful invader establishment was best explained by average phylogenetic distance between the invader and all resident species, possibly indicating limitation by the availability of the unexploited niche (i.e., organic substances in the medium capable of supporting the invader growth); invader abundance was best explained by phylogenetic distance between the invader and its nearest resident relative, possibly indicating limitation by the availability of the unexploited optimal niche (i.e., the subset of organic substances supporting the best invader growth). These results were largely driven by one resident bacterium (a subspecies of Serratia marcescens) posing the strongest resistance to the alien bacterium (another subspecies of S. marcescens). Overall, our findings support phylogenetic relatedness as a useful predictor of species invasion success." }, { "instance_id": "R53407xR53311", "comparison_id": "R53407", "paper_id": "R53311", "text": "Biotic interactions experienced by a new invader: effects of its close relatives at the community scale The success of nonindigenous species may be influenced by biotic interactions during the initial stages of invasion. Here, we investigated whether a potential invader, Solidago virgaurea L., would experience more damage by natural enemies in communities dominated by close relatives than those without them; interactions with mutualistic mycorrhizae might partially counteract these effects. We monitored damage experienced by S. virgaurea planted into communities with native congeners and without close relatives. 
Community type was crossed with a vegetation removal treatment to assess the combined effects of herbivory and competition on survival. We also evaluated growth of S. virgaurea in a greenhouse experiment where seedlings were exposed to soil biota sampled from these communities and compared with sterile controls. Overall, community type did not affect levels of herbivory or plant survival. Removal of surrounding vegetation resulted in reduced damage and increased survival; these effects were largest in grass-dominated communities. Soil sterilization reduced root growth and tended to reduce shoot growth, especially when compared with plants inoculated with biota collected near congeners. Overall, our results suggest that the presence of close relatives is unlikely to make old-field communities more resistant to invasion by S. virgaurea; instead, soil biota might facilitate growth in communities dominated by close relatives." }, { "instance_id": "R53407xR53355", "comparison_id": "R53407", "paper_id": "R53355", "text": "Validity of Darwin's naturalization hypothesis relates to the stages of invasion Naturalization is the introduction and establishment of a nonnative species with sustainable populations in a novel environment. The success of nonnative species may be influenced by their relatedness to the native flora. Darwin proposed that if a nonnative plant species is introduced into an environment without native congeners, the nonnative species will have a greater chance of becoming naturalized. To test Darwin\u2019s naturalization hypothesis, we compiled a Kentucky plant database consisting of 821 vascular plant species and subsequently selected species traits and distribution information to determine the effect of congeneric species and traits on the probability of successful naturalization and invasion. The predictors used include reproductive traits, growth form, abundance, habitat type, native congeners, and biogeographical origin. 
We fit three sets of generalized linear mixed models (GLMMs) with a binomial family and a logit link. Backward selection based on minimizing the Akaike Information Criterion (AIC) was used in the analyses. Our results from these three sets of models clearly indicate that the validity of Darwin\u2019s hypothesis is invasion stage dependent. More specifically, the naturalized and invasive models (predicting the probability of being naturalized and invasive respectively) did not support Darwin\u2019s naturalization hypothesis. The number of native congeners had no effect on the likelihood that a particular species would naturalize and become invasive. Our results suggest that Darwin\u2019s naturalization hypothesis is more relevant during the early stage of establishment as demonstrated by the native model (predicting the probability of being native) and it becomes irrelevant during the late stages of invasion as indicated by the naturalized and invasive models. Thus, it can be generalized that biotic interactions, especially competition, are critical determinants of initial success for nonnative species in the recipient communities. Once established, the fate of non-native species during the late stages of invasion may be more related to other factors such as biogeographic origin and habitat conditions. Furthermore, we found reproductive traits such as flowering phenology and flower type are associated with invasion success. We also recognized contrasting traits between native and nonnative species, indicating niche differentiation between these two groups of species. Niche overlap was found as well among species regardless of the status of being native or otherwise. Our study provides a novel approach to advance the understanding of phylogenetic relatedness between nonnative species and native flora by integrating traits and niche concepts at the regional scale." 
}, { "instance_id": "R53407xR53282", "comparison_id": "R53407", "paper_id": "R53282", "text": "Enemy damage of exotic plant species is similar to that of natives and increases with productivity Summary 1. In their colonized ranges, exotic plants may be released from some of the herbivores or pathogens of their home ranges but these can be replaced by novel enemies. It is of basic and practical interest to understand which characteristics of invaded communities control accumulation of the new pests. Key questions are whether enemy load on exotic species is smaller than on native competitors as suggested by the enemy release hypothesis (ERH) and whether this difference is most pronounced in resource-rich habitats as predicted by the resource\u2013enemy release hypothesis (R-ERH). 2. In 72 populations of 12 exotic invasive species, we scored all visible above-ground damage morphotypes caused by herbivores and fungal pathogens. In addition, we quantified levels of leaf herbivory and fruit damage. We then assessed whether variation in damage diversity and levels was explained by habitat fertility, by relatedness between exotic species and the native community or rather by native species diversity. 3. In a second part of the study, we also tested the ERH and the R-ERH by comparing damage of plants in 28 pairs of co-occurring native and exotic populations, representing nine congeneric pairs of native and exotic species. 4. In the first part of the study, diversity of damage morphotypes and damage levels of exotic populations were greater in resource-rich habitats. Co-occurrence of closely related, native species in the community significantly increased the probability of fruit damage. Herbivory on exotics was less likely in communities with high phylogenetic diversity. 5. 
In the second part of the study, exotic and native congeneric populations incurred similar damage diversity and levels, irrespective of whether they co-occurred in nutrient-poor or nutrient-rich habitats. 6. Synthesis. We identified habitat productivity as a major community factor affecting accumulation of enemy damage by exotic populations. Similar damage levels in exotic and native congeneric populations, even in species pairs from fertile habitats, suggest that the enemy release hypothesis or the R-ERH cannot always explain the invasiveness of introduced species." }, { "instance_id": "R53407xR53340", "comparison_id": "R53407", "paper_id": "R53340", "text": "Patterns of bird invasion are consistent with environmental filtering Predicting invasion potential has global significance for managing ecosystems as well as important theoretical implications for understanding community assembly. Phylogenetic relationships of introduced species to the extant community may be predictive of establishment success because of the opposing forces of competition/shared enemies (which should limit invasions by close relatives) versus environmental filtering (which should allow invasions by close relatives). We examine here the association between establishment success of introduced birds and their phylogenetic relatedness to the extant avifauna within three highly invaded regions (Florida, New Zealand, and Hawaii). Published information on both successful and failed introductions, as well as native species, was compiled for all three regions. We created a phylogeny for each avifauna including all native and introduced bird species. From the estimated branch lengths on these phylogenies, we calculated multiple measurements of relatedness between each introduced species and the extant avifauna. We used generalized linear models to test for an association between relatedness and establishment success. 
We found that close relatedness to the extant avifauna was significantly associated with increased establishment success for exotic birds both at the regional (Florida, Hawaii, New Zealand) and sub-regional (islands within Hawaii) levels. Our results suggest that habitat filtering may be more important than interspecific competition in avian communities assembled under high rates of anthropogenic species introductions. This work also supports the utility of community phylogenetic methods in the study of vertebrate invasions." }, { "instance_id": "R53407xR53279", "comparison_id": "R53407", "paper_id": "R53279", "text": "Does relatedness of natives used for soil conditioning influence plant-soil feedback of exotics? The naturalisation hypothesis has been gaining attention recently as a possible mechanism to explain variations in invasion success. It predicts that exotic genera with native representatives should be less successful because of an overlap in resource use and of the existence of common specialised enemies. In this study, we tested whether native congenerics have more negative impact on exotic species than heterogenerics by increasing the effects of soil pathogens. We sampled soil in populations of three exotic species (Epilobium ciliatum, Impatiens parviflora and Stenactis annua) at sites with and without respective congeneric species. This soil was used as an inoculum for cultivating the first plant cohort, which included exotics, as well as native congenerics and heterogenerics. The conditioned soil was subsequently used for cultivating the second cohort of plants (exotics only). We found no consistent impact of relatedness of conditioning species on exotic growth. Although soil conditioned by congeneric E. hirsutum had the largest reduction on the performance of E. ciliatum, the final biomass of S. annua was lowest when grown in soil conditioned by itself. There was no effect of stimulating species on the biomass of I. parviflora. 
In both experimental phases, performance of exotics was improved when cultivated with sterilised inocula, indicating the dominance of soil generalist pathogens. However, the biomass of S. annua was increased most by congeneric-stimulated inoculum from congeneric sites, suggesting a possible role for specialised symbionts. Our results suggest that variations in invasion success of at least some exotics may be affected by species-specific interactions mediated by the soil biota." }, { "instance_id": "R53407xR53298", "comparison_id": "R53407", "paper_id": "R53298", "text": "Australian family ties: does a lack of relatives help invasive plants escape natural enemies? Invasive plants may initially be released from natural enemies when introduced to new regions, but once established, natural enemies may accumulate. How closely related invasive species are to species in the native recipient community may drive patterns of herbivore and pathogen damage and therefore, may be important in understanding the success of some invasions. We compared herbivore and pathogen damage across a group of invasive species occurring in natural environments on the east coast of Australia. We examined whether the level of damage experienced by the invasive species was associated with the degree of phylogenetic relatedness between these plants and the native plants within the region. We found that phylogenetic distance to the nearest native relative was a good predictor of herbivore and pathogen damage on the invasive plants, explaining nearly 37 % of the variance in leaf damage. Total leaf damage and the variety of damage types declined with increasing phylogenetic distance to the nearest native relative. In addition, as the phylogenetic distance to the nearest native relative increased, invasive species were colonized by fewer functional guilds and the herbivore assemblage was increasingly dominated by generalist species. 
These results suggest that invasive species that are only distantly related to those in the native invaded community may be released from specialist natural enemies. Our results indicate that the phylogenetic relatedness of invasive plants to species in native communities is a significant predictor of the rate of colonization by the herbivore and pathogen community, and thus a useful tool to assess invasion potential." }, { "instance_id": "R54244xR54014", "comparison_id": "R54244", "paper_id": "R54014", "text": "Native jewelweed, but not other native species, displays post-invasion trait divergence Invasive exotic plants reduce the diversity of native communities by displacing native species. According to the coexistence theory, native plants are able to coexist with invaders only when their fitness is not significantly smaller than that of the exotics or when they occupy a different niche. It has therefore been hypothesized that the survival of some native species at invaded sites is due to post-invasion evolutionary changes in fitness and/or niche traits. In common garden experiments, we tested whether plants from invaded sites of two native species, Impatiens noli-tangere and Galeopsis speciosa, outperform conspecifics from non-invaded sites when grown in competition with the invader (Impatiens parviflora). We further examined whether the expected superior performance of the plants from the invaded sites is due to changes in the plant size (fitness proxy) and/or changes in the germination phenology and phenotypic plasticity (niche proxies). Invasion history did not influence the performance of any native species when grown with the exotic competitor. In I. noli-tangere, however, we found significant trait divergence with regard to plant size, germination phenology and phenotypic plasticity. In the absence of a competitor, plants of I. noli-tangere from invaded sites were larger than plants from non-invaded sites. 
The former plants germinated earlier than inexperienced conspecifics or an exotic congener. Invasion experience was also associated with increased phenotypic plasticity and an improved shade-avoidance syndrome. Although these changes indicate fitness and niche differentiation of I. noli-tangere at invaded sites, future research should examine more closely the adaptive value of these changes and their genetic basis." }, { "instance_id": "R54244xR54164", "comparison_id": "R54244", "paper_id": "R54164", "text": "Phenotypic variability of natural populations of an invasive drosophilid, Zaprionus indianus, on different continents: Comparison of wild-living and laboratory-grown flies Phenotypic variability in nature is the most important feature for Darwinian adaptation, yet it has been rarely investigated in invasive species. Zaprionus indianus is an Afrotropical drosophilid species that have recently invaded the Palearctic and the Neotropical regions. Here, we compared the variability of three size-related traits and one meristic trait the sternopleural (STP) bristle number, between wild-collected flies living under different conditions: a stressful Mediterranean environment in Egypt, and a benign tropical environment in Brazil. From each population, a F(1) generation was also grown under the stable conditions of the laboratory. Variability of size in nature had a variance 13 times greater than in the laboratory, but not affected by different climates. By contrast, STP variability was identical in nature and in the laboratory. Sexual dimorphism was also investigated with contrasting results between traits. It is suggested that the very high invasiveness of Z. indianus might be related to a better capacity to survive adverse conditions." }, { "instance_id": "R54244xR54120", "comparison_id": "R54244", "paper_id": "R54120", "text": "Variation in morphological characters of two invasive leafminers, Liriomyza huidobrensis and L. 
sativae, across a tropical elevation gradient Abstract Changes in morphological traits along elevation and latitudinal gradients in ectotherms are often interpreted in terms of the temperature-size rule, which states that the body size of organisms increases under low temperatures, and is therefore expected to increase with elevation and latitude. However other factors like host plant might contribute to spatial patterns in size as well, particularly for polyphagous insects. Here elevation patterns for trait size and shape in two leafminer species are examined, Liriomyza huidobrensis (Blanchard) (Diptera: Agromyzidae) and L. sativae Blanchard, along a tropical elevation gradient in Java, Indonesia. Adult leafminers were trapped from different locations in the mountainous area of Dieng in the province of Central Java. To separate environmental versus genetic effects, L. huidobrensis originating from 1378 m and 2129 m ASL were reared in the laboratory for five generations. Size variation along the elevation gradient was only found in L. huidobrensis and this followed expectations based on the temperature-size rule. There were also complex changes in wing shape along the gradient. Morphological differences were influenced by genetic and environmental effects. Findings are discussed within the context of adaptation to different elevations in the two species." }, { "instance_id": "R54244xR54198", "comparison_id": "R54244", "paper_id": "R54198", "text": "Predicting invasiveness in exotic species: do subtropical native and invasive exotic aquatic plants differ in their growth responses to macronutrients? We investigated whether plasticity in growth responses to nutrients could predict invasive potential in aquatic plants by measuring the effects of nutrients on growth of eight non-invasive native and six invasive exotic aquatic plant species. Nutrients were applied at two levels, approximating those found in urbanized and relatively undisturbed catchments, respectively. 
To identify systematic differences between invasive and non-invasive species, we compared the growth responses (total biomass, root:shoot allocation, and photosynthetic surface area) of native species with those of related invasive species after 13 weeks growth. The results were used to seek evidence of invasive potential among four recently naturalized species. There was evidence that invasive species tend to accumulate more biomass than native species (P = 0.0788). Root:shoot allocation did not differ between native and invasive plant species, nor was allocation affected by nutrient addition. However, the photosynthetic surface area of invasive species tended to increase with nutrients, whereas it did not among native species (P = 0.0658). Of the four recently naturalized species, Hydrocleys nymphoides showed the same nutrient-related plasticity in photosynthetic area displayed by known invasive species. Cyperus papyrus showed a strong reduction in photosynthetic area with increased nutrients. H. nymphoides and C. papyrus also accumulated more biomass than their native relatives. H. nymphoides possesses both of the traits we found to be associated with invasiveness, and should thus be regarded as likely to be invasive." }, { "instance_id": "R54244xR54050", "comparison_id": "R54244", "paper_id": "R54050", "text": "Common and rare plant species respond differently to fertilisation and competition, whether they are alien or native Plant traits associated with alien invasiveness may also distinguish rare from common native species. To test this, we grew 23 native (9 common, 14 rare) and 18 alien (8 common, 10 rare) herbaceous species in Switzerland from six plant families under nutrient-addition and competition treatments. Alien and common species achieved greater biomass than native and rare species did overall respectively. 
Across alien and native origins, common species increased total biomass more strongly in response to nutrient addition than rare species did and this difference was not confounded by habitat dissimilarities. There was a weak tendency for common species to survive competition better than rare species, which was also independent of origin. Overall, our study suggests that common alien and native plant species are not fundamentally different in their responses to nutrient addition and competition." }, { "instance_id": "R54244xR54220", "comparison_id": "R54244", "paper_id": "R54220", "text": "Acclimation effects on thermal tolerances of springtails from sub-Antarctic Marion Island: Indigenous and invasive species Collembola are abundant and functionally significant arthropods in sub-Antarctic terrestrial ecosystems, and their importance has increased as a consequence of the many invasive alien species that have been introduced to the region. It has also been predicted that current and future climate change will favour alien over indigenous species as a consequence of more favourable responses to warming in the former. It is therefore surprising that little is known about the environmental physiology of sub-Antarctic springtails and that few studies have explicitly tested the hypothesis that invasive species will outperform indigenous ones under warmer conditions. Here we present thermal tolerance data on three invasive (Pogonognathellus flavescens, Isotomurus cf. palustris, Ceratophysella denticulata) and two indigenous (Cryptopygus antarcticus, Tullbergia bisetosa) species of springtails from Marion Island, explicitly testing the idea that consistent differences exist between the indigenous and invasive species both in their absolute limits and the ways in which they respond to acclimation (at temperatures from 0 to 20 degrees C). Phenotypic plasticity is the first in a series of ways in which organisms might respond to altered environments. 
Using a poorly explored, but highly appropriate technique, we demonstrate that in these species the crystallization temperature (Tc) is equal to the lower lethal temperature. We also show that cooling rate (1 degree C min(-1); 0.1 degrees C min(-1); 0.5 degrees C h(-1) from 5 to -1 degrees C followed by 0.1 degrees C min(-1)) has little effect on Tc. The indigenous species typically have low Tcs (c. -20 to -13 degrees C depending on the acclimation temperature), whilst those of the invasive species tend to be higher (c. -12 to -6 degrees C) at the lower acclimation temperatures. However, Ceratophysella denticulata is an exception with a low Tc (c. -20 to -18 degrees C), and in P. flavescens acclimation to 20 degrees C results in a pronounced decline in Tc. In general, the invasive and alien species do not differ substantially in acclimation effects on Tc (with the exception of the strong response in P. flavescens). Upper lethal temperatures (ULT50) are typically higher in the invasive (33-37 degrees C) than in the indigenous (30-33 degrees C) species and the response to acclimation differs among the two groups. The indigenous species show either a weak response to acclimation or ULT50 declines with increasing acclimation temperature, whereas in the invasive species ULT50 increases with acclimation temperature. These findings support the hypothesis that many invasive species will be favoured by climate change (warming and drying) at Marion Island. Moreover, manipulative field experiments have shown abundance changes in the indigenous and invasive springtail species in the direction predicted by the physiological data." 
}, { "instance_id": "R54244xR54150", "comparison_id": "R54244", "paper_id": "R54150", "text": "Phenotypic plasticity and performance of Taraxacum officinale (dandelion) in habitats of contrasting environmental heterogeneity Ecological theory predicts a positive association between environmental heterogeneity of a given habitat and the magnitude of phenotypic plasticity exhibited by resident plant populations. Taraxacum officinale (dandelion) is a perennial herb from Europe that has spread worldwide and can be found growing in a wide variety of habitats. We tested whether T. officinale plants from a heterogeneous environment in terms of water availability show greater phenotypic plasticity and better performance in response to experimental water shortage than plants from a less variable environment. This was tested at both low and moderate temperatures in plants from two sites (Corvallis, Oregon, USA, and El Blanco, Balmaceda, Chile) that differ in their pattern of monthly variation in rainfall during the growth season. We compared chlorophyll fluorescence (photosynthetic performance), flowering time, seed output, and total biomass. Plants subjected to drought showed delayed flowering and lower photosynthetic performance. Plants from USA, where rainfall variation during the growth season was greater, exhibited greater plasticity to water shortage in photosynthetic performance and flowering time than plants from Chile. This was true at both low and moderate temperatures, which were similar to early- and late-season conditions, respectively. However, phenotypic plasticity to decreased water availability was seemingly maladaptive because under both experimental temperatures USA plants consistently performed worse than Chile plants in the low water environment, showing lower total biomass and fewer seeds per flower head. We discuss the reliability of environmental clues for plasticity to be adaptive. 
Further research in the study species should include other plant traits involved in functional responses to drought or potentially associated with invasiveness." }, { "instance_id": "R54244xR54124", "comparison_id": "R54244", "paper_id": "R54124", "text": "Phenotypic plasticity of introduced versus native purple loosestrife: univariate and multivariate reaction norm approaches The plastic responses to environmental change by Lythrum salicaria (purple loosestrife) were compared between native plants derived from seeds collected in Europe and those introduced into North America. Plants from nine populations each were grown under two levels of water and nutrient conditions. At the end of the growing season, samples were evaluated for eight traits related to their life history, plant size/architecture, and reproduction. Genetic (G), environmental (E), and G \u00d7 E interactions were assessed by restricted maximum likelihood (REML) analysis of covariance (ANCOVA) and multivariate analysis of covariance (MANCOVA). Both univariate and multivariate reaction norm analyses were used to test for differences in the magnitude and direction of phenotypic plasticity between introduced and native plants. Under high-nutrient conditions, introduced plants were taller and had more branches and greater aboveground biomass. They also exhibited significantly greater amounts of phenotypic plasticity for aboveground biomass than did the natives in response to changing nutrient levels in standing water. This difference in univariate plasticity contributed to the general contrast in multivariate plasticity between introduced and native plants. These results support the idea that introduced plants may successfully invade a habitat and grow better than native plants in response to increased resources." }, { "instance_id": "R54244xR54200", "comparison_id": "R54244", "paper_id": "R54200", "text": "Spreading of the invasive Carpobrotus aff. 
acinaciformis in Mediterranean ecosystems: The advantage of performing in different light environments ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002\u20132003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous search of available resources, contributed to a good growth and photochemical efficiency of Carpobrotus in the relatively moderate shade of the understories of Mediterranean shrublands and woodlands. Each main shoot of a Carpobrotus clone (which can have several dozens main shoots) grows ca. 40 cm per year, which explains its vigorous habitat colonization capacity. Conclusion: The highly plastic morphological response to different light regimes of this taxon contributes to a rapid colonization of heterogeneous coastal Mediterranean environments spreading well beyond the open sand dune systems where it has been often reported. Nomenclature: Tutin et al. (1964\u20131980)." 
}, { "instance_id": "R54244xR54236", "comparison_id": "R54244", "paper_id": "R54236", "text": "Induced defenses in response to an invading crab predator: An explanation of historical and geographic phenotypic change The expression of defensive morphologies in prey often is correlated with predator abundance or diversity over a range of temporal and spatial scales. These patterns are assumed to reflect natural selection via differential predation on genetically determined, fixed phenotypes. Phenotypic variation, however, also can reflect within-generation developmental responses to environmental cues (phenotypic plasticity). For example, water-borne effluents from predators can induce the production of defensive morphologies in many prey taxa. This phenomenon, however, has been examined only on narrow scales. Here, we demonstrate adaptive phenotypic plasticity in prey from geographically separated populations that were reared in the presence of an introduced predator. Marine snails exposed to predatory crab effluent in the field increased shell thickness rapidly compared with controls. Induced changes were comparable to (i) historical transitions in thickness previously attributed to selection by the invading predator and (ii) present-day clinal variation predicted from water temperature differences. Thus, predator-induced phenotypic plasticity may explain broad-scale geographic and temporal phenotypic variation. If inducible defenses are heritable, then selection on the reaction norm may influence coevolution between predator and prey. Trade-offs may explain why inducible rather than constitutive defenses have evolved in several gastropod species." }, { "instance_id": "R54244xR54080", "comparison_id": "R54244", "paper_id": "R54080", "text": "Morphological variation of introduced species: The case of American mink (Neovison vison) in Spain Abstract We studied the morphology of American mink Neovison vison in five out of the six introduced populations in Spain. 
The spatial and temporal variation of body weight (BW), body length (BL), tail length, hind-foot length and ear length were analysed. Temporal trends in BW and BL in relation to years since mink introduction were also analysed. In addition, we tested the effect of sex, age (juvenile, subadult and adult) and age\u2013sex interaction, on each parameter. Morphological parameters differed between populations, illustrating the high variability of body size of American mink in different environments, and the phenotypic plasticity of the species. Annual variations were synchronized between populations, suggesting a large-scale effect on all of them. BW and BL showed a decreasing trend in both males and females in relation to years since introduction. This decrease may be related to mink's diet. Differences in sex and age were found, pointing to sexual dimorphism in adults, subadults and juveniles. The dimorphism in non-adult individuals suggests that subadult males may have a competitive advantage over subadult females in feeding and/or hunting on bigger prey from an early age (resource partitioning hypothesis)." }, { "instance_id": "R54244xR54118", "comparison_id": "R54244", "paper_id": "R54118", "text": "Invasive species can handle higher leaf temperature under water stress than Mediterranean natives Abstract Thermal tolerance of Photosystem II (PSII) highly influences plant distribution worldwide because it allows for photosynthesis during periods of high temperatures and water stress, which are common in most terrestrial ecosystems and particularly in dry and semi-arid ones. However, there is a lack of information about how this tolerance influences invasiveness of exotic species in ecosystems with seasonal drought. To address this question for Mediterranean-type ecosystems (MTE) of the Iberian Peninsula, we carried out an experiment with fifteen phylogenetically related species (8 invasive and 7 native, Pinus pinaster Ait., Pinus radiata D. 
Don, Schinus molle Linn., Elaeagnus angustifolia L., Eucalyptus globulus Labill., Acacia melanoxylon R. Br., Gleditsia triacanthos L., Pistacia terebinthus L., Rhamnus alaternus L., Anagyris foetida L., Colutea arborescens L., Oenothera biennis L., Epilobium hirsutum L., Achillea filipendulina Lam. and Achillea millefolium L.). Seedlings were grown and maximal photochemical efficiency of PSII (Fv/Fm) was measured at two water availabilities (well-watered and with water stress). PSII thermal tolerance measurements were related to specific leaf area (SLA), which varied significantly across the study species, and to the mean potential evapotranspiration (PET) of the month with the lowest precipitation in the native areas of both groups and in the invaded area of the Iberian Peninsula. Additionally, PSII thermal tolerance measurements under water stress were phylogenetically explored. Invasive and native species neither differed in SLA nor in their thermal tolerance under well-watered conditions. For well-watered plants, SLA was significantly and positively related to PSII thermal tolerance when all species were explored together regardless of their invasive nature. However, this relationship did not persist under water stress, and invasive species had higher plastic responses than Mediterranean natives, resulting in higher leaf temperatures. Higher PSII thermal tolerance could explain invasiveness because it allows for longer periods of carbon acquisition under water stress. In fact, PSII thermal tolerance was positively related to the PET of the invaded and native areas of the Iberian Peninsula. PSII thermal tolerance was not related to PET in the native range of the invasive species, suggesting that successful invasive species were plastic enough to cope with novel dry conditions of the Iberian Peninsula. 
Moreover, our phylogenetic results indicate that future scenarios of increased aridity in MTE associated with climate change will filter invasion success by taxonomic identity. This study reveals the importance of studying ecophysiological traits to understand and better predict future biological invasions." }, { "instance_id": "R54244xR54234", "comparison_id": "R54244", "paper_id": "R54234", "text": "Can life-history traits predict the fate of introduced species? A case study on two cyprinid fish in southern France 1. The ecological and economic costs of introduced species can be high. Ecologists try to predict the probability of success and potential risk of the establishment of recently introduced species, given their biological characteristics. 2. In 1990 gudgeon, Gobio gobio, were released in a drainage canal of the Rhone delta of southern France. The Asian topmouth gudgeon, Pseudorasbora parva, was found for the first time in the same canal in 1993. Those introductions offered a unique opportunity to compare in situ the fate of two closely related fish in the same habitat. 3. Our major aims were to assess whether G. gobio was able to establish in what seemed an unlikely environment, to compare population trends and life-history traits of both species, and to assess whether we could explain or could have predicted our results by considering their life-history strategies. 4. Data show that both species have established in the canal and have spread. Catches of P. parva have increased strongly and are now higher than those of G. gobio. 5. The two cyprinids have the same breeding season and comparable traits (such as short generation time, small body, high reproductive effort), so both could be classified as opportunists. The observed difference in their success (in terms of population growth and colonization rate) could be explained by the wider ecological and physiological tolerance of P. parva. 6. 
In conclusion, our field study seems to suggest that invasive vigour also results from the ability to tolerate environmental changes through phenotypic plasticity, rather than from particular life-history features pre-adapted to invasion. It thus remains difficult to define a good invader simply on the basis of its life-history features." }, { "instance_id": "R54244xR54158", "comparison_id": "R54244", "paper_id": "R54158", "text": "Phenotypic plasticity of Spartina alterniflora and Phragmites australis in response to nitrogen addition and intraspecific competition Phenotypic plasticity of the two salt marsh grasses Spartina alterniflora and Phragmites australis in salt marshes is crucial to their invasive ability, but the importance of phenotypic plasticity, nitrogen levels, and intraspecific competition to the success of the two species is unclear at present. Spartina alterniflora Loisel. is an extensively invasive species that has increased dramatically in distribution and abundance on the Chinese and European coasts, and has had considerable ecological impacts in the regions where it has established. Meanwhile, Phragmites australis Cav., a native salt marsh species on the east coast of China, has replaced the native S. alterniflora in many marshes along the Atlantic Coast of the US. This study determined the effects of nitrogen availability and culm density on the morphology, growth, and biomass allocation traits of Spartina alterniflora and Phragmites australis. A large number of morphological, growth, and biomass parameters were measured, and various derived values (culm: root ratio, specific leaf area, etc.) were calculated, along with an index of phenotypic plasticity. Nitrogen addition significantly affected growth performance and biomass allocation traits of Spartina alterniflora, and culm density significantly affected morphological characteristics in a negative way, especially for Spartina alterniflora. 
However, there were no significant interactions between nitrogen levels and culm density on the morphological parameters, growth performance parameters, and biomass allocation parameters of the two species. Spartina alterniflora appears to respond more strongly to nitrogen than to culm density, and this pattern of phenotypic plasticity appears to facilitate its successful invasion and displacement of Phragmites australis in China. The implication of this study is that, in response to the environmental changes that are increasing nitrogen levels, the range of Spartina alterniflora is expected to continue to expand on the east coast of China." }, { "instance_id": "R54244xR54180", "comparison_id": "R54244", "paper_id": "R54180", "text": "Evidence for a shift in life-history strategy during the secondary phase of a plant invasion We investigated the correlated response of several key traits of Lythrum salicaria L. to water availability gradients in introduced (Iowa, USA) and native (Switzerland, Europe) populations. This was done to investigate whether plants exhibit a shift in life-history strategy during expansion into more stressful habitats during the secondary phase of invasion, as has recently been hypothesized by Dietz and Edwards (Ecology 87(6):1359, 2006). Plants in invaded habitats exhibited a correlated increase in longevity and decrease in overall size in the transition into more stressful mesic habitats. In contrast, plants in the native range only exhibited a decrease in height. Our findings are consistent with the hypothesis that secondary invasion is taking place in L. salicaria, allowing it to be more successful under the more stressful mesic conditions in the invaded range. If this trend continues, L. salicaria may become a more problematic species in the future." 
}, { "instance_id": "R54244xR54188", "comparison_id": "R54244", "paper_id": "R54188", "text": "Plasticity of Sapium sebiferum seedling growth to light and water resources: Inter- and intraspecific comparisons Abstract Two main hypotheses have been posed to explain the role of phenotypic plasticity in the invasive success of exotic plants: (1) invasive species may be more plastic than resident species in the introduced range, and (2) invasive populations of an exotic species may be more plastic relative to native populations due to evolutionary changes after introduction. To test the first hypothesis, we conducted a greenhouse pot experiment in which seedlings of invasive Sapium sebiferum competed against native Schizachyrium scoparium grasses under different light and water conditions. To test the second hypothesis, we performed an additional greenhouse pot experiment in which seedlings from native and invasive populations of S. sebiferum were grown under environmental treatments analogous to those in the first greenhouse experiment. Compared to native S. scoparium grasses, or to S. sebiferum seedlings from native populations, growth rates of S. sebiferum seedlings from invasive populations were generally higher. When they were competing with S. scoparium grasses, the greater response of S. sebiferum to light and water conditions reflected different patterns: S. sebiferum seedlings were better able to respond with increased growth in unflooded soils, whereas S. sebiferum had more robust growth in the shaded conditions. No difference in responses to change in water conditions, but a significant difference in responses to variation in light conditions, was found between the two population types of S. sebiferum. The results of this study suggest that relative to S. scoparium, the greater plasticity of S. sebiferum to variation in light conditions evolved in the introduced range, while that to variation in water conditions reflects an innate property." 
}, { "instance_id": "R54244xR54100", "comparison_id": "R54244", "paper_id": "R54100", "text": "A comparison of univariate and multivariate methods for analyzing clinal variation in an invasive species The evolution of clinal variation has become a topic widely studied for invasive species. Most studies of this kind have found significant correlations between latitude and various plant traits, usually using univariate analytic methods. However, plants are composed of multiple, interacting traits, and it is this correlation among traits that can affect how quickly or even whether the populations of invasive plants adapt to their local climatic conditions. We used data from a common garden experiment to determine the possible formation of latitudinal clines in invasive North American populations of Lythrum salicaria L. (purple loosestrife) from the central portion of its invasive range. Analyses were conducted using the more common univariate approach (nested and one-way ANOVAs; linear regression) on individual plant traits (e.g., time to flowering, plant height, various mass measures, and growth rate) and then a multivariate approach (principal components analysis followed by redundancy analysis). Significant among-population differences (P < 0.01) were noted when using both the nested and one-way ANOVAs, and multivariate techniques. However, there were no significant relationships between individual plant traits and latitude when using linear regressions, most likely as a result of the small number of populations used in the study (n = 4). On the contrary, the multivariate analyses showed a significant effect of latitude (P < 0.001) on the invasive populations, but this explained only 4% of the variance; latitude explained 8% of the variance when both invasive and native populations were analyzed. 
Because of the integrated nature of plant phenotypes, a multivariate approach should provide a clearer and deeper understanding of population responses to changing conditions than univariate techniques." }, { "instance_id": "R54244xR54034", "comparison_id": "R54244", "paper_id": "R54034", "text": "Norway maple displays greater seasonal growth and phenotypic plasticity to light than native sugar maple Norway maple (Acer platanoides L), which is among the most invasive tree species in forests of eastern North America, is associated with reduced regeneration of the related native species, sugar maple (Acer saccharum Marsh) and other native flora. To identify traits conferring an advantage to Norway maple, we grew both species through an entire growing season under simulated light regimes mimicking a closed forest understorey vs. a canopy disturbance (gap). Dynamic shade-houses providing a succession of high-intensity direct-light events between longer periods of low, diffuse light were used to simulate the light regimes. We assessed seedling height growth three times in the season, as well as stem diameter, maximum photosynthetic capacity, biomass allocation above- and below-ground, seasonal phenology and phenotypic plasticity. Given the north European provenance of Norway maple, we also investigated the possibility that its growth in North America might be increased by delayed fall senescence. We found that Norway maple had significantly greater photosynthetic capacity in both light regimes and grew larger in stem diameter than sugar maple. The differences in below- and above-ground biomass, stem diameter, height and maximum photosynthesis were especially important in the simulated gap where Norway maple continued extension growth during the late fall. In the gap regime sugar maple had a significantly higher root : shoot ratio that could confer an advantage in the deepest shade of closed understorey and under water stress or browsing pressure. 
Norway maple is especially invasive following canopy disturbance where the opposite (low root : shoot ratio) could confer a competitive advantage. Considering the effects of global change in extending the potential growing season, we anticipate that the invasiveness of Norway maple will increase in the future." }, { "instance_id": "R54244xR54054", "comparison_id": "R54244", "paper_id": "R54054", "text": "Phenotypic Plasticity in the Invasion of Crofton Weed (Eupatorium adenophorum) in China Phenotypic plasticity and rapid evolution are two important strategies by which invasive species adapt to a wide range of environments and consequently are closely associated with plant invasion. To test their importance in invasion success of Crofton weed, we examined the phenotypic response and genetic variation of the weed by conducting a field investigation, common garden experiments, and intersimple sequence repeat (ISSR) marker analysis on 16 populations in China. Molecular markers revealed low genetic variation among and within the sampled populations. There were significant differences in leaf area (LA), specific leaf area (SLA), and seed number (SN) among field populations, and plasticity indices (PIv) for LA, SLA, and SN were 0.62, 0.46 and 0.85, respectively. Regression analyses revealed a significant quadratic effect of latitude of population origin on LA, SLA, and SN based on field data but not on traits in the common garden experiments (greenhouse and open air). Plants from different populations showed similar reaction norms across the two common gardens for functional traits. LA, SLA, aboveground biomass, plant height at harvest, first flowering day, and life span were higher in the greenhouse than in the open-air garden, whereas SN was lower. Growth conditions (greenhouse vs. open air) and the interactions between growth condition and population origin significantly affected plant traits. 
The combined evidence suggests high phenotypic plasticity but low genetically based variation for functional traits of Crofton weed in the invaded range. Therefore, we suggest that phenotypic plasticity is the primary strategy for Crofton weed as an aggressive invader that can adapt to diverse environments in China." }, { "instance_id": "R54244xR54074", "comparison_id": "R54244", "paper_id": "R54074", "text": "Phenotypic divergence of exotic fish populations is shaped by spatial proximity and habitat differences across an invaded landscape Background: Brown trout (Salmo trutta) were introduced into, and subsequently colonized, a number of disparate watersheds on the island of Newfoundland, Canada (110,638 km 2 ), starting in 1883. Questions: Do environmental features of recently invaded habitats shape population-level phenotypic variability? Are patterns of phenotypic variability suggestive of parallel adaptive divergence? And does the extent of phenotypic divergence increase as a function of distance between populations? Hypotheses: Populations that display similar phenotypes will inhabit similar environments. Patterns in morphology, coloration, and growth in an invasive stream-dwelling fish should be consistent with adaptation, and populations closer to each other should be more similar than should populations that are farther apart. Organism and study system: Sixteen brown trout populations of probable common descent, inhabiting a gradient of environments. These populations include the most ancestral (\u223c130 years old) and most recently established (\u223c20 years old). Analytical methods: We used multivariate statistical techniques to quantify morphological (e.g. body shape via geometric morphometrics and linear measurements of traits), meristic (e.g. counts of pigmentation spots), and growth traits from 1677 individuals. To account for ontogenetic and allometric effects on morphology, we conducted separate analyses on three distinct size/age classes. 
We used the BIO-ENV routine and Mantel tests to measure the correlation between phenotypic and habitat features. Results: Phenotypic similarity was significantly correlated with environmental similarity, especially in the larger size classes of fish. The extent to which these associations between phenotype and habitat result from parallel evolution, adaptive phenotypic plasticity, or historical founder effects is not known. Observed patterns of body shape and fin sizes were generally consistent with predictions of adaptive trait patterns, but other traits showed less consistent patterns with habitat features. Phenotypic differences increased as a function of straight-line distance (km) between watersheds and to a lesser extent fish dispersal distances, which suggests habitat has played a more significant role in shaping population phenotypes compared with founder effects." }, { "instance_id": "R54244xR54122", "comparison_id": "R54244", "paper_id": "R54122", "text": "Experimental microevolution: transplantation of pink salmon into the European North Human-mediated translocations of species beyond their native ranges can enhance evolutionary processes in populations introduced to novel environments. We studied such processes in several generations of pink salmon Oncorhynchus gorbuscha introduced to the European North of Russia using a set of morphological and life-history traits as well as molecular genetic markers with different selective values: protein-coding loci, mtDNA, microsatellites, and MHC. The introduction of reproductively isolated pink salmon broodlines of odd and even years yielded different results. The odd-year broodline established self-reproducing local populations in many rivers of new range, but sustainable changes in external morphology, reproduction, and life-history, as well as the impoverishment of the gene pool occurred. 
Their successful colonisation of the new range resulted in specialisation manifested in rapid directional shifts in some highly heritable phenotypic traits, accompanied by increased homozygosity at molecular markers as a consequence of genetic drift and selective processes. The returns of transplanted pink salmon of the even-year broodline decreased sharply as early as the second generation, but there was no marked reduction of genetic diversity. Our data, as well as the analysis of the history of all pink salmon transplantations beyond the species range, demonstrate the comparatively greater success of the introduced odd-year broodline and suggest different adaptive plasticity of the even- and odd-year broodlines in pink salmon, which is most likely determined by differences in their evolutionary histories. Population genetic data suggest that the even-year broodline probably diverged from the odd-year broodline relatively recently and, due to the founder effect, may have lost a part of its genetic variation with which adaptive plasticity potential is associated." }, { "instance_id": "R54244xR54056", "comparison_id": "R54244", "paper_id": "R54056", "text": "Seasonal Photoperiods Alter Developmental Time and Mass of an Invasive Mosquito, Aedes albopictus (Diptera: Culicidae), Across Its North-South Range in the United States ABSTRACT The Asian tiger mosquito, Aedes albopictus (Skuse), is perhaps the most successful invasive mosquito species in contemporary history. In the United States, Ae. albopictus has spread from its introduction point in southern Texas to as far north as New Jersey (i.e., a span of \u224814\u00b0 latitude). This species experiences seasonal constraints in activity because of cold temperatures in winter in the northern United States, but is active year-round in the south. We performed a laboratory experiment to examine how life-history traits of Ae. 
albopictus from four populations (New Jersey [39.4\u00b0 N], Virginia [38.6\u00b0 N], North Carolina [35.8\u00b0 N], Florida [27.6\u00b0 N]) responded to photoperiod conditions that mimic approaching winter in the north (short static daylength, short diminishing daylength) or relatively benign summer conditions in the south (long daylength), at low and high larval densities. Individuals from northern locations were predicted to exhibit reduced development times and to emerge smaller as adults under short daylength, but be larger and take longer to develop under long daylength. Life-history traits of southern populations were predicted to show less plasticity in response to daylength because of low probability of seasonal mortality in those areas. Males and females responded strongly to photoperiod regardless of geographic location, being generally larger but taking longer to develop under the long daylength compared with short daylengths; adults of both sexes were smaller when reared at low larval densities. Adults also differed in mass and development time among locations, although this effect was independent of density and photoperiod in females but interacted with density in males. Differences between male and female mass and development times were greater in the long photoperiod, suggesting differences between the sexes in their reaction to different photoperiods. This work suggests that Ae. albopictus exhibits sex-specific phenotypic plasticity in life-history traits matching variation in important environmental variables." }, { "instance_id": "R54244xR54130", "comparison_id": "R54244", "paper_id": "R54130", "text": "Morphological differentiation of introduced pikeperch (Sander lucioperca L., 1758) populations in Tunisian freshwaters Summary In order to evaluate the phenotypic plasticity of introduced pikeperch populations in Tunisia, the intra- and interpopulation differentiation was analysed using a biometric approach. 
Thus, nine meristic counts and 23 morphological measurements were taken from 574 specimens collected from three dams and a hill lake. The univariate (ANOVA) and multivariate analyses (PCA and DFA) showed a low meristic variability between the pikeperch samples and a segregated pikeperch group from the Sidi Salem dam, which displayed a high distance between mouth and pectoral fin and a high antedorsal distance. In addition, the Korba hill lake population seemed to have greater values of total length, eye diameter and maximum body height, and a higher distance between mouth and operculum than the other populations. However, the most accentuated segregation was found in the Lebna sample, where the individuals were characterized by high snout length, body thickness, pectoral fin length, maximum body height and distance between mouth and operculum. This study shows the existence of morphological differentiations between populations derived from a single gene pool that have been isolated in separate sites for several decades, although in relatively similar environments." }, { "instance_id": "R54244xR54174", "comparison_id": "R54244", "paper_id": "R54174", "text": "Growth, water relations, and stomatal development of Caragana korshinskii Kom. and Zygophyllum xanthoxylum (Bunge) Maxim. seedlings in response to water deficits Abstract The selection and introduction of drought tolerant species is a common method of restoring degraded grasslands in arid environments. This study investigated the effects of water stress on growth, water relations, Na+ and K+ accumulation, and stomatal development in the native plant species Zygophyllum xanthoxylum (Bunge) Maxim., and an introduced species, Caragana korshinskii Kom., under three watering regimes. 
Moderate drought significantly reduced pre\u2010dawn water potential, leaf relative water content, total biomass, total leaf area, above\u2010ground biomass, total number of leaves and specific leaf area, but it increased the root/total weight ratio (0.23 versus 0.33) in C. korshinskii. Only severe drought significantly affected water status and growth in Z. xanthoxylum. In any given watering regime, a significantly higher total biomass was observed in Z. xanthoxylum (1.14 g) compared to C. korshinskii (0.19 g). Moderate drought significantly increased Na+ accumulation in all parts of Z. xanthoxylum; e.g., moderate drought increased leaf Na+ concentration from 1.14 to 2.03 g/100 g DW, whereas there was no change in Na+ (0.11 versus 0.12) in the leaf of C. korshinskii when subjected to moderate drought. Stomatal density increased as water availability was reduced in both C. korshinskii and Z. xanthoxylum, but there was no difference in the stomatal index of either species. Stomatal length and width, and pore width were significantly reduced by moderate water stress in Z. xanthoxylum, but severe drought was required to produce a significant effect in C. korshinskii. These results indicated that C. korshinskii is more responsive to water stress and exhibits strong phenotypic plasticity, especially in above\u2010ground/below\u2010ground biomass allocation. In contrast, Z. xanthoxylum was more tolerant to water deficit, with a lower specific leaf area and a strong ability to maintain water status through osmotic adjustment and stomatal closure, thereby providing an effective strategy to cope with local extreme arid environments." 
}, { "instance_id": "R54244xR54182", "comparison_id": "R54244", "paper_id": "R54182", "text": "Belowground mutualists and the invasive ability of Acacia longifolia in coastal dunes of Portugal The ability to form symbiotic associations with soil microorganisms and the consequences for plant growth were studied for three woody legumes grown in five different soils of a Portuguese coastal dune system. Seedlings of the invasive Acacia longifolia and the natives Ulex europaeus and Cytisus grandiflorus were planted in the five soil types in which at least one of these species appear in the studied coastal dune system. We found significant differences between the three woody legumes in the number of nodules produced, final plant biomass and shoot 15N content. The number of nodules produced by A. longifolia was more than five times higher than the number of nodules produced by the native legumes. The obtained 15N values suggest that both A. longifolia and U. europaeus incorporated more biologically-fixed nitrogen than C. grandiflorus which is also the species with the smallest distribution. Finally, differences were also found between the three species in the allocation of biomass in the different studied soils. Acacia longifolia displayed a lower phenotypic plasticity than the two native legumes which resulted in a greater allocation to aboveground biomass in the soils with lower nutrient content. We conclude that the invasive success of A. longifolia in the studied coastal sand dune system is correlated to its capacity to nodulate profusely and to use the biologically-fixed nitrogen to enhance aboveground growth in soils with low N content." }, { "instance_id": "R54244xR54170", "comparison_id": "R54244", "paper_id": "R54170", "text": "Life history plasticity magnifies the ecological effects of a social wasp invasion An unresolved question in ecology concerns why the ecological effects of invasions vary in magnitude. 
Many introduced species fail to interact strongly with the recipient biota, whereas others profoundly disrupt the ecosystems they invade through predation, competition, and other mechanisms. In the context of ecological impacts, research on biological invasions seldom considers phenotypic or microevolutionary changes that occur following introduction. Here, we show how plasticity in key life history traits (colony size and longevity), together with omnivory, magnifies the predatory impacts of an invasive social wasp (Vespula pensylvanica) on a largely endemic arthropod fauna in Hawaii. Using a combination of molecular, experimental, and behavioral approaches, we demonstrate (i) that yellowjackets consume an astonishing diversity of arthropod resources and depress prey populations in invaded Hawaiian ecosystems and (ii) that their impact as predators in this region increases when they shift from small annual colonies to large perennial colonies. Such trait plasticity may influence invasion success and the degree of disruption that invaded ecosystems experience. Moreover, postintroduction phenotypic changes may help invaders to compensate for reductions in adaptive potential resulting from founder events and small population sizes. The dynamic nature of biological invasions necessitates a more quantitative understanding of how postintroduction changes in invader traits affect invasion processes." }, { "instance_id": "R54244xR54114", "comparison_id": "R54244", "paper_id": "R54114", "text": "Preadapted for invasiveness: do species traits or their plastic response to shading differ between invasive and non-invasive plant species in their native range? Aim Species capable of vigorous growth under a wide range of environmental conditions should have a higher chance of becoming invasive after introduction into new regions. 
High performance across environments can be achieved either by constitutively expressed traits that allow for high resource uptake under different environmental conditions or by adaptive plasticity of traits. Here we test whether invasive and non-invasive species differ in presumably adaptive plasticity. Location Europe (for native species); the rest of the world and North America in particular (for alien species). Methods We selected 14 congeneric pairs of European herbaceous species that have all been introduced elsewhere. One species of each pair is highly invasive elsewhere in the world, particularly so in North America, whereas the other species has not become invasive or has spread only to a limited degree. We grew native plant material of the 28 species under shaded and non-shaded conditions in a common garden experiment, and measured biomass production and morphological traits that are frequently related to shade tolerance and avoidance. Results Invasive species had higher shoot\u2013root ratios, tended to have longer leaf-blades, and produced more biomass than congeneric non-invasive species both under shaded and non-shaded conditions. Plants responded to shading by increasing shoot\u2013root ratios and specific leaf area. Surprisingly, these shade-induced responses, which are widely considered to be adaptive, did not differ between invasive and non-invasive species. Main conclusions We conclude that high biomass production across different light environments pre-adapts species to become invasive, and that this is not mediated by plasticities of the morphological traits that we measured." }, { "instance_id": "R54244xR54222", "comparison_id": "R54244", "paper_id": "R54222", "text": "Adaptation vs. phenotypic plasticity in the success of a clonal invader The relative importance of plasticity vs. adaptation for the spread of invasive species has rarely been studied. 
We examined this question in a clonal population of invasive freshwater snails (Potamopyrgus antipodarum) from the western United States by testing whether observed plasticity in life history traits conferred higher fitness across a range of temperatures. We raised isofemale lines from three populations from different climate regimes (high- and low-elevation rivers and an estuary) in a split-brood, common-garden design in three temperatures. We measured life history and growth traits and calculated population growth rate (as a measure of fitness) using an age-structured projection matrix model. We found a strong effect of temperature on all traits, but no evidence for divergence in the average level of traits among populations. Levels of genetic variation and significant reaction norm divergence for life history traits suggested some role for adaptation. Plasticity varied among traits and was lowest for size and reproductive traits compared to age-related traits and fitness. Plasticity in fitness was intermediate, suggesting that invasive populations are not general-purpose genotypes with respect to the range of temperatures studied. Thus, by considering plasticity in fitness and its component traits, we have shown that trait plasticity alone does not yield the same fitness across a relevant set of temperature conditions." }, { "instance_id": "R54244xR54058", "comparison_id": "R54244", "paper_id": "R54058", "text": "Comparisons of plastic responses to irradiance and physiological traits by invasive Eupatorium adenophorum and its native congeners To explore the traits contributing to invasiveness of Eupatorium adenophorum and to test the relationship between plasticity of these traits and invasiveness, we compared E. adenophorum with its two native congeners at four irradiances (10%, 23%, 40%, and 100%). The invader showed constantly higher performance (relative growth rate and total biomass) across irradiances than its native congeners. 
Higher light-saturated photosynthetic rate (P(max)), respiration efficiency (RE), and nitrogen (PNUE) and water (WUE, at 40% and 100% irradiances only) use efficiencies contributed directly to the higher performance of the invader. Higher nitrogen allocation, stomatal conductance, and the higher contents of leaf nitrogen and pigments contributed to the higher performance of the invader indirectly through increasing P(max), RE, PNUE and WUE. The invader had consistently higher plasticity only in carotenoid content than its native congeners in ranges of low (10-40%), high (40-100%) and total (10-100%) irradiances, contributing to invasion success in high irradiance by photoprotection. In the range of low irradiances, the invader had higher plasticity in some physiological traits (leaf nitrogen content, nitrogen contents in bioenergetics, carboxylation and in light-harvesting components, and contents of leaf chlorophylls and carotenoids) but not in performance, while in the ranges of high or total irradiances, the invader did not show higher plasticity in any variable (except Car). The results indicated that the relationship between invasiveness and plasticity of a specific trait was complex, and that a universal generalization about the relationship might be too simplistic." }, { "instance_id": "R54244xR54172", "comparison_id": "R54244", "paper_id": "R54172", "text": "Understanding the consequences of seed dispersal in a heterogeneous environment Plant distributions are in part determined by environmental heterogeneity on both large (landscape) and small (several meters) spatial scales. Plant populations can respond to environmental heterogeneity via genetic differentiation between large distinct patches, and via phenotypic plasticity in response to heterogeneity occurring at small scales relative to dispersal distance. 
As a result, the level of environmental heterogeneity experienced across generations, as determined by seed dispersal distance, may itself be under selection. Selection could act to increase or decrease seed dispersal distance, depending on patterns of heterogeneity in environmental quality with distance from a maternal home site. Serpentine soils, which impose harsh and variable abiotic stress on non-adapted plants, have been partially invaded by Erodium cicutarium in northern California, USA. Using nearby grassland sites characterized as either serpentine or non-serpentine, we collected seeds from dense patches of E. cicutarium on both soil types in spring 2004 and subsequently dispersed those seeds to one of four distances from their maternal home site (0, 0.5, 1, or 10 m). We examined distance-dependent patterns of variation in offspring lifetime fitness, conspecific density, soil availability, soil water content, and aboveground grass and forb biomass. ANOVA revealed a distinct fitness peak when seeds were dispersed 0.5 m from their maternal home site on serpentine patches. In non-serpentine patches, fitness was reduced only for seeds placed back into the maternal home site. Conspecific density was uniformly high within 1 m of a maternal home site on both soils, whereas soil water content and grass biomass were significantly heterogeneous among dispersal distances only on serpentine soils. Structural equation modeling and multigroup analysis revealed significantly stronger direct and indirect effects linking abiotic and biotic variation to offspring performance on serpentine soils than on non-serpentine soils, indicating the potential for soil-specific selection on seed dispersal distance in this invasive species." }, { "instance_id": "R54244xR54208", "comparison_id": "R54244", "paper_id": "R54208", "text": "When Oskar meets Alice: Does a lack of trade-off in r/K-strategies make Prunus serotina a successful invader of European forests? 
Abstract Alien plant invasions result from a complex interaction between the species life traits (i.e. \u2018invasiveness\u2019) and the recipient ecosystem attributes (i.e. \u2018invasibility\u2019). However, little is known about the demographical strategy of invaders and its plasticity among similar ecosystems. To assess the role of demographical attributes and their interaction with soil and light conditions on the durable integration of an exotic invasive tree species into a recipient forest, we analyzed population structure, sexual and clonal reproduction, and growth characteristics of the American black cherry (Prunus serotina Ehrh.) in a European forest. As seeds, P. serotina is able to enter closed-canopy forests and form a long-living sapling bank, according to the \u2018Oskar syndrome\u2019 (no height growth, diameter increment \u22121). Suppressed saplings typically develop a \u2018sit-and-wait\u2019 strategy so that the invader has a head start on native species when a disturbance-induced gap occurs. Once released, suppressed saplings grow rapidly (height growth > 56 cm year\u22121) to reach the canopy, fill in the gap and produce numerous seeds (6011 per tree on average). During the self-thinning process characterizing the aggrading phase, overtopped saplings die back but subsequently resprout from roots and stumps, going back to the \u2018Oskar\u2019 stage. This \u2018Alice behaviour\u2019 would enable individuals to decrease in size, delay mortality and locally self-maintain in the understories. These results suggest that P. serotina may successfully invade European forests thanks to a combination of traits which fits well the disturbance regime of the recipient ecosystems. It would behave as a shade-tolerant K-strategist in juvenile stages by giving priority to persistence, but as a light-demanding r-strategist once released, by allocating high energy to growth and reproduction. 
Initial stages of colonisation are weakly affected by soil but strongly by light conditions." }, { "instance_id": "R54244xR54032", "comparison_id": "R54244", "paper_id": "R54032", "text": "Morphological variation between non-native lake- and stream-dwelling pumpkinseed Lepomis gibbosus in the Iberian Peninsula The objective of this study was to test if morphological differences in pumpkinseed Lepomis gibbosus found in their native range (eastern North America) that are linked to feeding regime, competition with other species, hydrodynamic forces and habitat were also found among stream- and lake- or reservoir-dwelling fish in Iberian systems. 
The species has been introduced into these systems, expanding its range, and is presumably well adapted to freshwater Iberian Peninsula ecosystems. The results show a consistent pattern for size of lateral fins, with L. gibbosus that inhabit streams in the Iberian Peninsula having longer lateral fins than those inhabiting reservoirs or lakes. Differences in fin placement, body depth and caudal peduncle dimensions do not differentiate populations of L. gibbosus from lentic and lotic water bodies and, therefore, are not consistent with functional expectations. Lepomis gibbosus from lotic and lentic habitats also do not show a consistent pattern of internal morphological differentiation, probably due to the lack of lotic-lentic differences in prey type. Overall, the univariate and multivariate analyses show that most of the external and internal morphological characters that vary among populations do not differentiate lotic from lentic Iberian populations. The lack of expected differences may be a consequence of the high seasonal flow variation in Mediterranean streams, and the resultant low- or no-flow conditions during periods of summer drought." }, { "instance_id": "R54244xR54238", "comparison_id": "R54244", "paper_id": "R54238", "text": "Phenotypic plasticity and genetic diversity in Poa annua L-[Poaceae] at Crozet and Kerguelen Islands (subantarctic) Abstract The widely distributed grass, Poa annua, is one of the most common alien species in the subantarctic islands. The historical events of its introduction remain generally unknown, as well as the evolutionary consequences of its colonisation in these remote environments. Populations from the Crozet archipelago and Kerguelen Islands were compared in terms of morphology, cytogenetics and enzyme polymorphism. Seeds from natural populations were also sown in an experimental garden in France to test phenotypic plasticity. This preliminary study demonstrated the high phenotypic plasticity in P. 
annua in the French subantarctic islands. This plasticity and allotetraploidy could be important factors which reinforce the colonising capacities of P. annua. Our results revealed the low genetic diversity of the populations analysed, which could be related to the founding effect or to the fragmentation of the populations." }, { "instance_id": "R54244xR54152", "comparison_id": "R54244", "paper_id": "R54152", "text": "Hybridization and Plasticity Contribute to Divergence Among Coastal and Wetland Populations of Invasive Hybrid Japanese Knotweed s.l. (Fallopia spp.) Japanese knotweed s.l. (Fallopia spp.) is a highly invasive clonal plant, best known from roadside and riparian habitats. Its expansion into beaches on Long Island, NY, USA, represents a major habitat shift. I surveyed populations from beaches and wetlands and conducted a common garden experiment to test for variation in drought tolerance and phenotype among populations and habitats. All populations were composed mostly of first- and later-generation hybrids. I found significant variation among populations in growth, lamina size, specific leaf area (SLA), and biomass allocation, in both the field and the common garden. Lamina size, growth, and root-to-shoot responded plastically to drought treatment. Wetland populations tolerated drought as well as beach populations. Differentiation in SLA between habitats suggests that some selection for beach genotypes may have occurred. It appears that both hybridization and phenotypic plasticity are contributing to the expansion of Fallopia spp. into novel habitat." }, { "instance_id": "R54244xR54154", "comparison_id": "R54244", "paper_id": "R54154", "text": "Variation for phenotypic plasticity among populations of an invasive exotic grass Phenotypic plasticity is a common feature of plant invaders, but little is known about variation in plasticity among invading populations. 
Variation in plasticity of ecologically important traits could facilitate the evolution of greater plasticity and invasiveness. We examined plasticity among invasive populations of Microstegium vimineum (Japanese stiltgrass), a widespread and often dominant grass of forests in the eastern U.S., in two separate experiments. First, we exposed seven Microstegium populations to a drought treatment in growth chambers and monitored growth and physiological responses. Then, we established a greenhouse experiment using a subset of the populations: two that exhibited the most divergent responses and one intermediate population. In the greenhouse, we manipulated drought and shade and evaluated biomass production and specific leaf area (SLA). Microstegium exhibited plasticity for biomass production and SLA in the greenhouse experiment, and populations significantly varied in the degree of plasticity under drought and shade treatments. Two populations significantly increased biomass production under favorable conditions, unlike the third population. The most productive populations also responded to shade stress via greater SLA, possibly allowing for greater utilization of available light, while the third population did not. These results show that Microstegium can exhibit plastic responses to environmental conditions. Moreover, variation for plasticity among populations provides the potential for further evolution of plasticity. Future studies should focus on the relative importance of plasticity for the success of Microstegium and other plant invaders and evaluate post-introduction evolution of plasticity." 
}, { "instance_id": "R54244xR54036", "comparison_id": "R54244", "paper_id": "R54036", "text": "Latitudinal Patterns in Phenotypic Plasticity and Fitness-Related Traits: Assessing the Climatic Variability Hypothesis (CVH) with an Invasive Plant Species Phenotypic plasticity has been suggested as the main mechanism for species persistence under a global change scenario, and also as one of the main mechanisms that alien species use to tolerate and invade broad geographic areas. However, contrasting with this central role of phenotypic plasticity, standard models aimed to predict the effect of climatic change on species distributions do not allow for the inclusion of differences in plastic responses among populations. In this context, the climatic variability hypothesis (CVH), which states that higher thermal variability at higher latitudes should determine an increase in phenotypic plasticity with latitude, could be considered a timely and promising hypothesis. Accordingly, in this study we evaluated, for the first time in a plant species (Taraxacum officinale), the prediction of the CVH. Specifically, we measured plastic responses at different environmental temperatures (5 and 20\u00b0C), in several ecophysiological and fitness-related traits for five populations distributed along a broad latitudinal gradient. Overall, phenotypic plasticity increased with latitude for all six traits analyzed, and mean trait values increased with latitude at both experimental temperatures, the change was noticeably greater at 20\u00b0 than at 5\u00b0C. Our results suggest that the positive relationship found between phenotypic plasticity and geographic latitude could have very deep implications on future species persistence and invasion processes under a scenario of climate change." 
}, { "instance_id": "R54244xR54048", "comparison_id": "R54244", "paper_id": "R54048", "text": "The relative importance for plant invasiveness of trait means, and their plasticity and integration in a multivariate framework Functional traits, their plasticity and their integration in a phenotype have profound impacts on plant performance. We developed structural equation models (SEMs) to evaluate their relative contribution to promote invasiveness in plants along resource gradients. We compared 20 invasive-native phylogenetically and ecologically related pairs. SEMs included one morphological (root-to-shoot ratio (R/S)) and one physiological (photosynthesis nitrogen-use efficiency (PNUE)) trait, their plasticities in response to nutrient and light variation, and phenotypic integration among 31 traits. Additionally, these components were related to two fitness estimators, biomass and survival. The relative contributions of traits, plasticity and integration were similar in invasive and native species. Trait means were more important than plasticity and integration for fitness. Invasive species showed higher fitness than natives because: they had lower R/S and higher PNUE values across gradients; their higher PNUE plasticity positively influenced biomass and thus survival; and they offset more the cases where plasticity and integration had a negative direct effect on fitness. Our results suggest that invasiveness is promoted by higher values in the fitness hierarchy--trait means are more important than trait plasticity, and plasticity is similar to integration--rather than by a specific combination of the three components of the functional strategy." }, { "instance_id": "R54244xR54064", "comparison_id": "R54244", "paper_id": "R54064", "text": "Plastic Traits of an Exotic Grass Contribute to Its Abundance but Are Not Always Favourable In herbaceous ecosystems worldwide, biodiversity has been negatively impacted by changed grazing regimes and nutrient enrichment. 
Altered disturbance regimes are thought to favour invasive species that have a high phenotypic plasticity, although most studies measure plasticity under controlled conditions in the greenhouse and then assume plasticity is an advantage in the field. Here, we compare trait plasticity between three co-occurring, C4 perennial grass species, an invader Eragrostis curvula, and natives Eragrostis sororia and Aristida personata to grazing and fertilizer in a three-year field trial. We measured abundances and several leaf traits known to correlate with strategies used by plants to fix carbon and acquire resources, i.e. specific leaf area (SLA), leaf dry matter content (LDMC), leaf nutrient concentrations (N, C\u2236N, P), assimilation rates (Amax) and photosynthetic nitrogen use efficiency (PNUE). In the control treatment (grazed only), trait values for SLA, leaf C\u2236N ratios, Amax and PNUE differed significantly between the three grass species. When trait values were compared across treatments, E. curvula showed higher trait plasticity than the native grasses, and this correlated with an increase in abundance across all but the grazed/fertilized treatment. The native grasses showed little trait plasticity in response to the treatments. Aristida personata decreased significantly in the treatments where E. curvula increased, and E. sororia abundance increased possibly due to increased rainfall and not in response to treatments or invader abundance. Overall, we found that plasticity did not favour an increase in abundance of E. curvula under the grazed/fertilized treatment likely because leaf nutrient contents increased and subsequently its palatability to consumers. E. curvula also displayed a higher resource use efficiency than the native grasses. These findings suggest resource conditions and disturbance regimes can be manipulated to disadvantage the success of even plastic exotic species." 
}, { "instance_id": "R54244xR54184", "comparison_id": "R54244", "paper_id": "R54184", "text": "Allometric growth, disturbance regime, and dilemmas of controlling invasive plants: a model analysis Disturbed communities are observed to be more susceptible to invasion by exotic species, suggesting that some attributes of the invaders may interact with disturbance regime to facilitate invasion success. Alternanthera philoxeroides, endemic to South America, is an amphibious clonal weed invading worldwide. It tends to colonize disturbed habitats such as riparian zones, floodplain wetlands and agricultural areas. We developed an analytical model to explore the interactive effects of two types of physical disturbances, shoot mowing and root fragmentation, on biomass production dynamics of A. philoxeroides. The model is based on two major biological assumptions: (1) allometric growth of root (belowground) vs. shoot (aboveground) biomass and (2) exponential regrowth of shoot biomass after mowing. The model analysis revealed that the interaction among allometric growth pattern, shoot mowing frequency and root fragmentation intensity might lead to diverse plant \u2018fates\u2019. For A. philoxeroides whose root allocation decreases with growing plant size, control by shoot mowing was faced with two dilemmas. (1) Shoot regrowth can be effectively suppressed by frequent mowing. However, frequent shoot mowing led to higher biomass allocation to thick storage roots, which enhanced the potential for faster future plant growth. (2) In the context of periodic shoot mowing, individual shoot biomass converged to a stable equilibrium value which was independent of the root fragmentation intensity. However, root fragmentation resulted in higher equilibrium population shoot biomass and higher frequency of shoot mowing required for effective control. In conclusion, the interaction between allometric growth and physical disturbances may partially account for the successful invasion of A. 
philoxeroides; improper mechanical control practices could function as disturbances and result in exacerbated invasion." }, { "instance_id": "R54244xR54148", "comparison_id": "R54244", "paper_id": "R54148", "text": "Developmental plasticity of shell morphology of quagga mussels from shallow and deep-water habitats of the Great Lakes SUMMARY The invasive zebra mussel (Dreissena polymorpha) has quickly colonized shallow-water habitats in the North American Great Lakes since the 1980s but the quagga mussel (Dreissena bugensis) is becoming dominant in both shallow and deep-water habitats. While quagga mussel shell morphology differs between shallow and deep habitats, functional causes and consequences of such difference are unknown. We examined whether quagga mussel shell morphology could be induced by three environmental variables through developmental plasticity. We predicted that shallow-water conditions (high temperature, food quantity, water motion) would yield a morphotype typical of wild quagga mussels from shallow habitats, while deep-water conditions (low temperature, food quantity, water motion) would yield a morphotype present in deep habitats. We tested this prediction by examining shell morphology and growth rate of quagga mussels collected from shallow and deep habitats and reared under common-garden treatments that manipulated the three variables. Shell morphology was quantified using the polar moment of inertia. Of the variables tested, temperature had the greatest effect on shell morphology. Higher temperature (\u223c18\u201320\u00b0C) yielded a morphotype typical of wild shallow mussels regardless of the levels of food quantity or water motion. In contrast, lower temperature (\u223c6\u20138\u00b0C) yielded a morphotype approaching that of wild deep mussels. 
If shell morphology has functional consequences in particular habitats, a plastic response might confer quagga mussels with a greater ability than zebra mussels to colonize a wider range of habitats within the Great Lakes." }, { "instance_id": "R54244xR54226", "comparison_id": "R54244", "paper_id": "R54226", "text": "Crab-mediated phenotypic changes in Spartina densiflora Brong Abstract Although plant phenotypic plasticity has been historically studied as an important adaptive strategy to overcome herbivory and environmental heterogeneity, there are several aspects of its ecological importance that remain controversial. The burrowing crab Chasmagnathus granulata eats Spartina densiflora , and also causes several geomorphologic changes that indirectly affect Spartina growth. Here we evaluate if this crab affects the sexual reproductive effort of S. densiflora by mediating changes in plant phenotypic plasticity (i.e., shape of leaves and spikes) while affecting aboveground production, and if these effects interact with disturbance intensity. We conducted local and regional surveys and two-year field experiments manipulating the density of crabs in a mature Spartina marsh where we clipped at ground level different 1\u00d71 m marsh areas to create and compare crab's effect on young (plants growing after the clipping) and mature (unclipped) Spartina stands. Our results suggest that crabs mediate the phenotypic plasticity of sexual reproductive structures of Spartina . Crabs induced an increase in seed production (up to 721%) and seed viability, potentially favoring Spartina dispersal and colonization of distant sites. This effect appears to be maximal when combined with the experimental clipping disturbance. Crabs also exerted a strong effect on clipped plants by increasing the number of standing dead stems and decreasing the photosynthetic area and leaf production. These effects disappear in about two years if no other disturbance occurs. 
An a posteriori regional field survey agreed with our experimental results corroborating the prediction that plants in old undisturbed marshes have lower sexual reproductive effort than plants in highly disturbed marshes populated by burrowing-herbivore crabs. All these phenotypic changes have important taxonomic and macro-ecological implications that should not be ignored in discussions of applied ecology and environmental management." }, { "instance_id": "R54244xR54178", "comparison_id": "R54244", "paper_id": "R54178", "text": "Effects of simulated herbivory and resource availability on the invasive plant, Alternanthera philoxeroides in different habitats In biological control programs, the insect natural enemy\u2019s ability to suppress the plant invader may be affected by abiotic factors, such as resource availability, that can influence plant growth and reproduction. Understanding plant tolerance to herbivory under different environmental conditions will help to improve biocontrol efficacy. The invasive alligator weed (Alternanthera philoxeroides) has been successfully controlled by natural enemies in many aquatic habitats but not in terrestrial environments worldwide. This study examined the effects of different levels of simulated leaf herbivory on the growth of alligator weed at two levels of fertilization and three levels of soil moisture (aquatic, semi-aquatic, and terrestrial habitats). Increasing levels of simulated (manual) defoliation generally caused decreases in total biomass in all habitats. However, the plant appeared to respond differently to high levels of herbivory in the three habitats. Terrestrial plants showed the highest below\u2013above ground mass ratio (R/S), indicating the plant is more tolerant to herbivory in terrestrial habitats than in aquatic habitats. 
The unfertilized treatment exhibited greater tolerance than the fertilized treatment in the terrestrial habitat at the first stage of this experiment (day 15), but fertilizer appears not to have influenced tolerance at the middle and last stages of the experiment. No such difference was found in semi-aquatic and aquatic habitats. These findings suggest that plant tolerance is affected by habitats and soil nutrients and this relationship could influence the biological control outcome. Plant compensatory response to herbivory under different environmental conditions should, therefore, be carefully considered when planning to use biological control in management programs against invasive plants." }, { "instance_id": "R54244xR54046", "comparison_id": "R54244", "paper_id": "R54046", "text": "Phenotypic Plasticity and Population Differentiation in an Ongoing Species Invasion The ability to succeed in diverse conditions is a key factor allowing introduced species to successfully invade and spread across new areas. Two non-exclusive factors have been suggested to promote this ability: adaptive phenotypic plasticity of individuals, and the evolution of locally adapted populations in the new range. We investigated these individual and population-level factors in Polygonum cespitosum, an Asian annual that has recently become invasive in northeastern North America. We characterized individual fitness, life-history, and functional plasticity in response to two contrasting glasshouse habitat treatments (full sun/dry soil and understory shade/moist soil) in 165 genotypes sampled from nine geographically separate populations representing the range of light and soil moisture conditions the species inhabits in this region. Polygonum cespitosum genotypes from these introduced-range populations expressed broadly similar plasticity patterns. 
In response to full sun, dry conditions, genotypes from all populations increased photosynthetic rate, water use efficiency, and allocation to root tissues, dramatically increasing reproductive fitness compared to phenotypes expressed in simulated understory shade. Although there were subtle among-population differences in mean trait values as well as in the slope of plastic responses, these population differences did not reflect local adaptation to environmental conditions measured at the population sites of origin. Instead, certain populations expressed higher fitness in both glasshouse habitat treatments. We also compared the introduced-range populations to a single population from the native Asian range, and found that the native population had delayed phenology, limited functional plasticity, and lower fitness in both experimental environments compared with the introduced-range populations. Our results indicate that the future spread of P. cespitosum in its introduced range will likely be fueled by populations consisting of individuals able to express high fitness across diverse light and moisture conditions, rather than by the evolution of locally specialized populations." }, { "instance_id": "R54244xR54110", "comparison_id": "R54244", "paper_id": "R54110", "text": "Relatedness predicts phenotypic plasticity in plants better than weediness Background: Weedy non-native species have long been predicted to be more phenotypically plastic than native species. Question: Are weedy non-native species more plastic than natives? Organisms: Fourteen perennial plant species: Acer platanoides, Acer saccharum, Bromus inermis, Bromus latiglumis, Celastrus orbiculatus, Celastrus scandens, Elymus repens, Elymus trachycaulus, Plantago major, Plantago rugelii, Rosa multiflora, Rosa palustris, Solanum dulcamara, and Solanum carolinense. Field site: Mesic old-field in Dryden, NY (42\u00b027\u203249\u2033N, 76\u00b026\u203240\u2033W). 
Methods: We grew seven pairs of native and non-native plant congeners in the field and tested their responses to reduced competition and the addition of fertilizer. We measured the plasticity of six traits related to growth and leaf palatability (total length, leaf dry mass, maximum relative growth rate, leaf toughness, trichome density, and specific leaf area). Conclusions: Weedy non-native species did not differ consistently from natives in their phenotypic plasticity. Instead, relatedness was a better predictor of plasticity." }, { "instance_id": "R54244xR54072", "comparison_id": "R54244", "paper_id": "R54072", "text": "Phenotypic Plasticity Influences the Size, Shape and Dynamics of the Geographic Distribution of an Invasive Plant Phenotypic plasticity has long been suspected to allow invasive species to expand their geographic range across large-scale environmental gradients. We tested this possibility in Australia using a continental scale survey of the invasive tree Parkinsonia aculeata (Fabaceae) in twenty-three sites distributed across four climate regions and three habitat types. Using tree-level responses, we detected a trade-off between seed mass and seed number across the moisture gradient. Individual trees plastically and reversibly produced many small seeds at dry sites or years, and few big seeds at wet sites and years. Bigger seeds were positively correlated with higher seed and seedling survival rates. The trade-off, the relation between seed mass, seed and seedling survival, and other fitness components of the plant life-cycle were integrated within a matrix population model. The model confirms that the plastic response resulted in average fitness benefits across the life-cycle. 
Plasticity resulted in average fitness being positively maintained at the wet and dry range margins where extinction risks would otherwise have been high (\u201cJack-of-all-Trades\u201d strategy JT), and fitness being maximized at the species range centre where extinction risks were already low (\u201cMaster-of-Some\u201d strategy MS). The resulting hybrid \u201cJack-and-Master\u201d strategy (JM) broadened the geographic range and amplified average fitness in the range centre. Our study provides the first empirical evidence for a JM species. It also confirms mechanistically the importance of phenotypic plasticity in determining the size, the shape and the dynamic of a species distribution. The JM allows rapid and reversible phenotypic responses to new or changing moisture conditions at different scales, providing the species with definite advantages over genetic adaptation when invading diverse and variable environments. Furthermore, natural selection pressure acting on phenotypic plasticity is predicted to result in maintenance of the JT and strengthening of the MS, further enhancing the species invasiveness in its range centre." }, { "instance_id": "R54244xR54192", "comparison_id": "R54244", "paper_id": "R54192", "text": "Differences in plasticity between invasive and native plants from a low resource environment 1. Phenotypic plasticity is often cited as an important mechanism of plant invasion. However, few studies have evaluated the plasticity of a diverse set of traits among invasive and native species, particularly in low resource habitats, and none have examined the functional significance of these traits. 2. I explored trait plasticity in response to variation in light and nutrient availability in five phylogenetically related pairs of native and invasive species occurring in a nutrient-poor habitat. 
In addition to the magnitude of trait plasticity, I assessed the correlation between 16 leaf- and plant-level traits and plant performance, as measured by total plant biomass. Because plasticity for morphological and physiological traits is thought to be limited in low resource environments (where native species usually display traits associated with resource conservation), I predicted that native and invasive species would display similar, low levels of trait plasticity. 3. Across treatments, invasive and native species within pairs differed with respect to many of the traits measured; however, invasive species as a group did not show consistent patterns in the direction of trait values. Relative to native species, invasive species displayed high plasticity in traits pertaining to biomass partitioning and leaf-level nitrogen and light use, but only in response to nutrient availability. Invasive and native species showed similar levels of resource-use efficiency and there was no relationship between species plasticity and resource-use efficiency across species. 4. Traits associated with carbon fixation were strongly correlated with performance in invasive species while only a single resource conservation trait was strongly correlated with performance in multiple native species. Several highly plastic traits were not strongly correlated with performance which underscores the difficulty in assessing the functional significance of resource conservation traits over short timescales and calls into question the relevance of simple, quantitative assessments of trait plasticity. 5. Synthesis. My data support the idea that invasive species display high trait plasticity. The degree of plasticity observed here for species occurring in low resource systems corresponds with values observed in high resource systems, which contradicts the general paradigm that trait plasticity is constrained in low resource systems. 
Several traits were positively correlated with plant performance suggesting that trait plasticity will influence plant fitness." }, { "instance_id": "R54244xR54078", "comparison_id": "R54244", "paper_id": "R54078", "text": "Intra-population variability of life-history traits and growth during range expansion of the invasive round goby, Neogobius melanostomus Fish can undergo changes in their life-history traits that correspond with local demographic conditions. Under range expansion, a population of non-native fish might then be expected to exhibit a suite of life-history traits that differ between the edge and the centre of the population\u2019s geographic range. To test this hypothesis, life-history traits of an expanding population of round goby, Neogobius melanostomus (Pallas), in early and newly established sites in the Trent River (Ontario, Canada) were compared in 2007 and 2008. Round goby in the area of first introduction exhibited a significant decrease in age at maturity, increased length at age 1 and they increased in GSI from 2007 to 2008. While individuals at the edges of the range exhibited traits that promote population growth under low intraspecific density, yearly variability in life-history traits suggests that additional processes such as declining density and fluctuating food availability are influencing the reproductive strategy and growth of round goby during an invasion." }, { "instance_id": "R54244xR54204", "comparison_id": "R54244", "paper_id": "R54204", "text": "Phenotypic variation in invasive and biocontrol populations of the harlequin ladybird, Harmonia axyridis Despite numerous releases for biological control purposes during more than 20 years in Europe, Harmonia axyridis failed to become established until the beginning of the 21st century. Its status as invasive alien species is now widely recognised. 
Theory suggests that invasive populations should evolve toward greater phenotypic plasticity because they encounter differing environments during the invasion process. On the contrary, populations used for biological control have been maintained under artificial rearing conditions for many generations; they are hence expected to become specialised on a narrow range of environments and show lower phenotypic plasticity. Here we compared phenotypic traits and the extent of adaptive phenotypic plasticity in two invasive populations and two populations commercialized for biological control by (i) measuring six phenotypic traits related to fitness (eggs hatching rate, larval survival rate, development time, sex ratio, fecundity over 6 weeks and survival time of starving adults) at three temperatures (18, 24 and 30\u00b0C), (ii) recording the survival rate and quiescence aggregation behaviour when exposed to low temperatures (5, 10 and 15\u00b0C), and (iii) studying the cannibalistic behaviour of populations in the absence of food. Invasive and biocontrol populations displayed significantly different responses to temperature variation for a composite fitness index computed from the traits measured at 18, 24 and 30\u00b0C, but not for any of those traits considered independently. The plasticity measured on the same fitness index was higher in the two invasive populations, but this difference was not statistically significant. On the other hand, invasive populations displayed significantly higher survival and higher phenotypic plasticity when entering into quiescence at low temperatures. In addition, one invasive population displayed a singular cannibalistic behaviour. Our results hence only partly support the expectation of increased adaptive phenotypic plasticity of European invasive populations of H. axyridis, and stress the importance of the choice of the environmental parameters to be manipulated for assessing phenotypic plasticity variation among populations." 
}, { "instance_id": "R54244xR54190", "comparison_id": "R54244", "paper_id": "R54190", "text": "Phenotypic variability in Holcus lanatus L. in southern Chile: a strategy that enhances plant survival and pasture stability Holcus lanatus L. can colonise a wide range of sites within the naturalised grassland of the Humid Dominion of Chile. The objectives were to determine plant growth mechanisms and strategies that have allowed H. lanatus to colonise contrasting pastures and to determine the existence of ecotypes of H. lanatus in southern Chile. Plants of H. lanatus were collected from four geographic zones of southern Chile and established in a randomised complete block design with four replicates. Five newly emerging tillers were marked per plant and evaluated at the vegetative, pre-ear emergence, complete emerged inflorescence, end of flowering period, and mature seed stages. At each evaluation, one marked tiller was harvested per plant. The variables measured included lamina length and width, tiller height, length of the inflorescence, total number of leaves, and leaf, stem, and inflorescence mass. At each phenological stage, groups of accessions were statistically formed using cluster analysis. The grouping of accessions (cluster analysis) into statistically different groups (ANOVA and canonical variate analysis) indicated the existence of different ecotypes. The phenotypic variation within each group of the accessions suggested that each group has its own phenotypic plasticity. It is concluded that the successful colonisation by H. lanatus has resulted from diversity within the species." }, { "instance_id": "R54244xR54138", "comparison_id": "R54244", "paper_id": "R54138", "text": "Trait means and reaction norms: the consequences of climate change/invasion interactions at the organism level How the impacts of climate change on biological invasions will play out at the mechanistic level is not well understood. 
Two major hypotheses have been proposed: invasive species have a suite of traits that enhance their performance relative to indigenous ones over a reasonably wide set of circumstances; invasive species have greater phenotypic plasticity than their indigenous counterparts and will be better able to retain performance under altered conditions. Thus, two possibly independent, but complementary mechanistic perspectives can be adopted: based on trait means and on reaction norms. Here, to demonstrate how this approach might be applied to understand interactions between climate change and invasion, we investigate variation in the egg development times and their sensitivity to temperature amongst indigenous and introduced springtail species in a cool temperate ecosystem (Marion Island, 46\u00b054\u2032S 37\u00b054\u2032E) that is undergoing significant climate change. Generalized linear model analyses of the linear part of the development rate curves revealed significantly higher mean trait values in the invasive species compared to indigenous species, but no significant interactions were found when comparing the thermal reaction norms. In addition, the invasive species had a higher hatching success than the indigenous species at high temperatures. This work demonstrates the value of explicitly examining variation in trait means and reaction norms among indigenous and invasive species to understand the mechanistic basis of variable responses to climate change among these groups." }, { "instance_id": "R54244xR54038", "comparison_id": "R54244", "paper_id": "R54038", "text": "Gas exchange and growth responses to nutrient enrichment in invasive Glyceria maxima and native New Zealand Carex species We compared photosynthetic gas exchange, the photosynthesis\u2013leaf nitrogen (N) relationship, and growth response to nutrient enrichment in the invasive wetland grass Glyceria maxima (Hartman) Holmburg with two native New Zealand Carex sedges (C. virgata Boott and C. 
secta Boott), to explore the ecophysiological traits contributing to invasive behaviour. The photosynthesis\u2013nitrogen relationship was uniform across all three species, and the maximum light-saturated rate of photosynthesis expressed on a leaf area basis (Amaxa) did not differ significantly between species. However, specific leaf area (SLA) in G. maxima (17 \u00b1 6 m2 kg\u22121) was 1.3 times that of the sedges, leading to 1.4 times higher maximum rates of photosynthesis (350\u2013400 nmol CO2 g\u22121 dry mass s\u22121) expressed on a leaf mass basis (Amaxm) when N supply was unlimited, compared to the sedges (<300 nmol CO2 g\u22121 dry mass s\u22121). Analysis of Covariance (ANCOVA) revealed significant positive relationships between leaf N content and chlorophyll a:b ratios, stomatal conductance (gs), dark respiration rate (Rd), and the photosynthetic light saturation point (Ik) in G. maxima, but not in the sedges. ANCOVA also identified that, compared to G. maxima, the sedges had 2.4 times higher intrinsic water use efficiency (A/gs: range 20\u201370 cf. 8\u201330 \u03bcmol CO2 mol\u22121 H2O) and 1.6 times higher nitrogen use efficiency (NUE: 25\u201330 cf. 20\u201323 g dry mass g\u22121 N) under excess N supply. Relative growth rates (RGR) were not significantly higher in G. maxima than the sedges, but correlations between leaf N, gas exchange parameters (Amaxa, Amaxm, Rd and gs) and RGR were all highly significant in G. maxima, whereas they were weak or absent in the sedges. Allocation of biomass (root:shoot ratio, leaf mass ratio, root mass ratio), plant N and P content, and allocation of N to leaves all showed significantly greater phenotypic plasticity and stronger correlation to final biomass in G. maxima than in the sedges. 
We therefore conclude that photosynthesis and growth rates are not intrinsically higher in this invader than in the native species with which it competes, but that its success under nutrient enrichment is a consequence of greater physiological responsiveness and growth plasticity, and stronger integration between gas exchange and growth, coupled with indifference to resource wastage (i.e. low WUE and NUE) at high nutrient supply. The poorer performance of G. maxima than the sedges under low nutrient supply supports the importance of nutrient management, especially N, as a strategy to minimise the invasive behaviour of fast-growing herbaceous species in wetlands." }, { "instance_id": "R54244xR54104", "comparison_id": "R54244", "paper_id": "R54104", "text": "Hard traits of three Bromus species in their source area explain their current invasive success Abstract We address two highly essential questions using three Eurasian Bromus species with different invasion success in North America as model organisms: (1) why some species become invasive and others do not, and (2) which traits can confer pre-adaptation for species to become invasive elsewhere. While the morphology and phenology of the chosen bromes (Bromus tectorum, Bromus sterilis and Bromus squarrosus) are highly similar, we measured complex traits often associated with invasive success: phenotypic plasticity, competitive ability and generalist-specialist character. We performed common-garden experiments, community- and landscape-level surveys in areas of co-occurrence in Central Europe (Hungary) that could have served as donor region for American introductions. According to our results, the three bromes are unequally equipped with traits that could enhance invasiveness. B. 
tectorum possesses several traits that may be especially relevant: it has uniquely high phenotypic plasticity, as demonstrated in a nitrogen addition experiment, and it is a habitat generalist, thriving in a wide range of habitats, from semi-natural to degraded ones, and having the widest co-occurrence based niche-breadth. The strength of B. sterilis lies in its ability to use resources unexploited by other species. It can become dominant, but only in one non-natural habitat type, namely the understorey of the highly allelopathic stands of the invasive Robinia pseudoacacia. B. squarrosus is a habitat specialist with low competitive ability, always occurring with low coverage. This ranking of the species\u2019 abilities can explain the current spreading success of the three bromes on the North American continent, and highlight the high potential of prehistoric invaders (European archaeophytes) to become invasive elsewhere." }, { "instance_id": "R54244xR54090", "comparison_id": "R54244", "paper_id": "R54090", "text": "Multiple common garden experiments suggest lack of local adaptation in an invasive ornamental plant Aims Adaptive evolution along geographic gradients of climatic conditions is suggested to facilitate the spread of invasive plant species, leading to clinal variation among populations in the introduced range. We investigated whether adaptation to climate is also involved in the invasive spread of an ornamental shrub, Buddleja davidii, across western and central Europe. Methods We combined a common garden experiment, replicated in three climatically different central European regions, with reciprocal transplantation to quantify genetic differentiation in growth and reproductive traits of 20 invasive B. davidii populations. Additionally, we compared compensatory regrowth among populations after clipping of stems to simulate mechanical damage." 
}, { "instance_id": "R54244xR54166", "comparison_id": "R54244", "paper_id": "R54166", "text": "The Structural Adaptation of Aerial Parts of Invasive Alternanthera philoxeroides to Water Regime Alternanthera philoxeroides has successfully invaded diverse habitats with considerably various water availability, threatening biological diversity in many parts of the world. Because its genetic variation is very low, phenotypic plasticity is believed to be the primary strategy for adapting to the diverse habitats. In the present paper, we investigated the plastic changes of anatomical traits of the aerial parts of A. philoxeroides from flooding to wet then to drought habitat; the results are as follows: A. philoxeroides could change anatomical structures sensitively to adapt to water regime. As a whole, effects of water regime on structures in stem were greater than those in leaf. Except for principal vein diameter and stoma density on leaf surfaces, all other structural traits were significantly affected by water regime. Among which, cuticular wax layer, collenchyma cell wall, phloem fiber cell wall, and hair density on both leaf surfaces thickened significantly with decrease of water availability, whereas, pith cavity and vessel lumen in stem lessened significantly; wet habitat is vital for the spread of A. philoxeroides from flooding to drought habitat and vice versa, because in this habitat, it had the greatest structural variations; when switching from flooding to wet then to drought habitat, the variations of cuticular wax layer, collenchyma cell wall, phloem fiber cell wall, pith cavity area ratio, diameter of vessel lumen, and hair density on both leaf surfaces, played the most important role. These responsive variables contribute most to the adaptation of A. philoxeroides to diverse habitats with considerably various water availability." 
}, { "instance_id": "R54244xR54228", "comparison_id": "R54244", "paper_id": "R54228", "text": "Predator-induced phenotypic plasticity in the exotic cladoceran Daphnia lumholtzi Summary 1. The exotic cladoceran Daphnia lumholtzi has recently invaded freshwater systems throughout the United States. Daphnia lumholtzi possesses extravagant head spines that are longer than those found on any other North American Daphnia. These spines are effective at reducing predation from many of the predators that are native to newly invaded habitats; however, they are plastic both in nature and in laboratory cultures. The purpose of this experiment was to better understand what environmental cues induce and maintain these effective predator-deterrent spines. We conducted life-table experiments on individual D. lumholtzi grown in water conditioned with an invertebrate insect predator, Chaoborus punctipennis, and water conditioned with a vertebrate fish predator, Lepomis macrochirus. 2. Daphnia lumholtzi exhibited morphological plasticity in response to kairomones released by both predators. However, direct exposure to predator kairomones during postembryonic development did not induce long spines in D. lumholtzi. In contrast, neonates produced from individuals exposed to Lepomis kairomones had significantly longer head and tail spines than neonates produced from control and Chaoborus individuals. These results suggest that there may be a maternal, or pre-embryonic, effect of kairomone exposure on spine development in D. lumholtzi. 3. Independent of these morphological shifts, D. lumholtzi also exhibited plasticity in life history characteristics in response to predator kairomones. For example, D. lumholtzi exhibited delayed reproduction in response to Chaoborus kairomones, and significantly more individuals produced resting eggs, or ephippia, in the presence of Lepomis kairomones." 
}, { "instance_id": "R54244xR54168", "comparison_id": "R54244", "paper_id": "R54168", "text": "The geography of crushing: Variation in claw performance of the invasive crab Carcinus maenas The major claws of predatory, durophagous decapods are specialized structures that are routinely used to crush the armor of their prey. This task requires the generation of extremely strong forces, among the strongest forces measured for any animal in any activity. Laboratory studies have shown that claw strength in crabs can respond plastically to, and thereby potentially match, the strength of their prey's defensive armor. These results suggest that claw strength may be variable among natural populations of crabs. However, very few studies have investigated spatial variation in claw strength and related morphometric traits in crabs. Using three geographically separate populations of the invasive green crab in the Gulf of Maine, we demonstrate, for the first time, geographic variation in directly measured claw crushing forces in a brachyuran. Despite variation in mean claw strength however, the scaling of claw crushing force with claw size was consistent among populations. We found that measurements of crushing force were obtained with low error and were highly repeatable for individual crabs. We also show that claw mass, independent of a linear measure of claw size, and carapace color, which is an indicator of time spent in the intermoult, were important predictors of claw crushing force." }, { "instance_id": "R54244xR54116", "comparison_id": "R54244", "paper_id": "R54116", "text": "Differential growth patterns and fitness may explain contrasted performances of the invasive Prunus serotina in its exotic range This research investigates why the invasive American black cherry tends to dominate the forest canopy on well-drained, nutrient-poor soils, but usually hardly establishes on both waterlogged and calcareous soils in its exotic range. 
Prunus serotina was sampled from four soil types and two light conditions, to measure (1) radial growth; (2) height growth compared to the main native competitor, Fagus sylvatica; (3) leaf traits; (4) seed production; and (5) rate of fungal attack. We found that P. serotina invested a significant amount of energy in height growth and seed production on well-drained, nutrient-poor soils. These characteristics enabled it to rapidly capture canopy gaps and subsequently exert a mass effect on neighbouring stands. On moist soils, we found irregular growth patterns and high rates of fungal attack, while on calcareous soils, leaf traits suggested a low nitrogen assimilation rate, limiting the production of N-containing compounds. We conclude that P. serotina fails on waterlogged and calcareous soils because it is unable to allocate sufficient energy to fruiting and/or height growth. Conversely, it succeeds on well-drained, nutrient-poor soils because of high fitness which increases its invasiveness." }, { "instance_id": "R54244xR54082", "comparison_id": "R54244", "paper_id": "R54082", "text": "Geographically distinct Ceratophyllum demersum populations differ in growth, photosynthetic responses and phenotypic plasticity to nitrogen availability Two geographically distinct populations of the submerged aquatic macrophyte Ceratophyllum demersum L. were compared after acclimation to five different nitrogen concentrations (0.005, 0.02, 0.05, 0.1 and 0.2 mM N) in a common garden setup. The two populations were an apparent invasive population from New Zealand (NZ) and a noninvasive population from Denmark (DK). The populations were compared with a focus on both morphological and physiological traits. 
The NZ population had higher relative growth rates (RGRs) and photosynthesis rates (Pmax) (range: RGR, 0.06\u20130.08 per day; Pmax, 200\u2013395 \u00b5mol O2 g\u22121 dry mass (DM) h\u22121) compared with the Danish population (range: RGR, 0.02\u20130.05 per day; Pmax, 88\u2013169 \u00b5mol O2 g\u22121 DM h\u22121). The larger, faster-growing NZ population also showed higher plasticity than the DK population in response to nitrogen in traits important for growth. Hence, the observed differences in growth behaviour between the two populations are a result of genetic differences and differences in their level of plasticity. Here, we show that two populations of the same species from similar climates but different geographical areas can differ in several ecophysiological traits after growth in a common garden setup." }, { "instance_id": "R54244xR54044", "comparison_id": "R54244", "paper_id": "R54044", "text": "Highly plastic response in morphological and physiological traits to light, soil-N and moisture in the model invasive plant, Phalaris arundinacea Abstract The ability of an introduced species to thrive is often influenced by its capacity to cope with disturbance and resource fluctuation, and one way to cope is by being phenotypically plastic. The biomass and resource allocation of the invasive plant species, Phalaris arundinacea (reed canarygrass), to contrasting levels of light, soil-N and moisture was evaluated. We predicted that P. arundinacea would show a highly plastic response in important growth and physiological traits to treatment conditions (presence of three-way interactions and large phenotypic plasticity index (PI) values) because of its ability to persist in variable environments. 
MANOVA tests showed significant three-way interactions for each of the three groups of plant traits (aboveground (AGB) and belowground biomass (BGB), shoot C/N and root C/N ratios, leaf chlorophyll and soluble protein), demonstrating the complex correlated response to the treatment effects by pairs of response variables. There were significant three-way interactions for seven of nine plant traits (univariate analyses), including AGB and BGB, AGB per tiller, shoot/root ratio, shoot C/N ratio, root C/N ratio and leaf chlorophyll content. Total plasticity values, which represented the greatest possible plasticity for each plant trait, were larger than any of the PI values for the main effects. Understanding which traits show plasticity, as well as the magnitude of response expressed in common invasive species is an important area of research because aspects of their aggressive behavior may be explained by how they grow and allocate resources under variable environmental conditions, which in turn can be important when seeking to make predictions about the probability and degree of invasion success with species-specific invasion models." }, { "instance_id": "R54244xR54160", "comparison_id": "R54244", "paper_id": "R54160", "text": "Genetic Assimilation and the Postcolonization Erosion of Phenotypic Plasticity in Island Tiger Snakes In 1942, C.H. Waddington [1] suggested that colonizing populations could initially succeed by flexibly altering their characteristics (phenotypic plasticity; [2-4]) in fitness-inducing traits, but selective forces would rapidly eliminate that plasticity to result in a canalized trait [1, 5, 6]. Waddington termed this process \"genetic assimilation\"[1, 7]. Despite the potential importance of genetic assimilation to evolutionary changes in founder populations [8-10], empirical evidence on this topic is rare, possibly because it happens on short timescales and is therefore difficult to detect except under unusual circumstances [11, 12]. 
We exploited a mosaic of snake populations isolated (or introduced) on islands from less than 30 years ago to more than 9000 years ago and exposed to selection for increased head size (i.e., ability to ingest large prey [13-16]). Here we show that a larger head size is achieved by plasticity in \"young\" populations and by genetic canalization in \"older\" populations. Island tiger snakes (Notechis scutatus) thus show clear empirical evidence of genetic assimilation, with the elaboration of an adaptive trait shifting from phenotypically plastic expression through to canalization within a few thousand years." }, { "instance_id": "R54244xR54206", "comparison_id": "R54244", "paper_id": "R54206", "text": "Phenotypic plasticity and contemporary evolution in introduced populations: Evidence from translocated populations of white sands pupfish (Cyprinodon tularosa) Contemporary evolution has been shown in a few studies to be an important component of colonization ability, but seldom have researchers considered whether phenotypic plasticity facilitates directional evolution from the invasion event. In the current study, we evaluated body shape divergence of the New Mexico State-threatened White Sands pupfish (Cyprinodon tularosa) that were introduced to brackish, lacustrine habitats at two different times in the recent past (approximately 30 years and 1 year previously) from the same source population (saline river environment). Pupfish body shape is correlated with environmental salinity: fish from saline habitats are characterized by slender body shapes, whereas fish from fresher, yet brackish springs are deep-bodied. In this study, lacustrine populations consisted of an approximately 30-year old population and several 1-year old populations, all introduced from the same source. The body shape divergence of the 30-year old population was significant and greater than any of the divergences of the 1-year old populations (which were for the most part not significant). 
Nonetheless, all body shape changes exhibited body deepening in less saline environments. We conclude that phenotypic plasticity potentially facilitates directional evolution of body deepening for introduced pupfish populations." }, { "instance_id": "R54244xR54016", "comparison_id": "R54244", "paper_id": "R54016", "text": "Phenotypic shifts in white perch life history strategy across stages of invasion Successful invasive species must pass through several invasion stages, and life history or trophic strategies allowing for successful transitions may change as the species advances from one stage to the next. To evaluate the role of life history shifts in the invasion success of white perch (Morone americana), age and length at maturity, gonadosomatic index, and growth were compared across three invasive reservoir populations ranging from 1, 11, and 21 years since initial detection. Individuals in the newly introduced population exhibited increased growth and had higher mean reproductive investment than the two established populations across both study years. Individuals in the newest population also matured earlier than those in the older populations in 2009, but maturity schedules did not differ in 2010, possibly due to changes in environmental conditions causing life history shifts in both older populations. Overall, it appears that life history plasticity confers an important advantage to invasive species, allowing them to adapt for successful transitions throughout the invasion process, as well as to local conditions within the invaded system once they become fully integrated into established communities." 
}, { "instance_id": "R54244xR54042", "comparison_id": "R54244", "paper_id": "R54042", "text": "Growth and morphology in relation to temperature and light availability during the establishment of three invasive aquatic plant species Abstract Invasive freshwater plants are currently spreading rapidly and this is likely to continue with further changes in global climate resulting in changes in physical and chemical conditions in freshwaters. We studied the effect of summer temperature (20 \u00b0C, 25 \u00b0C and 30 \u00b0C) and light availability (25% and 50% of incident light availability) on shoot establishment in terms of growth rate, photosynthesis, and morphology of three invasive aquatic plants (Elodea canadensis, Egeria densa and Lagarosiphon major) in order to assess their interspecific competition. Light availability had an overall stronger effect on growth rate and plant morphology than temperature in the three species. Growth rate increased three-fold from low to high light, and low light reduced belowground biomass, increased stem length, and reduced branching and lateral spread. Photosynthetic rates were the only parameter for which temperature had an equal or stronger effect than light availability. The results show that E. canadensis has the most competitive establishment of the three species in both high and low temperature and light availability. E. densa is most competitive in warm water compared to colder water, whereas the opposite pattern is present for L. major which is most competitive in colder water. In conclusion, we suggest that E. densa will dominate warmer, shallower waters, whereas L. major will dominate in colder, clear-water lakes, while E. canadensis continues its established role as a pioneer species that is quickly replaced by the two taller species after their arrival." 
}, { "instance_id": "R54244xR54020", "comparison_id": "R54244", "paper_id": "R54020", "text": "Establishment of an Invasive Plant Species (Conium maculatum) in Contaminated Roadside Soil in Cook County, Illinois Abstract Interactions between environmental variables in anthropogenically disturbed environments and physiological traits of invasive species may help explain reasons for invasive species' establishment in new areas. Here we analyze how soil contamination along roadsides may influence the establishment of Conium maculatum (poison hemlock) in Cook County, IL, USA. We combine analyses that: (1) characterize the soil and measure concentrations of heavy metals and polycyclic aromatic hydrocarbons (PAHs) where Conium is growing; (2) assess the genetic diversity and structure of individuals among nine known populations; and (3) test for tolerance to heavy metals and evidence for local soil growth advantage with greenhouse establishment experiments. We found elevated levels of metals and PAHs in the soil where Conium was growing. Specifically, arsenic (As), cadmium (Cd), and lead (Pb) were found at elevated levels relative to U.S. EPA ecological contamination thresholds. In a greenhouse study we found that Conium is more tolerant of soils containing heavy metals (As, Cd, Pb) than two native species. For the genetic analysis a total of 217 individuals (approximately 20\u201330 per population) were scored with 5 ISSR primers, yielding 114 variable loci. We found high levels of genetic diversity in all populations but little genetic structure or differentiation among populations. Although Conium shows a general tolerance to contamination, we found few significant associations between genetic diversity metrics and a suite of measured environmental and spatial parameters. 
Soil contamination is not driving the peculiar spatial distribution of Conium in Cook County, but these findings indicate that Conium is likely establishing in the Chicago region partially due to its ability to tolerate high levels of metal contamination." }, { "instance_id": "R54244xR54196", "comparison_id": "R54244", "paper_id": "R54196", "text": "Increased fitness and plasticity of an invasive species in its introduced range: a study using Senecio pterophorus 1. When a plant species is introduced into a new range, it may differentiate genetically from the original populations in the home range. This genetic differentiation may influence the extent to which the invasion of the new range is successful. We tested this hypothesis by examining Senecio pterophorus, a South African shrub that was introduced into NE Spain about 40 years ago. We predicted that in the introduced range invasive populations would perform better and show greater plasticity than native populations. 2. Individuals of S. pterophorus from four Spanish (invasive) and four South African (native) populations were grown in Catalonia, Spain, in a common garden in which disturbance and water availability were manipulated. Fitness traits and several ecophysiological parameters were measured. 3. The invasive populations of S. pterophorus survived better throughout the summer drought in a disturbed (unvegetated) environment than native South African populations. This success may be attributable to the lower specific leaf area (SLA) and better water content regulation of the invasive populations in this treatment. 4. Invasive populations displayed up to three times higher relative growth rate than native populations under conditions of disturbance and non-limiting water availability. 5. The reproductive performance of the invasive populations was higher in all treatments except under the most stressful conditions (i.e. in non-watered undisturbed plots), where no plant from either population flowered. 6. 
The results for leaf parameters and chlorophyll fluorescence measurements suggested that the greater fitness of the invasive populations could be attributed to more favourable ecophysiological responses. 7. Synthesis. Spanish invasive populations of S. pterophorus performed better in the presence of high levels of disturbance, and displayed higher plasticity of fitness traits in response to resource availability than native South African populations. Our results suggest that genetic differentiation from source populations associated with founding may play a role in invasion success." }, { "instance_id": "R54244xR54216", "comparison_id": "R54244", "paper_id": "R54216", "text": "Phenotypic plasticity rather than locally adapted ecotypes allows the invasive alligator weed to colonize a wide range of habitats Both phenotypic plasticity and locally adapted ecotypes may contribute to the success of invasive species in a wide range of habitats. Here, we conducted common garden experiments and molecular marker analysis to test the two alternative hypotheses in invasive alligator weed (Alternanthera philoxeroides), which colonizes both aquatic and terrestrial habitats. Ninety individuals from three pairs of aquatic versus terrestrial populations across southern China were analyzed, using inter simple sequence repeat (ISSR) marker, to examine population differentiation in neutral loci. Two common gardens simulating aquatic and terrestrial habitats were set up to examine population differentiation in quantitative traits. We found no evidence of population differentiation in both neutral loci and quantitative traits. Most individuals shared the same ISSR genotype. Meanwhile, plants from different habitats showed similar reaction norms across the two common gardens. 
In particular, plants allocated much more biomass to the belowground roots in the terrestrial environment, where alligator weed may lose part or all of the aboveground shoots because of periodical or accidental disturbances, than those in the aquatic environment. The combined evidence from molecular marker analysis and common garden experiments supports the plasticity hypothesis rather than the ecotype hypothesis in explaining the adaptation of alligator weed in a wide range of habitats." }, { "instance_id": "R54244xR54210", "comparison_id": "R54244", "paper_id": "R54210", "text": "Contrasting plant physiological adaptation to climate in the native and introduced range of Hypericum perforatum Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf \u03b413C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. 
In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf \u03b413C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf \u03b413C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude." }, { "instance_id": "R54244xR54218", "comparison_id": "R54244", "paper_id": "R54218", "text": "Rapid evolution in response to introduced predators I: rates and patterns of morphological and life-history trait divergence Abstract Background Introduced species can have profound effects on native species, communities, and ecosystems, and have caused extinctions or declines in native species globally. We examined the evolutionary response of native zooplankton populations to the introduction of non-native salmonids in alpine lakes in the Sierra Nevada of California, USA. We compared morphological and life-history traits in populations of Daphnia with a known history of introduced salmonids and populations that have no history of salmonid introductions. Results Our results show that Daphnia populations co-existing with fish have undergone rapid adaptive reductions in body size and in the timing of reproduction. 
Size-related traits decreased by up to 13 percent in response to introduced fish. Rates of evolutionary change are as high as 4,238 darwins (0.036 haldanes). Conclusion Species introductions into aquatic habitats can dramatically alter the selective environment of native species leading to a rapid evolutionary response. Knowledge of the rates and limits of adaptation is an important component of understanding the long-term effects of alterations in the species composition of communities. We discuss the evolutionary consequences of species introductions and compare the rate of evolution observed in the Sierra Nevada Daphnia to published estimates of evolutionary change in ecological timescales." }, { "instance_id": "R54244xR54086", "comparison_id": "R54244", "paper_id": "R54086", "text": "Could phenotypic plasticity limit an invasive species? Incomplete reversibility of mid-winter deacclimation in emerald ash borer The emerald ash borer (Agrilus planipennis, Coleoptera: Buprestidae) is a wood-boring invasive pest devastating North American ash (Fraxinus spp.). A. planipennis overwinters primarily as a freeze-avoiding prepupa within the outer xylem or inner bark of the host tree. The range of this species is expanding outward from its presumed introduction point in southwestern Michigan. We hypothesized that loss of cold tolerance in response to mid-winter warm spells could limit survival and northern distribution of A. planipennis. We determined whether winter-acclimatised A. planipennis prepupae reduced their cold tolerance in response to mid-winter warm periods, and whether this plasticity was reversible with subsequent cold exposure. Prepupae subjected to mid-winter warm spells of 10 and 15\u00b0C had increased supercooling points (SCPs) and thus reduced cold tolerance. This increase in SCP was accompanied by a rapid loss of haemolymph cryoprotectants and the loss of cold tolerance was not reversed when the prepupae were returned to \u221210\u00b0C. 
Exposure to temperatures fluctuating from 0 to 4\u00b0C did not reduce cold hardiness. Only extreme warming events for several days followed by extreme cold snaps may have lethal effects on overwintering A. planipennis populations. Thus, distribution in North America is likely to be limited by the presence of host trees rather than climatic factors, but we conclude that range extensions of invasive species could be halted if local climatic extremes induce unidirectional plastic responses." }, { "instance_id": "R54244xR54028", "comparison_id": "R54244", "paper_id": "R54028", "text": "Jack-of-all-trades: phenotypic plasticity facilitates the invasion of an alien slug species Invasive alien species might benefit from phenotypic plasticity by being able to (i) maintain fitness in stressful environments (\u2018robust\u2019), (ii) increase fitness in favourable environments (\u2018opportunistic\u2019), or (iii) combine both abilities (\u2018robust and opportunistic\u2019). Here, we applied this framework, for the first time, to an animal, the invasive slug, Arion lusitanicus , and tested (i) whether it has a more adaptive phenotypic plasticity compared with a congeneric native slug, Arion fuscus , and (ii) whether it is robust, opportunistic or both. During one year, we exposed specimens of both species to a range of temperatures along an altitudinal gradient (700\u20132400 m a.s.l.) and to high and low food levels, and we compared the responsiveness of two fitness traits: survival and egg production . During summer, the invasive species had a more adaptive phenotypic plasticity, and at high temperatures and low food levels, it survived better and produced more eggs than A. fuscus , representing the robust phenotype. During winter, A. lusitanicus displayed a less adaptive phenotype than A. fuscus . We show that the framework developed for plants is also very useful for a better mechanistic understanding of animal invasions. 
Warmer summers and milder winters might lead to an expansion of this invasive species to higher altitudes and enhance its spread in the lowlands, supporting the concern that global climate change will increase biological invasions." }, { "instance_id": "R54244xR54230", "comparison_id": "R54244", "paper_id": "R54230", "text": "Leaf-level phenotypic variability and plasticity of invasive Rhododendron ponticum and non-invasive Ilex aquifolium co-occurring at two contrasting European sites To understand the role of leaf-level plasticity and variability in species invasiveness, foliar characteristics were studied in relation to seasonal average integrated quantum flux density (Qint) in the understorey evergreen species Rhododendron ponticum and Ilex aquifolium at two sites. A native relict population of R. ponticum was sampled in southern Spain (Mediterranean climate), while an invasive alien population was investigated in Belgium (temperate maritime climate). Ilex aquifolium was native at both sites. Both species exhibited a significant plastic response to Qint in leaf dry mass per unit area, thickness, photosynthetic potentials, and chlorophyll contents at the two sites. However, R. ponticum exhibited a higher photosynthetic nitrogen use efficiency and larger investment of nitrogen in chlorophyll than I. aquifolium. Since leaf nitrogen (N) contents per unit dry mass were lower in R. ponticum, this species formed a larger foliar area with equal photosynthetic potential and light-harvesting efficiency compared with I. aquifolium. The foliage of R. ponticum was mechanically more resistant with larger density in the Belgian site than in the Spanish site. Mean leaf-level phenotypic plasticity was larger in the Belgian population of R. ponticum than in the Spanish population of this species and the two populations of I. aquifolium. 
We suggest that large fractional investments of foliar N in photosynthetic function coupled with a relatively large mean, leaf-level phenotypic plasticity may provide the primary explanation for the invasive nature and superior performance of R. ponticum at the Belgian site. With alleviation of water limitations from Mediterranean to temperate maritime climates, the invasiveness of R. ponticum may also be enhanced by the increased foliage mechanical resistance observed in the alien populations." }, { "instance_id": "R54244xR54066", "comparison_id": "R54244", "paper_id": "R54066", "text": "Germination patterns and implications for invasiveness in three Taraxacum (Asteraceae) species Luo J & Cardina J (2012). Germination patterns and implications for invasiveness in three Taraxacum (Asteraceae) species. Weed Research 52, 112\u2013121. Summary The ability to germinate across different environments has been considered an important trait of invasive plant species that allows for establishment success in new habitats. Using two alien congener species of Asteraceae \u2013Taraxacum officinale (invasive) and Taraxacum laevigatum laevigatum (non-invasive) \u2013 we tested the hypothesis that invasive species germinate better than non-invasives under various conditions. The germination patterns of Taraxacum brevicorniculatum, a contaminant found in seeds of the crop Taraxacum kok-saghyz, were also investigated to evaluate its invasive potential. In four experiments, we germinated seeds along gradients of alternating temperature, constant temperature (with or without light), water potential and following accelerated ageing. Neither higher nor lower germination per se explained invasion success for the Taraxacum species tested here. At alternating temperature, the invasive T. officinale had higher germination than or similar to the non-invasive T. laevigatum. Contrary to predictions, T. laevigatum exhibited higher germination than T. 
officinale in environments of darkness, low water potential or after the seeds were exposed to an ageing process. These results suggested a complicated role of germination in the success of T. officinale. Taraxacum brevicorniculatum showed the highest germination among the three species in all environments. The invasive potential of this species is thus unclear and will probably depend on its performance at other life stages along environmental gradients." }, { "instance_id": "R54244xR54202", "comparison_id": "R54244", "paper_id": "R54202", "text": "Photosynthesis and water-use efficiency: A comparison between invasive (exotic) and non-invasive (native) species Invasive species have been hypothesized to out-compete natives though either a Jack-of-all-trades strategy, where they are able to utilize resources effectively in unfavourable environments, a master-of-some, where resource utilization is greater than its competitors in favourable environments, or a combination of the two (Jack-and-master). We examined the invasive strategy of Berberis darwinii in New Zealand compared with four co-occurring native species by examining germination, seedling survival, photosynthetic characteristics and water-use efficiency of adult plants, in sun and shade environments. Berberis darwinii seeds germinated more in shady sites than the other natives, but survival was low. In contrast, while germination of B. darwinii was the same as the native species in sunny sites, seedling survival after 18 months was nearly twice that of the all native species. The maximum photosynthetic rate of B. darwinii was nearly double that of all native species in the sun, but was similar among all species in the shade. Other photosynthetic traits (quantum yield and stomatal conductance) did not generally differ between B. darwinii and the native species, regardless of light environment. 
Berberis darwinii had more positive values of \u03b413C than the four native species, suggesting that it gains more carbon per unit water transpired than the competing native species. These results suggest that the invasion success of B. darwinii may be partially explained by combination of a Jack-of-all-trades scenario of widespread germination with a master-of-some scenario through its ability to photosynthesize at higher rates in the sun and, hence, gain a rapid height and biomass advantage over native species in favourable environments." }, { "instance_id": "R54244xR54088", "comparison_id": "R54244", "paper_id": "R54088", "text": "Phenology constrains opportunistic growth response in Bromus tectorum L. Seasonal resource availability may act as a constraint on plant phenology and thereby influence the range of growth responses observed among populations of annual species, especially those occupying a wide range of environments. We compared a mesic and a xeric population of the non-native, annual grass, Bromus tectorum, to examine phenology in response to interspecific competition and water availability. Using a target-neighborhood approach, we assessed how phenological patterns of the two populations affected morphological and growth responses to enhanced resource availability represented by late-season soil moisture. The xeric population exhibited a highly constrained phenology and was unable to extend the growing season despite available soil resources. Because of the low phenotypic variation, allocation to reproduction was similar across resource conditions. In contrast, the mesic population flowered later and showed a more opportunistic phenology in response to late-season water availability. The mesic population was not able to maintain consistent reproductive allocation at low resource levels. The responses of the two populations to late-season water availability were not affected by the density of neighboring plants. 
We suggest that post-introduction selection pressure on B. tectorum in the xeric habitat has resulted in a more fixed phenology which limits opportunistic response to unpredictable, particularly late-season resource availability. Opportunistic and fixed responses represent contrasting strategies for optimizing fitness in temporally varying environments and, while both play important roles for ensuring reproductive success, these results suggest that local adaptation to temporal resource variation may reflect a balance between flexible and inflexible phenology." }, { "instance_id": "R54244xR54186", "comparison_id": "R54244", "paper_id": "R54186", "text": "Establishment of parallel altitudinal clines in traits of native and introduced forbs Due to altered ecological and evolutionary contexts, we might expect the responses of alien plants to environmental gradients, as revealed through patterns of trait variation, to differ from those of the same species in their native range. In particular, the spread of alien plant species along such gradients might be limited by their ability to establish clinal patterns of trait variation. We investigated trends in growth and reproductive traits in natural populations of eight invasive Asteraceae forbs along altitudinal gradients in their native and introduced ranges (Valais, Switzerland, and Wallowa Mountains, Oregon, USA). Plants showed similar responses to altitude in both ranges, being generally smaller and having fewer inflorescences but larger seeds at higher altitudes. However, these trends were modified by region-specific effects that were independent of species status (native or introduced), suggesting that any differential performance of alien species in the introduced range cannot be interpreted without a fully reciprocal approach to test the basis of these differences. Furthermore, we found differences in patterns of resource allocation to capitula among species in the native and the introduced areas. 
These suggest that the mechanisms underlying trait variation, for example, increasing seed size with altitude, might differ between ranges. The rapid establishment of clinal patterns of trait variation in the new range indicates that the need to respond to altitudinal gradients, possibly by local adaptation, has not limited the ability of these species to invade mountain regions. Studies are now needed to test the underlying mechanisms of altitudinal clines in traits of alien species." }, { "instance_id": "R54244xR54224", "comparison_id": "R54244", "paper_id": "R54224", "text": "Phenotypic plasticity of an invasive acacia versus two native Mediterranean species The phenotypic plasticity and the competitive ability of the invasive Acacia longifolia v. the indigenous Mediterranean dune species Halimium halimifolium and Pinus pinea were evaluated. In particular, we explored the hypothesis that phenotypic plasticity in response to biotic and abiotic factors explains the observed differences in competitiveness between invasive and native species. The seedlings\u2019 ability to exploit different resource availabilities was examined in a two factorial experimental design of light and nutrient treatments by analysing 20 physiological and morphological traits. Competitiveness was tested using an additive experimental design in combination with 15N-labelling experiments. Light and nutrient availability had only minor effects on most physiological traits and differences between species were not significant. Plasticity in response to changes in resource availability occurred in morphological and allocation traits, revealing A. longifolia to be a species of intermediate responsiveness. The major competitive advantage of A. longifolia was its constitutively high shoot elongation rate at most resource treatments and its effective nutrient acquisition. Further, A. longifolia was found to be highly tolerant against competition from native species. 
In contrast to common expectations, the competition experiment indicated that A. longifolia expressed a constant allocation pattern and a phenotypic plasticity similar to that of the native species." }, { "instance_id": "R54244xR54092", "comparison_id": "R54244", "paper_id": "R54092", "text": "Invasive Microstegium populations consistently outperform native range populations across diverse environments Plant species introduced into novel ranges may become invasive due to evolutionary change, phenotypic plasticity, or other biotic or abiotic mechanisms. Evolution of introduced populations could be the result of founder effects, drift, hybridization, or adaptation to local conditions, which could enhance the invasiveness of introduced species. However, understanding whether the success of invading populations is due to genetic differences between native and introduced populations may be obscured by origin x environment interactions. That is, studies conducted under a limited set of environmental conditions may show inconsistent results if native or introduced populations are differentially adapted to specific conditions. We tested for genetic differences between native and introduced populations, and for origin x environment interactions, between native (China) and introduced (U.S.) populations of the invasive annual grass Microstegium vimineum (stiltgrass) across 22 common gardens spanning a wide range of habitats and environmental conditions. On average, introduced populations produced 46% greater biomass and had 7.4% greater survival, and outperformed native range populations in every common garden. However, we found no evidence that introduced Microstegium exhibited greater phenotypic plasticity than native populations. Biomass of Microstegium was positively correlated with light and resident community richness and biomass across the common gardens. 
However, these relationships were equivalent for native and introduced populations, suggesting that the greater mean performance of introduced populations is not due to unequal responses to specific environmental parameters. Our data on performance of invasive and native populations suggest that post-introduction evolutionary changes may have enhanced the invasive potential of this species. Further, the ability of Microstegium to survive and grow across the wide variety of environmental conditions demonstrates that few habitats are immune to invasion." }, { "instance_id": "R54244xR54232", "comparison_id": "R54244", "paper_id": "R54232", "text": "Leaf ontogenetic dependence of light acclimation in invasive and native subtropical trees of different successional status In the Bonin Islands of the western Pacific where the light environment is characterized by high fluctuations due to frequent typhoon disturbance, we hypothesized that the invasive success of Bischofia javanica Blume (invasive tree, mid-successional) may be attributable to a high acclimation capacity under fluctuating light availability. The physiological and morphological responses of B. javanica to both simulated canopy opening and closure were compared against three native species of different successional status: Trema orientalis Blume (pioneer), Schima mertensiana (Sieb. et Zucc.) Koidz (mid-successional) and Elaeocarpus photiniaefolius Hook.et Arn (late-successional). The results revealed significant species-specific differences in the timing of physiological maturity and phenotypic plasticity in leaves developed under constant high and low light levels. For example, the photosynthetic capacity of T. orientalis reached a maximum in leaves that had just fully expanded when grown under constant high light (50% of full sun) whereas that of E. photiniaefolius leaves continued to increase until 50 d after full expansion. For leaves that had just reached full expansion, T. 
orientalis, having high photosynthetic plasticity between high and low light, exhibited low acclimation capacity under the changing light (from high to low or low to high light). In comparison with native species, B. javanica showed a higher degree of physiological and morphological acclimation following transfer to a new light condition in leaves of all age classes (i.e. before and after reaching full expansion). The high acclimation ability of B. javanica in response to changes in light availability may be a part of its pre-adaptations for invasiveness in the fluctuating environment of the Bonin Islands." }, { "instance_id": "R54867xR54640", "comparison_id": "R54867", "paper_id": "R54640", "text": "Soil disturbance, vegetation cover and the establishment of the exotic shrub Pyracantha coccinea in southern France We evaluate the mechanisms that determine the establishment of the non-indigenous shrub Pyracantha coccinea (Rosaceae) in the Montpellier region of southern France. P. coccinea establishes in abandoned agricultural fields in this region; yet, despite its high propagule pressure, it has not become a widespread invasive. We hypothesized that the disturbance conditions prevailing in abandoned agricultural fields right after abandonment may enhance the emergence, survival and growth of P. coccinea, but that shortly after abandonment colonizing vegetation prevents further establishment of this species. We conducted a field experiment to evaluate this hypothesis, studying the response of seedling emergence and growth of P. coccinea to soil and vegetation disturbance. Our results show that both lack of vegetation cover and soil disturbance promote the emergence of seedlings of P. coccinea. Thus, the disturbance conditions prevailing in abandoned agricultural fields seem crucial to allow establishment of this species. However, other factors such as lack of summer dormancy and seed predation might explain why this species has not become a widespread invasive." 
}, { "instance_id": "R54867xR54755", "comparison_id": "R54867", "paper_id": "R54755", "text": "Big and aerial invaders: dominance of exotic spiders in burned New Zealand tussock grasslands As post-disturbance community response depends on the characteristics of the ecosystem and the species composition, so does the invasion of exotic species rely on their suitability to the new environment. Here, we test two hypotheses: exotic spider species dominate the community after burning; and two traits are prevalent for their colonisation ability: ballooning and body size, the latter being correlated with their dispersal ability. We established spring burn, summer burn and unburned experimental plots in a New Zealand tussock grassland area and collected annual samples 3 and 4 years before and after the burning, respectively. Exotic spider abundance increased in the two burn treatments, driven by an increase in Linyphiidae. Indicator analysis showed that exotic and native species characterised burned and unburned plots, respectively. Generalised linear mixed-effects models indicated that ballooning had a positive effect on the post-burning establishment (density) of spiders in summer burn plots but not in spring plots. Body size had a positive effect on colonisation and establishment. The ability to balloon may partly explain the dominance of exotic Linyphiidae species. Larger spiders are better at moving into and colonising burned sites probably because of their ability to travel longer distances over land. Native species showed a low resilience to burning, and although confirmation requires longer-term data, our findings suggest that frequent fires could cause long lasting damage to the native spider fauna of tussock grasslands, and we propose limiting the use of fire to essential situations." 
}, { "instance_id": "R54867xR54830", "comparison_id": "R54867", "paper_id": "R54830", "text": "Exotic plant species in a C4-dominated grassland: invasibility, disturbance, and community structure Abstract We used data from a 15-year experiment in a C4-dominated grassland to address the effects of community structure (i.e., plant species richness, dominance) and disturbance on invasibility, as measured by abundance and richness of exotic species. Our specific objectives were to assess the temporal and spatial patterns of exotic plant species in a native grassland in Kansas (USA) and to determine the factors that control exotic species abundance and richness (i.e., invasibility). Exotic species (90% C3 plants) comprised approximately 10% of the flora, and their turnover was relatively high (30%) over the 15-year period. We found that disturbances significantly affected the abundance and richness of exotic species. In particular, long-term annually burned watersheds had lower cover of exotic species than unburned watersheds, and fire reduced exotic species richness by 80\u201390%. Exotic and native species richness were positively correlated across sites subjected to different fire (r = 0.72) and grazing (r = 0.67) treatments, and the number of exotic species was lowest on sites with the highest productivity of C4 grasses (i.e., high dominance). These results provide strong evidence for the role of community structure, as affected by disturbance, in determining invasibility of this grassland. Moreover, a significant positive relationship between exotic and native species richness was observed within a disturbance regime (annually burned sites, r = 0.51; unburned sites, r = 0.59). Thus, invasibility of this C4-dominated grassland can also be directly related to community structure independent of disturbance." 
}, { "instance_id": "R54867xR54795", "comparison_id": "R54867", "paper_id": "R54795", "text": "Functional and performance comparisons of invasive Hieracium lepidulum and co-occurring species in New Zealand One of the key environmental factors affecting plant species abundance, including that of invasive exotics, is nutrient resource availability. Plant functional response to nutrient availability, and what this tells us about plant interactions with associated species, may therefore give us clues about underlying processes related to plant abundance and invasion. Patterns of abundance of Hieracium lepidulum, a European herbaceous invader of subalpine New Zealand, appear to be related to soil fertility/nutrient availability, however, abundance may be influenced by other factors including disturbance. In this study we compare H. lepidulum and field co-occurring species for growth performance across artificial nutrient concentration gradients, for relative competitiveness and for response to disturbance, to construct a functional profile of the species. Hieracium lepidulum was found to be significantly different in its functional response to nutrient concentration gradients. Hieracium lepidulum had high relative growth rate, high yield and root plasticity in response to nutrient concentration dilution, relatively low absolute yield, low competitive yield and a positive response to clipping disturbance relative to other species. Based on overall functional response to nutrient concentration gradients, compared with other species found at the same field sites, we hypothesize that H. lepidulum invasion is not related to competitive domination. Relatively low tolerance of nutrient dilution leads us to predict that H. lepidulum is likely to be restricted from invading low fertility sites, including sites within alpine vegetation or where intact high biomass plant communities are found. 
Positive response to clipping disturbance and relatively high nutrient requirement, despite poor competitive performance, leads us to predict that H. lepidulum may respond to selective grazing disturbance of associated vegetation. These results are discussed in relation to published observations of H. lepidulum in New Zealand and possible tests for the hypotheses raised here." }, { "instance_id": "R54867xR54574", "comparison_id": "R54867", "paper_id": "R54574", "text": "Anthropogenic Disturbance Can Determine the Magnitude of Opportunistic Species Responses on Marine Urban Infrastructures Background Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Methodology/Principal Findings Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. 
Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. Conclusions/Significance We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic, and invasive species could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems." }, { "instance_id": "R54867xR54729", "comparison_id": "R54867", "paper_id": "R54729", "text": "The short-term responses of small mammals to wildfire in semiarid mallee shrubland, Australia Context. Wildfire is a major driver of the structure and function of mallee eucalypt- and spinifex-dominated landscapes. Understanding how fire influences the distribution of biota in these fire-prone environments is essential for effective ecological and conservation-based management. Aims. We aimed to (1) determine the effects of an extensive wildfire (118 000 ha) on a small mammal community in the mallee shrublands of semiarid Australia and (2) assess the hypothesis that the fire-response patterns of small mammals can be predicted by their life-history characteristics. Methods. Small-mammal surveys were undertaken concurrently at 26 sites: once before the fire and on four occasions following the fire (including 14 sites that remained unburnt). We documented changes in small-mammal occurrence before and after the fire, and compared burnt and unburnt sites. In addition, key components of vegetation structure were assessed at each site. Key results. 
Wildfire had a strong influence on vegetation structure and on the occurrence of small mammals. The mallee ningaui, Ningaui yvonneae, a dasyurid marsupial, showed a marked decline in the immediate post-fire environment, corresponding with a reduction in hummock-grass cover in recently burnt vegetation. Species richness of native small mammals was positively associated with unburnt vegetation, although some species showed no clear response to wildfire. Conclusions. Our results are consistent with the contention that mammal responses to fire are associated with their known life-history traits. The species most strongly affected by wildfire, N. yvonneae, has the most specific habitat requirements and restricted life history of the small mammals in the study area. The only species positively associated with recently burnt vegetation, the introduced house mouse, Mus domesticus, has a flexible life history and non-specialised resource requirements. Implications. Maintaining sources for recolonisation after large-scale wildfires will be vital to the conservation of native small mammals in mallee ecosystems." }, { "instance_id": "R54867xR54849", "comparison_id": "R54867", "paper_id": "R54849", "text": "Alien Flora in Grasslands Adjacent to Road and Trail Corridors in Glacier National Park, Montana (U.S.A.) : Alien plant species have rapidly invaded and successfully displaced native species in many grasslands of western North America. Thus, the status of alien species in the nature reserve grasslands of this region warrants special attention. This study describes alien flora in nine fescue grassland study sites adjacent to three types of transportation corridors\u2014primary roads, secondary roads, and backcountry trails\u2014in Glacier National Park, Montana (U.S.A.). Parallel transects, placed at varying distances from the adjacent road or trail, were used to determine alien species richness and frequency at individual study sites. 
Fifteen alien species were recorded, two Eurasian grasses, Phleum pratense and Poa pratensis, being particularly common in most of the study sites. In sites adjacent to primary and secondary roads, alien species richness declined out to the most distant transect, suggesting that alien species are successfully invading grasslands from the roadside area. In study sites adjacent to backcountry trails, absence of a comparable decline and unexpectedly high levels of alien species richness 100 m from the trailside suggest that alien species have been introduced in off-trail areas. The results of this study imply that in spite of low levels of livestock grazing and other anthropogenic disturbances, fescue grasslands in nature reserves of this region are vulnerable to invasion by alien flora. Given the prominent role that roadsides play in the establishment and dispersal of alien flora, road construction should be viewed from a biological, rather than an engineering, perspective. Nature reserve managers should establish effective roadside vegetation management programs that include monitoring, quickly treating keystone alien species upon their initial occurrence in nature reserves, and creating buffer zones on roadsides leading to nature reserves. Resumen: Introduced plant species have rapidly invaded and successfully displaced native species in grasslands of western North America. The status of introduced species in this region's natural grassland reserves therefore demands special attention. This study describes the introduced flora in nine natural fescue grasslands adjacent to three types of transportation corridors\u2014primary roads, secondary roads, and backcountry trails\u2014in Glacier National Park, Montana (U.S.A.). 
To determine the richness and frequency of introduced species, parallel transects were laid out at varying distances from the adjacent road or trail in the study areas. Fifteen introduced species were recorded. Two Eurasian grasses, Phleum pratense and Poa pratensis, proved particularly abundant in most of the study areas. In sites adjacent to primary and secondary roads, introduced species richness declined toward the most distant transects, suggesting that introduced species are successfully invading the grasslands from roadside areas. In study areas adjacent to backcountry trails no comparable decline was found; unexpectedly high levels of introduced species richness 100 m from the trails suggest that exotic species have been introduced in off-trail areas. The results of this study imply that despite low levels of grazing and other anthropogenic disturbance, the fescue grasslands in this region's nature reserves are vulnerable to invasion by introduced flora. Given the prominent role that roads play in the establishment and dispersal of introduced flora, road construction should be viewed from a biological rather than a merely engineering perspective. Nature reserve managers should establish effective roadside vegetation management programs. These programs should include monitoring, rapid treatment of keystone introduced species as soon as they are detected in nature reserves, and the creation of buffer zones on roads leading to nature reserves." 
}, { "instance_id": "R54867xR54817", "comparison_id": "R54867", "paper_id": "R54817", "text": "Recent Invasion of the Symbiont-Bearing Foraminifera Pararotalia into the Eastern Mediterranean Facilitated by the Ongoing Warming Trend The eastern Mediterranean is a hotspot of biological invasions. Numerous species of Indo-pacific origin have colonized the Mediterranean in recent times, including tropical symbiont-bearing foraminifera. Among these is the species Pararotalia calcariformata. Unlike other invasive foraminifera, this species was discovered only two decades ago and is restricted to the eastern Mediterranean coast. Combining ecological, genetic and physiological observations, we attempt to explain the recent invasion of this species in the Mediterranean Sea. Using morphological and genetic data, we confirm the species attribution to P. calcariformata McCulloch 1977 and identify its symbionts as a consortium of diatom species dominated by Minutocellus polymorphus. We document photosynthetic activity of its endosymbionts using Pulse Amplitude Modulated Fluorometry and test the effects of elevated temperatures on growth rates of asexual offspring. The culturing of asexual offspring for 120 days shows a 30-day period of rapid growth followed by a period of slower growth. A subsequent 48-day temperature sensitivity experiment indicates a similar developmental pathway and high growth rate at 28\u00b0C, whereas an almost complete inhibition of growth was observed at 20\u00b0C and 35\u00b0C. This indicates that the offspring of this species may have lower tolerance to cold temperatures than what would be expected for species native to the Mediterranean. We expand this hypothesis by applying a Species Distribution Model (SDM) based on modern occurrences in the Mediterranean using three environmental variables: irradiance, turbidity and yearly minimum temperature. 
The model reproduces the observed restricted distribution and indicates that the range of the species will drastically expand westwards under future global change scenarios. We conclude that P. calcariformata established a population in the Levant because of the recent warming in the region. In line with observations from other groups of organisms, our results indicate that continued warming of the eastern Mediterranean will facilitate the invasion of more tropical marine taxa into the Mediterranean, disturbing local biodiversity and ecosystem structure." }, { "instance_id": "R54867xR54702", "comparison_id": "R54867", "paper_id": "R54702", "text": "Effects of mean intensity and temporal variability of disturbance on the invasion of Caulerpa racemosa var. cylindracea (Caulerpales) in rock pools Disturbance is a key factor influencing the invasibility of habitats and assemblages. This relationship was extensively studied in terrestrial systems, but it was scarcely tested in the marine environment. We investigated experimentally the interactive effects of changes in the intensity and temporal variability of mechanical disturbance by boulders on invasion dynamics of the green alga Caulerpa racemosa var. cylindracea in littoral rock pools. We tested the hypothesis that the success of invasion of C. racemosa would be (1) greater under large than under low intensity of disturbance, (2) greater under large than under low temporal variability of disturbance and that (3) interactive effects could also occur, with variability of disturbance magnifying the effects of intensity. C. racemosa was virtually absent in pools maintained under high intensity of disturbance, independently of temporal variability. High intensity of disturbance was also associated with lower density and length of fronds and thinner diameter of the stolons of the alga. The total number of native taxa and the abundance of encrusting coralline algae increased under high intensity of disturbance. 
In contrast, turf-forming algae were positively affected by temporal variability of disturbance, while canopy-forming algae did not respond to experimental treatments. Our results suggest a direct negative effect of the most severe experimental conditions on the spread of C. racemosa in rock pools. This likely overwhelmed concomitant positive and negative effects mediated by resident organisms. The results of this study help anticipate invasion dynamics of C. racemosa in rock pools under climate change scenarios, in which both intensity and temporal variability of extreme meteorological events are predicted to increase." }, { "instance_id": "R54867xR54784", "comparison_id": "R54867", "paper_id": "R54784", "text": "Differential tolerance to metals among populations of the introduced bryozoan Bugula neritina Resistance to heavy metals is a potentially important trait for introduced marine organisms, facilitating their successful invasion into disturbed natural communities. We conducted laboratory and field experiments to examine differential resistance to copper (Cu) between two source populations of the introduced bryozoan Bugula neritina, originating from a polluted (Port Kembla Harbour, NSW, Australia) and an unpolluted (Botany Bay, NSW, Australia) environment. A laboratory toxicity test was conducted to test the relative resistance of B. neritina recruits from the two sources, by measuring the attachment success, survival and growth of individuals exposed to a range of Cu concentrations (0, 25, 50 and 100 \u03bcg l\u22121 Cu). Upon completion, reciprocal transplantation of the colonies to the original polluted and unpolluted locations was carried out to assess ongoing survival and growth of colonies in the field. B. neritina colonies originating from the polluted Port Kembla Harbour had increased resistance to Cu relative to populations from an unpolluted part of Botany Bay. There appeared to be a cost associated with increased metal tolerance. 
In the laboratory, Botany Bay recruits displayed significantly higher growth in control treatments and significantly poorer growth at 100 \u03bcg l\u22121 Cu with respect to Port Kembla Harbour individuals, which showed unusually uniform and low growth irrespective of Cu concentration. No difference in attachment success or post-metamorphic survival was observed between populations. Field transplantation showed copper resistance in Port Kembla Harbour colonies constituted an advantage in polluted but not benign environments. The findings of this study provide evidence of the benefits to invasive species of pollution tolerance and suggest that human disturbance can facilitate the establishment and spread of invasive species in marine systems." }, { "instance_id": "R54867xR54627", "comparison_id": "R54867", "paper_id": "R54627", "text": "Variable effects of feral pig disturbances on native and exotic plants in a California grassland Biological invasions are a global phenomenon that can accelerate disturbance regimes and facilitate colonization by other nonnative species. In a coastal grassland in northern California, we conducted a four-year exclosure experiment to assess the effects of soil disturbances by feral pigs (Sus scrofa) on plant community composition and soil nitrogen availability. Our results indicate that pig disturbances had substantial effects on the community, although many responses varied with plant functional group, geographic origin (native vs. exotic), and grassland type. (\u201cShort patches\u201d were dominated by annual grasses and forbs, whereas \u201ctall patches\u201d were dominated by perennial bunchgrasses.) Soil disturbances by pigs increased the richness of exotic plant species by 29% and native taxa by 24%. Although native perennial grasses were unaffected, disturbances reduced the biomass of exotic perennial grasses by 52% in tall patches and had no effect in short patches. 
Pig disturbances led to a 69% decrease in biomass of exotic annual grasses in tall patches but caused a 62% increase in short patches. Native, nongrass monocots exhibited the opposite biomass pattern to that seen for exotic annual grasses, with disturbance causing an 80% increase in tall patches and a 56% decrease in short patches. Native forbs were unaffected by disturbance, whereas the biomass of exotic forbs increased by 79% with disturbance in tall patches and showed no response in short patches. In contrast to these vegetation results, we found no evidence that pig disturbances affected nitrogen mineralization rates or soil moisture availability. Thus, we hypothesize that the observed vegetation changes were due to space clearing by pigs that provided greater opportunities for colonization and reduced intensity of competition, rather than changes in soil characteristics. In summary, although responses were variable, disturbances by feral pigs generally promoted the continued invasion of this coastal grassland by exotic plant taxa." }, { "instance_id": "R54867xR54722", "comparison_id": "R54867", "paper_id": "R54722", "text": "Alien plant dynamics following fire in Mediterranean-climate California shrublands Over 75 species of alien plants were recorded during the first five years after fire in southern California shrublands, most of which were European annuals. Both cover and richness of aliens varied between years and plant association. Alien cover was lowest in the first postfire year in all plant associations and remained low during succession in chaparral but increased in sage scrub. Alien cover and richness were significantly correlated with year (time since disturbance) and with precipitation in both coastal and interior sage scrub associations. Hypothesized factors determining alien dominance were tested with structural equation modeling. 
Models that included nitrogen deposition and distance from the coast were not significant, but with those variables removed we obtained a significant model that gave an R2 = 0.60 for the response variable of fifth year alien dominance. Factors directly affecting alien dominance were (1) woody canopy closure and (2) alien seed banks. Significant indirect effects were (3) fire intensity, (4) fire history, (5) prefire stand structure, (6) aridity, and (7) community type. According to this model the most critical factor influencing aliens is the rapid return of the shrub and subshrub canopy. Thus, in these communities a single functional type (woody plants) appears to be the most critical element controlling alien invasion and persistence. Fire history is an important indirect factor because it affects both prefire stand structure and postfire alien seed banks. Despite being fire-prone ecosystems, these shrublands are not adapted to fire per se, but rather to a particular fire regime. Alterations in the fire regime produce a very different selective environment, and high fire frequency changes the selective regime to favor aliens. This study does not support the widely held belief that prescription burning is a viable management practice for controlling alien species on semiarid landscapes." }, { "instance_id": "R54867xR54669", "comparison_id": "R54867", "paper_id": "R54669", "text": "Fire does not facilitate invasion by alien annual grasses in an infertile Australian agricultural landscape Plant invasions are a significant threat to fragmented native plant communities in many agricultural regions. Fire potentially facilitates invasions, but in landscapes historically subject to recurrent fires, exclusion of fire is also likely to result in loss of biodiversity. We investigated the relationship between fire, fragmentation and alien plant invasion in mallee communities of the Western Australian wheatbelt. 
We hypothesized that invasion is limited by lack of propagules and the low soil nutrient levels of this old, infertile landscape, but that fire and/or fragmentation disrupt these limits. We tested the effects of three factors on establishment and abundance of alien annuals: \u00b1 fire, \u00b1 post-fire seeding with the locally invasive Avena barbata (propagule availability) and three landscape contexts. The three landscape contexts, exploring site limitations, were reserve interiors, perimeter edges adjacent to agricultural land and internal reserve roadside edges. Our first hypothesis was supported: Avena establishment was consistently greater in seeded plots, but away from perimeter edges, growth was poor. Our second hypothesis was supported only for perimeter edges: neither fire nor fragmentation by interior roads enhanced invasive plant establishment or biomass. At perimeter edges, invasive plant biomass was significantly greater. This was associated with higher propagule availability and elevated soil nutrient levels but was not enhanced by fire. We conclude that fire is unlikely to promote invasion by alien annuals in low-nutrient ecosystems such as mallee, hence is a viable disturbance strategy for biodiversity conservation away from nutrient-enriched edges." }, { "instance_id": "R54867xR54642", "comparison_id": "R54867", "paper_id": "R54642", "text": "Re-colonisation rate differs between co-existing indigenous and invasive intertidal mussels following major disturbance The potential of introduced species to become invasive is often linked to their ability to colonise disturbed habitats rapidly. We studied the effects of major disturbance by severe storms on the indigenous mussel Perna perna and the invasive mussel Mytilus galloprovincialis in sympatric intertidal populations on the south coast of South Africa. At the study sites, these species dominate different shore levels and co-exist in the mid mussel zone. 
We tested the hypotheses that in the mid-zone P. perna would suffer less dislodgment than M. galloprovincialis, because of its greater tenacity, while M. galloprovincialis would respond with a higher re-colonisation rate. We estimated the percent cover of the 2 mussels in the mid-zone from photographs, once before severe storms and 3 times afterwards. M. galloprovincialis showed faster re-colonisation and 3 times more cover than P. perna 1 and 1.5 yr after the storms (when populations had recovered). Storm-driven dislodgment in the mid-zone was highest for the species that initially dominated at each site, conforming to the concept of compensatory mortality. This resulted in similar cover of the 2 species immediately after the storms. Thus, the storm wave forces exceeded the tenacity even of P. perna, while the higher recruitment rate of M. galloprovincialis can explain its greater colonisation ability. We predict that, because of its weaker attachment strength, M. galloprovincialis will be largely excluded from open coast sites where wave action is generally stronger, but that its greater capacity for exploitation competition through re-colonisation will allow it to outcompete P. perna in more sheltered areas (especially in bays) that are periodically disturbed by storms." }, { "instance_id": "R54867xR54711", "comparison_id": "R54867", "paper_id": "R54711", "text": "Dam invaders: impoundments facilitate biological invasions into freshwaters Freshwater ecosystems are at the forefront of the global biodiversity crisis, with more declining and extinct species than in terrestrial or marine environments. Hydrologic alterations and biological invasions represent two of the greatest threats to freshwater biota, yet the importance of linkages between these drivers of environmental change remains uncertain. Here, we quantitatively test the hypothesis that impoundments facilitate the introduction and establishment of aquatic invasive species in lake ecosystems. 
By combining data on boating activity, water body physicochemistry, and geographical distribution of five nuisance invaders in the Laurentian Great Lakes region, we show that non-indigenous species are 2.4 to 300 times more likely to occur in impoundments than in natural lakes, and that impoundments frequently support multiple invaders. Furthermore, comparisons of the contemporary and historical landscapes revealed that impoundments enhance the invasion risk of natural lakes by increasing their..." }, { "instance_id": "R54867xR54859", "comparison_id": "R54867", "paper_id": "R54859", "text": "Feral sheep on Socorro Island: facilitators of alien plant colonization and ecosystem decay The paper examines the role of feral sheep (Ovis aries) in facilitating the naturalization of alien plants and degrading a formerly robust and stable ecosystem of Socorro, an isolated oceanic island in the Mexican Pacific Ocean. Approximately half of the island is still sheep-free. The other half has been widely overgrazed and transformed into savannah and prairie-like open habitats that exhibit sheet and gully erosion and are covered by a mix of native and alien invasive vegetation today. Vegetation transects in this moderately sheep-impacted sector show that a significant number of native and endemic herb and shrub species exhibit sympatric distribution patterns with introduced plants. Only one alien plant species has been recorded from any undisturbed and sheep-free island sector so far. Socorro Island provides support for the hypothesis that disturbance of a pristine ecosystem is generally required for the colonization and naturalization of alien plants. Sheep are also indirectly responsible for the self-invasion of mainland bird species into novel island habitats and for the decline and range contraction of several endemic bird species." 
}, { "instance_id": "R54867xR54697", "comparison_id": "R54867", "paper_id": "R54697", "text": "Alien Grass Invasion and Fire In the Seasonal Submontane Zone of Hawaii Island ecosystems are notably susceptible to biological invasions (Elton 1958), and the Hawaiian islands in particular have been colonized by many introduced species (Loope and Mueller-Dombois 1989). Introduced plants now dominate extensive areas of the Hawaiian Islands, and 86 species of alien plants are presently considered to pose serious threats to Hawaiian communities and ecosystems (Smith 1985). Among the most important invasive plants are several species of tropical and subtropical grasses that use the C4 photosynthetic pathway. These grasses now dominate extensive areas of dry and seasonally dry habitats in Hawai'i. They may compete with native species, and they have also been shown to alter hydrological properties in the areas they invade (MuellerDombois 1973). Most importantly, alien grasses can introduce fire into areas where it was previously rare or absent (Smith 1985), thereby altering the structure and functioning of previously native-dominated ecosystems. Many of these grasses evolved in fire-affected areas and have mechanisms for surviving and recovering rapidly from fire (Vogl 1975, Christensen 1985), while most native species in Hawai'i have little background with fire (Mueller-Dombois 1981) and hence few or no such mechanisms. Consequently, grass invasion could initiate a grass/fire cycle whereby invading grasses promote fire, which in turn favors alien grasses over native species. Such a scenario has been suggested in a number of areas, including Latin America, western North America, Australia, and Hawai'i (Parsons 1972, Smith 1985, Christensen and Burrows 1986, Mack 1986, MacDonald and Frame 1988). In most of these cases, land clearing by humans initiates colonization by alien grasses, and the grass/fire cycle then leads to their persistence. 
In Hawai'i and perhaps other areas, however, grass invasion occurs without any direct human intervention. Where such invasions initiate a grass/fire cy-" }, { "instance_id": "R54867xR54675", "comparison_id": "R54867", "paper_id": "R54675", "text": "Land use intensification differentially benefits alien over native predators in agricultural landscape mosaics Aim Both anthropogenic habitat disturbance and the breadth of habitat use by alien species have been found to facilitate invasion into novel environments, and these factors have been hypothesized to be important within coccinellid communities specifically. In this study, we address two questions: (1) Do alien species benefit more than native species from human-disturbed habitats? (2) Are alien species more generalized in their habitat use than natives within the invaded range or can their abundance patterns be explained by specialization on the most common habitats? Location Chile. Methods We quantified the use of different habitat types by native and alien coccinellid beetles by sampling individuals in nine habitat types that spanned a gradient in disturbance intensity, and represented the dominant natural, semi-natural and agricultural habitats in the landscape. Results Our results provide strong support for the hypotheses that more-disturbed habitats are differentially invaded. Both the proportion of alien individuals and the proportion of alien species increased significantly with increasing disturbance intensity. In contrast, we found no evidence that alien species were more generalized in their habitat use than native species; in fact, the trend was in the opposite direction. The abundance of specialized alien coccinellid species was not correlated with the area of the habitat types in the landscape. Main conclusion The results suggest that successfully established alien coccinellid species may be \u2018disturbance specialists\u2019 that thrive within human-modified habitats. 
Therefore, less-disturbed agroecosystems are desirable to promote the regional conservation of native species within increasingly human-dominated landscapes." }, { "instance_id": "R54867xR54812", "comparison_id": "R54867", "paper_id": "R54812", "text": "Invasion of natural and agricultural cranberry bogs by introduced and native plants Plant species invasions, i.e., the entry of additional plant species into a habitat with negative effects on species already there, are a major ecological problem in natural habitats and a major economic problem in agricultural habitats. Nutrient availability, disturbance, and proximity to other habitats are likely factors that may interact to control invasion in both types of habitat. We hypothesized (1) that elevated nutrient availability can promote the abundance of introduced species even when high cover of the existing plant community is maintained, and (2) that higher levels of invasion on the edges than in the interiors of habitats are due to differences in resource availability between edges and interiors. To test these hypotheses, we measured soil characteristics and the abundances of plant species in natural and agricultural cranberry (Vaccinium macrocarpon Ait.) bogs in southeastern Massachusetts. Contrary to the first hypothesis, agricultural bogs did not have higher cover or richness of introduced species than natural bogs, despite having higher levels of soil nutrients. Contrary to the second hypothesis, the edges of both agricultural and natural bogs had a higher cover and richness of introduced species than the interiors, even though only natural bogs showed differences in resource availabilities between edges and interiors. Results suggest that having a high cover of existing species can counter positive effects of elevated nutrients on the spread of introduced and non-crop species. However, maintaining similar resource availabilities on the edges and interiors of habitats may not prevent greater invasion of edges. 
Avoiding disturbances to natural communities, maintaining high crop cover, and focusing active control of introduced or non-crop species on the edges of habitats could help limit plant invasions into natural and agricultural habitats alike." }, { "instance_id": "R54867xR54781", "comparison_id": "R54867", "paper_id": "R54781", "text": "Are invaders disturbance-limited? Conservation of mountain grasslands in Central Argentina Abstract Extensive areas in the mountain grasslands of central Argentina are heavily invaded by alien species from Europe. A decrease in biodiversity and a loss of palatable species is also observed. The invasibility of the tall-grass mountain grassland community was investigated in an experiment of factorial design. Six alien species which are widely distributed in the region were sown in plots where soil disturbance, above-ground biomass removal by cutting and burning were used as treatments. Alien species did not establish in undisturbed plots. All three types of disturbances increased the number and cover of alien species; the effects of soil disturbance and biomass removal was cumulative. Cirsium vulgare and Oenothera erythrosepala were the most efficient alien colonizers. In conditions where disturbances did not continue the cover of aliens started to decrease in the second year, by the end of the third season, only a few adults were established. Consequently, disturbances are needed to maintain ali..." }, { "instance_id": "R54867xR54803", "comparison_id": "R54867", "paper_id": "R54803", "text": "Relationship between productivity, and species and functional group diversity in grazed and non-grazed Pampas grassland Most hypotheses addressing the effect of diversity on ecosystem function indicate the occurrence of higher process rates with increasing diversity, and only diverge in the shape of the function depending on their assumptions about the role of individual species and functional groups. 
Contrarily to these predictions, we show that grazing of the Flooding Pampas grasslands increased species richness, but drastically reduced above ground net primary production, even when communities with similar initial biomass were compared. Grazing increased species richness through the addition of a number of exotic forbs, without reducing the richness and cover of the native flora. Since these forbs were essentially cool-season species, and also because their introduction has led to the displacement of warm-season grasses from dominant to subordinate positions in the community, grazing not only decreased productivity, but also shifted its seasonality towards the cool season. These results suggest that species diversity and/or richness alone are poor predictors of above-ground primary production. Therefore, models that relate productivity to diversity should take into account the relative abundance and identity of species that are added or deleted by the specific disturbances that modify diversity." }, { "instance_id": "R54867xR54682", "comparison_id": "R54867", "paper_id": "R54682", "text": "Disturbance of biological soil crust increases emergence of exotic vascular plants in California sage scrub Biological soil crusts (BSCs) are comprised of soil particles, bacteria, cyanobacteria, green algae, microfungi, lichens, and bryophytes and confer many ecosystem services in arid and semiarid ecosystems worldwide, including the highly threatened California sage scrub (CSS). These services, which include stabilizing the soil surface, can be adversely affected when BSCs are disturbed. Using field and greenhouse experiments, we tested the hypothesis that mechanical disturbance of BSC increases emergence of exotic vascular plants in a coastal CSS ecosystem. At Whiting Ranch Wilderness Park in southern California, 22 plots were established and emergence of exotic and native plants was compared between disturbed and undisturbed subplots containing BSC. 
In a separate germination study, seed fate in disturbed BSC cores was compared to seed fate in undisturbed BSC cores for three exotic and three native species. In the field, disturbed BSCs had significantly (>3\u00d7) greater exotic plant emergence than in undisturbed BSC, particularly for annual grasses. Native species, however, showed no difference in emergence between disturbed and undisturbed BSC. Within the disturbed treatment, emergence of native plants was significantly, and three times less than that of exotic plants. In the germination study, seed fates for all species were significantly different between disturbed and undisturbed BSC cores. Exotic species had greater emergence in disturbed BSC, whereas native plants showed either no response or a positive response. This study demonstrates another critical ecosystem service of BSCs\u2014the inhibition of exotic plant species\u2014and underscores the importance of BSC conservation in this biodiversity hotspot and possibly in other aridland ecosystems." }, { "instance_id": "R54867xR54581", "comparison_id": "R54867", "paper_id": "R54581", "text": "Effects of small-scale disturbance on invasion success in marine communities Abstract Introductions of non-indigenous species have resulted in many ecological problems including the reduction of biodiversity, decline of commercially important species and alteration of ecosystems. The link between disturbance and invasion potential has rarely been studied in the marine environment where dominance hierarchies, dynamics of larval supply, and resource acquisition may differ greatly from terrestrial systems. In this study, hard substrate marine communities in Long Island Sound, USA were used to assess the effect of disturbance on resident species and recent invaders, ascidian growth form (i.e. colonial and solitary growth form), and the dominant species-specific responses within the community. 
Community age was an additional factor considered through manipulation of 5-wk old assemblages and 1-yr old assemblages. Disturbance treatments, exposing primary substrate, were characterized by frequency (single, biweekly, monthly) and magnitude (20%, 48%, 80%) of disturbance. In communities of different ages, disturbance frequency had a significant positive effect on space occupation of recent invaders and a significant negative effect on resident species. In the 5-wk community, magnitude of disturbance also had a significant effect. Disturbance also had a significant effect on ascidian growth form; colonial species occupied more primary space than controls in response to increased disturbance frequency and magnitude. In contrast, solitary species occupied significantly less space than controls. Species-specific responses were similar regardless of community age. The non-native colonial ascidian Diplosoma listerianum responded positively to increased disturbance frequency and magnitude, and occupied more primary space in treatments than in controls. The resident solitary ascidian Molgula manhattensis responded negatively to increased disturbance frequency and magnitude, and occupied less primary space in treatments than in controls. Small-scale biological disturbances, by creating space, may facilitate the success of invasive species and colonial organisms in the development of subtidal hard substrate communities." }, { "instance_id": "R54867xR54636", "comparison_id": "R54867", "paper_id": "R54636", "text": "Effects of surrounding urbanization on non-native flora in small forest patches The purpose of our study was to compare the number, proportion, and species composition of introduced plant species in forest patches situated within predominantly forested, agricultural, and urban landscapes. A previous study suggested that agricultural landscape context does not have a large effect on the proportion of introduced species in forest patches. 
Therefore, our main goal was to test the hypothesis that forest patches in an urban landscape context contain larger numbers and proportions of non-native plant species. We surveyed the vegetation in 44 small remnant forest fragments (3\u20137.5 ha) in the Ottawa region; 15 were situated within forested landscapes, 18 within agricultural landscapes, and 11 within urban landscapes. Forest fragments in urban landscapes had about 40% more introduced plant species and a 50% greater proportion of introduced plant species than fragments found in the other two types of landscape. There was no significant difference in the number or proportion of introduced species in forest fragments within forested vs. agricultural landscapes. However, the species composition of introduced species differed among the forest patches in the three landscape types. Our results support the hypothesis that urban and suburban areas are important foci for spread of introduced plant species." }, { "instance_id": "R54867xR54686", "comparison_id": "R54867", "paper_id": "R54686", "text": "Disturbance Facilitates Invasion: The Effects Are Stronger Abroad than at Home Disturbance is one of the most important factors promoting exotic invasion. However, if disturbance per se is sufficient to explain exotic success, then \u201cinvasion\u201d abroad should not differ from \u201ccolonization\u201d at home. Comparisons of the effects of disturbance on organisms in their native and introduced ranges are crucial to elucidate whether this is the case; however, such comparisons have not been conducted. We investigated the effects of disturbance on the success of Eurasian native Centaurea solstitialis in two invaded regions, California and Argentina, and one native region, Turkey, by conducting field experiments consisting of simulating different disturbances and adding locally collected C. solstitialis seeds. We also tested differences among C. 
solstitialis genotypes in these three regions and the effects of local soil microbes on C. solstitialis performance in greenhouse experiments. Disturbance increased C. solstitialis abundance and performance far more in nonnative ranges than in the native range, but C. solstitialis biomass and fecundity were similar among populations from all regions grown under common conditions. Eurasian soil microbes suppressed growth of C. solstitialis plants, while Californian and Argentinean soil biota did not. We suggest that escape from soil pathogens may contribute to the disproportionately powerful effect of disturbance in introduced regions." }, { "instance_id": "R54867xR54763", "comparison_id": "R54867", "paper_id": "R54763", "text": "Regional boreal biodiversity peaks at intermediate human disturbance The worldwide biodiversity crisis has intensified the need to better understand how biodiversity and human disturbance are related. The 'intermediate disturbance hypothesis' suggests that disturbance regimes generate predictable non-linear patterns in species richness. Evidence often contradicts intermediate disturbance hypothesis at small scales, and is generally lacking at large regional scales. Here, we present the largest extent study of human impacts on boreal plant biodiversity to date. Disturbance extent ranged from 0 to 100% disturbed in vascular plant communities, varying from intact forest to agricultural fields, forestry cut blocks and oil sands. We show for the first time that across a broad region species richness peaked in communities with intermediate anthropogenic disturbance, as predicted by intermediate disturbance hypothesis, even when accounting for many environmental covariates. Intermediate disturbance hypothesis was consistently supported across trees, shrubs, forbs and grasses, with temporary and perpetual disturbances. However, only native species fit this pattern; exotic species richness increased linearly with disturbance." 
}, { "instance_id": "R54867xR54776", "comparison_id": "R54867", "paper_id": "R54776", "text": "Exotic cheatgrass and loss of soil biota decrease the performance of a native grass Soil disturbances can alter microbial communities including arbuscular mycorrhizal (AM) fungi, which may in turn, affect plant community structure and the abundance of exotic species. We hypothesized that altered soil microbial populations owing to disturbance would contribute to invasion by cheatgrass (Bromus tectorum), an exotic annual grass, at the expense of the native perennial grass, squirreltail (Elymus elymoides). Using a greenhouse experiment, we compared the responses of conspecific and heterospecific pairs of cheatgrass and squirreltail inoculated with soil (including live AM spores and other organisms) collected from fuel treatments with high, intermediate and no disturbance (pile burns, mastication, and intact woodlands) and a sterile control. Cheatgrass growth was unaffected by type of soil inoculum, whereas squirreltail growth, reproduction and nutrient uptake were higher in plants inoculated with soil from mastication and undisturbed treatments compared to pile burns and sterile controls. Squirreltail shoot biomass was positively correlated with AM colonization when inoculated with mastication and undisturbed soils, but not when inoculated with pile burn soils. In contrast, cheatgrass shoot biomass was negatively correlated with AM colonization, but this effect was less pronounced with pile burn inoculum. Cheatgrass had higher foliar N and P when grown with squirreltail compared to a conspecific, while squirreltail had lower foliar P, AM colonization and flower production when grown with cheatgrass. These results indicate that changes in AM communities resulting from high disturbance may favor exotic plant species that do not depend on mycorrhizal fungi, over native species that depend on particular taxa of AM fungi for growth and reproduction." 
}, { "instance_id": "R54867xR54644", "comparison_id": "R54867", "paper_id": "R54644", "text": "Relative importance of wetland type versus anthropogenic activities in determining site invasibility We assessed wetland invasibility by conducting surveys of three wetlands in each of five categories (riverine, depression, lacustrine fringe, mineral flat, and seepage slope). Invasibility was measured as the number of invasive species present, percent of plant species classified as invasive, percent cover of invasive plants, and percent of total cover represented by invasive species. The working hypothesis for this study was that certain types of wetlands (e.g., lacustrine fringe and riverine) would be more prone to invasion than others (spring-seep/slope wetlands or mineral flat wetlands). No significant differences were found among wetland types in any of the invasion metrics evaluated, despite high average invasibility in the riverine and lacustrine fringe categories. However, invasion was correlated very strongly with a qualitative index of anthropogenic modification to the surrounding landscape. A probable result of the substantial influence of human activities on wetland invasion in this study was that effects potentially attributable to greater opportunity for dispersal in certain types of wetlands were obscured. Another factor that likely contributed to the lack of differences among wetland types was the high variability in human activities observed among wetlands within types. These results further highlight the overwhelming contributions of anthropogenic habitat modification and human-assisted dispersal of invasive species to the currently observed homogenization of natural ecosystems." }, { "instance_id": "R54867xR54656", "comparison_id": "R54867", "paper_id": "R54656", "text": "Roads as conduits for exotic plant invasions in a semiarid landscape Roads are believed to be a major contributing factor to the ongoing spread of exotic plants. 
We examined the effect of road improvement and environmental variables on exotic and native plant diversity in roadside verges and adjacent semiarid grassland, shrubland, and woodland communities of southern Utah (U.S.A.). We measured the cover of exotic and native species in roadside verges and both the richness and cover of exotic and native species in adjacent interior communities (50 m beyond the edge of the road cut) along 42 roads stratified by level of road improvement (paved, improved surface, graded, and four-wheel-drive track). In roadside verges along paved roads, the cover of Bromus tectorum was three times as great (27%) as in verges along four-wheel-drive tracks (9%). The cover of five common exotic forb species tended to be lower in verges along four-wheel-drive tracks than in verges along more improved roads. The richness and cover of exotic species were both more than 50% greater, and the richness of native species was 30% lower, at interior sites adjacent to paved roads than at those adjacent to four-wheel-drive tracks. In addition, environmental variables relating to dominant vegetation, disturbance, and topography were significantly correlated with exotic and native species richness and cover. Improved roads can act as conduits for the invasion of adjacent ecosystems by converting natural habitats to those highly vulnerable to invasion. However, variation in dominant vegetation, soil moisture, nutrient levels, soil depth, disturbance, and topography may render interior communities differentially susceptible to invasions originating from roadside verges. Plant communities that are both physically invasible (e.g., characterized by deep or fertile soils) and disturbed appear most vulnerable. Decision-makers considering whether to build, improve, and maintain roads should take into account the potential spread of exotic plants."
}, { "instance_id": "R54867xR54746", "comparison_id": "R54867", "paper_id": "R54746", "text": "Establishment and Post-Hurricane Survival of the Non-Native Rio Grande Cichlid (Herichthys cyanoguttatus) in the Greater New Orleans Metropolitan Area Abstract We conducted multiple surveys to determine the distribution of the non-native Herichthys cyanoguttatus (Rio Grande Cichlid) in the Greater New Orleans Metropolitan Area (GNOMA). First, in 2003\u20132004, we trapped H. cyanoguttatus in Lake Pontchartrain (an oligohaline estuary) to determine if this freshwater species occurred in estuarine habitats. Our goal was to test the prediction that H. cyanoguttatus used estuarine corridors to disperse. Second, we sampled and compared 16 GNOMA sites before and after the 2005 hurricanes to determine how H. cyanoguttatus populations responded. Finally, we monitored H. cyanoguttatus populations monthly over two years (2006\u20132007) at six sites within the GNOMA to determine if numbers continued to increase after the hurricanes. We confirmed that H. cyanoguttatus: 1) does occur in estuarine habitats (0 to 8 psu), 2) effectively survived the 2005 hurricanes, 3) has increased significantly from 2006 to 2007 at three of six GNOMA sites, 4) is currently found more often in urban sites, and 5) persisted through the atypically cold winter of 2009/2010." }, { "instance_id": "R54867xR54607", "comparison_id": "R54867", "paper_id": "R54607", "text": "Determinants of Caulerpa racemosa distribution in the north-western Mediterranean Predicting community susceptibility to invasion has become a priority for preserving biodiversity. We tested the hypothesis that the occurrence and abundance of the seaweed Caulerpa racemosa in the north-western (NW) Mediterranean would increase with increasing levels of human disturbance. Data from a survey encompassing areas subjected to different human influences (i.e. 
from urbanized to protected areas) were fitted by means of generalized linear mixed models, including descriptors of habitats and communities. The incidence of occurrence of C. racemosa was greater on urban than extra-urban or protected reefs, along the coast of Tuscany and NW Sardinia, respectively. Within the Marine Protected Area of Capraia Island (Tuscan Archipelago), the probability of detecting C. racemosa did not vary according to the degree of protection (partial versus total). Human influence was, however, a poor predictor of the seaweed cover. At the seascape level, C. racemosa was more widely spread within degraded (i.e. Posidonia oceanica dead matte or algal turfs) than in better preserved habitats (i.e. canopy-forming macroalgae or P. oceanica seagrass meadows). At a smaller spatial scale, the presence of the seaweed was positively correlated to the diversity of macroalgae and negatively to that of sessile invertebrates. These results suggest that C. racemosa can take advantage of habitat degradation. Thus, predicting invasion scenarios requires a thorough knowledge of ecosystem structure, at a hierarchy of levels of biological organization (from the landscape to the assemblage) and detailed information on the nature and intensity of sources of disturbance and spatial scales at which they operate." }, { "instance_id": "R54867xR54751", "comparison_id": "R54867", "paper_id": "R54751", "text": "Old World Climbing Fern (Lygodium microphyllum) Invasion in Hurricane Caused Treefalls ABSTRACT: We examined effects of a natural disturbance (hurricanes) on potential invasion of tree islands by an exotic plant (Old World climbing fern, Lygodium microphyllum) in the Arthur R. Marshall Loxahatchee National Wildlife Refuge, Florida. Three major hurricanes in 2004 and 2005 caused varying degrees of impacts to trees on tree islands within the Refuge. Physical impacts of hurricanes were hypothesized to promote invasion and growth of L. microphyllum. 
We compared presence and density of L. microphyllum in plots of disturbed soil created by hurricane-caused treefalls to randomly selected non-disturbed plots on 12 tree islands. We also examined relationships between disturbed area size, canopy cover, and presence of standing water on presence and density of L. microphyllum. Lygodium microphyllum was present in significantly more treefall plots than random non-treefall plots (76% of the treefall plots (N=55) and only 14% of random non-treefall plots (N=55)). Density of L. microphyllum was higher in treefall plots compared to random non-disturbed plots (6.0 stems per m2 for treefall plots; 0.5 stems per m2 for random non-disturbed plots), and L. microphyllum density was correlated with disturbed area size (P = 0.005). Lygodium microphyllum presence in treefall sites was significantly related to canopy cover and presence of water: it was present in five times more treefalls with water than those without. These results suggest that disturbances, such as hurricanes, that result in canopy openings and the creation of disturbed areas with standing water contribute to the ability of L. microphyllum to invade natural areas." }, { "instance_id": "R54867xR54855", "comparison_id": "R54867", "paper_id": "R54855", "text": "Roads Alter the Colonization Dynamics of a Keystone Herbivore in Neotropical Savannas Roads can facilitate the establishment and spread of both native and exotic species. Nevertheless, the precise mechanisms facilitating this expansion are rarely known. We tested the hypothesis that dirt roads are favorable landing and nest initiation sites for founding-queens of the leaf-cutter ant Atta laevigata. For 2 yr, we compared the number of attempts to found new nests (colonization attempts) in dirt roads and the adjacent vegetation in a reserve of cerrado (tree-dominated savanna) in southeastern Brazil. The number of colonization attempts in roads was 5 to 10 times greater than in the adjacent vegetation. 
Experimental transplants indicate that founding-queens are more likely to establish a nest on bare soil than on soil covered with leaf-litter, but the amount of litter covering the ground did not fully explain the preference of queens for dirt roads. Queens that landed on roads were at higher risk of predation by beetles and ants than those that landed in the adjacent vegetation. Nevertheless, greater predation in roads was not sufficient to offset the greater number of colonization attempts in this habitat. As a consequence, significantly more new colonies were established in roads than in the adjacent vegetation. Our results suggest that disturbance caused by the opening of roads could result in an increased Atta abundance in protected areas of the Brazilian Cerrado." }, { "instance_id": "R54867xR54799", "comparison_id": "R54867", "paper_id": "R54799", "text": "Relationship between fragmentation, degradation and native and exotic species richness in an Andean temperate forest of Chile Human impacts such as forest fragmentation and degradation can have strong effects on communities of native and exotic plant species. Moreover, anthropogenic disturbances occur mainly at lower elevations, producing greater degrees of fragmentation and degradation than at higher elevations. Invasion by exotic plants should therefore be greater in more fragmented or degraded forests and, consequently, at lower elevations within a given forest type or elevational belt. In contrast, native species richness should be negatively affected by fragmentation and degradation, with greater richness found at higher elevations within a given forest type. Here we evaluated these hypotheses in an Andean temperate forest of the Araucania Region, Chile. We recorded the composition of vascular plants in twelve fragments differing in size, perimeter/area ratio, elevation, and anthropogenic degradation (logging, fires, livestock droppings). Based on these variables we constructed a fragmentation index and a degradation index for these fragments. Relationships among these variables were analyzed with Pearson correlations.
Our results suggest that fragmentation and degradation are positively related and that both types of disturbance occur at the lower elevations of the forest type studied. Furthermore, fragmentation and degradation affect native and exotic species richness in different ways. Invasion increased as a consequence of both fragmentation and degradation and, given the elevational distribution of these disturbances, invasion apparently occurs mainly in the lowlands. In contrast, native species richness was negatively affected only by fragmentation, and was related neither to internal forest degradation nor to elevation." }, { "instance_id": "R54867xR54659", "comparison_id": "R54867", "paper_id": "R54659", "text": "Testing life history correlates of invasiveness using congeneric plant species We used three congeneric annual thistles, which vary in their ability to invade California (USA) annual grasslands, to test whether invasiveness is related to differences in life history traits. We hypothesized that populations of these summer-flowering Centaurea species must pass through a demographic gauntlet of survival and reproduction in order to persist and that the most invasive species (C. solstitialis) might possess unique life history characteristics. Using the idea of a demographic gauntlet as a conceptual framework, we compared each congener in terms of (1) seed germination and seedling establishment, (2) survival of rosettes subjected to competition from annual grasses, (3) subsequent growth and flowering in adult plants, and (4) variation in breeding system. Grazing and soil disturbance are thought to affect Centaurea establishment, growth, and reproduction, so we also explored differences among congeners in their response to clipping and to different sizes of soil disturbance.
We found minimal differences among congeners in either seed germination responses or seedling establishment and survival. In contrast, differential growth responses of congeners to different sizes of canopy gaps led to large differences in adult size and fecundity. Canopy-gap size and clipping affected the fecundity of each species, but the most invasive species (C. solstitialis) was unique in its strong positive response to combinations of clipping and canopy gaps. In addition, the phenology of C. solstitialis allows this species to extend its growing season into the summer\u2014a time when competition from winter annual vegetation for soil water is minimal. Surprisingly, C. solstitialis was highly self-incompatible while the less invasive species were highly self-compatible. Our results suggest that the invasiveness of C. solstitialis arises, in part, from its combined ability to persist in competition with annual grasses and its plastic growth and reproductive responses to open, disturbed habitat patches." }, { "instance_id": "R54867xR54864", "comparison_id": "R54867", "paper_id": "R54864", "text": "Patterns of plant invasions in the preserves and recreation areas of Shei-Pa National Park in Taiwan Nature preserves in the national parks are usually adjacent to the recreation areas, where most of the tourists visit. Although permits are required and only a few small trails are available to enter the preserves, species naturalized in the neighboring recreation areas may hitchhike across the borders. To estimate the differences of plant invasions in neighboring preserves and recreation areas experiencing different intensities of anthropogenic activities, we employed Wuling district (alt. 1,800-3,860 m), Shei-Pa National Park in Taiwan as our study site.
Our hypotheses were: (1) the recreation areas harbor more naturalized species, and plant invasion patterns are different in these areas under various land management strategies; (2) species inhabiting the preserves could be found in the recreation areas as well; (3) naturalized species of temperate origins are dominant due to the temperate weather in the mountains. A total of 230 one-meter-square quadrats were randomly selected along the roads and trails in both areas. Naturalized species, relative cover, elevation, and naturalness degree were obtained and analyzed. The results showed that the naturalized species in both areas were herbaceous, originating from tropical and temperate Americas and Europe. Naturalized floras of these two areas were represented by analogous dominant families, Asteraceae and Poaceae, and dominant species, Bromus catharticus and Trifolium repens. However, the number and coverage of naturalized species, \u03b1 diversity, elevation, and naturalness degree, suggested different patterns of plant invasions of these two areas. Recreation areas accommodated significantly more naturalized species and higher coverage, and elevation was responsible for distinct patterns of plant invasions. Both the preserves and recreation areas in Wuling provided suitable habitats for similar naturalized floras; however, the relatively larger number of species harbored by the latter implied a source and sink relationship between these two areas. Furthermore, environmental factors that change with the elevation, such as temperature, topography, and native vegetation, may contribute to different patterns of plant invasions presented by preserves and the recreation areas in the subtropical mountains." }, { "instance_id": "R54867xR54663", "comparison_id": "R54867", "paper_id": "R54663", "text": "Fire and competition in a southern California grassland: impacts on the rare forb Erodium macrophyllum Summary 1.
The use of off-season burns to control exotic vegetation shows promise for land managers. In California, wildfires tend to occur in the summer and autumn, when most grassland vegetation is dormant. The effects of spring fires on native bunchgrasses have been examined but their impacts on native forbs have received less attention. 2. We introduced Erodium macrophyllum, a rare native annual forb, by seeding plots in 10 different areas in a California grassland. We tested the hypotheses that E. macrophyllum would perform better (increased fecundity and germination) when competing with native grasses than with a mixture of exotic and native grasses, and fire would alter subsequent demography of E. macrophyllum and other species\u2019 abundances. We monitored the demography of E. macrophyllum for two seasons in plots manually weeded so that they were free from exotics, and in areas that were burned or not burned the spring after seeding. 3. Weeding increased E. macrophyllum seedling emergence, survival and fecundity during both seasons. When vegetation was burned in June 2001 (at the end of the first growing season) to kill exotic grass seeds before they dispersed, all E. macrophyllum plants had finished their life cycle and dispersed seeds, suggesting that burns at this time of year would not directly impact on fecundity. In the growing season after burning (2002), burned plots had less recruitment of E. macrophyllum but more establishment of native grass seedlings, suggesting burning may differentially affect seedling recruitment. 4. At the end of the second growing season (June 2002), burned plots had less cover of exotic and native grasses but more cover of exotic forbs. Nevertheless, E. macrophyllum plants in burned plots had greater fecundity than in non-burned plots, suggesting that exotic grasses are more competitive than exotic forbs. 5. A glasshouse study showed that exotic grasses competitively suppress E. 
macrophyllum to a greater extent than native grasses, indicating that the poor performance of E. macrophyllum in the non-burned plots was due to exotic grass competition. 6. Synthesis and applications. This study illustrates that fire can alter the competitive environment in grasslands with differential effects on rare forbs, and that exotic grasses strongly interfere with E. macrophyllum. For land managers, the benefits of prescribed spring burns will probably outweigh the costs of decreased E. macrophyllum establishment. Land managers can use spring burns to cause a flush of native grass recruitment and to create an environment that is, although abundant with exotic forbs, ultimately less competitive compared with non-burned areas dominated by exotic grasses." }, { "instance_id": "R54867xR54601", "comparison_id": "R54867", "paper_id": "R54601", "text": "Constraints to colonization and growth of the African grass, Melinis minutiflora, in a Venezuelan savanna Melinis minutiflora Beauv. (Poaceae) is an African grass that is invading mid-elevation Trachypogon savannas in Venezuela. The objective of this study was to investigate the influence of soil fertility, competition and soil disturbance in facilitating Melinis' invasion and growth in these savanna sites. We manipulated soil fertility by adding nitrogen (+N), phosphorus and potassium (+PK), or nitrogen, phosphorus, and potassium (+NPK). We simultaneously manipulated the competitive environment by clipping background vegetation. In a separate experiment, we mechanically disrupted the soil to simulate disturbance. We hypothesized that germination and growth were bottlenecks to early establishment in undisturbed savanna, but that disturbance would alleviate those bottlenecks. We measured Melinis seed germination and subsequent establishment by adding seeds to all plots. We examined Melinis growth by measuring biomass of Melinis seedling transplants, 11 months after they were placed into treatment plots. 
Germination and establishment of Melinis from seed were extremely low. Of the 80,000 seeds applied in the experiment, only 28 plants survived the first growing season. Mortality of Melinis seedling transplants was lowest in PK fertilized plots, but in the absence of PK mortality increased with N additions and clipping. By contrast, fertilization of the savanna with NPK greatly increased Melinis seedling biomass and this effect was greatly enhanced when competition was reduced (e.g. clipping). Melinis transplant growth responded strongly to soil disturbance, a response not fully explained by removal of competitors (clipping) or changes in soil nutrients and moisture. We suspect that disruption of the soil structure allowed for greater root proliferation and subsequent plant growth. We believe that native savanna is relatively resistant to Melinis invasion, since Melinis seedlings persisted in intact savanna but exhibited little or no growth during the first year. The significant enhancement of Melinis seedling growth with clipping and nutrient additions suggests that low soil nutrients and the presence of native savanna species are important factors in the ability of native savanna to resist Melinis establishment. However, the potential for Melinis growth increases enormously with soil disturbance." }, { "instance_id": "R54867xR54736", "comparison_id": "R54867", "paper_id": "R54736", "text": "Species introductions, diversity and disturbances in marine macrophyte assemblages of the northwestern Mediterranean Sea In the process of species introduction, the traits that enable a species to establish and spread in a new habitat, and the habitat characteristics that determine the susceptibility to introduced species play a major role. Among the habitat characteristics that render a habitat resistant or susceptible to introductions, species diversity and disturbance are believed to be the most important.
It is generally assumed that high species richness renders a habitat resistant to introductions, while disturbances enhance their susceptibility. In the present study, these 2 hypotheses were tested on NW Mediterranean shallow subtidal macrophyte assemblages. Data collection was carried out in early summer 2002 on sub-horizontal rocky substrate at 9 sites along the French Mediterranean coast, 4 undisturbed and 5 highly disturbed. Disturbances include cargo, naval and passenger harbours, and industrial and urban pollution. Relationships between species richness (point diversity), disturbances and the number of introduced macrophytes were analysed. The following conclusions were drawn: (1) there is no relationship between species introductions, diversity and disturbance for the macrophyte assemblages; (2) multifactorial analyses only revealed the biogeographical relationships between the native flora of the sites." }, { "instance_id": "R54867xR54625", "comparison_id": "R54867", "paper_id": "R54625", "text": "Aquatic pollution increases the relative success of invasive species Although individual ecosystems vary greatly in the degree to which they have been invaded by exotic species, it has remained difficult to isolate mechanisms influencing invader success. One largely anecdotal observation is that polluted or degraded areas will accumulate more invaders than less-impacted sites. However, the role of abiotic factors alone in influencing invasibility has been difficult to isolate, often because the supply of potential invaders is confounded with conditions thought to increase vulnerability to invasion. Here, we conducted a field experiment to test how the assemblages of exotic versus native marine invertebrates changed during community assembly under different exposure levels of a common pollutant, copper. The experiment was conducted by deploying fouling panels in a Randomized Block Design in San Francisco Bay.
Panels were periodically removed, placed into buckets with differing copper concentrations, and returned to the field after 3 days. This design allowed propagule availability to the plates to be statistically independent of short-term copper exposure. The results demonstrate that copper caused significant differences in community structure. Average native species richness was significantly affected by copper exposure, but average exotic richness was not. The total native species pool within treatments exhibited a greater than 40% decline with increasing copper, while the exotic species pool did not change significantly. These results confirm that anthropogenic alteration of abiotic factors influences invader success, indicating that management strategies to reduce invader impacts should include both efforts to improve environmental conditions and efforts to reduce invader supply." }, { "instance_id": "R54867xR54616", "comparison_id": "R54867", "paper_id": "R54616", "text": "Impact of Acroptilon repens on co-occurring native plants is greater in the invader's non-native range Concern over exotic invasions is fueled in part by the observation that some exotic species appear to be more abundant and have stronger impacts on other species in their non-native ranges than in their native ranges. Past studies have addressed biogeographic differences in abundance, productivity, biomass, density and demography between plants in their native and non-native ranges, but despite widespread observations of biogeographic differences in impact these have been virtually untested. In a comparison of three sites in each range, we found that the abundance of Acroptilon repens in North America where it is invasive was almost twice that in Uzbekistan where it is native. However, this difference in abundance translated to far greater differences between regions in the apparent impacts of Acroptilon on native species.
The biomass of native species in Acroptilon stands was 25\u201330 times lower in the non-native range than in the native range. Experimental addition of native species as seeds significantly increased the abundance of natives at one North American site, but the proportion of native biomass even with seed addition remained over an order of magnitude lower than that of native species in Acroptilon stands in Uzbekistan. Experimental disturbance had no long-term effect on Acroptilon abundance or impact in North America, but Acroptilon increased slightly in abundance after disturbance in Uzbekistan. In a long-term experiment in Uzbekistan, suppression of invertebrate herbivores and pathogens did not result in either consistent increases in Acroptilon biomass across years or declines in the biomass of other native species, as one might expect if the low impact of Acroptilon in the native range was due to its strong top\u2013down regulation by natural enemies. Our local scale measurements do not represent all patterns of Acroptilon distribution and abundance that might exist at the scale of landscapes in either range, but they do suggest the possibility of fundamental biogeographic differences in the way a highly successful invader interacts with other species, differences that are not simply related to greater biomass or reduced top\u2013down regulation of the invader in its non-native range." }, { "instance_id": "R54867xR54622", "comparison_id": "R54867", "paper_id": "R54622", "text": "Responses of exotic plant species to fires in Pinus ponderosa forests in northern Arizona . Changes in disturbance due to fire regime in southwestern Pinus ponderosa forests over the last century have led to dense forests that are threatened by widespread fire. It has been shown in other studies that a pulse of native, early-seral opportunistic species typically follow such disturbance events. 
With the growing importance of exotic plants in local flora, however, these exotics often fill this opportunistic role in recovery. We report the effects of fire severity on exotic plant species following three widespread fires of 1996 in northern Arizona P. ponderosa forests. Species richness and abundance of all vascular plant species, including exotics, were higher in burned than nearby unburned areas. Exotic species were far more important, in terms of cover, where fire severity was highest. Species present after wildfires include those of the pre-disturbed forest and new species that could not be predicted from above-ground flora of nearby unburned forests." }, { "instance_id": "R54867xR54778", "comparison_id": "R54867", "paper_id": "R54778", "text": "Six years of plant community development after clearcut harvesting in western Washington What roles do ruderals and residuals play in early forest succession and how does repeated disturbance affect them? We examined this question by monitoring plant cover and composition on a productive site for 6 years after clearcutting and planting Douglas-fir ( Pseudotsuga menziesii (Mirb.) Franco). The replicated experiment included three treatments: vegetation control with five annual herbicide applications superimposed over two levels of slash removal (bole only or total tree plus most other wood) and an untreated control. Three species groups were analyzed: native forest, native ruderals, and exotic ruderals. Without vegetation control, the understory was rapidly invaded by exotic ruderals but was codominated by native and exotic ruderals by year 6. Douglas-fir cover surpassed covers in the three species group covers at least 3 years sooner with herbicide treatments than without. Species richness and coverage were lower for all species groups with vegetation control than without vegetation control. The effects of organic matter removal were much less than that of vegetation control. 
As predicted by the Intermediate Disturbance Hypothesis, repeated vegetation control resulted in declining cover and richness; however, native forest species were surprisingly resilient, maintaining as much or more cover and richness as the ruderal groups." }, { "instance_id": "R54867xR54772", "comparison_id": "R54867", "paper_id": "R54772", "text": "Recovery of native plant communities after the control of a dominant invasive plant species, Foeniculum vulgare: Implications for management Abstract The control and/or removal of a dominant invasive species is expected to lead to increases in native species richness and diversity. Small pilot studies were performed on Santa Cruz Island (SCI), California, in the early 1990s to test the efficacy of different methods on the control of Foeniculum vulgare (fennel) and management\u2019s effects on native species recovery. We chose a treatment that showed significant native species recovery, applied it at the landscape scale, and followed its effects on fennel-infested plant communities. We tested the hypothesis that results from small-scale studies translate to the landscape level. We found that although the control of fennel translated from the small to landscape scale, decreasing from an average of 60% to less than 3% cover, native species recovery did not occur in the landscape study as it did in the pilot studies. Invasive fennel cover was replaced by non-native grass cover over time. Unexpectedly, fennel cover in untreated fennel plots decreased significantly (though not as drastically) from over 60% cover to just under 40% cover while native species richness in untreated areas increased significantly. The correlation between precipitation and changes in native and non-native species richness and abundance in this study implies that changes in species abundances were highly correlated with environmental fluctuations.
The lack of a native seedbank and the accumulation of non-native grass litter likely prevented the recovery of native species in treated areas. Greater vertical complexity found in fennel communities, which increased visitation by frugivorous birds and likely increased native seed dispersal, may have been responsible for the increase in native species richness in the untreated areas. These results suggest that successful invasive species control and native species recovery experiments conducted at small scales may not translate to the landscape level, and active restoration should be an organic component of such large-scale projects." }, { "instance_id": "R54867xR54839", "comparison_id": "R54867", "paper_id": "R54839", "text": "Altered stream-flow regimes and invasive plant species: the Tamarix case Aim To test the hypothesis that anthropogenic alteration of stream-flow regimes is a key driver of compositional shifts from native to introduced riparian plant species. Location The arid south-western United States; 24 river reaches in the Gila and Lower Colorado drainage basins of Arizona. Methods We compared the abundance of three dominant woody riparian taxa (native Populus fremontii and Salix gooddingii, and introduced Tamarix) between river reaches that varied in stream-flow permanence (perennial vs. intermittent), presence or absence of an upstream flow-regulating dam, and presence or absence of municipal effluent as a stream water source. Results Populus and Salix were the dominant pioneer trees along the reaches with perennial flow and a natural flood regime. In contrast, Tamarix had high abundance (patch area and basal area) along reaches with intermittent stream flows (caused by natural and cultural factors), as well as those with dam-regulated flows.
Main conclusions Stream-flow regimes are strong determinants of riparian vegetation structure, and hydrological alterations can drive dominance shifts to introduced species that have an adaptive suite of traits. Deep alluvial groundwater on intermittent rivers favours the deep-rooted, stress-adapted Tamarix over the shallower-rooted and more competitive Populus and Salix. On flow-regulated rivers, shifts in flood timing favour the reproductively opportunistic Tamarix over Populus and Salix, both of which have narrow germination windows. The prevailing hydrological conditions thus favour a new dominant pioneer species in the riparian corridors of the American Southwest. These results reaffirm the importance of reinstating stream-flow regimes (inclusive of groundwater flows) for re-establishing the native pioneer trees as the dominant forest type." }, { "instance_id": "R54867xR54707", "comparison_id": "R54867", "paper_id": "R54707", "text": "Do biodiversity and human impact influence the introduction or establishment of alien mammals? What determines the number of alien species in a given region? \u2018Native biodiversity\u2019 and \u2018human impact\u2019 are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success.
Our results suggest that the correlation between the numbers of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in the number of alien mammals cannot be explained by differences in establishment success. Our findings highlight the importance of human activities and question, at least for mammals in Europe, the importance of biotic acceptance and resistance." }, { "instance_id": "R54867xR54742", "comparison_id": "R54867", "paper_id": "R54742", "text": "Non-native plants in the understory of riparian forests across a land use gradient in the Southeast\n As urbanization expands into rural areas, an increase in the number of non-native plant species at the urban-rural interface is expected due in large part to the increased availability of propagules from ornamental plantings. A study investigating the distribution of non-native plants in the understories of riparian forests across an urban-to-rural gradient north of Columbus, GA was initiated in 2003. A significantly greater number of non-native plant species occurred at the urban sites and at one site at the urban-rural interface, where 20 to 33% of the species encountered were non-native. In contrast, at the more rural sites non-native species comprised 4\u201314% of the total number of species. However, the importance values of non-native species as a whole did not change significantly across the land use gradient due to the high frequency and abundance of three non-native species (Ligustrum sinense, Lonicera japonica, and Microstegium vimineum) in the majority of the watersheds. Reductions in species richness and overstory reproduction associated with these non-natives could impact long-term forest structure and ecosystem function."
}, { "instance_id": "R54867xR54649", "comparison_id": "R54867", "paper_id": "R54649", "text": "Patterns of alien plant distribution in a river landscape following an extreme flood Abstract The availability of suitable patches and gaps in the landscape is a crucial determinant of invasibility for alien plants. The type and arrangement of patches in the landscape may both facilitate and obstruct alien plant invasions, depending on whether alien species perceive the patches as barriers. In February 2000 tropical weather systems caused an extreme flood with an estimated return interval of 90 to 200 years in the Sabie River, South Africa. The impact of the 2000 flood on the Sabie River landscape provides an array of patches that may provide suitable resources for the establishment of alien plants. This study examines the distribution of alien plants in relation to patchiness of the Sabie River landscape. Our hypothesis was that if certain patches in the river landscape do not represent environmental barriers to alien plant invasion, alien species will occur preferentially in these patch types. The Sabie River within Kruger National Park [KNP] was divided into six patch types (zones, channel types, elevations, geomorphic units, substrates and flood imprint types). We then examined the distribution of native and alien woody and herbaceous density and species richness in patches. The density and species richness of alien plants in the Sabie River in KNP is very low when compared to the density and species richness of native plants. Some patches (bedrock distributary and braid bar geomorphic units) contained higher density and richness of alien plants compared to the other patches examined, indicating that these locations in the river landscape offer the resources necessary for alien plant establishment. Individual alien species are also associated with different parts of the river landscape. 
Failure of large numbers of alien plants to establish after the 2000 flood is most likely due to a combination of factors\u2014the plant specific barriers imposed by landscape patchiness, the high abundance and richness of native vegetation leading to competition, and for some species certainly, the clearing by the management (Working for Water) programme." }, { "instance_id": "R54867xR54765", "comparison_id": "R54867", "paper_id": "R54765", "text": "Limits to tree species invasion in pampean grassland and forest plant communities Factors limiting tree invasion in the Inland Pampas of Argentina were studied by monitoring the establishment of four alien tree species in remnant grassland and cultivated forest stands. We tested whether disturbances facilitated tree seedling recruitment and survival once seeds of invaders were made available by hand sowing. Seed addition to grassland failed to produce seedlings of two study species, Ligustrum lucidum and Ulmus pumila, but did result in abundant recruitment of Gleditsia triacanthos and Prosopis caldenia. While emergence was sparse in intact grassland, seedling densities were significantly increased by canopy and soil disturbances. Longer-term surveys showed that only Gleditsia became successfully established in disturbed grassland. These results support the hypothesis that interference from herbaceous vegetation may play a significant role in slowing down tree invasion, whereas disturbances create microsites that can be exploited by invasive woody plants. Seed sowing in a Ligustrum forest promoted the emergence of all four study species in understorey and treefall gap conditions. Litter removal had species-specific effects on emergence and early seedling growth, but had little impact on survivorship. Seedlings emerging under the closed forest canopy died within a few months. In the treefall gap, recruits of Gleditsia and Prosopis survived the first year, but did not survive in the longer term after natural gap closure. 
The forest community thus appeared less susceptible to colonization by alien trees than the grassland. We conclude that tree invasion in this system is strongly limited by the availability of recruitment microsites and biotic interactions, as well as by dispersal from existing propagule sources." }, { "instance_id": "R54867xR54680", "comparison_id": "R54867", "paper_id": "R54680", "text": "Grassland invasibility and diversity: responses to nutrients, seed input, and disturbance The diversity and composition of a community are determined by a combination of local and regional processes. We conducted a field experiment to examine the impact of resource manipulations and seed addition on the invasibility and diversity of a low-productivity grassland. We manipulated resource levels both by a disturbance treatment that reduced adult plant cover in the spring of the first year and by addition of fertilizer every year. Seeds of 46 native species, both resident and nonresident to the community, were added in spring of the first year to determine the effects of recruitment limitation from local (seed limitation) and regional (dispersal limitation) sources on local species richness. Our results show that the unmanipulated community was not readily invasible. Seed addition increased the species richness of unmanipulated plots, but this was primarily due to increased occurrence of resident species. Nonresident species were only able to invade following a cover-reduction disturbance. Cover reduction resulted in an increase in nitrogen availability in the first year, but had no measurable effect on light availability in any year. In contrast, fertilization created a persistent increase in nitrogen availability that increased plant cover or biomass and reduced light penetration to ground level. Initially, fertilization had an overall positive effect on species richness, but by the third year, the effect was either negative or neutral. 
Unlike cover reduction, fertilization had no observable effect on seedling recruitment or occurrence (number of plots) of invading resident or nonresident species. The results of our experiment demonstrate that, although resource fluctuations can increase the invasibility of this grassland, the community response depends on the nature of the resource change." }, { "instance_id": "R54867xR54661", "comparison_id": "R54867", "paper_id": "R54661", "text": "Invasibility and abiotic gradients: the positive correlation between native and exotic plant diversity We sampled the understory community in an old-growth, temperate forest to test alternative hypotheses explaining the establishment of exotic plants. We quantified the individual and net importance of distance from areas of human disturbance, native plant diversity, and environmental gradients in determining exotic plant establishment. Distance from disturbed areas, both within and around the reserve, was not correlated to exotic species richness. Numbers of native and exotic species were positively correlated at large (50 m\u00b2) and small (10 m\u00b2) plot sizes, a trend that persisted when relationships to environmental gradients were controlled statistically. Both native and exotic species richness increased with soil pH and decreased along a gradient of increasing nitrate availability. Exotic species were restricted to the upper portion of the pH gradient and had individualistic responses to the availability of soil resources. These results are inconsistent with both the diversity-resistance and resource-enrichment hypotheses for invasibility. Environmental conditions favoring native species richness also favor exotic species richness, and competitive interactions with the native flora do not appear to limit the entry of additional species into the understory community at this site. 
It appears that exotic species with niche requirements poorly represented in the regional flora of native species may establish with relatively little resistance or consequence for native species richness." }, { "instance_id": "R54867xR54677", "comparison_id": "R54867", "paper_id": "R54677", "text": "Understory response to management treatments in northern Arizona ponderosa pine forests Abstract A range of forest management treatments was applied to ponderosa pine (Pinus ponderosa) in northern Arizona. The treatments represented the full range of existing stand disturbance conditions and included forest stands that were unmanaged, thinned, thinned and prescribed burned, and burned by stand-replacing wildfire in order of increasing disturbance intensity. We assessed differences in understory diversity associated with these treatments. We identified 195 species of understory plants, focusing on the distribution of natives and exotics. The abundance of understory plants was more sensitive to changes in management treatments than the overall species richness. Exotics on the whole, responded statistically more strongly to disturbance treatments than did natives. Both the richness and abundance of exotic forbs increased significantly with treatment intensity. Species richness remained stable while abundance of native graminoids increased significantly with treatment intensity through thinned and burned stands. Both then decreased significantly in stands that experienced wildfire. The number of native shrub species decreased significantly with treatment intensity. Overall plant diversity was least in the unmanaged stands and progressively increased with intensity of disturbance/stand treatments. Both prescribed burning and wildfire increased plant diversity; however, stand-replacing wildfire also appeared to substantially increase the diversity of exotic plants." 
}, { "instance_id": "R54867xR54578", "comparison_id": "R54867", "paper_id": "R54578", "text": "The roles of habitat features, disturbance, and distance from putative source populations in structuring alien plant invasions at the urban/wildland interface on the Cape Peninsula, South Africa Abstract Natural areas are becoming increasingly fragmented and embedded in an urban matrix. Natural and semi-natural areas at the urban/wildland interface are threatened by a variety of \u2018edge effects\u2019, and are especially vulnerable to invasion by introduced plants, with suburban gardens acting as significant sources of alien propagules. Urban/wildland interfaces also provide access for humans, leading to various types of disturbance. Alien plant invasions are one of the biggest threats facing remaining natural areas on the Cape Peninsula, South Africa. The area provides an ideal opportunity to study the dynamics of invasions at the urban/wildland interface, since the largest natural area, the Table Mountain National Park (TMNP), is surrounded by the city of Cape Town. We explored invasion patterns in Newlands Forest (a small section of the TMNP) and detailed the roles of habitat features and distance from putative source populations in three main habitat types: natural Afromontane forest, riverine woodland habitats, and plantations of exotic pines (Pinus radiata and P. pinaster). We also examined the role of disturbance in driving invasions in two of these habitat types (Afromontane forest and pine plantations). We hypothesized that alien richness and alien stem density would decrease with distance from the urban/wildland interface, and that alien richness and alien stem density would increase with increasing levels of human disturbance. Distance from putative source populations and levels of anthropogenic disturbance influenced alien richness in Newlands Forest but not alien stem density. 
Alien richness decreased significantly with distance from presumed sources in the pine habitat, and increased significantly with disturbance in the forest habitat. Percentage overstorey cover and soil pH were important environmental variables associated with alien plant species. A socio-economic approach is discussed as being the most effective approach to the management and prevention of alien plant species in Newlands Forest." }, { "instance_id": "R54867xR54836", "comparison_id": "R54867", "paper_id": "R54836", "text": "How grazing and soil quality affect native and exotic plant diversity in rocky mountain grasslands We used multiscale plots to sample vascular plant diversity and soil characteristics in and adjacent to 26 long-term grazing exclosure sites in Colorado, Wyoming, Montana, and South Dakota, USA. The exclosures were 7\u201360 yr old (31.2 \u00b1 2.5 yr, mean \u00b1 1 se). Plots were also randomly placed in the broader landscape in open rangeland in the same vegetation type at each site to assess spatial variation in grazed landscapes. Consistent sampling in the nine National Parks, Wildlife Refuges, and other management units yielded data from 78 1000-m2 plots and 780 1-m2 subplots. We hypothesized that native species richness would be lower in the exclosures than in grazed sites, due to competitive exclusion in the absence of grazing. We also hypothesized that grazed sites would have higher native and exotic species richness compared to ungrazed areas, due to disturbance (i.e., the intermediate-disturbance hypothesis) and the conventional wisdom that grazing may accelerate weed invasion. Both hypotheses were soundly rej..." }, { "instance_id": "R54867xR54632", "comparison_id": "R54867", "paper_id": "R54632", "text": "Dalmatian toadflax (Linaria dalmatica) response to wildfire in a southwestern USA forest ABSTRACT Severe wildfires often facilitate the spread of exotic invasive species, such as Dalmatian toadflax (Linaria dalmatica). 
We hypothesized that toadflax growth and reproduction would increase with increasing burn severity in a ponderosa pine (Pinus ponderosa)-dominated forest. We measured toadflax density, cover, flowering stalks, and native species richness and cover on 327 plots for 3 y after a 2001 wildfire. Toadflax stem density, cover, and flowering stalks increased in 2003, then decreased in 2004 in all burn severity classes, but remained higher than initial 2002 values. Toadflax spread to previously uncolonized areas, though stem density decreased in unburned plots. Transition matrices showed that more plots on moderately (73%) and severely (74%) burned areas classified as high toadflax density in 2002 remained high density in 2004. Deterministic matrix modeling using 2002 to 2004 transition probabilities projected that the percentage of high-density plots would stabilize on moderately and severely burned sites at 41 and 61%, respectively. In contrast, 20-y rates of change (\u03bb) for unburned and low-severity burn sites were <1.0, and stabilizing at 2% for unburned plots and 19% for low-severity burn plots. Post-wildfire conditions in high-severity burned areas favour increased density, cover, reproduction, and spread of Dalmatian toadflax, while native species richness was reduced, suggesting that the invasive species would persist, at least in the short term, at the expense of natives. Nomenclature: USDA NRCS, 2007." }, { "instance_id": "R54867xR54828", "comparison_id": "R54867", "paper_id": "R54828", "text": "Quantifying the impact of an extreme climate event on species diversity in fragmented temperate forests: the effect of the October 1987 storm on British broadleaved woodlands Summary 1. We report the impact of an extreme weather event, the October 1987 severe storm, on fragmented woodlands in southern Britain. 
We analysed ecological changes between 1971 and 2002 in 143 200-m\u00b2 plots in 10 woodland sites exposed to the storm with an ecologically equivalent sample of 150 plots in 16 non-exposed sites. Comparing both years, understorey plant species-richness, species composition, soil pH and woody basal area of the tree and shrub canopy were measured. 2. We tested the hypothesis that the storm had deflected sites from the wider national trajectory of an increase in woody basal area and reduced understorey species-richness associated with ageing canopies and declining woodland management. We also expected storm disturbance to amplify the background trend of increasing soil pH, a UK-wide response to reduced atmospheric sulphur deposition. Path analysis was used to quantify indirect effects of storm exposure on understorey species richness via changes in woody basal area and soil pH. 3. By 2002, storm exposure was estimated to have increased mean species richness per 200 m\u00b2 by 32%. Woody basal area changes were highly variable and did not significantly differ with storm exposure. 4. Increasing soil pH was associated with a 7% increase in richness. There was no evidence that soil pH increased more as a function of storm exposure. Changes in species richness and basal area were negatively correlated: a 3.4% decrease in richness occurred for every 0.1-m\u00b2 increase in woody basal area per plot. 5. Despite all sites substantially exceeding the empirical critical load for nitrogen deposition, there was no evidence that in the 15 years since the storm, disturbance had triggered a eutrophication effect associated with dominance of gaps by nitrophilous species. 6. Synthesis. Although the impacts of the 1987 storm were spatially variable in terms of impacts on woody basal area, the storm had a positive effect on understorey species richness. There was no evidence that disturbance had increased dominance of gaps by invasive species. 
This could change if recovery from acidification results in a soil pH regime associated with greater macronutrient availability." }, { "instance_id": "R54867xR54851", "comparison_id": "R54867", "paper_id": "R54851", "text": "Lateral differentiation and the role of exotic species in roadside vegetation in southern New Zealand Summary Roadside vegetation was surveyed across the southern part of the South Island, New Zealand. Samples were taken at 10 km intervals along 750 km of selected roads that provided a climatic gradient from semi-arid to hyperoceanic conditions and which crossed both, areas of farmland, where the native vegetation has been replaced by an anthropogenic plant cover consisting almost entirely of introduced species, and areas of managed native tussock grassland and native forest. Contiguous plots, placed in four zones parallel to the road, were used to examine any lateral differentiation of vegetation. Variation in floristic composition in all four zones was associated with variation in rainfall, continentality, altitude, and the presence of forest. In all sites there was a distinct change in species composition from the outer verge to the inner roadside. The vegetation of the zone nearest to the road showed weaker correlation with altitude and stronger correlation with continentality, a marked increase in short-lived exotic species, and a greater proportion of the more continental and weedy vegetation types than the vegetation of the outermost verge. This supports the hypothesis of anthropogenic continentality of road-shoulders. The most frequent species on the road-shoulders are those exotic species that transgress climatic barriers in their native continents. This suggests that, globally, the range of such species is liable to expand, particularly in the habitat-complex provided by roadsides." 
}, { "instance_id": "R54867xR54861", "comparison_id": "R54867", "paper_id": "R54861", "text": "The impact of fire, and its potential role in limiting the distribution of Bryophyllum delagoense (Crassulaceae) in southern Africa Increasing emphasis has been placed on identifying traits of introduced species which predispose them to invade, and characteristics of ecosystems which make them susceptible to invasion. Habitat disturbance such as floods, fires and tree-falls may make ecosystems more prone to invasion. However, in this study the absence of fire was considered to be a factor in facilitating the invasion potential of a Madagascan endemic, Bryophyllum delagoense. Fire trials in South Africa killed 89 and 45% of B. delagoense plants in a high and low intensity controlled fire, respectively, with tall plants and those growing in clumps more likely to escape being killed. A reduction in the incidence and intensity of fires may therefore facilitate the invasion of B. delagoense and contribute to its invasive potential. Overgrazing, which reduces the frequency and intensity of fires probably facilitates the invasion of large and small succulent species. In South Africa, B. delagoense is still considered to be a minor weed or garden escape, despite its introduction to southern Africa 175 years earlier than in Australia, where it is extremely invasive. However, other succulents such as Opuntia species have become invasive on both continents, confounding our hypothesis that fire may be inhibiting B. delagoense from becoming invasive in southern Africa. However, closer analysis of Opuntia literature indicates that smaller species, similar in size to B. delagoense, are more likely to be killed, even by low intensity fires. We speculate that B. delagoense is more invasive in Australia because of a reduction in the frequency and intensity of fires and that fire is, amongst other factors, largely responsible for inhibiting its invasion potential in southern Africa." 
}, { "instance_id": "R54867xR54767", "comparison_id": "R54867", "paper_id": "R54767", "text": "Predicting Richness of Native, Rare, and Exotic Plants in Response to Habitat and Disturbance Variables across a Variegated Landscape Species richness of native, rare native, and exotic understorey plants was recorded at 120 sites in temperate grassy vegetation in New South Wales. Linear models were used to predict the effects of environment and disturbance on the richness of each of these groups. Total native species and rare native species showed similar responses, with richness declining on sites of increasing natural fertility of parent material as well as declining under conditions of water" }, { "instance_id": "R54867xR54611", "comparison_id": "R54867", "paper_id": "R54611", "text": "Macroinvertebrates in North American tallgrass prairie soils: effects of fire, mowing, and fertilization on density and biomass The responses of tallgrass prairie plant communities and ecosystem processes to fire and grazing are well characterized. However, responses of invertebrate consumer groups, and particularly soil-dwelling organisms, to these disturbances are not well known. At Konza Prairie Biological Station, we sampled soil macroinvertebrates in 1994 and 1999 as part of a long-term experiment designed to examine the effects and interactions of annual fire, mowing, and fertilization (N and P) on prairie soil communities and processes. For nearly all taxa, in both years, responses were characterized by significant treatment interactions, but some general patterns were evident. Introduced European earthworms (Aporrectodea spp. and Octolasion spp.) were most abundant in plots where fire was excluded, and the proportion of the total earthworm community consisting of introduced earthworms was greater in unburned, unmowed, and fertilized plots. Nymphs of two Cicada genera were collected (Cicadetta spp. and Tibicen spp.). 
Cicadetta nymphs were more abundant in burned plots, but mowing reduced their abundance. Tibicen nymphs were collected almost exclusively from unburned plots. Treatment effects on herbivorous beetle larvae (Scarabaeidae, Elateridae, and Curculionidae) were variable, but nutrient additions (N or P) usually resulted in greater densities, whereas mowing usually resulted in lower densities. Our results suggest that departures from historical disturbance regimes (i.e. frequent fire and grazing) may render soils more susceptible to increased numbers of European earthworms, and that interactions between fire, aboveground biomass removal, and vegetation responses affect the structure and composition of invertebrate communities in tallgrass prairie soils." }, { "instance_id": "R54867xR54618", "comparison_id": "R54867", "paper_id": "R54618", "text": "Introduced and native ground beetle assemblages (Coleoptera: Carabidae) along a successional gradient in an urban landscape According to the intermediate disturbance hypothesis (IDH), species diversity should be higher at sites with intermediate levels of disturbance. We tested this hypothesis using ground beetles (Coleoptera: Carabidae) collected in pitfall traps from sites that varied in time since last disturbance. This successional gradient was embedded in an urban landscape near Montreal, Quebec. We predicted that diversity in young forests and old fields would be higher than in agricultural fields and old forests. Fifty-five species (2932 individuals) were found in 2003 and 46 species (2207 individuals) in 2004. In both years, species richness was highest from traps placed in agricultural fields. We collected nine introduced species; these had higher catch rates than the native species in both years (64.8% of total catch). 
When introduced species were removed from the Nonmetric Multidimensional Scaling ordination analysis, the assemblages from agricultural fields were less distinct compared to those of the other habitats, suggesting the introduced fauna is important in structuring carabid assemblages from the agricultural fields. Introduced species may play a significant role in the community composition of ground beetles in urban landscapes, and their influence may be the cause of the lack of support found for the IDH." }, { "instance_id": "R54867xR54665", "comparison_id": "R54867", "paper_id": "R54665", "text": "Impacts and interactions of multiple human perturbations in a California salt marsh Multiple disturbances to ecosystems can influence community structure by modifying resistance to and recovery from invasion by non-native species. Predicting how invasibility responds to multiple anthropogenic impacts is particularly challenging due to the variety of potential stressors and complex responses. Using manipulative field experiments, we examined the relative impact of perturbations that primarily change abiotic or biotic factors to promote invasion in coastal salt marsh plant communities. Specifically we test the hypotheses that nitrogen enrichment and human trampling facilitate invasion of upland weeds into salt marsh, and that the ability of salt marsh communities to resist and/or recover from invasion is modified by hydrological conditions. Nitrogen enrichment affected invasion of non-native upland plants at only one of six sites, and increased aboveground native marsh biomass at only two sites. Percent cover of native marsh plants declined with trampling at all sites, but recovered earlier at tidally flushed sites than at tidally restricted sites. Synergistic interactions between trampling and restricting tidal flow resulted in significantly higher cover of non-native upland plants in trampled plots at tidally restricted sites. 
Percent cover of non-native plants recovered to pre-trampling levels in fully tidal sites, but remained higher in tidally restricted sites after 22 months. Thus, perturbations that reduce biotic resistance interact with perturbations that alter abiotic conditions to promote invasion. This suggests that to effectively conserve or restore native biodiversity in altered systems, one must consider impacts of multiple human disturbances, and the interactions between them." }, { "instance_id": "R54867xR54692", "comparison_id": "R54867", "paper_id": "R54692", "text": "TEMPERATURE EFFECTS ON SEEDLING EMERGENCE FROM BOREAL WETLAND SOILS - IMPLICATIONS FOR CLIMATE CHANGE Abstract Temperature treatments simulated global warming effects on seedling emergence of wetland species from soil seed banks of the Peace-Athabasca Delta, Alberta, Canada. Introduced weedy species, such as Tanacetum vulgare L., had up to a 10-fold greater emergence at high temperature (30\u00b0C for 18 h with light, 15\u00b0C for 6 h in the dark) than at low temperature (20/10\u00b0C). Seedling emergence of native weedy species, such as Calamagrostis canadensis (Michx.) Beauv., was 1.5\u20133 times greater at low temperature. Other native weedy species, such as Rubus idaeus L., emerged only from samples at low temperature. Emergence of native non-weedy species was greatest at high temperature, even though mature plants of species such as Ranunculus hyperboreus Rottb. and Carex eburnea Boott are normally found in cool and moist habitats. Of those species expected to persist in warm and dry habitats, only introduced weedy species showed consistent and significantly greater seedling emergence at high temperature. It is hypothesized, therefore, that the abundance of introduced weedy species would increase in disturbed or sparsely vegetated zones around water bodies as these zones become dry and warm with climate change." 
}, { "instance_id": "R54867xR54597", "comparison_id": "R54867", "paper_id": "R54597", "text": "Diversity patterns of small mammals in the Zambales Mts., Luzon, Philippines Abstract In 2004 and 2005, we conducted a survey of the small mammals on Mt. Tapulao (=Mt. High Peak, 2037 m) in the Zambales Mountains, Luzon Island, Philippines in order to obtain the first information on the mammals of this newly discovered center of endemism. We also tested two hypotheses regarding the relationship of species richness with elevation and the impact of alien species on native mammals. The survey covered five localities representing habitats from regenerating lowland rain forest at 860 m to mossy rain forest near the peak at 2024 m. We recorded 11 species, including 1 native shrew, 1 alien shrew, 8 native rodents, and 1 alien rodent. Two species of Apomys and one species of Rhynchomys are endemic to Zambales; this establishes the Zambales Mountains as a significant center of mammalian endemism. Species richness of native small mammals increased with elevation, from five species in the lowlands at 925 m to seven species in mossy forest at 2024 m; total relative abundance of native small mammals increased from 925 to 1690 m, then declined at 2024 m. Alien small mammals were restricted to highly disturbed areas. Our results support the prediction that maximum species richness of small mammals would occur in lower mossy forest near the peak, not near the center of the gradient. Our results also support the hypothesis that when a diverse community of native Philippine small mammals is present in either old-growth or disturbed forest habitat, \u201cinvasive\u201d alien species are unable to penetrate and maintain significant populations in forest." 
}, { "instance_id": "R54867xR54834", "comparison_id": "R54867", "paper_id": "R54834", "text": "Growth patterns of the alien perennial Bunias orientalis L. (Brassicaceae) underlying its rising dominance in some native plant assemblages The polycarpic perennial Bunias orientalis L. (Brassicaceae) introduced to Central Europe in the 18th century recently entered a phase of rapid spread accompanied by sudden establishments of extensive dominance stands mainly on roadside locations. We studied vegetation structure and expansion rate of B. orientalis stands and performed a series of experiments to investigate key factors that underlie the colonizing and establishment of B. orientalis. Reiterated observations exposed a high current expansion rate of B. orientalis populations and vegetation surveys of B. orientalis stands showed that these stands were mainly composed of Artemisietea and Arrhenateretea species. Regeneration experiments with root fragments revealed high regeneration capacities: root fragments of 3 cm length showed 93% regeneration, varying water content (10\u201350% water loss) and separation into root cortex and root stele yielded regeneration of 30 to 50%. In a field study high regrowth after mowing with varying mowing intensity indicates B. orientalis to be well adapted to disturbed sites as its preferential locations for development of dominance stands. Vegetative growth parameters were studied in two controlled growth experiments with elevated nutrient availabilities. B. orientalis exhibited a high sensitivity to nutrient addition and rosette sizes of maximal 90 cm were reached. Biomass was comparable or even higher than that of native ruderals grown in the same experiment. Measurements of reproductive parameters revealed a high reproductive effort (0.2 to 0.45 g g\u207b\u00b9) even under intense mowing regimes, resulting in a dense seed bank with maximal values of about 400 fruits (\u2245550 seeds) l\u207b\u00b9 soil. With respect to colonization and establishment of B. 
orientalis the results of our study enable the formulation of three hypotheses." }, { "instance_id": "R54867xR54720", "comparison_id": "R54867", "paper_id": "R54720", "text": "The influence of anthropogenic disturbance and environmental suitability on the distribution of the nonindigenous amphipod, Echinogammarus ischnus, at Laurentian Great Lakes coastal margins ABSTRACT Invasion ecology offers a unique opportunity to examine drivers of ecological processes that regulate communities. Biotic resistance to nonindigenous species establishment is thought to be greater in communities that have not been disturbed by human activities. Alternatively, invasion may occur wherever environmental conditions are appropriate for the colonist, regardless of the composition of the existing community and the level of disturbance. We tested these hypotheses by investigating distribution of the nonindigenous amphipod, Echinogammarus ischnus Stebbing, 1899, in co-occurrence with a widespread amphipod, Gammarus fasciatus Say, 1818, at 97 sites across the Laurentian Great Lakes coastal margins influenced by varying types and levels of anthropogenic stress. E. Ischnus was distributed independently of disturbance gradients related to six anthropogenic disturbance variables that summarized overall nutrient input, nitrogen, and phosphorus load carried from the adjacent coastal watershed, agricultural land area, human population density, overall pollution loading, and the site-specific dominant stressor, consistent with the expectations of regulation by general environmental characteristics. Our results support the view that the biotic facilitation by dreissenid mussels and distribution of suitable habitats better explain E. ischnus' distribution at Laurentian Great Lakes coastal margins than anthropogenic disturbance." 
}, { "instance_id": "R54867xR54791", "comparison_id": "R54867", "paper_id": "R54791", "text": "EFFECTS OF DISTURBANCE ON HERBACEOUS EXOTIC PLANT-SPECIES ON THE FLOODPLAIN OF THE POTOMAC RIVER - The objective of this study was to investigate specific effects of disturbance on exotic species in floodplain environments and to provide baseline data on the abundance of exotic herbs in the Potomac River floodplain. Frequency of exotics generally increased with man-made disturbance (forest fragmentation and recreational use of land) and decreased with increasing flooding frequency. Species richness of exotics followed a similar pattern. Some variation was found in individual species' responses to disturbance. The spread of Alliaria officinalis and Glecoma hederacea, the most frequent exotic species, was inhibited by forest fragmentation." }, { "instance_id": "R54867xR54749", "comparison_id": "R54867", "paper_id": "R54749", "text": "Large Herbivore Grazing and Non-native Plant Invasions in Montane Grasslands of Central Argentina ABSTRACT: Grazing by large herbivores has the potential to facilitate invasion of natural grasslands by non-native plant species. Often, both herbivore identity and plant community type modulate this effect. The objective of this study was to evaluate the impact of grazing on non-native plant species richness and cover in montane grasslands of central Argentina as related to herbivore identity (horse or cattle) and plant community type. The study was conducted in piedmont valleys of the Ventania Mountains. The area is occupied by two major types of plant communities: short-needlegrass and tall-tussock grasslands. Short-needlegrass grasslands occupy poor soils and have higher plant species diversity compared to tall-tussock grasslands which typically grow on rich soils. Part of the study area is devoted to cattle husbandry, part is inhabited by feral horses, and part has been free of grazing by large herbivores for the last 15 years. 
We compared non-native species richness and cover at three levels of grazing (horse grazing, cattle grazing, grazing exclusion) and two levels of plant community type (short-needlegrass grassland and tall-tussock grassland) at the end of the growing season in 2006 and 2007. Thirty-one non-native plant species were found growing in the study area. Grazing increased non-native species richness and cover and was highest under horse grazing and in communities on resource-rich soils. Our results are consistent with the hypothesis that grazing by large non-native herbivores can facilitate non-native plant species invasion of natural grasslands. They also suggest that herbivore identity and community type modulate the effect of large herbivore grazing on grassland invasion by non-native plant species." }, { "instance_id": "R54867xR54770", "comparison_id": "R54867", "paper_id": "R54770", "text": "Disturbance-mediated competition and the spread of Phragmites australis in a coastal marsh In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes. We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. 
Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. A greenhouse experiment showed that, with increasing nutrient levels, P. australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species." }, { "instance_id": "R54867xR54585", "comparison_id": "R54867", "paper_id": "R54585", "text": "The incidence of exotic species following clearfelling of Eucalyptus regnans forest in the Central Highlands, Victoria Invasion by exotic species following clearfelling of Eucalyptus regnans F. Muell. (Mountain Ash) forest was examined in the Toolangi State Forest in the Central Highlands of Victoria. Coupes ranging in age from < 1- to 10-years-old and the spar-stage forests (1939 bushfire regrowth) adjacent to each of these coupes and a mature, 250-year-old forest were surveyed. The dispersal and establishment of weeds was facilitated by clearfelling. 
An influx of seeds of exotic species was detected in recently felled coupes but not in the adjacent, unlogged forests. Vehicles and frequently disturbed areas, such as roadside verges, are likely sources of the seeds of exotic species. The soil seed bank of younger coupes had a greater number and percentage of seeds of exotics than the 10-year-old coupes and the spar-stage and mature forests. Exotic species were a minor component (< 1% vegetation cover) in the more recently logged coupes and were not present in 10-year-old coupes and the spar-stage and mature forests. These particular exotic species did not persist in the dense regeneration nor exist in the older forests because the weeds were ruderal species (light-demanding, short-lived and short-statured plants). The degree of influence that these particular exotic species have on the regeneration and survival of native species in E. regnans forests is almost negligible. However, the current management practices may need to be addressed to prevent a more threatening exotic species from establishing in these coupes and forests." }, { "instance_id": "R54867xR54583", "comparison_id": "R54867", "paper_id": "R54583", "text": "Disturbance as a factor in the distribution of sugar maple and the invasion of Norway maple into a modified woodland Disturbances have the potential to increase the success of biological invasions. Norway maple (Acer platanoides), a common street tree native to Europe, is a foreign invasive with greater tolerance and more efficient resource utilization than the native sugar maple (Acer saccharum). This study examined the role disturbances from a road and path played in the invasion of Norway maple and in the distribution of sugar maple. Disturbed areas on the path and nearby undisturbed areas were surveyed for both species along transects running perpendicular to a road. 
Norway maples were present in greater number closer to the road and on the path, while the number of sugar maples was not significantly associated with either the road or the path. These results suggest that human-caused disturbances have a role in facilitating the establishment of an invasive species." }, { "instance_id": "R54867xR54699", "comparison_id": "R54867", "paper_id": "R54699", "text": "Effects of an intense prescribed fire on understory vegetation in a mixed conifer forest Abstract Huisinga, K. D., D. C. Laughlin, P. Z. Ful\u00e9, J. D. Springer, and C. M. McGlone (Ecological Restoration Institute and School of Forestry, Northern Arizona University, Box 15017, Flagstaff, AZ 86011). Effects of an intense prescribed fire on understory vegetation in a mixed conifer forest. J. Torrey Bot. Soc. 132: 590\u2013601. 2005.\u2014Intense prescribed fire has been suggested as a possible method for forest restoration in mixed conifer forests. In 1993, a prescribed fire in a dense, never-harvested forest on the North Rim of Grand Canyon National Park escaped prescription and burned with greater intensity and severity than expected. We sampled this burned area and an adjacent unburned area to assess fire effects on understory species composition, diversity, and plant cover. The unburned area was sampled in 1998 and the burned area in 1999; 25% of the plots were resampled in 2001 to ensure that differences between sites were consistent and persistent, and not due to inter-annual climatic differences. Species composition differed significantly between unburned and burned sites; eight species were identified as indicators of the unburned site and thirteen as indicators of the burned site. Plant cover was nearly twice as great in the burned site than in the unburned site in the first years of measurement and was 4.6 times greater in the burned site in 2001. 
Average and total species richness was greater in the burned site, explained mostly by higher numbers of native annual and biennial forbs. Overstory canopy cover and duff depth were significantly lower in the burned site, and there were significant inverse relationships between these variables and plant species richness and plant cover. Greater than 95% of the species in the post-fire community were native and exotic plant cover never exceeded 1%, in contrast with other northern Arizona forests that were dominated by exotic species following high-severity fires. This difference is attributed to the minimal anthropogenic disturbance history (no logging, minimal grazing) of forests in the national park, and suggests that park managers may have more options than non-park managers to use intense fire as a tool for forest conservation and restoration." }, { "instance_id": "R54867xR54844", "comparison_id": "R54867", "paper_id": "R54844", "text": "Effects of fragmentation and invasion on native ant communities in coastal southern California We investigated the roles of habitat fragmentation and the invasion of an exotic species on the structure of ground-foraging ant communities in 40 scrub habitat fragments in coastal southern California. In particular, we asked: how do fragment age, fragment size, amount of urban edge, percentage of native vegetation, degree of isolation, and the relative abundance of an exotic species, the Argentine ant (Linepithema humile) affect native ants? Within these fragments, Argentine ants were more abundant near developed edges and in areas dominated by exotic vegetation. The number of native ground-foraging ant species at any point declined from an average of >7 to <2 species in the presence of the Argentine ant. 
Among fragments, a stepwise multiple regression revealed that the abundance of Argentine ants, the size of the fragment, and the number of years since it was isolated from larger continuous areas of scrub habitat best predict the number of remaining native ant species. The Argentine ant was found in every fragment surveyed as well as around the edges of larger unfragmented areas. Fragments had fewer native ant species than similar-sized plots within large unfragmented areas, and fragments with Argentine ant-free refugia had more native ant species than those without refugia. The relative vulnerability of native ants to habitat fragmentation and the subsequent presence of Argentine ants vary among species. The most sensitive species include army ants (Neivamyrmex spp.) and harvester ants (genera Messor and Pogonomyrmex), both of which are important to ecosystem-level processes. Our surveys suggest that the Argentine ant is widespread in fragmented coastal scrub habitats in southern California and strongly affects native ant communities." }, { "instance_id": "R54867xR54713", "comparison_id": "R54867", "paper_id": "R54713", "text": "Paving the Way for Invasive Species: Road Type and the Spread of Common Ragweed (Ambrosia artemisiifolia) Roads function as prime habitats and corridors for invasive plant species. Yet despite the diversity of road types, there is little research on the influence of these types on the spread of invaders. Common ragweed (Ambrosia artemisiifolia), a plant producing large amounts of allergenic pollen, was selected as a species model for examining the impact of road type on the spread of invasive plants. We examined this relationship in an agricultural region of Quebec, Canada. We mapped plant distribution along different road types, and constructed a model of species presence. Common ragweed was found in almost all sampling sites located along regional (97%) and local paved (81%) roads. 
However, verges of unpaved local roads were rarely (13%) colonized by the plant. A model (53% of variance explained), constructed with only four variables (paved regional roads, paved local roads, recently mown road verges, forest cover), correctly predicted (success rate: 89%) the spatial distribution of common ragweed. Results support the hypothesis that attributes associated with paved roads strongly favour the spread of an opportunistic invasive plant species. Specifically, larger verges and greater disturbance associated with higher traffic volume create propitious conditions for common ragweed. To date, emphasis has been placed on controlling the plant in agricultural fields, even though roadsides are probably a much larger seed source. Strategies for controlling the weed along roads have only focused on major highways, even though the considerable populations along local roads also contribute to the production of pollen. Management prioritizations developed to control common ragweed are thus questionable." }, { "instance_id": "R54867xR54824", "comparison_id": "R54867", "paper_id": "R54824", "text": "Pre-fire fuel reduction treatments influence plant communities and exotic species 9 years after a large wildfire Questions: How did post-wildfire understorey plant community response, including exotic species response, differ between pre-fire treated areas that were less severely burned, and pre-fire untreated areas that were more severely burned? Were these differences consistent through time? Location: East-central Arizona, southwestern US. Methods: We used a multi-year data set from the 2002 Rodeo\u2013Chediski Fire to detect post-fire trends in plant community response in burned ponderosa pine forests. Within the burn perimeter, we examined the effects of pre-fire fuels treatments on post-fire vegetation by comparing paired treated and untreated sites on the Apache-Sitgreaves National Forest. We sampled these paired sites in 2004, 2005 and 2011. 
Results: There were significant differences in pre-fire treated and untreated plant communities by species composition and abundance in 2004 and 2005, but these communities were beginning to converge in 2011. Total understorey plant cover was significantly higher in untreated areas for all 3 yr. Plant cover generally increased between 2004 and 2005 and markedly decreased in 2011, with the exception of shrub cover, which steadily increased through time. The sharp decrease in forb and graminoid cover in 2011 is likely related to drought conditions since the fire. Annual/biennial forb and graminoid cover decreased relative to perennial cover through time, consistent with the initial floristics hypothesis. Exotic plant response was highly variable and not limited to the immediate post-fire, annual/biennial community. Despite low overall exotic forb and graminoid cover for all years (<2.5%), several exotic species increased in frequency, and the relative proportion of exotic to native cover increased through time. Conclusions: Pre-fire fuel reduction treatments helped maintain foundation overstorey species and associated native plant communities following this large wildfire. The overall low cover of exotic species on these sites supports other findings that the disturbance associated with high-severity fire does not always result in exotic species invasions. The increase in relative cover and frequency through time indicates that some species are proliferating, and continued monitoring is recommended. Patterns of exotic species invasions after severe burning are not easily predicted, and are likely more dependent on site-specific factors such as propagules, weather patterns and management." 
}, { "instance_id": "R54867xR54609", "comparison_id": "R54867", "paper_id": "R54609", "text": "An experimental study of plant community invasibility A long-term field experiment in limestone grassland near Buxton (North Derbyshire, United Kingdom) was designed to identify plant attributes and vegetation characteristics conducive to successful invasion. Plots containing crossed, continuous gradients of fertilizer addition and disturbance intensity were subjected to a single-seed inoculum comprising a wide range of plant functional types and 54 species not originally present at the site. Several disturbance treatments were applied; these included the creation of gaps of contrasting size and the mowing of the vegetation to different heights and at different times of the year. This paper analyzes the factors controlling the initial phase of the resulting invasions within the plots subject to gap creation. The susceptibility of the indigenous community to invasion was strongly related to the availability of bare ground created, but greatest success occurred where disturbance coincided with eutrophication. Disturbance damage to the indigenous dominants (particularly Festuca ovina) was an important determinant of seedling establishment by the sown invaders. Large seed size was identified as an important characteristic allowing certain species to establish relatively evenly across the productivity-disturbance matrix; smaller-seeded species were more dependent on disturbance for establishment. Successful and unsuccessful invaders were also distinguished to some extent by differences in germination requirements and present geographical distribution." 
}, { "instance_id": "R54867xR54801", "comparison_id": "R54867", "paper_id": "R54801", "text": "Herbaceous layer contrast and alien plant occurrence in utility corridors and riparian forests of the Allegheny High Plateau communities by alien plant species that can adversely affect community structure and function. To determine how corridor establishment influences riparian vegetation of the Allegheny High Plateau of northwestern Pennsylvania, we compared the species composition and richness of the herbaceous layer (all vascular plants \u2264 1 m tall) of utility corridors and adjacent headwater riparian forests, and tested the hypothesis that utility corridors serve as foci for the invasion of adjacent riparian forest by alien vascular plants. We contrasted plant species richness and vegetative cover, cover by growth form, species richness and cover of alien plants and cover of microhabitat components (open soil, rock, leaf litter, log, bryophyte) in utility corridors and adjacent riparian forest at 17 sites. Cluster analysis revealed that herbaceous layer species assemblages in corridors and riparian forest were compositionally distinct. Herbaceous layer cover and species richness were significantly (P \u2264 0.05) greater in corridors than in riparian forest. Fern, graminoid, and forb species co-dominated herbaceous layer cover in corridors; fern cover dominated riparian forests. Cover of alien plants was significantly greater in corridors than in riparian forest. Alien plant species richness and cover were significantly and positively correlated with open soil, floodplain width, and active channel width in corridors but were significantly and negatively correlated with litter cover in riparian forest. 
Given that the majority of alien plant species we found in corridors were shade-intolerant and absent from riparian forests, we conclude that open utility corridors primarily serve as habitat refugia, rather than as invasion foci, for alien plant species in riparian forests of the Allegheny High Plateau." }, { "instance_id": "R54867xR54832", "comparison_id": "R54867", "paper_id": "R54832", "text": "The paradox of invasion in birds: competitive superiority or ecological opportunism? Why can alien species succeed in environments to which they have had no opportunity to adapt and even become more abundant than many native species? Ecological theory suggests two main possible answers for this paradox: competitive superiority of exotic species over native species and opportunistic use of ecological opportunities derived from human activities. We tested these hypotheses in birds combining field observations and experiments along gradients of urbanization in New South Wales (Australia). Five exotic species attained densities in the study area comparable to those of the most abundant native species, and hence provided a case for the invasion paradox. The success of these alien birds was not primarily associated with a competitive superiority over native species: the most successful invaders were smaller and less aggressive than their main native competitors, and were generally excluded from artificially created food patches where competition was high. More importantly, exotic birds were primarily restricted to urban environments, where the diversity and abundance of native species were low. This finding agrees with previous studies and indicates that exotic and native species rarely interact in nature. 
Observations and experiments in the field revealed that the few native species that exploit the most urbanized environments tended to be opportunistic foragers, adaptations that should facilitate survival in places where disturbances by humans are frequent and natural vegetation has been replaced by man-made structures. Successful invaders also shared these features, suggesting that their success is not a paradox but can be explained by their capacity to exploit ecological opportunities that most native species rarely use." }, { "instance_id": "R54867xR54603", "comparison_id": "R54867", "paper_id": "R54603", "text": "Non-indigenous woody invasive plants in a rural New England town We investigated the abundance of non-indigenous woody invasive plants in Farmington, Maine, a rural New England town in a forested landscape. We found 12 invasive species and more than 7 patches per km from surveys on 33 transects (54.3 km) along field edges, abandoned railroad right-of-ways, roadsides, and riparian zones. Invasive abundance was apparently lower than for more developed areas of the northeastern US, where, in contrast to western Maine, invasives have extensively penetrated forest interiors. Invasive abundance increased with the amount of landscaping and proximity to town, suggesting a close association between local horticulture and the spread of woody invasives. Invasive abundance and diversity were highest in riparian areas, probably due to relatively high levels of propagule pressure. Species differed in the extent of invasiveness, ranging from those still dependent on planted parent trees to fully invasive populations. The invasive species recorded in this study have caused environmental and economic damage elsewhere. The lower levels of invasiveness in Farmington are likely a result of the isolation, small human population, and forested landscape rather than low levels of invasibility. 
This suggests the potential for future risks, and the importance of intervention while populations can still be eradicated or controlled." }, { "instance_id": "R54867xR54725", "comparison_id": "R54867", "paper_id": "R54725", "text": "Fire and grazing impacts on plant diversity and alien plant invasions in the southern Sierra Nevada Patterns of native and alien plant diversity in response to disturbance were examined along an elevational gradient in blue oak savanna, chaparral, and coniferous forests. Total species richness, alien species richness, and alien cover declined with elevation, at scales from 1 to 1000 m2. We found no support for the hypothesis that community diversity inhibits alien invasion. At the 1-m2 point scale, where we would expect competitive interactions between the largely herbaceous flora to be most intense, alien species richness as well as alien cover increased with increasing native species richness in all communities. This suggests that aliens are limited not by the number of native competitors, but by resources that affect establishment of both natives and aliens. Blue oak savannas were heavily dominated by alien species and consistently had more alien than native species at the 1-m2 scale. All of these aliens are annuals, and it is widely thought that they have displaced native bunchgrasses. If true, this..." }, { "instance_id": "R54867xR54717", "comparison_id": "R54867", "paper_id": "R54717", "text": "Road verges as invasion corridors? A spatial hierarchical test in an arid ecosystem Disturbed habitats are often swiftly colonized by alien plant species. Human inhabited areas may act as sources from which such aliens disperse, while road verges have been suggested as corridors facilitating their dispersal. We therefore hypothesized that (i) houses and urban areas are propagule sources from which aliens disperse, and that (ii) road verges act as corridors for their dispersal. 
We sampled presence and cover of aliens in 20 plots (6 \u00d7 25 m) per road at 5-km intervals for four roads, nested within three localities around cities (n = 240). Plots consisted of three adjacent nested transects. Houses (n = 3,349) were mapped within a 5-km radius from plots using topographical maps. Environmental processes as predictors of alien composition differed across spatial levels. At the broadest scale road-surface type, soil type, and competition from indigenous plants were the strongest predictors of alien composition. Within localities disturbance-related variables such as distance from dwellings and urban areas were associated with alien composition, but their effect differed between localities. Within roads, density and proximity of houses was related to higher alien species richness. Plot distance from urban areas, however, was not a significant predictor of alien richness or cover at any of the spatial levels, refuting the corridor hypothesis. Verges hosted but did not facilitate the spread of alien species. The scale dependence and multiplicity of mechanisms explaining alien plant communities found here highlight the importance of considering regional climatic gradients, landscape context and road-verge properties themselves when managing verges." }, { "instance_id": "R54867xR54654", "comparison_id": "R54867", "paper_id": "R54654", "text": "Anthropogenic and environmental effects on invasive mammal distribution in northern Patagonia, Argentina Abstract Anthropogenic disturbance is an important factor influencing biological invasions. The European hare (Lepus europaeus) and wild boar (Sus scrofa) are invasive species known to cause substantial environmental damage, and were introduced to Argentina during the early 1900s. 
We compared the relative importance of anthropogenic and environmental factors in hare and boar occurrence in Nahuel Huapi National Park, Argentina, and assessed the hypothesis that invasion can occur regardless of anthropogenic disturbance. Also, we assessed whether hare and boar occupancy offered support for the disturbance hypothesis, which states that invasive species are facilitated by anthropogenic disturbance. We deployed 80 cameras from February to May 2012 and January to April 2013 and at each site measured three environmental (land cover, horizontal cover, and percentage herbaceous vegetation) and three anthropogenic (distance to nearest human settlement, distance to nearest road, and average daily number of people) variables. We used likelihood-based occupancy modeling to estimate site occurrence and detectability. We obtained 480 independent detections of hares and 134 of boars in 1680 camera days. Environmental factors had a greater effect on hare occupancy than anthropogenic disturbances, and hare occupancy was greater in more open areas and closer to human settlements, supporting both hypotheses. Boar occurrence was equally influenced by anthropogenic and environmental factors, and offered mixed support for both hypotheses; boars were present only in humid land covers, and occupancy was lesser closer to settlements but greater closer to roads. Species responses to anthropogenic and environmental factors can vary based on life history traits and role in human society. Identifying the effect of environmental factors and human disturbances on species is fundamental for allocating limited resources in management and conservation." 
}, { "instance_id": "R54867xR54576", "comparison_id": "R54867", "paper_id": "R54576", "text": "Plant invasions along mountain roads: the altitudinal amplitude of alien Asteraceae forbs in their native and introduced ranges Studying plant invasions along environmental gradients is a promising approach to dissect the relative importance of multiple interacting factors that affect the spread of a species in a new range. Along altitudinal gradients, factors such as propagule pressure, climatic conditions and biotic interactions change simultaneously across rather small geographic scales. Here we investigate the distribution of eight Asteraceae forbs along mountain roads in both their native and introduced ranges in the Valais (southern Swiss Alps) and the Wallowa Mountains (northeastern Oregon, USA). We hypothesised that a lack of adaptation and more limiting propagule pressure at higher altitudes in the new range restricts the altitudinal distribution of aliens relative to the native range. However, all but one of the species reached the same or even a higher altitude in the new range. Thus neither the need to adapt to changing climatic conditions nor lower propagule pressure at higher altitudes appears to have prevented the altitudinal spread of introduced populations. We found clear differences between regions in the relative occurrence of alien species in ruderal sites compared to roadsides, and in the degree of invasion away from the roadside, presumably reflecting differences in disturbance patterns between regions. Whilst the upper altitudinal limits of these plant invasions are apparently climatically constrained, factors such as anthropogenic disturbance and competition with native vegetation appear to have greater influence than changing climatic conditions on the distribution of these alien species along altitudinal gradients." 
}, { "instance_id": "R54867xR54588", "comparison_id": "R54867", "paper_id": "R54588", "text": "Invasion patterns of ground-dwelling arthropods in Canarian laurel forests Patterns of invasive species in four different functional groups of ground-dwelling arthropods (Carnivorous ground dwelling beetles; Chilopoda; Diplopoda; Oniscoidea) were examined in laurel forests of the Canary Islands. The following hypotheses were tested: (A) increasing species richness is connected with decreasing invasibility as predicted by the Diversity\u2013invasibility hypothesis (DIH); (B) disturbed or anthropogenically influenced habitats are more sensitive for invasions than natural and undisturbed habitats; and (C) climatic differences between laurel forest sites do not affect the rate of invasibility. A large proportion of invasives (species and abundances) was observed in most of the studied arthropod groups. However, we did not find any support for the DIH based on the examined arthropod groups. Regarding the impact of the extrinsic factors \u2018disturbance\u2019 and \u2018climate\u2019 on invasion patterns, we found considerable differences between the studied functional groups. Whereas the \u2018disturbance parameters\u2019 played a minor role and only affected the relative abundances of invasive centipedes (positively) and millipedes (negatively), the \u2018climate parameters\u2019 were significantly linked with the pattern of invasive detritivores. Interactions between native and invading species have not been observed thus far, but cannot completely be excluded." }, { "instance_id": "R55219xR55103", "comparison_id": "R55219", "paper_id": "R55103", "text": "Historical records of passerine introductions to New Zealand fail to support the propagule pressure hypothesis Blackburn et al. (Biodiver Conserv 20:2189\u20132199, 2011) claim that a reanalysis of passerine introductions to New Zealand supports the propagule pressure hypothesis. The conclusions of Blackburn et al. 
(2011) are invalid for three reasons: First, the historical record is so flawed that there is no sound basis for identifying the mechanisms behind extinction following introduction, or whether species were successful because they were introduced in large numbers or were introduced in large numbers because earlier releases succeeded. Second, the GLIMMIX analysis of Blackburn et al. (2011) is biased in favor of the propagule pressure hypothesis. Third, the population viability analysis presented by Blackburn et al. (2011) is based on unjustified and questionable assumptions. It is likely that the outcome of passerine bird introductions to New Zealand depended on species characteristics, site characteristics, and human decisions more than on a simple summing of the numbers introduced." }, { "instance_id": "R55219xR55088", "comparison_id": "R55219", "paper_id": "R55088", "text": "Effects of soil fungi, disturbance and propagule pressure on exotic plant recruitment and establishment at home and abroad Summary 1. Biogeographic experiments that test how multiple interacting factors influence exotic plant abundance in their home and recipient communities are remarkably rare. We examined the effects of soil fungi, disturbance and propagule pressure on seed germination, seedling recruitment and adult plant establishment of the invasive Centaurea stoebe in its native European and non-native North American ranges. 2. Centaurea stoebe can establish virtual monocultures in parts of its non-native range, but occurs at far lower abundances where it is native. We conducted parallel experiments at four European and four Montana (USA) grassland sites with all factorial combinations of suppression of soil fungi, disturbance and low versus high knapweed propagule pressure [100 or 300 knapweed seeds per 0.3 m 9 0.3 m plot (1000 or 3000 per m 2 )]. We also measured germination in buried bags containing locally collected knapweed seeds that were either treated or not with fungicide. 3. 
Disturbance and propagule pressure increased knapweed recruitment and establishment, but did so similarly in both ranges. Treating plots with fungicides had no effect on recruitment or establishment in either range. However, we found: (i) greater seedling recruitment and plant establishment in undisturbed plots in Montana compared to undisturbed plots in Europe and (ii) substantially greater germination of seeds in bags buried in Montana compared to Europe. Also, across all treatments, total plant establishment was greater in Montana than in Europe. 4. Synthesis. Our results highlight the importance of simultaneously examining processes that could influence invasion in both ranges. They indicate that under \u2018background\u2019 undisturbed conditions, knapweed recruits and establishes at greater abundance in Montana than in Europe. However, our results do not support the importance of soil fungi or local disturbances as mechanisms for knapweed\u2019s differential success in North America versus Europe." }, { "instance_id": "R55219xR55092", "comparison_id": "R55219", "paper_id": "R55092", "text": "Inferring Process from Pattern in Plant Invasions: A Semimechanistic Model Incorporating Propagule Pressure and Environmental Factors Propagule pressure is intuitively a key factor in biological invasions: increased availability of propagules increases the chances of establishment, persistence, naturalization, and invasion. The role of propagule pressure relative to disturbance and various environmental factors is, however, difficult to quantify. We explored the relative importance of factors driving invasions using detailed data on the distribution and percentage cover of alien tree species on South Africa\u2019s Agulhas Plain (2,160 km2). Classification trees based on geology, climate, land use, and topography adequately explained distribution but not abundance (canopy cover) of three widespread invasive species (Acacia cyclops, Acacia saligna, and Pinus pinaster). 
A semimechanistic model was then developed to quantify the roles of propagule pressure and environmental heterogeneity in structuring invasion patterns. The intensity of propagule pressure (approximated by the distance from putative invasion foci) was a much better predictor of canopy cover than any environmental factor that was considered. The influence of environmental factors was then assessed on the residuals of the first model to determine how propagule pressure interacts with environmental factors. The mediating effect of environmental factors was species specific. Models combining propagule pressure and environmental factors successfully predicted more than 70% of the variation in canopy cover for each species." }, { "instance_id": "R55219xR55032", "comparison_id": "R55219", "paper_id": "R55032", "text": "Reconstructing 50 years of Opuntia stricta invasion in the Kruger National Park, South Africa: environmental determinants and propagule pressure Many factors influence the spread dynamics and distribution of invasive alien organisms. Despite progress in unravelling the determinants of invasiveness and invasibility, robust, spatially-explicit predictive models for explaining real-world invasion dynamics remain elusive. Reconstructing invasion episodes is a useful way of determining the roles of different factors in mediating spread and proliferation. In many cases, however, human-aided dispersal and other anthropogenic factors blur the roles of natural controlling factors. We describe the reconstruction of an isolated invasion event from a known source: the 50-year invasion history of Opuntia stricta in the Kruger National Park. Our aim was to explore the relative roles of environment and propagule supply in shaping the invasion pattern. Environmental variables (landscape heterogeneity and distance from water sources) were moderately useful for explaining the presence/absence of O. 
stricta in 1-ha cells across the 660 km2 (53% of cells correctly classified). Adding fire frequency increased the accuracy of the model (68%). However, when we considered the role of propagule pressure (measured as the distance of sites from the known primary invasion focus and putative secondary invasion foci), model accuracy was greatly improved (77%). No environmental variables or propagule pressure correctly explained spatial variation in abundance (expressed as cladode density in 1-ha cells). We discuss implications of the importance of propagule supply for modelling and managing invasions." }, { "instance_id": "R55219xR54981", "comparison_id": "R55219", "paper_id": "R54981", "text": "Dealing with scarce data to understand how environmental gradients and propagule pressure shape fine-scale alien distribution patterns on coastal dunes Questions: On sandy coastal habitats, factors related to substrate and to wind action vary along the sea\u2013inland ecotone, forming a marked directional disturbance and stress gradient. Further, input of propagules of alien plant species associated to touristic exploitation and development is intense. This has contributed to establishment and spread of aliens in coastal systems. Records of alien species in databases of such heterogeneous landscapes remain scarce, posing a challenge for statistical modelling. We address this issue and attempt to shed light on the role of environmental stress/disturbance gradients and propagule pressure on invasibility of plant communities in these typical model systems. Location: Sandy coasts of Lazio (Central Italy). Methods: We proposed an innovative methodology to deal with low prevalence of alien occurrence in a data set and high cost of field-based sampling by taking advantage, through predictive modelling, of the strong interrelation between vegetation and abiotic features in coastal dunes. 
We fitted generalized additive models to analyse (1) overall patterns of alien occurrence and spread and (2) specific patterns of the most common alien species recorded. Conclusion: Even in the presence of strong propagule pressure, variation in local abiotic conditions can explain differences in invasibility within a local environment, and intermediate levels of natural disturbance and stress offer the best conditions for spread of alien species. However, in our model system, propagule pressure is actually the main determinant of alien species occurrence and spread. We demonstrated that extending the information of environmental features measured in a subsample of vegetation plots through predictive modelling allows complex questions in invasion biology to be addressed without requiring disproportionate funding and sampling effort." }, { "instance_id": "R55219xR54967", "comparison_id": "R55219", "paper_id": "R54967", "text": "Succession of floodplain grasslands following reduction in land use intensity: the importance of environmental conditions, management and dispersal Summary 1. Classical ecological theory predicts a succession towards plant communities that are determined by environmental conditions. However, in ecological restoration, species composition often remains different from the predicted target community, compromising the success of restoration measures. 2. We analysed the relative importance of environmental conditions, management and distance to source populations for floodplain grassland succession following re-conversion from intensive to traditional use. The study was established at 33 grassland sites in central German river valleys. Species composition, environmental variables, past and current management, and the distance to source populations of characteristic species of traditional management (indicator species) were recorded and compared using multivariate statistics. 
We further tested the speed of colonization by two indicator species, Silaum silaus and Serratula tinctoria, along transects from source populations into unoccupied fields. 4. The species composition of the successional grassland was mainly determined by elevation, total soil nitrogen, distance to remnant species-rich grasslands and frequency of mowing or grazing. Elevation and distance were negatively, and frequency was positively related to the occurrence of late successional species. 5. Colonization by indicator species was only dependent on the distance to source populations; other explanatory variables were not significant. Migration from adjacent source sites of S. silaus and S. tinctoria into re-converted grasslands was slow, reaching only 40 m and 15 m after 15 years. 6. Synthesis and applications. The results demonstrated the limitations of the deterministic view on plant succession and the high relative importance of propagule availability in grassland restoration. Natural colonization will only be successful if source populations of the target species are adjacent to the restoration sites. Artificial introduction techniques are recommended to overcome dispersal barriers." }, { "instance_id": "R55219xR55021", "comparison_id": "R55219", "paper_id": "R55021", "text": "Assessing the Relative Importance of Disturbance, Herbivory, Diversity, and Propagule Pressure in Exotic Plant Invasion The current rate of invasive species introductions is unprecedented, and the dramatic impacts of exotic invasive plants on community and ecosystem properties have been well documented. Despite the pressing management implications, the mechanisms that control exotic plant invasion remain poorly understood. Several factors, such as disturbance, propagule pressure, species diversity, and herbivory, are widely believed to play a critical role in exotic plant invasions. 
However, few studies have examined the relative importance of these factors, and little is known about how propagule pressure interacts with various mechanisms of ecological resistance to determine invasion success. We quantified the relative importance of canopy disturbance, propagule pressure, species diversity, and herbivory in determining exotic plant invasion in 10 eastern hemlock forests in Pennsylvania and New Jersey (USA). Use of a maximum-likelihood estimation framework and information theoretics allowed us to quantify the strength of evidence for alternative models of the influence of these factors on changes in exotic plant abundance. In addition, we developed models to determine the importance of interactions between ecosystem properties and propagule pressure. These analyses were conducted for three abundant, aggressive exotic species that represent a range of life histories: Alliaria petiolata, Berberis thunbergii, and Microstegium vimineum. Of the four hypothesized determinants of exotic plant invasion considered in this study, canopy disturbance and propagule pressure appear to be the most important predictors of A. petiolata, B. thunbergii, and M. vimineum invasion. Herbivory was also found to be important in contributing to the invasion of some species. In addition, we found compelling evidence of an important interaction between propagule pressure and canopy disturbance. This is the first study to demonstrate the dominant role of the interaction between canopy disturbance and propagule pressure in determining forest invasibility relative to other potential controlling factors. The importance of the disturbance-propagule supply interaction, and its nonlinear functional form, has profound implications for the management of exotic plant species populations. Improving our ability to predict exotic plant invasions will require enhanced understanding of the interaction between propagule pressure and ecological resistance mechanisms." 
}, { "instance_id": "R55219xR54979", "comparison_id": "R55219", "paper_id": "R54979", "text": "Geographical variability in propagule pressure and climatic suitability explain the European distribution of two highly invasive crayfish Aim We assess the relative contribution of human, biological and climatic factors in explaining the colonization success of two highly invasive freshwater decapods: the signal crayfish (Pacifastacus leniusculus) and the red swamp crayfish (Procambarus clarkii). Location Europe. Methods We used boosted regression trees to evaluate the relative influence of, and relationship between, the invader's current pattern of distribution and a set of spatially explicit variables considered important to their colonization success. These variables are related to four well-known invasion hypotheses, namely the role of propagule pressure, climate matching, biotic resistance from known competitors, and human disturbance. Results Model predictions attained a high accuracy for the two invaders (mean AUC \u2265 0.91). Propagule pressure and climatic suitability were identified as the primary drivers of colonization, but the former had a much higher relative influence on the red swamp crayfish. Climate matching was shown to have limited predictive value and climatic suitability models based on occurrences from other invaded areas had consistently higher relative explanatory power than models based on native range data. Biotic resistance and human disturbance were also shown to be weak predictors of the distribution of the two invaders. Main conclusions These results contribute to our general understanding of the factors that enable certain species to become notable invaders. Being primarily driven by propagule pressure and climatic suitability, we expect that, given their continued dispersal, the future distribution of these problematic decapods in Europe will increasingly represent their fundamental climatic niche." 
}, { "instance_id": "R55219xR55048", "comparison_id": "R55219", "paper_id": "R55048", "text": "Modeling Invasive Plant Spread: The Role of Plant-Environment Interactions and Model Structure Alien plants invade many ecosystems worldwide and often have substantial negative effects on ecosystem structure and functioning. Our ability to quantitatively predict these impacts is, in part, limited by the absence of suitable plant-spread models and by inadequate parameter estimates for such models. This paper explores the effects of model, plant, and environmental attributes on predicted rates and patterns of spread of alien pine trees (Pinus spp.) in South African fynbos (a mediterranean-type shrubland). A factorial experimental design was used to: (1) compare the predictions of a simple reaction-diffusion model and a spatially explicit, individual-based simulation model; (2) investigate the sensitivity of predicted rates and patterns of spread to parameter values; and (3) quantify the effects of the simulation model's spatial grain on its predictions. The results show that the spatial simulation model places greater emphasis on interactions among ecological processes than does the reaction-diffusion model. This ensures that the predictions of the two models differ substantially for some factor combinations. The most important factor in the model is dispersal ability. Fire frequency, fecundity, and age of reproductive maturity are less important, while adult mortality has little effect on the model's predictions. The simulation model's predictions are sensitive to the model's spatial grain. This suggests that simulation models that use matrices as a spatial framework should ensure that the spatial grain of the model is compatible with the spatial processes being modeled. We conclude that parameter estimation and model development must be integrated pro- cedures. This will ensure that the model's structure is compatible with the biological pro- cesses being modeled. 
Failure to do so may result in spurious predictions." }, { "instance_id": "R55219xR55015", "comparison_id": "R55219", "paper_id": "R55015", "text": "Insect herbivory and propagule pressure influence Cirsium vulgare invasiveness across the landscape A current challenge in ecology is to better understand the magnitude, variation, and interaction in the factors that limit the invasiveness of exotic species. We conducted a factorial experiment involving herbivore manipulation (insecticide-in-water vs. water-only control) and seven densities of introduced nonnative Cirsium vulgare (bull thistle) seed. The experiment was repeated with two seed cohorts at eight grassland sites uninvaded by C. vulgare in the central Great Plains, USA. Herbivory by native insects significantly reduced thistle seedling density, causing the largest reductions in density at the highest propagule inputs. The magnitude of this herbivore effect varied widely among sites and between cohort years. The combination of herbivory and lower propagule pressure increased the rate at which new C. vulgare populations failed to establish during the initial stages of invasion. This experiment demonstrates that the interaction between biotic resistance by native insects, propagule pressure, and spatiotemporal variation in their effects was crucial to the initial invasion by this Eurasian plant in the western tallgrass prairie." }, { "instance_id": "R55219xR55101", "comparison_id": "R55219", "paper_id": "R55101", "text": "A reassessment of the role of propagule pressure in influencing fates of passerine introductions to New Zealand Several studies have argued that the principal factor in determining the fate of bird introductions is introduction effort. In large part, these studies have emerged from analyses of historical records from a single place\u2014New Zealand. Here we raise two concerns about these conclusions. 
First, we argue that although many bird species were introduced repeatedly to New Zealand, in many cases the introductions apparently occurred only after the species were already successfully naturalized. The inclusion of such seemingly superfluous introductions may exaggerate the importance of propagule pressure. And second, we question the reliability of the records themselves. In many cases these records are equivocal, as inconsistencies appear in separate studies of the same records. Our analysis indicates that species were successful not because they were introduced frequently and in high numbers, but rather it is likely that they were introduced frequently and in high numbers because the initial releases were successful." }, { "instance_id": "R55219xR55146", "comparison_id": "R55219", "paper_id": "R55146", "text": "Propagule pressure drives establishment of introduced freshwater fish: quantitative evidence from an irrigation network Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. 
Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology." }, { "instance_id": "R55219xR55064", "comparison_id": "R55219", "paper_id": "R55064", "text": "Seed Viability and Dispersal of the Wind-Dispersed Invasive Ailanthus altissima in Aqueous Environments In mesic forest environments, seeds of wind-dispersed plant species may frequently be deposited in aqueous environments (e.g., lakes and rivers). The consequences of deposition in an aqueous medium depend on whether seed viability is maintained. If seeds survive there, secondary dispersal in water may transport seeds long distances to suitable habitats. Using the exotic species, tree-of-heaven (Ailanthus altissima (Mill.) Swingle), in this study we estimated seed dispersal into water as a function of distance and experimentally tested seed buoyancy, secondary dispersal, and germinability after dispersal in water and on land. We found that biologically significant numbers of seeds disperse directly into water, remain buoyant, and are transported long distances by water. Germination rates for seeds that were kept in aqueous environments (Cheat Lake and the Monongahela River, near Morgantown, WV) were found to be similar to or higher than those in nearby terrestrial controls (F = 10.94, P = 0.0057). Seeds kept in aqueous environments retained high germination rates (94.4 \u00b1 1.1%) even after 5 months. Although A. 
altissima may not disperse primarily through water environments, this study suggests that secondary dispersal by water is possible and may allow for long-distance dispersal more than two orders of magnitude farther than recorded primary dispersal. For. Sci. 54(5):490\u2013496." }, { "instance_id": "R55219xR54971", "comparison_id": "R55219", "paper_id": "R54971", "text": "Propagule pressure as a driver of establishment success in deliberately introduced exotic species: fact or artefact? A central paradigm in invasion biology is that more releases of higher numbers of individuals increase the likelihood that an exotic population successfully establishes and persists. Recently, however, it has been suggested that, in cases where the data are sourced from historical records of purposefully released species, the direction of causality is reversed, and that initial success leads to higher numbers being released. Here, we explore the implications of this alternative hypothesis, and derive six a priori predictions from it. We test these predictions using data on Acclimatization Society introductions of passerine bird species to New Zealand, which have previously been used to support both hypotheses for the direction of causality. All our predictions are falsified. This study reaffirms that the conventional paradigm in invasion biology is indeed the correct one for New Zealand passerine bird introductions, for which numbers released determine establishment success. Our predictions are not restricted to this fauna, however, and we keenly anticipate their application to other suitable datasets." }, { "instance_id": "R55219xR55057", "comparison_id": "R55219", "paper_id": "R55057", "text": "Role of propagule pressure in colonization success: disentangling the relative importance of demographic, genetic and habitat effects High propagule pressure is arguably the only consistent predictor of colonization success. 
More individuals enhance colonization success because they aid in overcoming demographic consequences of small population size (e.g. stochasticity and Allee effects). The number of founders can also have direct genetic effects: with fewer individuals, more inbreeding and thus inbreeding depression will occur, whereas more individuals typically harbour greater genetic variation. Thus, the demographic and genetic components of propagule pressure are interrelated, making it difficult to understand which mechanisms are most important in determining colonization success. We experimentally disentangled the demographic and genetic components of propagule pressure by manipulating the number of founders (fewer or more), and genetic background (inbred or outbred) of individuals released in a series of three complementary experiments. We used Bemisia whiteflies and released them onto either their natal host (benign) or a novel host (challenging). Our experiments revealed that having more founding individuals and those individuals being outbred both increased the number of adults produced, but that only genetic background consistently shaped net reproductive rate of experimental populations. Environment was also important and interacted with propagule size to determine the number of adults produced. Quality of the environment interacted also with genetic background to determine establishment success, with a more pronounced effect of inbreeding depression in harsh environments. This interaction did not hold for the net reproductive rate. These data show that the positive effect of propagule pressure on founding success can be driven as much by underlying genetic processes as by demographics. Genetic effects can be immediate and have sizable effects on fitness." 
}, { "instance_id": "R55219xR54969", "comparison_id": "R55219", "paper_id": "R54969", "text": "Passerine introductions to New Zealand support a positive effect of propagule pressure on establishment success There is growing consensus in the literature on biological invasions that propagule pressure (or a component thereof) is the primary determinant of establishment success in introduced species. However, a recent paper (Moulton et al. Biodiver Conserv 20:607\u2013623, 2011) questions whether this consensus is justified. It argues that the effect of propagule pressure is not general because most of the evidence for it comes from analyses of historical bird data to New Zealand, and, moreover, that both the analyses and the data on which they are based are faulty. Moulton et al. (Biodiver Conserv 20:607\u2013623, 2011) present a re-analysis that fails to find a relationship between establishment success and propagule pressure in New Zealand bird introductions. Here, we show why these criticisms are unjustified. A robust analysis of New Zealand bird data reveals that propagule pressure is indeed positively related to establishment success, and we present a simple population viability analysis to demonstrate why the method adopted by Moulton et al. (Biodiver Conserv 20:607\u2013623, 2011) fails to demonstrate this result. We further show that there is abundant evidence for a relationship between establishment success and propagule pressure in biological invasions outside of historical bird introductions to New Zealand. We conclude that propagule pressure is indeed a primary determinant of establishment success in introduced species." 
}, { "instance_id": "R55219xR55129", "comparison_id": "R55219", "paper_id": "R55129", "text": "Propagule pressure and resource availability determine plant community invasibility in a temperate forest understorey Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m 2 sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways." 
}, { "instance_id": "R55219xR55141", "comparison_id": "R55219", "paper_id": "R55141", "text": "Landscape factors that shape a slow and persistent aquatic invasion: brown trout in Newfoundland 1883-2010 Aim We investigated watershed-scale abiotic environmental factors associated with population establishment of one of the \u2018world\u2019s 100 worst alien invaders\u2019 on a temperate Atlantic island. Within the context of the conservation implications, we aimed to quantify (1) the early history and demographics (numbers and origins) of human-mediated brown trout (Salmo trutta) introductions, (2) the current distribution of established populations, and (3) the watershed-scale environmental factors that may resist or facilitate trout establishment. Location Island of Newfoundland, Canada. Methods We combined field sampling with historical and contemporary records from literature to assemble a presence\u2013absence and physical habitat database for 312 watersheds on Newfoundland. Probability of watershed establishment was modelled with general additive ANCOVA models to control for nonlinear effects of propagule pressure (i.e. the distance to and number of invasion foci within a biologically relevant range) and model performance based on AIC. Results Between 1883 and 1906, 16 watersheds were introduced with brown trout from the Howietoun Hatchery, near Stirling, Scotland. Since that time, populations have established in 51 additional watersheds at an estimated rate of spread of 4 km per year. We did not detect any obvious abiotic barriers to resist trout establishment, but showed that for a given amount of propagule pressure that relatively large and productive watersheds were most likely to be established. Main conclusions Brown trout have successfully invaded and established populations in watersheds of Newfoundland and are currently slowly expanding on the island. 
Populations are more likely to establish in relatively large and productive watersheds, thereby supporting predictions of island biogeography theory. However, we suggest that all watersheds in Newfoundland are potentially susceptible to successful brown trout invasion and that abiotic factors alone are unlikely to act sufficiently as barriers to population establishment." }, { "instance_id": "R55219xR55090", "comparison_id": "R55219", "paper_id": "R55090", "text": "Dispersal and recruitment limitation in native versus exotic tree species: life-history strategies and Janzen-Connell effects Life-history traits of invasive exotic plants are typically considered to be exceptional vis-a-vis native species. In particular, hyper-fecundity and long range dispersal are regarded as invasive traits, but direct comparisons with native species are needed to identify the life-history stages behind invasiveness. Until recently, this task was particularly problematic in forests as tree fecundity and dispersal were difficult to characterize in closed stands. We used inverse modelling to parameterize fecundity, seed dispersal and seedling dispersion functions for two exotic and eight native tree species in closed-canopy forests in Connecticut, USA. Interannual variation in seed production was dramatic for all species, with complete seed crop failures in at least one year for six native species. However, the average per capita seed production of the exotic Ailanthus altissima was extraordinary: \u223c 40 times higher than the next highest species. Seed production of the shade tolerant exotic Acer platanoides was average, but much higher than the native shade tolerant species, and the density of its established seedlings (\u2265 3 years) was higher than any other species. Overall, the data supported a model in which adults of native and exotic species must reach a minimum size before seed production occurred. 
Once reached, the relationship between tree diameter and seed production was fairly flat for seven species, including both exotics. Seed dispersal was highly localized and usually showed a steep decline with increasing distance from parent trees: only Ailanthus altissima and Fraxinus americana had mean dispersal distances > 10 m. Janzen-Connell patterns were clearly evident for both native and exotic species, as the mode and mean dispersion distance of seedlings were further from potential parent trees than seeds. The comparable intensity of Janzen-Connell effects between native and exotic species suggests that the enemy escape hypothesis alone cannot explain the invasiveness of these exotics. Our study confirms the general importance of colonization processes in invasions, yet demonstrates how invasiveness can occur via divergent colonization strategies. Dispersal limitation of Acer platanoides and recruitment limitation of Ailanthus altissima will likely constitute some limit on their invasiveness in closed-canopy forests." }, { "instance_id": "R55219xR54958", "comparison_id": "R55219", "paper_id": "R54958", "text": "Quarantine arthropod invasions in Europe: the role of climate, hosts and propagule pressure Aim To quantify the relative importance of propagule pressure, climate-matching and host availability for the invasion of agricultural pest arthropods in Europe and to forecast newly emerging pest species and European areas with the highest risk of arthropod invasion under current climate and a future climate scenario (A1F1). Location Europe. Methods We quantified propagule pressure, climate-matching and host availability by aggregating large global databases for trade, European arthropod interceptions, Koeppen\u2013Geiger world climate classification (including the A1F1 climate change scenario until 2100) and host plant distributions for 118 quarantine arthropod species.
Results As expected, all the three factors, propagule pressure, climate suitability and host availability, significantly explained quarantine arthropod invasions in Europe, but the propagule pressure only had a positive effect on invasion success when considered together with climate suitability and host availability. Climate change according to the A1F1 scenario generally increased the climate suitability of north-eastern European countries and reduced the climate suitability of central European countries for pest arthropod invasions. Main conclusions To our knowledge, this is the first demonstration that propagule pressure interacts with other factors to drive invasions and is not alone sufficient to explain arthropod establishment patterns. European countries with more suitable climate and large agricultural areas of suitable host plants for pest arthropods should thus be more vigilant about introduction pathways. Moreover, efforts to reduce the propagule pressure, such as preventing pests from entering pathways and strengthening border controls, will become more important in north-eastern Europe in the future as the climate becomes more favourable to arthropod invasions." }, { "instance_id": "R55219xR54765", "comparison_id": "R55219", "paper_id": "R54765", "text": "Limits to tree species invasion in pampean grassland and forest plant communities Factors limiting tree invasion in the Inland Pampas of Argentina were studied by monitoring the establishment of four alien tree species in remnant grassland and cultivated forest stands. We tested whether disturbances facilitated tree seedling recruitment and survival once seeds of invaders were made available by hand sowing. Seed addition to grassland failed to produce seedlings of two study species, Ligustrum lucidum and Ulmus pumila, but did result in abundant recruitment of Gleditsia triacanthos and Prosopis caldenia. 
While emergence was sparse in intact grassland, seedling densities were significantly increased by canopy and soil disturbances. Longer-term surveys showed that only Gleditsia became successfully established in disturbed grassland. These results support the hypothesis that interference from herbaceous vegetation may play a significant role in slowing down tree invasion, whereas disturbances create microsites that can be exploited by invasive woody plants. Seed sowing in a Ligustrum forest promoted the emergence of all four study species in understorey and treefall gap conditions. Litter removal had species-specific effects on emergence and early seedling growth, but had little impact on survivorship. Seedlings emerging under the closed forest canopy died within a few months. In the treefall gap, recruits of Gleditsia and Prosopis survived the first year, but did not survive in the longer term after natural gap closure. The forest community thus appeared less susceptible to colonization by alien trees than the grassland. We conclude that tree invasion in this system is strongly limited by the availability of recruitment microsites and biotic interactions, as well as by dispersal from existing propagule sources." }, { "instance_id": "R55219xR55000", "comparison_id": "R55219", "paper_id": "R55000", "text": "Movement, colonization, and establishment success of a planthopper of prairie potholes, Delphacodes scolochloa (Hemiptera: Delphacidae) Abstract 1. Movement, and particularly the colonisation of new habitat patches, remains one of the least known aspects of the life history and ecology of the vast majority of species. Here, a series of experiments was conducted to rectify this problem with Delphacodes scolochloa Cronin & Wilson, a wing\u2010dimorphic planthopper of the North American Great Plains." 
}, { "instance_id": "R55219xR55023", "comparison_id": "R55219", "paper_id": "R55023", "text": "The importance of quantifying propagule pressure to understand invasion: an examination of riparian forest invasibility The widely held belief that riparian communities are highly invasible to exotic plants is based primarily on comparisons of the extent of invasion in riparian and upland communities. However, because differences in the extent of invasion may simply result from variation in propagule supply among recipient environments, true comparisons of invasibility require that both invasion success and propagule pressure are quantified. In this study, we quantified propagule pressure in order to compare the invasibility of riparian and upland forests and assess the accuracy of using a community's level of invasion as a surrogate for its invasibility. We found the extent of invasion to be a poor proxy for invasibility. The higher level of invasion in the studied riparian forests resulted from greater propagule availability rather than higher invasibility. Furthermore, failure to account for propagule pressure may confound our understanding of general invasion theories. Ecological theory suggests that species-rich communities should be less invasible. However, we found significant relationships between species diversity and invasion extent, but no diversity-invasibility relationship was detected for any species. Our results demonstrate that using a community's level of invasion as a surrogate for its invasibility can confound our understanding of invasibility and its determinants." }, { "instance_id": "R55219xR55019", "comparison_id": "R55219", "paper_id": "R55019", "text": "The role of propagule pressure, genetic diversity and microsite availability for Senecio vernalis invasion Genetic diversity is supposed to support the colonization success of expanding species, in particular in situations where microsite availability is constrained. 
Addressing the role of genetic diversity in plant invasion experimentally requires its manipulation independent of propagule pressure. To assess the relative importance of these components for the invasion of Senecio vernalis, we created propagule mixtures of four levels of genotype diversity by combining seeds across remote populations, across proximate populations, within single populations and within seed families. In a first container experiment with constant Festuca rupicola density as matrix, genotype diversity was crossed with three levels of seed density. In a second experiment, we tested for effects of establishment limitation and genotype diversity by manipulating Festuca densities. Increasing genetic diversity had no effects on abundance and biomass of S. vernalis but positively affected the proportion of large individuals to small individuals. Mixtures composed from proximate populations had a significantly higher proportion of large individuals than mixtures composed from within seed families only. High propagule pressure increased emergence and establishment of S. vernalis but had no effect on individual growth performance. Establishment was favoured in containers with Festuca, but performance of surviving seedlings was higher in open soil treatments. For S. vernalis invasion, we found a shift in driving factors from density dependence to effects of genetic diversity across life stages. While initial abundance was mostly linked to the amount of seed input, genetic diversity, in contrast, affected later stages of colonization probably via sampling effects and seemed to contribute to filtering the genotypes that finally grew up. In consequence, when disentangling the mechanistic relationships of genetic diversity, seed density and microsite limitation in colonization of invasive plants, a clear differentiation between initial emergence and subsequent survival to juvenile and adult stages is required." 
}, { "instance_id": "R55219xR55002", "comparison_id": "R55219", "paper_id": "R55002", "text": "Factors explaining alien plant invasion success in a tropical ecosystem differ at each stage of invasion Summary 1. Understanding why some alien plant species become invasive when others fail is a fundamental goal in invasion ecology. We used detailed historical planting records of alien plant species introduced to Amani Botanical Garden, Tanzania and contemporary surveys of their invasion status to assess the relative ability of phylogeny, propagule pressure, residence time, plant traits and other factors to explain the success of alien plant species at different stages of the invasion process. 2. Species with native ranges centred in the tropics and with larger seeds were more likely to regenerate, whereas naturalization success was explained by longer residence time, faster growth rate, fewer seeds per fruit, smaller seed mass and shade tolerance. 3. Naturalized species spreading greater distances from original plantings tended to have more seeds per fruit, whereas species dispersed by canopy-feeding animals and with native ranges centred on the tropics tended to have spread more widely in the botanical garden. Species dispersed by canopyfeeding animals and with greater seed mass were more likely to be established in closed forest. 4. Phylogeny alone made a relatively minor contribution to the explanatory power of statistical models, but a greater proportion of variation in spread within the botanical garden and in forest establishment was explained by phylogeny alone than for other models. Phylogeny jointly with variables also explained a greater proportion of variation in forest establishment than in other models. Phylogenetic correction weakened the importance of dispersal syndrome in explaining compartmental spread, seed mass in the forest establishment model, and all factors except for growth rate and residence time in the naturalization model. 5. Synthesis . 
This study demonstrates that it matters considerably how invasive species are defined when trying to understand the relative ability of multiple variables to explain invasion success. By disentangling different invasion stages and using relatively objective criteria to assess species status, this study highlights that relatively simple models can help to explain why some alien plants are able to naturalize, spread and even establish in closed tropical forests." }, { "instance_id": "R55219xR55120", "comparison_id": "R55219", "paper_id": "R55120", "text": "Planting intensity, residence time, and species traits determine invasion success of alien woody species We studied the relative importance of residence time, propagule pressure, and species traits in three stages of invasion of alien woody plants cultivated for about 150 years in the Czech Republic, Central Europe. The probability of escape from cultivation, naturalization, and invasion was assessed using classification trees. We compared 109 escaped-not-escaped congeneric pairs, 44 naturalized-not-naturalized, and 17 invasive-not-invasive congeneric pairs. We used the following predictors of the above probabilities: date of introduction to the target region as a measure of residence time; intensity of planting in the target area as a proxy for propagule pressure; the area of origin; and 21 species-specific biological and ecological traits. The misclassification rates of the naturalization and invasion model were low, at 19.3% and 11.8%, respectively, indicating that the variables used included the major determinants of these processes. The probability of escape increased with residence time in the Czech Republic, whereas the probability of naturalization increased with the residence time in Europe. This indicates that some species were already adapted to local conditions when introduced to the Czech Republic. 
Apart from residence time, the probability of escape depends on planting intensity (propagule pressure), and that of naturalization on the area of origin and fruit size; it is lower for species from Asia and those with small fruits. The probability of invasion is determined by a long residence time and the ability to tolerate low temperatures. These results indicate that a simple suite of factors determines, with a high probability, the invasion success of alien woody plants, and that the relative role of biological traits and other factors is stage dependent. High levels of propagule pressure as a result of planting lead to woody species eventually escaping from cultivation, regardless of biological traits. However, the biological traits play a role in later stages of invasion." }, { "instance_id": "R55219xR54961", "comparison_id": "R55219", "paper_id": "R54961", "text": "Propagule pressure and stream characteristics influence introgression: cutthroat and rainbow trout in British Columbia Hybridization and introgression between introduced and native salmonids threaten the continued persistence of many inland cutthroat trout species. Environmental models have been developed to predict the spread of introgression, but few studies have assessed the role of propagule pressure. We used an extensive set of fish Stocking records and geographic information system (GIS) data to produce a spatially explicit index of potential propagule pressure exerted by introduced rainbow trout in the Upper Kootenay River, British Columbia, Canada. We then used logistic regression and the information-theoretic approach to test the ability of a set of environmental and spatial variables to predict the level of introgression between native westslope cutthroat trout and introduced rainbow trout. Introgression was assessed using between four and seven co-dominant, diagnostic nuclear markers at 45 sites in 31 different streams. 
The best model for predicting introgression included our GIS propagule pressure index and an environmental variable that accounted for the biogeoclimatic zone of the site (r2=0.62). This model was 1.4 times more likely to explain introgression than the next-best model, which consisted of only the propagule pressure index variable. We created a composite model based on the model-averaged results of the seven top models that included environmental, spatial, and propagule pressure variables. The propagule pressure index had the highest importance weight (0.995) of all variables tested and was negatively related to sites with no introgression. This study used an index of propagule pressure and demonstrated that propagule pressure had the greatest influence on the level of introgression between a native and introduced trout in a human-induced hybrid zone." }, { "instance_id": "R55219xR55107", "comparison_id": "R55219", "paper_id": "R55107", "text": "Population structure, propagule pressure, and conservation biogeography in the sub-Antarctic: lessons from indigenous and invasive springtails The patterns in and the processes underlying the distribution of invertebrates among Southern Ocean islands and across vegetation types on these islands are reasonably well understood. However, few studies have examined the extent to which populations are genetically structured. Given that many sub-Antarctic islands experienced major glaciation and volcanic activity, it might be predicted that substantial population substructure and little genetic isolation-by-distance should characterize indigenous species. By contrast, substantially less population structure might be expected for introduced species. Here, we examine these predictions and their consequences for the conservation of diversity in the region. 
We do so by examining haplotype diversity based on mitochondrial cytochrome c oxidase subunit I sequence data, from two indigenous (Cryptopygus antarcticus travei, Tullbergia bisetosa) and two introduced (Isotomurus cf. palustris, Ceratophysella denticulata) springtail species from Marion Island. We find considerable genetic substructure in the indigenous species that is compatible with the geological and glacialogical history of the island. Moreover, by employing ecological techniques, we show that haplotype diversity is likely much higher than our sequenced samples suggest. No structure is found in the introduced species, with each being represented by a single haplotype only. This indicates that propagule pressure is not significant for these small animals unlike the situation for other, larger invasive species: a few individuals introduced once are likely to have initiated the invasion. These outcomes demonstrate that sampling must be more comprehensive if the population history of indigenous arthropods on these islands is to be comprehended, and that the risks of within- and among-island introductions are substantial. The latter means that, if biogeographical signal is to be retained in the region, great care must be taken to avoid inadvertent movement of indigenous species among and within islands. Thus, quarantine procedures should also focus on among-island movements." }, { "instance_id": "R55219xR55114", "comparison_id": "R55219", "paper_id": "R55114", "text": "Propagule pressure hypothesis not supported by an 80-year experiment on woody species invasion Ecological filters and availability of propagules play key roles structuring natural communities. Propagule pressure has recently been suggested to be a fundamental factor explaining the success or failure of biological introductions. We tested this hypothesis with a remarkable data set on trees introduced to Isla Victoria, Nahuel Huapi National Park, Argentina. 
More than 130 species of woody plants, many known to be highly invasive elsewhere, were introduced to this island early in the 20th century, as part of an experiment to test their suitability as commercial forestry trees for this region. We obtained detailed data on three estimates of propagule pressure (number of introduced individuals, number of areas where introduced, and number of years during which the species was planted) for 18 exotic woody species. We matched these data with a survey of the species and number of individuals currently invading the island. None of the three estimates of propagule pressure predicted the current pattern of invasion. We suggest that other factors, such as biotic resistance, may be operating to determine the observed pattern of invasion, and that propagule pressure may play a relatively minor role in explaining at least some observed patterns of invasion success and failure." }, { "instance_id": "R55219xR55099", "comparison_id": "R55219", "paper_id": "R55099", "text": "Invasive alien plants infiltrate bird-mediated shrub nucleation processes in arid savanna Summary 1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird-dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy-fruited species than indigenous species, we mapped the distribution of alien fleshy-fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy-fruited species established in the surrounding natural vegetation. 
3 Abundance and diversity of fleshy-fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy-fruited plants were found both beneath host trees and in the open, alien fleshy-fruited plants were found only beneath trees. 4 Abundance of fleshy-fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy-fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy-fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy-fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi-arid African savanna by alien fleshy-fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem-level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy-fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions." }, { "instance_id": "R55219xR54996", "comparison_id": "R55219", "paper_id": "R54996", "text": "The demography of introduction pathways, propagule pressure and occurrences of non-native freshwater fish in England 1. 
Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10\u00d710 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions.
" }, { "instance_id": "R55219xR55036", "comparison_id": "R55219", "paper_id": "R55036", "text": "Genetic evidence for high propagule pressure and long-distance dispersal in monk parakeet (Myiopsitta monachus) invasive populations The monk parakeet (Myiopsitta monachus) is a successful invasive species that does not exhibit life history traits typically associated with colonizing species (e.g., high reproductive rate or long\u2010distance dispersal capacity). To investigate this apparent paradox, we examined individual and population genetic patterns of microsatellite loci at one native and two invasive sites. More specifically, we aimed at evaluating the role of propagule pressure, sexual monogamy and long\u2010distance dispersal in monk parakeet invasion success. Our results indicate little loss of genetic variation at invasive sites relative to the native site. We also found strong evidence for sexual monogamy from patterns of relatedness within sites, and no definite cases of extra\u2010pair paternity in either the native site sample or the examined invasive site. Taken together, these patterns directly and indirectly suggest that high propagule pressure has contributed to monk parakeet invasion success. In addition, we found evidence for frequent long\u2010distance dispersal at an invasive site (\u223c100 km) that sharply contrasted with previous estimates of smaller dispersal distance made in the native range (\u223c2 km), suggesting long\u2010range dispersal also contributes to the species\u2019 spread within the United States. Overall, these results add to a growing body of literature pointing to the important role of propagule pressure in determining, and thus predicting, invasion success, especially for species whose life history traits are not typically associated with invasiveness."
}, { "instance_id": "R55219xR54998", "comparison_id": "R55219", "paper_id": "R54998", "text": "COMPETITION BETWEEN NATIVE PERENNIAL AND EXOTIC ANNUAL GRASSES: IMPLICATIONS FOR AN HISTORICAL INVASION Though established populations of invasive species can exert substantial competitive effects on native populations, exotic propagules may require disturbances that decrease competitive interference by resident species in order to become established. We compared the relative competitiveness of native perennial and exotic annual grasses in a California coastal prairie grassland to test whether the introduction of exotic propagules to coastal grasslands in the 19th century was likely to have been sufficient to shift community composition from native perennial to exotic annual grasses. Under experimental field con- ditions, we compared the aboveground productivity of native species alone to native species competing with exotics, and exotic species alone to exotic species competing with natives. Over the course of the four-year experiment, native grasses became increasingly dominant in the mixed-assemblage plots containing natives and exotics. Although the competitive interactions in the first growing season favored the exotics, over time the native grasses significantly reduced the productivity of exotic grasses. The number of exotic seedlings emerging and the biomass of dicot seedlings removed during weeding were also significantly lower in plots containing natives as compared to plots that did not contain natives. We found evidence that the ability of established native perennial species to limit space available for exotic annual seeds to germinate and to limit the light available to exotic seedlings reduced exotic productivity and shifted competitive interactions in favor of the natives. 
If interactions between native perennial and exotic annual grasses follow a similar pattern in other coastal grassland habitats, then the introduction of exotic grass propagules alone without changes in land use or climate, or both, was likely insufficient to convert the region's grasslands." }, { "instance_id": "R55219xR55011", "comparison_id": "R55219", "paper_id": "R55011", "text": "The role of competition and introduction effort in the success of passeriform birds introduced to New Zealand The finding that passeriform birds introduced to the islands of Hawaii and Saint Helena were more likely to successfully invade when fewer other introduced species were present has been interpreted as strong support for the hypothesis that interspecific competition influences invasion success. I tested whether invasions were more likely to succeed when fewer species were present using the records of passeriform birds introduced to four acclimatization districts in New Zealand. I also tested whether introduction effort, measured as the number of introductions and the total number of birds released, could predict invasion outcomes, a result previously established for all birds introduced to New Zealand. I found patterns consistent with both competition and introduction effort as explanations for invasion success. However, data supporting the two explanations were confounded such that the greater success of invaders arriving when fewer other species were present could have been due to a causal relationship between invasion success and introduction effort. Hence, without data on introduction effort, previous studies may have overestimated the degree to which the number of potential competitors could independently explain invasion outcomes and may therefore have overstated the importance of competition in structuring introduced avian assemblages. 
Furthermore, I suggest that a second pattern in avian invasion success previously attributed to competition, the morphological overdispersion of successful invaders, could also arise as an artifact of variation in introduction effort." }, { "instance_id": "R55219xR55136", "comparison_id": "R55219", "paper_id": "R55136", "text": "Correlates of Introduction Success in Exotic New Zealand Birds Whether or not a bird species will establish a new population after invasion of uncolonized habitat depends, from theory, on its life-history attributes and initial population size. Data about initial population sizes are often unobtainable for natural and deliberate avian invasions. In New Zealand, however, contemporary documentation of introduction efforts allowed us to systematically compare unsuccessful and successful invaders without bias. We obtained data for 79 species involved in 496 introduction events and used the present-day status of each species as the dependent variable in fitting multiple logistic regression models. We found that introduction efforts for species that migrated within their endemic ranges were significantly less likely to be successful than those for nonmigratory species with similar introduction efforts. Initial population size, measured as number of releases and as the minimum number of propagules liberated in New Zealand, significantly increased the probability of translocation success. A null model showed that species released more times had a higher probability per release of successful establishment. Among 36 species for which data were available, successful invaders had significantly higher natality/mortality ratios. Successful invaders were also liberated at significantly more sites. Invasion of New Zealand by exotic birds was therefore primarily related to management, an outcome that has implications for conservation biology." 
}, { "instance_id": "R55219xR55085", "comparison_id": "R55219", "paper_id": "R55085", "text": "From the backyard to the backcountry: how ecological and biological traits explain the escape of garden plants into Mediterranean old fields To explain current ornamental plant invasions, or predict future ones, it is necessary to determine which factors increase the probability of an alien species becoming invasive. Here, we focused on the early phases of ornamental plant invasion in order to identify which plant features and cultivation practices may favor the escape of ornamental plants from domestic gardens to abandoned agricultural land sites in the Mediterranean Region. We used an original approach which consisted in visiting 120 private gardens in an urbanizing rural area of the French Mediterranean backcountry, and then visited surrounding old fields to determine which planted species had escaped out of the gardens. We built a database of 407 perennial ornamental alien species (most of which were animal-dispersed), and determined nineteen features that depicted the strength of species\u2019 propagule pressure within gardens, the match between species requirements and local physical environment, and each species\u2019 reproductive characteristics. Using standard and phylogenetic logistic regression, we found that ornamental alien plants were more likely to have escaped if they were planted in gardens\u2019 margins, if they had a preference for dry soil, were tolerant to high-pH or pH-indifferent, and if they showed a capacity for clonal growth. Focusing only on animal-dispersed plants, we found that alien plants were more likely to have escaped if they were abundant in gardens and showed preference for dry soil. This suggests that gardening practices have a primary impact on the probability of a species to escape from cultivation, along with species pre-adaptation to local soil conditions, and capacity of asexual reproduction. 
Our results may have important implications for the implementation of management practices and awareness campaigns in order to prevent ornamental plants from becoming invasive species in Mediterranean landscapes." }, { "instance_id": "R55219xR55143", "comparison_id": "R55219", "paper_id": "R55143", "text": "Predictors of avian and mammalian translocation success: reanalysis with phylogenetically independent contrasts Abstract We use the phylogenetically based statistical method of independent contrasts to reanalyze the Wolf et al., 1996 translocation data set for 181 programs involving 17 mammalian and 28 avian species. Although still novel in conservation and wildlife biology, the incorporation of phylogenetic information into analyses of interspecific comparative data is widely accepted and routinely used in several fields. To facilitate application of independent contrasts, we converted the dichotomous (success/failure) dependent variable (Wolf et al., 1996; Griffith et al., 1989. Translocations as a species conservation tool: status and strategy. Science 245, 477\u2013480) into a more descriptive, continuous variable with the incorporation of persistence of the translocated population beyond the last release year, relative to the species' longevity. For comparison, we present three models: nonphylogenetic multiple logistic regression with the dichotomous dependent variable (the method used by Wolf et al. 1996 and Griffith et al. 1989), nonphylogenetic multiple regression with the continuous dependent variable, and multiple regression using phylogenetically independent contrasts with the continuous dependent variable. Results of the phylogenetically based multiple regression analysis indicate statistical significance of three independent variables: habitat quality of the release area, range of the release site relative to the historical distribution of the translocated species, and number of individuals released.
Evidence that omnivorous species are more successful than either herbivores or carnivores is also presented. The results of our reanalysis support several of the more important conclusions of the Wolf et al. (1996) and Griffith et al. (1989) studies and increase our confidence that the foregoing variables should be considered carefully when designing a translocation program. However, the phylogenetically based analysis does not support either the Wolf et al. (1996) or Griffith et al. (1989) findings with respect to the statistical significance of taxonomic class (bird vs mammal) and status (game vs threatened, endangered, or sensitive), or the Griffith et al. (1989) findings with respect to the significance of reproductive potential of the species and program length." }, { "instance_id": "R55219xR55162", "comparison_id": "R55219", "paper_id": "R55162", "text": "Number of source populations as a potential driver of pine invasions in Brazil To understand current patterns of Pinus invasion in an Araucaria forest in southern Brazil, we quantified invasion at the local scale and compared it with habitat characteristics, propagule size, and number of source populations, using generalized linear models. We also compared observed and expected invasive species status based on a previously developed model (Z scores) using Chi square and correlation tests to evaluate the predictability of species status based on their traits. Of the 16 Pinus species currently present in the site, three are invasive (P. elliottii, P. glabra, and P. taeda), three are naturalized (P. clausa, P. oocarpa, and P. pseudostrobus), and ten are present only as the originally planted individuals. While P. taeda spread the farthest, P. glabra had greater overall density, but none of the invasive species has spread more than 250 m in 45 years. 
Invasive Pinus plants were found where forest tree density was below 805 trees ha\u22121, and invasive Pinus density decreased log-linearly with an increase in native tree density. Number of individuals introduced and number of source populations were strong predictors of naturalization, thus both propagule size and propagule diversity can potentially be driving invasion success. Z scores based on species traits did not predict which species would invade in Rio Negro. Our findings suggest that Araucaria forests might not resist invasion by Pinus as recently suggested and support the hypothesis that propagule pressure is a fundamental driver of invasions with propagule diversity being a possible component of this mechanism." }, { "instance_id": "R55219xR55078", "comparison_id": "R55219", "paper_id": "R55078", "text": "Colonisation of sub-Antarctic Marion Island by a non-indigenous aphid parasitoid Aphidius matricariae (Hymenoptera, Braconidae) Over the past two decades seven non-indigenous vascular plant or arthropod species have established reproducing populations at sub-Antarctic Marion Island (46\u00b054\u2032S, 37\u00b055\u2032E). Here we record the eighth establishment, a braconid wasp Aphidius matricariae Haliday, which uses the aphid Rhopalosiphum padi (Linnaeus) as its only host on the island. Molecular markers (18S rDNA and mtCOI) support the conventional taxonomic identification and indicate that all individuals are characterized by a single haplotype. Surveys around the island show that adult abundance and the frequency of aphid parasitism are highest at Macaroni Bay on the east coast, and decline away from this region to low or zero values elsewhere on the coast. The South African research and supply vessel, the SA Agulhas, regularly anchors at Macaroni Bay, and Aphidius sp. have been collected from its galley hold. 
Current abundance structure, low haplotype diversity, and the operating procedures of the SA Agulhas all suggest that the parasitoid was introduced to the island by humans. Regular surveys indicate that this introduction took place between April 2001 and April 2003, the latter being the first month when this species was detected. The wasp\u2019s establishment has significantly added to trophic complexity on the island. Low haplotype diversity suggests that propagule pressure is of little consequence for insect introductions. Rather, single or just a few individuals are probably sufficient for successful establishment." }, { "instance_id": "R55219xR55134", "comparison_id": "R55219", "paper_id": "R55134", "text": "The roles of climate, phylogenetic relatedness, introduction effort, and reproductive traits in the establishment of non-native reptiles and amphibians We developed a method to predict the potential of non-native reptiles and amphibians (herpetofauna) to establish populations. This method may inform efforts to prevent the introduction of invasive non-native species. We used boosted regression trees to determine whether nine variables influence establishment success of introduced herpetofauna in California and Florida. We used an independent data set to assess model performance. Propagule pressure was the variable most strongly associated with establishment success. Species with short juvenile periods and species with phylogenetically more distant relatives in regional biotas were more likely to establish than species that start breeding later and those that have close relatives. Average climate match (the similarity of climate between native and non-native range) and life form were also important. Frogs and lizards were the taxonomic groups most likely to establish, whereas a much lower proportion of snakes and turtles established. We used results from our best model to compile a spreadsheet-based model for easy use and interpretation. 
Probability scores obtained from the spreadsheet model were strongly correlated with establishment success as were probabilities predicted for independent data by the boosted regression tree model. However, the error rate for predictions made with independent data was much higher than with cross validation using training data. This difference in predictive power does not preclude use of the model to assess the probability of establishment of herpetofauna because (1) the independent data had no information for two variables (meaning the full predictive capacity of the model could not be realized) and (2) the model structure is consistent with the recent literature on the primary determinants of establishment success for herpetofauna. It may still be difficult to predict the establishment probability of poorly studied taxa, but it is clear that non-native species (especially lizards and frogs) that mature early and come from environments similar to that of the introduction region have the highest probability of establishment." }, { "instance_id": "R55219xR55055", "comparison_id": "R55219", "paper_id": "R55055", "text": "Changing roles of propagule, climate, and land use during extralimital colonization of a rose chafer beetle Regardless of their ecosystem functions, some insects are threatened when facing environmental changes and disturbances, while others become extremely successful. It is crucial for successful conservation to differentiate factors supporting species\u2019 current distributions from those triggering range dynamics. Here, we studied the sudden extralimital colonization of the rose chafer beetle, Oxythyrea funesta, in the Czech Republic. Specifically, we depicted the range expansion using accumulated historical records of first known occurrences and then explained the colonization events using five transformed indices depicting changes in local propagule pressure (LPP), climate, land use, elevation, and landscape structure. 
The slow occupancy increase of O. funesta before 1990 changed to a phase of rapid occupancy increase after 1990, driven not only by changes in the environment (climate and land use) but also by the spatial accumulation of LPP. Climate was also found to play a significant role but only during the niche-filling stage before 1990, while land use became important during the phase of rapid expansion after 1990. Inland waters (e.g., riparian corridors) also contributed substantially to the spread in the Czech Republic. Our method of using spatially transformed variables to explain the colonization events provides a novel way of detecting factors triggering range dynamics. The results highlight the importance of LPP in driving sudden occupancy increase of extralimital species and recommend the use of LPP as an important predictor for modeling range dynamics." }, { "instance_id": "R55219xR55025", "comparison_id": "R55219", "paper_id": "R55025", "text": "Propagule Size and the Relative Success of Exotic Ungulate and Bird Introductions to New Zealand We investigated factors affecting the success of 14 species of ungulates introduced to New Zealand around 1851\u20131926. The 11 successful species had a shorter maximum life span and were introduced in greater numbers than the three unsuccessful species. Because introduction effort was confounded with other life\u2010history traits, we examined whether independent introductions of the same species were more likely to succeed when a greater number of individuals were introduced. For the six species with introductions that both succeeded and failed, successful introductions always involved an equal or greater number of individuals than unsuccessful introductions of the same species. For all independent introductions, there was a highly significant relationship between the number of individuals introduced and introduction success. 
When data for ungulate and bird introductions to New Zealand were combined, a variable categorizing species as ungulate or bird was a highly significant predictor of introduction success, after variation in introduction effort was controlled. For a given number of individuals introduced, ungulates were much more likely to succeed than birds." }, { "instance_id": "R55219xR54994", "comparison_id": "R55219", "paper_id": "R54994", "text": "Propagule pressure and the invasion risks of non-native freshwater fishes: a case study in England European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagules pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0\u00b705) with increasing import intensity (log10x + 1 of the numbers of fish imported for the years 2000\u20132004) and with increasing consignment diversity (log10x + 1 of the numbers of consignment types imported for the years 2000\u20132004). The implications for policy and management are discussed." 
}, { "instance_id": "R55219xR55013", "comparison_id": "R55219", "paper_id": "R55013", "text": "High predictability in introduction outcomes and the geographical range size of introduced Australian birds: a role for climate Summary 1. We investigated factors hypothesized to influence introduction success and subsequent geographical range size in 52 species of bird that have been introduced to mainland Australia. 2. The 19 successful species had been introduced more times, at more sites and in greater overall numbers. Relative to failed species, successfully introduced species also had a greater area of climatically suitable habitat available in Australia, a larger overseas range size and were more likely to have been introduced successfully outside Australia. After controlling for phylogeny these relationships held, except that with overseas range size and, in addition, larger-bodied species had a higher probability of introduction success. There was also a marked taxonomic bias: gamebirds had a much lower probability of success than other species. A model including five of these variables explained perfectly the patterns in introduction success across species. 3. Of the successful species, those with larger geographical ranges in Australia had a greater area of climatically suitable habitat, traits associated with a faster population growth rate (small body size, short incubation period and more broods per season) and a larger overseas range size. The relationships between range size in Australia, the extent of climatically suitable habitat and overseas range size held after controlling for phylogeny. 4. We discuss the probable causes underlying these relationships and why, in retrospect, the outcome of bird introductions to Australia is highly predictable."
}, { "instance_id": "R55219xR55122", "comparison_id": "R55219", "paper_id": "R55122", "text": "Behavioural plasticity associated with propagule size, resources, and the invasion success of the Argentine ant Linepithema humile Summary 1. The number of individuals involved in an invasion event, or \u2018propagule size\u2019, has a strong theoretical basis for influencing invasion success. However, rarely has propagule size been experimentally manipulated to examine changes in invader behaviour, and propagule longevity and success. 2. We manipulated propagule size of the invasive Argentine ant Linepithema humile in laboratory and field studies. Laboratory experiments involved L. humile propagules containing two queens and 10, 100, 200 or 1000 workers. Propagules were introduced into arenas containing colonies of queens and 200 workers of the competing native ant Monomorium antarcticum . The effects of food availability were investigated via treatments of only one central resource, or 10 separated resources. Field studies used similar colony sizes of L. humile , which were introduced into novel environments near an invasion front. 3. In laboratory studies, small propagules of L. humile were quickly annihilated. Only the larger propagule size survived and killed the native ant colony in some replicates. Aggression was largely independent of food availability, but the behaviour of L. humile changed substantially with propagule size. In larger propagules, aggressive behaviour was significantly more frequent, while L. humile were much more likely to avoid conflict in smaller propagules. 4. In field studies, however, propagule size did not influence colony persistence. Linepithema humile colonies persisted for up to 2 months, even in small propagules of 10 workers. Factors such as temperature or competitor abundance had no effect, although some colonies were decimated by M. antarcticum . 5. Synthesis and applications. 
Although propagule size has been correlated with invasion success in a wide variety of taxa, our results indicate that it will have limited predictive power with species displaying behavioural plasticity. We recommend that aspects of animal behaviour be given much more consideration in attempts to model invasion success. Secondly, areas of high biodiversity are thought to offer biotic resistance to invasion via the abundance of predators and competitors. Invasive pests such as L. humile appear to modify their behaviour according to local conditions, and establishment was not related to resource availability. We cannot necessarily rely on high levels of native biodiversity to repel invasions." }, { "instance_id": "R55219xR55127", "comparison_id": "R55219", "paper_id": "R55127", "text": "Propagule pressure and climate contribute to the displacement of Linepithema humile by Pachycondyla chinensis Identifying mechanisms governing the establishment and spread of invasive species is a fundamental challenge in invasion biology. Because species invasions are frequently observed only after the species presents an environmental threat, research identifying the contributing agents to dispersal and subsequent spread are confined to retrograde observations. Here, we use a combination of seasonal surveys and experimental approaches to test the relative importance of behavioral and abiotic factors in determining the local co-occurrence of two invasive ant species, the established Argentine ant (Linepithema humile Mayr) and the newly invasive Asian needle ant (Pachycondyla chinensis Emery). We show that the broader climatic envelope of P. chinensis enables it to establish earlier in the year than L. humile. We also demonstrate that increased P. chinensis propagule pressure during periods of L. humile scarcity contributes to successful P. chinensis early season establishment. Furthermore, we show that, although L. 
humile is the numerically superior and behaviorally dominant species at baits, P. chinensis is currently displacing L. humile across the invaded landscape. By identifying the features promoting the displacement of one invasive ant by another we can better understand both early determinants in the invasion process and factors limiting colony expansion and survival." }, { "instance_id": "R55219xR55076", "comparison_id": "R55219", "paper_id": "R55076", "text": "The relative importance of latitude matching and propagule pressure in the colonization success of an invasive forb Factors that influence the early stages of invasion can be critical to invasion success, yet are seldom studied. In particular, broad pre-adaptation to recipient climate may importantly influence early colonization success, yet few studies have explicitly examined this. I performed an experiment to determine how similarity between seed source and transplant site latitude, as a general indicator of pre-adaptation to climate, interacts with propagule pressure (100, 200 and 400 seeds/pot) to influence early colonization success of the widespread North American weed, St. John's wort Hypericum perforatum. Seeds originating from seven native European source populations were sown in pots buried in the ground in a field in western Montana. Seed source populations were either similar or divergent in latitude to the recipient transplant site. Across seed density treatments, the match between seed source and recipient latitude did not affect the proportion of pots colonized or the number of individual colonists per pot. In contrast, propagule pressure had a significant and positive effect on colonization. These results suggest that propagules from many climatically divergent source populations can be viable invaders." 
}, { "instance_id": "R55219xR55009", "comparison_id": "R55219", "paper_id": "R55009", "text": "Popularity and propagule pressure: determinants of introduction and establishment of aquarium fish Propagule pressure is frequently cited as an important determinant of invasion success for terrestrial taxa, but its importance for aquatic species is unclear. Using data on aquarium fishes in stores and historical records of fish introduced and established in Canadian and United States waters, we show clear relationships exist between frequency of occurrence in shops and likelihood of introduction and of establishment. Introduced and established taxa are also typically larger than those available from stores, consistent with the propagule pressure hypothesis in that larger fish may be released more frequently due to outgrowing their aquaria. Attempts to reduce the numbers of introductions may be the most practical mechanism to reduce the number of new successful invasions." }, { "instance_id": "R55219xR55053", "comparison_id": "R55219", "paper_id": "R55053", "text": "Propagule pressure of an invasive crab overwhelms native biotic resistance Over the last decade, the porcelain crab Petrolisthes armatus invaded oyster reefs of Georgia, USA, at mean densities of up to 11 000 adults m\u22122. Interactions affecting the invasion are undocumented. We tested the effects of native species richness and composition on invasibility by constructing isolated reef communities with 0, 2, or 4 of the most common native species, by seeding adult P. armatus into a subset of the 4 native species communities and by constructing communities with and without native, predatory mud crabs. At 4 wk, recruitment of P. armatus juveniles to oyster shells lacking native species was 2.75 times greater than to the 2 native species treatment and 3.75 times greater than to the 4 native species treatment.
The biotic resistance produced by 2 species of native filter feeders may have occurred due to competition with, or predation on, the settling juveniles of the filter feeding invasive crab. Adding adult porcelain crabs to communities with 4 native species enhanced recruitment by a significant 3-fold, and countered the effects of native biotic resistance. Differences in recruitment at Week 4 were lost by Weeks 8 and 12, when densities of recent recruits reached ~17 000 to 34 000 crabs m\u22122 across all treatments. Thus, native species richness slows initial invasion, but early colonists stimulate settlement by later ones and produce tremendous propagule pressure that overwhelms the effects of biotic resistance." }, { "instance_id": "R55219xR55034", "comparison_id": "R55219", "paper_id": "R55034", "text": "Human activities, ecosystem disturbance and plant invasions in subantarctic Crozet, Kerguelen and Amsterdam Islands Abstract Recent floristic surveys of the French islands of the southern Indian Ocean (Ile de la Possession, in the Crozet archipelago, Iles Kerguelen and Ile Amsterdam) allow a comparison of the status of the alien vascular plant species in contrasted environmental and historical situations. Four points are established: (1) the current numbers of alien plant species are almost the same on Amsterdam (56) and La Possession (58), slightly higher on Kerguelen (68); (2) some of these species are common to two or three islands but a high number of them are confined to only one island (18, 28 and 28 on La Possession, Kerguelen and Amsterdam, respectively); (3) all the alien plant species are very common species in the temperate regions of the northern hemisphere and belong to the European flora; and (4) a high proportion of the introduced species are present on the research stations or their surroundings (100, 72 and 84% on La Possession, Kerguelen and Amsterdam, respectively).
These results are discussed in terms of propagule pressure (mainly attributed to ships visiting these islands), invasibility of such ecosystems (in relation to climatic conditions and degree of disturbance by previous or current human activities such as sheep farming or waste deposits) and invasion potential of alien plant species." }, { "instance_id": "R55219xR55039", "comparison_id": "R55219", "paper_id": "R55039", "text": "The Influence of Numbers Released on the Outcome of Attempts to Introduce Exotic Bird Species to New Zealand 1. Information on the approximate number of individuals released is available for 47 of the 133 exotic bird species introduced to New Zealand in the late 19th and early 20th centuries. Of these, 21 species had populations surviving in the wild in 1969-79. The long interval between introduction and assessment of outcome provides a rare opportunity to examine the factors correlated with successful establishment without the uncertainty of long-term population persistence associated with studies of short duration. 2. The probability of successful establishment was strongly influenced by the number of individuals released during the main period of introductions. Eighty-three per cent of species that had more than 100 individuals released within a 10-year period became established, compared with 21% of species that had less than 100 birds released. The relationship between the probability of establishment and number of birds released was similar to that found in a previous study of introductions of exotic birds to Australia. 3. It was possible to look for a within-family influence on the success of introduction of the number of birds released in nine bird families. A positive influence was found within seven families and no effect in two families. This preponderance of families with a positive effect was statistically significant. 4.
A significant effect of body weight on the probability of successful establishment was found, and negative effects of clutch size and latitude of origin. However, the statistical significance of these effects varied according to whether comparison was or was not restricted to within-family variation. After applying the Bonferroni adjustment to significance levels, to allow for the large number of variables and factors being considered, only the effect of the number of birds released was statistically significant. 5. No significant effects on the probability of successful establishment were apparent for the mean date of release, the minimum number of years in which birds were released, the hemisphere of origin (northern or southern) and the size and diversity of latitudinal distribution of the natural geographical range." }, { "instance_id": "R55219xR55027", "comparison_id": "R55219", "paper_id": "R55027", "text": "Climatic Suitability, Life-History Traits, Introduction Effort, and the Establishment and Spread of Introduced Mammals in Australia : Major progress in understanding biological invasions has recently been made by quantitatively comparing successful and unsuccessful invasions. We used such an approach to test hypotheses about the role of climatic suitability, life history, and historical factors in the establishment and subsequent spread of 40 species of mammal that have been introduced to mainland Australia. Relative to failed species, the 23 species that became established had a greater area of climatically suitable habitat available in Australia, had previously become established elsewhere, had a larger overseas range, and were introduced more times. These relationships held after phylogeny was controlled for, but successful species were also significantly more likely to be nonmigratory. A forward-selection model included only two of the nine variables for which we had data for all species: climatic suitability and introduction effort. 
When the model was adjusted for phylogeny, those same two variables were included, along with previous establishment success. Of the established species, those with a larger geographic range size in Australia had a greater area of climatically suitable habitat, had traits associated with a faster population growth rate (small body size, shorter life span, lower weaning age, more offspring per year), were nonherbivorous, and had a larger overseas range size. When the model was adjusted for phylogeny, the importance of climatic suitability and the life-history traits remained significant, but overseas range size was no longer important and species with greater introduction effort had a larger geographic range size. Two variables explained variation in geographic range size in a forward-selection model: species with smaller body mass and greater longevity tended to have larger range sizes in Australia. These results mirror those from a recent analysis of exotic-bird introductions into Australia, suggesting that, at least among vertebrate taxa, similar factors predict establishment and spread. Our approach and results are being used to assess the risks of exotic vertebrates becoming established and spreading in Australia." }, { "instance_id": "R55219xR55139", "comparison_id": "R55219", "paper_id": "R55139", "text": "Habitat, dispersal and propagule pressure control exotic plant infilling within an invaded range Deep in the heart of a longstanding invasion, an exotic grass is still invading.
Range infilling potentially has the greatest impact on native communities and ecosystem processes, but receives much less attention than range expansion. \u2018Snapshot\u2019 studies of invasive plant dispersal, habitat and propagule limitations cannot determine whether a landscape is saturated or whether a species is actively infilling empty patches. We investigate the mechanisms underlying invasive plant infilling by tracking the localized movement and expansion of Microstegium vimineum populations from 2009 to 2011 at sites along a 100-km regional gradient in eastern U.S. deciduous forests. We find that infilling proceeds most rapidly where the invasive plants occur in warm, moist habitats adjacent to roads: under these conditions they produce copious seed, the dispersal distances of which increase exponentially with proximity to roadway. Invasion then appears limited where conditions are generally dry and cool as propagule pressure tapers off. Invasion also is limited in habitats >1 m from road corridors, where dispersal distances decline precipitously. In contrast to propagule and dispersal limitations, we find little evidence that infilling is habitat limited, meaning that as long as M. vimineum seeds are available and transported, the plant generally invades quite vigorously. Our results suggest an invasive species continues to spread, in a stratified manner, within the invaded landscape long after first arriving. These dynamics conflict with traditional invasion models that emphasize an invasive edge with distinct boundaries. We find that propagule pressure and dispersal regulate infilling, providing the basis for projecting spread and landscape coverage, ecological effects and the efficacy of containment strategies."
}, { "instance_id": "R55219xR54987", "comparison_id": "R55219", "paper_id": "R54987", "text": "Hotspots of plant invasion predicted by propagule pressure and ecosystem characteristics Aim: Biological invasions pose a major conservation threat and are occurring at an unprecedented rate. Disproportionate levels of invasion across the landscape indicate that propagule pressure and ecosystem characteristics can mediate invasion success. However, most invasion predictions relate to species\u2019 characteristics (invasiveness) and habitat requirements. Given myriad invaders and the inability to generalize from single-species studies, more general predictions about invasion are required. We present a simple new method for characterizing and predicting landscape susceptibility to invasion that is not species-specific. Location: Corangamite catchment (13,340 km\u00b2), south-east Australia. Methods: Using spatially referenced data on the locations of non-native plant species, we modelled their expected proportional cover as a function of a site\u2019s environmental conditions and geographic location. Models were built as boosted regression trees (BRTs). Results: On average, the BRTs explained 38% of variation in occupancy and abundance of all exotic species and exotic forbs. Variables indicating propagule pressure, human impacts, abiotic and community characteristics were rated as the top four most influential variables in each model. Presumably reflecting higher propagule pressure and resource availability, invasion was highest near edges of vegetation fragments and areas of human activity. Sites with high vegetation cover had higher probability of occupancy but lower proportional cover of invaders, the latter trend suggesting a form of biotic resistance. Invasion patterns varied little in time despite the data spanning 34 years. Main conclusions:
To our knowledge, this is the first multispecies model based on occupancy and abundance data used to predict invasion risk at the landscape scale. Our approach is flexible and can be applied in different biomes, at multiple scales and for different taxonomic groups. Quantifying general patterns and processes of plant invasion will increase understanding of invasion and community ecology. Predicting invasion risk enables spatial prioritization of weed surveillance and control." }, { "instance_id": "R55219xR55117", "comparison_id": "R55219", "paper_id": "R55117", "text": "The comparative importance of species traits and introduction characteristics in tropical plant invasions Aim We used alien plant species introduced to a botanic garden to investigate the relative importance of species traits (leaf traits, dispersal syndrome) and introduction characteristics (propagule pressure, residence time and distance to forest) in explaining establishment success in surrounding tropical forest. We also used invasion scores from a weed risk assessment protocol as an independent measure of invasion risk and assessed differences in variables between high- and low-risk species. Location East Usambara mountains, Tanzania. Methods Forest transect surveys identified species establishing in disturbed and intact forest. Leaf traits (specific leaf area and foliar nutrient concentrations) were measured from leaves sampled in high-light environments. Results A leaf traits spectrum was apparent, but species succeeding or failing to establish in either disturbed or intact forest were not located in different parts of the spectrum. Species with high invasion risk did not differ in their location on the leaf trait spectrum compared with low-risk species but were more likely to be bird/primate-dispersed. For 15 species establishing in forest quadrats, median canopy cover of quadrats where seedlings were present was correlated with a species value along the leaf trait spectrum. 
Species establishing in disturbed forest were planted in twice as many plantations and were marginally more likely to be bird- or primate-dispersed than species failing to become established in disturbed forest. Establishment in intact forest was more likely for species planted closer to forest edges. Main conclusions Leaf and dispersal traits appear less important in the colonization of tropical forest than introduction characteristics. It appears, given sufficient propagule pressure or proximity to forest, alien species are much more likely to establish independently of leaf traits or dispersal syndrome in continental tropical forests." }, { "instance_id": "R55219xR54963", "comparison_id": "R55219", "paper_id": "R54963", "text": "Colonization Success in Roesel's Bush-Cricket Metrioptera roeseli: The Effects of Propagule Size Assessing the colonizing ability of a species is important for predicting its future distribution or for planning the introduction or reintroduction of that species for conservation purposes. The best way to assess colonizing ability is by making experimental introductions of the species and monitoring the outcome. In this study, different-sized propagules of Roesel's bush-cricket, Metrioptera roeseli, were experimentally introduced into 70 habitat islands, previously uninhabited by the species, in farmland fields in south- eastern Sweden. The areas of introduction were carefully monitored for 2-3 yr to determine whether the propagules had successfully colonized the patches. The study showed that large propagules resulted in larger local populations during the years following introduction. Probability of colonization for each propagule size was measured and showed that propagule size had a significant effect on colonization success, i.e., large propagules were more successful in colonizing new patches. 
If future introductions were to be made with this or a similar species, a propagule size of at least 32 individuals would be required to establish a viable population with a high degree of certainty." }, { "instance_id": "R55219xR55070", "comparison_id": "R55219", "paper_id": "R55070", "text": "Online trading tools as a method of estimating propagule pressure via the pet-release pathway The increasing amount of internet trade in live animals has facilitated the sale and circulation of exotic species all over the world. This is an area of concern, as the deliberate or accidental release of pets is an important pathway by which exotic species are often introduced into new environments, often with negative effects on the local species and ecosystems. Internet trading sites were used to determine the distribution and magnitude of propagule pressure of red-eared slider turtles (RES; Trachemys scripta elegans) within the New Zealand pet trade. Sites were tracked daily from October 1, 2007 to September 30, 2009 and information on age, sex, season, and location was recorded. More than 1,500 sales and 80 reports of lost/found RES were recorded. Unsurprisingly, the highest number of sales and lost/found RES was in Auckland, the region with the highest human population. Females were more often reported as lost or found than males, despite a similar sex ratio of sales. The type and quality of information gathered in this manner is not perfect, as it only provides an estimate of minimum numbers of animals that are being traded/lost into the environment, but nonetheless, provides useful data when planning a management or eradication plan for feral turtles in New Zealand. Of concern, our results highlighted areas where turtles were most often being released in New Zealand, being those areas predicted to be the most climatically-suitable for this species, and in which incubation conditions were most likely to be met. 
Monitoring online sales of exotic species provides useful demographic information, as well as an indication of propagule pressure via the pet-release pathway. This technique is applicable to other species and may be a useful tool to help determine locations at risk of the establishment of other exotic species." }, { "instance_id": "R55219xR55050", "comparison_id": "R55219", "paper_id": "R55050", "text": "Ecological resistance to biological invasion overwhelmed by propagule pressure Models and observational studies have sought patterns of predictability for invasion of natural areas by nonindigenous species, but with limited success. In a field experiment using forest understory plants, we jointly manipulated three hypothesized determinants of biological invasion outcome: resident diversity, physical disturbance and abiotic conditions, and propagule pressure. The foremost constraints on net habitat invasibility were the number of propagules that arrived at a site and naturally varying resident plant density. The physical environment (flooding regime) and the number of established resident species had negligible impact on habitat invasibility as compared to propagule pressure, despite manipulations that forced a significant reduction in resident richness, and a gradient in flooding from no flooding to annual flooding. This is the first experimental study to demonstrate the primacy of propagule pressure as a determinant of habitat invasibility in comparison with other candidate controlling factors." }, { "instance_id": "R55219xR55074", "comparison_id": "R55219", "paper_id": "R55074", "text": "Predicting invasions by woody species in a temperate zone: a test of three risk assessment schemes in the Czech Republic (Central Europe) To assess the validity of previously developed risk assessment schemes in the conditions of Central Europe, we tested (1) Australian weed risk assessment scheme (WRA; Pheloung et al. 1999); (2) WRA with additional analysis by Daehler et al.
(2004); and (3) decision tree scheme of Reichard and Hamilton (1997) developed in North America, on a data set of 180 alien woody species commonly planted in the Czech Republic. This list included 17 invasive species, 9 naturalized but non-invasive, 31 casual aliens, and 123 species not reported to escape from cultivation. The WRA model with additional analysis provided best results, rejecting 100% of invasive species, accepting 83.8% of non-invasive, and recommending further 13.0% for additional analysis. Overall accuracy of the WRA model with additional analysis was 85.5%, higher than that of the basic WRA scheme (67.9%) and the Reichard\u2010Hamilton model (61.6%). Only the Reichard\u2010Hamilton scheme accepted some invaders. The probability that an accepted species will become an invader was zero for both WRA models and 3.2% for the Reichard\u2010Hamilton model. The probability that a rejected species would have been an invader was 77.3% for both WRA models and 24.0% for the Reichard\u2010Hamilton model. It is concluded that the WRA model, especially with additional analysis, appears to be a promising template for building a widely applicable system for screening out invasive plant introductions." }, { "instance_id": "R55219xR54984", "comparison_id": "R55219", "paper_id": "R54984", "text": "Global patterns of introduction effort and establishment success in birds Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. 
We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds." }, { "instance_id": "R56110xR56098", "comparison_id": "R56110", "paper_id": "R56098", "text": "Establishment success across convergent Mediterranean ecosystems: an analysis of bird introductions Concern over the impact of invaders on biodiversity and on the functioning of ecosystems has gen- erated a rising tide of comparative analyses aiming to unveil the factors that shape the success of introduced species across different regions. One limitation of these studies is that they often compare geographically rather than ecologically defined regions. We propose an approach that can help address this limitation: comparison of invasions across convergent ecosystems that share similar climates. We compared avian invasions in five convergent mediterranean climate systems around the globe. Based on a database of 180 introductions repre- senting 121 avian species, we found that the proportion of bird species successfully established was high in all mediterranean systems (more than 40% for all five regions). Species differed in their likelihood to become estab- lished, although success was not higher for those originating from mediterranean systems than for those from nonmediterranean regions. Controlling for this taxonomic effect with generalized linear mixed models, species introduced into mediterranean islands did not show higher establishment success than those introduced to the mainland. Susceptibility to avian invaders, however, differed substantially among the different mediterranean regions. 
The probability that a species will become established was highest in the Mediterranean Basin and lowest in mediterranean Australia and the South African Cape. Our results suggest that many of the birds recently introduced into mediterranean systems, and especially into the Mediterranean Basin, have a high potential to establish self-sustaining populations. This finding has important implications for conservation in these biologically diverse hotspots." }, { "instance_id": "R56110xR56108", "comparison_id": "R56110", "paper_id": "R56108", "text": "Are island plant communities more invaded than their mainland counterparts? Questions: Are island vegetation communities more invaded than their mainland counterparts? Is this pattern consistent among community types? Location: The coastal provinces of Catalonia and the para-oceanic Balearic Islands, both in NE Spain. These islands were connected to the continent more than 5.35 million years ago and are now located <200 km from the coast. Methods: We compiled a database of almost 3000 phytosociological relev\u00e9s from the Balearic Islands and Catalonia and compared the level of invasion by alien plants in island versus mainland communities. Twenty distinct plant community types were compared between island and mainland counterparts. Results: The percentage of plots with alien species, number, percentage and cover percentage of alien species per plot was greater in Catalonia than in the Balearic Islands in most communities. Overall, across communities, more alien species were found in the mainland (53) compared to the islands (only nine). Despite these differences, patterns of the level of invasion in communities were highly consistent between the islands and mainland. The most invaded communities were ruderal and riparian. Main conclusion: Our results indicate that para-oceanic island communities such as the Balearic Islands are less invaded than their mainland counterparts.
This difference reflects a smaller regional alien species pool in the Balearic Islands than in the adjacent mainland, probably due to differences in landscape heterogeneity and propagule pressure. Keywords: alien plants; Balearic Islands; community similarity; Mediterranean communities; para-oceanic islands; relev\u00e9; species richness. Nomenclature: Bol\u00f2s & Vigo (1984\u20132001), Rivas-Mart\u00ednez et al. (2001)." }, { "instance_id": "R56110xR56094", "comparison_id": "R56110", "paper_id": "R56094", "text": "Are islands more susceptible to plant invasion than continents? A test using Oxalis pes-caprae L. in the western Mediterranean Aim We tested the relative vulnerability of islands to Oxalis pes-caprae L. invasion compared to mainland regions. Oxalis pes-caprae is a South African annual geophyte that reproduces via bulbils, and has spread in many Mediterranean and temperate regions of the world where introduced. Our study is one of the first detailed regional analyses of the occurrence and local abundance of a non-native plant. Methods We conducted an extensive survey (2000 sampling points) to examine local and coarse-scale patterns in both the occurrence and abundance of O. pes-caprae on islands and in neighbouring mainland regions of Spain. Location We analysed occurrence (number of samples where present) and abundance (percentage cover) on two Balearic Islands (Menorca and Mallorca)" }, { "instance_id": "R56110xR53276", "comparison_id": "R56110", "paper_id": "R53276", "text": "Learning from failures: testing broad taxonomic hypotheses about plant naturalization Our understanding of broad taxonomic patterns of plant naturalizations is based entirely on observations of successful naturalizations. Omission of the failures, however, can introduce bias by conflating the probabilities of introduction and naturalization.
Here, we use two comprehensive datasets of successful and failed plant naturalizations in New Zealand and Australia for a unique, flora-wide comparative test of several major invasion hypotheses. First, we show that some taxa are consistently more successful at naturalizing in these two countries, despite their environmental differences. Broad climatic origins helped to explain some of the differences in success rates in the two countries. We further show that species with native relatives were generally more successful in both countries, contrary to Darwin's naturalization hypothesis, but this effect was inconsistent among families across the two countries. Finally, we show that contrary to studies based on successful naturalizations only, islands need not be inherently more invasible than continents." }, { "instance_id": "R56110xR56096", "comparison_id": "R56110", "paper_id": "R56096", "text": "Across islands and continents, mammals are more successful invaders than birds Many invasive species cause ecological or economic damage, and the fraction of introduced species that become invasive is an important determinant of the overall costs caused by invaders. According to the widely quoted tens rule, about 10% of all introduced species establish themselves and about 10% of these established species become invasive. Global taxonomic differences in the fraction of species becoming invasive have not been described. In a global analysis of mammal and bird introductions, I show that both mammals and birds have a much higher invasion success than predicted by the tens rule, and that mammals have a significantly higher success than birds. Averaged across islands and continents, 79% of mammals and 50% of birds introduced have established themselves and 63% of mammals and 34% of birds established have become invasive.
My analysis also does not support the hypothesis that islands are more susceptible to invaders than continents, as I did not find a significant relationship between invasion success and the size of the island or continent to which the species were introduced. The data set used in this study has a number of limitations, e.g. information on propagule pressure was not available at this global scale, so understanding the mechanisms behind the observed patterns has to be postponed to future studies." }, { "instance_id": "R56110xR56084", "comparison_id": "R56110", "paper_id": "R56084", "text": "A comparative analysis of the relative success of introduced land birds on islands It has been suggested that more species have been successfully introduced to oceanic islands than to mainland regions. This suggestion has attracted considerable ecological interest and several theoretical mechanisms havebeen proposed. However, few data are available to test the hypotheses directly, and the pattern may simply result from many more species being transported to islands rather than mainland regions. Here I test this idea using data for global land birds and present evidence that introductions to islands have a higher probability of success than those to mainland regions. This difference between island and mainland landforms is not consistent among either taxonomic families or biogeographic regions. Instead, introduction attempts within the same biogeographic region have been significantly more successful than those that have occurred between two different biogeographic regions. Subsequently, the proportion of introduction attempts that have occurred within a single biogeographic region is thus a significant predictor of the observed variability in introduction success. I also show that the correlates of successful island introductions are probably different to those of successful mainland introductions." 
}, { "instance_id": "R56110xR56090", "comparison_id": "R56110", "paper_id": "R56090", "text": "Macroecological drivers of alien conifer naturalizations worldwide Understanding the factors that drive the global distribution of alien species is a pivotal issue in invasion biology. Here, we used data on naturalized conifers (Pinaceae, Cupressaceae) from sixty temperate and subtropical regions and five continents to test how environmental and socio-economic conditions of recipient areas as well as introduction efforts affect naturalization probabilities. We collated 18 predictor variables for each region describing environmental, biogeographic and socio-economic conditions as well as a measure of the macro-climatic match with the species' native ranges, and the extent to which alien conifers are used in commercial forestry. Naturalization probabilities across all species and regions were then related to these predictor variables by means of generalized linear mixed models. For both Pinaceae and Cupressaceae, naturalization probabilities were generally higher in the Southern Hemisphere, and increased with indicators of habitat diversity of the recipient region. The match in macro-climatic conditions between the native and introduced regions was a significant predictor of conifer naturalization, but socio-economic variables were less powerful predictors. Only for Cupressaceae did a socio-economic variable (human population density) affect naturalization probabilities. Key attributes facilitating naturalization were related to introduction effort. Moreover, usage in commercial forestry generally fostered naturalization, although the actual size of alien conifer plantations in a region was only correlated with the naturalization of Pinaceae. 
Our results suggest that climate matching, habitat diversity and introduction effort co-determine the probability of naturalization, which, additionally, is modulated by biogeographic features of the recipient area, such as incidence of natural enemies or competitors. To date, the most widely used tools for invasive plant risk assessment only account for climate match and rarely factor in other attributes of the recipient environment. Future tools should additionally consider biotic environment and introduction effort if risk assessment is to be effective." }, { "instance_id": "R56110xR56087", "comparison_id": "R56110", "paper_id": "R56087", "text": "Invasibility of tropical islands by introduced plants: partitioning the influence of isolation and propagule pressure All else being equal, more isolated islands should be more susceptible to invasion because their native species are derived from a smaller pool of colonists, and isolated islands may be missing key functional groups. Although some analyses seem to support this hypothesis, previous studies have not taken into account differences in the number of plant introductions made to different islands, which will affect invasibility estimates. Furthermore, previous studies have not assessed invasibility in terms of the rates at which introduced plant species attain different degrees of invasion or naturalization. I compared the naturalization status of introduced plants on two pairs of Pacific island groups that are similar in most respects but that differ in their distances from a mainland. Then, to factor out differences in propagule pressure due to differing numbers of introductions, I compared the naturalization status only among shared introductions.
In the first comparison, Hawai\u2018i (3700 km from a mainland) had three times more casual/weakly naturalized, naturalized and pest species than Taiwan (160 km from a mainland); however, roughly half (54%) of this difference can be attributed to a larger number of plant introductions to Hawai\u2018i. In the second comparison, Fiji (2500 km from a mainland) did not differ in susceptibility to invasion in comparison to New Caledonia (1000 km from a mainland); the latter two island groups appear to have experienced roughly similar propagule pressure, and they have similar invasibility. The rate at which naturalized species have become pests is similar for Hawai\u2018i and other island groups. The higher susceptibility of Hawai\u2018i to invasion is related to more species entering the earliest stages in the invasion process (more casual and weakly naturalized species), and these higher numbers are then maintained in the naturalized and pest pools. The number of indigenous (not endemic) species was significantly correlated with susceptibility to invasion across all four island groups. When islands share similar climates and habitat diversity, the number of indigenous species may be a better predictor of invasibility than indices of physical isolation because it is a composite measure of biological isolation." }, { "instance_id": "R56110xR56104", "comparison_id": "R56110", "paper_id": "R56104", "text": "Diversity-invasibility relationships across multiple scales in disturbed forest understoreys Non-native plant species richness may be either negatively or positively correlated with native species due to differences in resource availability, propagule pressure or the scale of vegetation sampling. We investigated the relationships between these factors and both native and non-native plant species at 12 mainland and island forested sites in southeastern Ontario, Canada. 
In general, the presence of non-native species was limited: <20% of all species at a site were non-native and non-native species cover was <4% m\u22122 at 11 of the 12 sites. Non-native species were always positively correlated with native species, regardless of spatial scale and whether islands were sampled. Additionally, islands had a greater abundance of non-native species. Non-native species richness across mainland sites was significantly negatively correlated with mean shape index, a measure of the ratio of forest edge to area, and positively correlated with the mean distance to the nearest forest patch. Other factors associated with disturbance and propagule pressure in northeastern North America forests, including human land use, white-tailed deer populations, understorey light, and soil nitrogen, did not explain non-native richness nor cover better than the null models. Our results suggest that management strategies for controlling non-native plant invasions should aim to reduce the propagule pressure associated with human activities, and maximize the connectivity of forest habitats to benefit more poorly dispersed native species." }, { "instance_id": "R56110xR56082", "comparison_id": "R56110", "paper_id": "R56082", "text": "Determinants of establishment success in introduced birds A major component of human-induced global change is the deliberate or accidental translocation of species from their native ranges to alien environments, where they may cause substantial environmental and economic damage. Thus we need to understand why some introductions succeed while others fail. Successful introductions tend to be concentrated in certain regions, especially islands and the temperate zone, suggesting that species-rich mainland and tropical locations are harder to invade because of greater biotic resistance. 
However, this pattern could also reflect variation in the suitability of the abiotic environment at introduction locations for the species introduced, coupled with known confounding effects of nonrandom selection of species and locations for introduction. Here, we test these alternative hypotheses using a global data set of historical bird introductions, employing a statistical framework that accounts for differences among species and regions in terms of introduction success. By removing these confounding effects, we show that the pattern of avian introduction success is not consistent with the biotic resistance hypothesis. Instead, success depends on the suitability of the abiotic environment for the exotic species at the introduction site." }, { "instance_id": "R56110xR56100", "comparison_id": "R56110", "paper_id": "R56100", "text": "Why islands are easier to invade: human influences on bullfrog invasion in the Zhoushan archipelago and neighboring mainland China Islands are often considered easier to invade than mainland locations because of lower biotic resistance, but this hypothesis is difficult to test. We compared invasion success (the probability of establishing a wild reproducing population) for bullfrogs (Rana catesbeiana) introduced to enclosures on 26 farms on islands in the Zhoushan archipelago and 15 farms in neighboring mainland China. Bullfrogs were more likely to invade farms located on islands with lower native frog species richness than mainland farms, consistent with the biotic resistance hypothesis. However, human frog hunting pressure also differed between islands and the mainland and, along with the number of bullfrogs raised in enclosures, was a stronger predictor of invasion success than native frog richness in multiple regression. Variation in hunting pressure was also able to account for the difference in invasion success between islands and mainlands: islands had lower hunting pressure and thus higher invasion probability. 
We conclude that the ease with which bullfrogs have invaded islands of the Zhoushan archipelago relative to the mainland has little to do with biotic resistance but results from variation in factors under human control." }, { "instance_id": "R56945xR56630", "comparison_id": "R56945", "paper_id": "R56630", "text": "Exploitative competition between invasive herbivores benefits a native host plant Although biological invasions are of considerable concern to ecologists, relatively little attention has been paid to the potential for and consequences of indirect interactions between invasive species. Such interactions are generally thought to enhance invasives' spread and impact (i.e., the \"invasional meltdown\" hypothesis); however, exotic species might also act indirectly to slow the spread or blunt the impact of other invasives. On the east coast of the United States, the invasive hemlock woolly adelgid (Adelges tsugae, HWA) and elongate hemlock scale (Fiorinia externa, EHS) both feed on eastern hemlock (Tsuga canadensis). Of the two insects, HWA is considered far more damaging and disproportionately responsible for hemlock mortality. We describe research assessing the interaction between HWA and EHS, and the consequences of this interaction for eastern hemlock. We conducted an experiment in which uninfested hemlock branches were experimentally infested with herbivores in a 2 x 2 factorial design (either, both, or neither herbivore species). Over the 2.5-year course of the experiment, each herbivore's density was approximately 30% lower in mixed- vs. single-species treatments. Intriguingly, however, interspecific competition weakened rather than enhanced plant damage: growth was lower in the HWA-only treatment than in the HWA + EHS, EHS-only, or control treatments. Our results suggest that, for HWA-infested hemlocks, the benefit of co-occurring EHS infestations (reduced HWA density) may outweigh the cost (increased resource depletion)." 
}, { "instance_id": "R56945xR56913", "comparison_id": "R56945", "paper_id": "R56913", "text": "Scaling the consequences of interactions between invaders from the indivdual to the population level Abstract The impact of human\u2010induced stressors, such as invasive species, is often measured at the organismal level, but is much less commonly scaled up to the population level. Interactions with invasive species represent an increasingly common source of stressor in many habitats. However, due to the increasing abundance of invasive species around the globe, invasive species now commonly cause stresses not only for native species in invaded areas, but also for other invasive species. I examine the European green crab Carcinus maenas, an invasive species along the northeast coast of North America, which is known to be negatively impacted in this invaded region by interactions with the invasive Asian shore crab Hemigrapsus sanguineus. Asian shore crabs are known to negatively impact green crabs via two mechanisms: by directly preying on green crab juveniles and by indirectly reducing green crab fecundity via interference (and potentially exploitative) competition that alters green crab diets. I used life\u2010table analyses to scale these two mechanistic stressors up to the population level in order to examine their relative impacts on green crab populations. I demonstrate that lost fecundity has larger impacts on per capita population growth rates, but that both predation and lost fecundity are capable of reducing population growth sufficiently to produce the declines in green crab populations that have been observed in areas where these two species overlap. 
By scaling up the impacts of one invader on a second invader, I have demonstrated that multiple documented interactions between these species are capable of having population\u2010level impacts and that both may be contributing to the decline of European green crabs in their invaded range on the east coast of North America." }, { "instance_id": "R56945xR56943", "comparison_id": "R56945", "paper_id": "R56943", "text": "Predicting spatial extent of invasive earthworms on an oceanic island Aim Invasions of non-native earthworms into previously earthworm-free regions are a major conservation concern because they alter ecosystems and threaten biological diversity. Little information is available, however, about effects of earthworm invasions outside of temperate and boreal forests, particularly about invasions of islands. For San Clemente Island (SCI), California (USA) \u2013 an oceanic island with numerous endemic and endangered plant and vertebrate species \u2013 we assessed the spatial extent and drivers of earthworm invasion and examined relationships between earthworms and plant and soil microbial communities. Location San Clemente Island, southern California, USA. Methods Using a stratified random sampling approach, we sampled earthworms, vegetation, soils and microbial communities across SCI. We examined the relationship between the presence of invasive earthworms and soil and landscape variables using logistic regression models and implemented a spatial representation of the best model to represent potential site suitability for earthworms. We evaluated the relationship between invasive earthworms and vegetation and microbial variables using ANOVA. Results We found that the likelihood of encountering earthworms increased close to roads and streams and in high moisture conditions, which correspond to higher elevation and a north-eastern aspect on SCI. 
The presence of earthworms was positively associated with total ground vegetation cover, grass cover and non-native plant cover; however, there was no significant relationship between earthworms and microbial biomass. These results suggest that the earthworm invasion on SCI is at an early stage and closely tied to roads and high moisture conditions. Main conclusions Climatic variables and potential sources of earthworm introduction and dispersal (e.g. roads and streams) should be broadly useful for predicting current and future sites of earthworm invasions on both islands and continents. Furthermore, the significant positive relationship between non-native plant cover and invasive earthworm presence raises the possibility of an emerging invasional \u2018meltdown\u2019 on SCI. Additional study of earthworm invasions on human-inhabited oceanic islands is necessary to identify additional invasions and their potential for negative impacts on unique insular biota." }, { "instance_id": "R56945xR56632", "comparison_id": "R56945", "paper_id": "R56632", "text": "Biological invasion into the nested assemblage of tree-beetle associations on the oceanic Ogasawara Islands Invasion by alien organisms is a common worldwide phenomenon, and many alien species invade native communities. Invasion by alien species is especially likely to occur on oceanic islands. To determine how alien species become integrated into island plant\u2013insect associations, we analyzed the structure of tree\u2013beetle associations using host plant records for larval feeding by wood-feeding beetles (Coleoptera: Cerambycidae) on the oceanic Ogasawara Islands in the northwestern Pacific Ocean. The host plant records comprised 109 associations among 28 tree (including 8 alien) and 26 cerambycid (including 5 alien) species. Of these associations, 41.3% involved at least one alien species. Most native cerambycid species feed on host trees that have recently died. 
Alien trees were used by as many native cerambycid species (but by significantly more alien cerambycid species) as were native trees. Native cerambycid species used as many alien tree species (but significantly more native tree species) as did alien cerambycids. Thus, we observed many types of interactions among native and alien species. A network analysis revealed a significant nested structure in tree\u2013cerambycid associations regardless of whether alien species were excluded from the analysis. The original nested associations on the Ogasawara Islands may thus have accepted alien species." }, { "instance_id": "R56945xR56690", "comparison_id": "R56945", "paper_id": "R56690", "text": "Abundance and habitat preferences of the southernmost population of mink: implications for managing a recent island invasion Since 2001 invasive American mink has been known to populate Navarino Island, an island located in the pristine wilderness of the Cape Horn Biosphere Reserve, Chile, lacking native carnivorous mammals. As requested by scientists and managers, our study aims at understanding the population ecology of mink in order to respond to conservation concerns. We studied the abundance of mink in different semi-aquatic habitats using live trapping (n = 1,320 trap nights) and sign surveys (n = 68 sites). With generalized linear models we evaluated mink abundance in relation to small-scale habitat features including habitats engineered by invasive beavers (Castor canadensis). Mink have colonized the entire island and signs were found in 79% of the surveys in all types of semi-aquatic habitats. Yet, relative population abundance (0.75 mink/km of coastline) was still below densities measured in other invaded or native areas. The habitat model accuracies indicated that mink were generally less specific in habitat use, probably due to the missing limitations normally imposed by predators or competitors. 
The selected models predicted that mink prefer to use shrubland instead of open habitat, coastal areas with heterogeneous shores instead of flat beaches, and interestingly, that mink avoid habitats strongly modified by beavers. Our results indicate a need for immediate mink control on Navarino Island. For this future management we suggest that rocky coastal shores should be considered as priority sites deserving special conservation efforts. Further research is needed with respect to the immigration of mink from adjacent islands and to examine facilitating or hampering relationships between the different invasive species present, especially if integrative management is sought." }, { "instance_id": "R56945xR56766", "comparison_id": "R56945", "paper_id": "R56766", "text": "A fast-track for invasion: invasive plants promote the performance of an invasive herbivore With the greater frequency of biological invasions worldwide there is an increased likelihood that exotic species will interact with each other, and such interactions could enhance one another\u2019s invasion potential. Although direct and indirect interactions between exotic species have been well documented for plant-herbivore interactions, the majority of studies have focused on a single interaction and on plant rather than herbivore performance. In this study we investigated whether invasive exotic plants could contribute to the invasion of California by an exotic generalist herbivore (Epiphyas postvittana). We tested this expectation in the greenhouse by monitoring the performance of larval and pupal stages of E. postvittana on six pairs of congeneric invasive and native plants. Larval survivorship and pupal weight of E. postvittana were both greater on the invasive species, and larval development time was shorter on the invasive plant species for two of the plant genera. 
Our results suggest that prior invasion of exotic plants could function as a catalyst for the subsequent invasion of an exotic insect herbivore, at least in the case where they have shared some history, thereby accelerating the invasion process and expansion of its novel geographic range." }, { "instance_id": "R56945xR56698", "comparison_id": "R56945", "paper_id": "R56698", "text": "Facilitative interactions between an exotic mammal and native and exotic plants: hog deer (Axis porcinus) as seed dispersers in south-eastern Australia Endozoochory by exotic mammalian herbivores could modify vegetation composition by facilitating the dispersal and establishment of exotic and native plant species. We examined the potential for endozoochoric dispersal of native and exotic plants by exotic hog deer (Axis porcinus) in south-eastern Australia. We quantified the germinable seed content of hog deer faecal pellets collected in five vegetation types within a 10,500-ha study area that was representative of their Australian range. Twenty exotic and 22 native species germinated from hog deer faecal pellets and significantly more native species germinated compared to exotic species. Seedlings of the encroaching native shrub Acacia longifolia var. sophorae emerged, but no native trees emerged and the percentage of grasses that germinated was low (11%). The species composition of germinants was similar among the five vegetation types. We estimated that the hog deer population in our study area could potentially disperse >130,000 viable seeds daily. Our study shows how an exotic mammal can disperse seeds from both native and invasive plants and highlights the need for endozoochory to be considered more widely in studies assessing the impacts of exotic mammals on plant communities." 
}, { "instance_id": "R56945xR56752", "comparison_id": "R56945", "paper_id": "R56752", "text": "Multiple predator effects and native prey responses to two non-native Everglades cichlids \u2013 Non-native predators may have negative impacts on native communities, and these effects may be dependent on interactions among multiple non-native predators. Sequential invasions by predators can enhance risk for native prey. Prey have a limited ability to respond to multiple threats since appropriate responses may conflict, and interactions with recent invaders may be novel. We examined predator\u2013prey interactions among two non-native predators, a recent invader, the African jewelfish, and the longer-established Mayan cichlid, and a native Florida Everglades prey assemblage. Using field enclosures and laboratory aquaria, we compared predatory effects and antipredator responses across five prey taxa. Total predation rates were higher for Mayan cichlids, which also targeted more prey types. The cichlid invaders had similar microhabitat use, but varied in foraging styles, with African jewelfish being more active. The three prey species that experienced predation were those that overlapped in habitat use with predators. Flagfish were consumed by both predators, while riverine grass shrimp and bluefin killifish were eaten only by Mayan cichlids. In mixed predator treatments, we saw no evidence of emergent effects, since interactions between the two cichlid predators were low. Prey responded to predator threats by altering activity but not vertical distribution. Results suggest that prey vulnerability is affected by activity and habitat domain overlap with predators and may be lower to newly invading predators, perhaps due to novelty in the interaction." 
}, { "instance_id": "R56945xR56781", "comparison_id": "R56945", "paper_id": "R56781", "text": "Reciprocally beneficial interactions between introduced plants and ants are induced by the presence of a third introduced species Interspecific interactions play an important role in the success of introduced species. For example, the \u2018enemy release\u2019 hypothesis posits that introduced species become invasive because they escape top\u2013down regulation by natural enemies while the \u2018invasional meltdown\u2019 hypothesis posits that invasions may be facilitated by synergistic interactions between introduced species. Here, we explore how facilitation and enemy release interact to moderate the potential effect of a large category of positive interactions \u2013 protection mutualisms. We use the interactions between an introduced plant (Japanese knotweed Fallopia japonica), an introduced herbivore (Japanese beetle Popillia japonica), an introduced ant (European red ant Myrmica rubra), and native ants and herbivores in riparian zones of the northeastern United States as a model system. Japanese knotweed produces sugary extrafloral nectar that is attractive to ants, and we show that both sugar reward production and ant attendance increase when plants experience a level of leaf damage that is typical in the plants\u2019 native range. Using manipulative experiments at six sites, we demonstrate low levels of ant patrolling, little effect of ants on herbivory rates, and low herbivore pressure during midsummer. Herbivory rates and the capacity of ants to protect plants (as evidenced by effects of ant exclusion) increased significantly when plants were exposed to introduced Japanese beetles that attack plants in the late summer. Beetles were also associated with greater on-plant foraging by ants, and among-plant differences in ant-foraging were correlated with the magnitude of damage inflicted on plants by the beetles. Last, we found that sites occupied by introduced M. 
rubra ants almost invariably included Japanese knotweed. Thus, underlying variation in the spatiotemporal distribution of the introduced herbivore influences the provision of benefits to the introduced plant and to the introduced ant. More specifically, the presence of the introduced herbivore converts an otherwise weak interaction between two introduced species into a reciprocally beneficial mutualism. Because the prospects for facilitation are linked to the prospects for enemy release in protection mutualisms, species" }, { "instance_id": "R56945xR56863", "comparison_id": "R56945", "paper_id": "R56863", "text": "Non-native earthworms promote plant invasion by ingesting seeds and modifying soil properties Abstract Earthworms can have strong direct effects on plant communities through consumption and digestion of seeds, however it is unclear how earthworms may influence the relative abundance and composition of plant communities invaded by non-native species. In this study, earthworms, seed banks, and the standing vegetation were sampled in a grassland of central California. Our objectives were i) to examine whether the abundances of non-native, invasive earthworm species and non-native grassland plant species are correlated, and ii) to test whether seed ingestion by these worms alters the soil seed bank by evaluating the composition of seeds in casts relative to uningested soil. Sampling locations were selected based on historical land-use practices, including presence or absence of tilling, and revegetation by seed using Phalaris aquatica . Only non-native earthworm species were found, dominated by the invasive European species Aporrectodea trapezoides . Earthworm abundance was significantly higher in the grassland blocks dominated by non-native plant species, and these sites had higher carbon and moisture contents. Earthworm abundance was also positively related to increased emergence of non-native seedlings, but had no effect on that of native seedlings. 
Plant species richness and total seedling emergence were higher in casts than in uningested soils. This study suggests that there is a potential effect of non-native earthworms in promoting non-native and likely invasive plant species within grasslands, due to seed-plant-earthworm interactions via soil modification or to seed ingestion by earthworms and subsequent cast effects on grassland dynamics. This study supports a growing body of literature for earthworms as ecosystem engineers but highlights the relative importance of considering non-native-native interactions with the associated plant community." }, { "instance_id": "R56945xR56793", "comparison_id": "R56945", "paper_id": "R56793", "text": "Assessing the potential to restore historic grazing ecosystems with tortoise ecological replacements The extinction of large herbivores, often keystone species, can dramatically modify plant communities and impose key biotic thresholds that may prevent an ecosystem returning to its previous state and threaten native biodiversity. A potentially innovative, yet controversial, landscape-based long-term restoration approach is to replace missing plant-herbivore interactions with non-native herbivores. Aldabran giant (Aldabrachelys gigantea) and Madagascan radiated (Astrochelys radiata) tortoises, taxonomically and functionally similar to the extinct Mauritian giant tortoises (Cylindraspis spp.), were introduced to Round Island, Mauritius, in 2007 to control the non-native plants that were threatening persistence of native species. We monitored the response of the plant community to tortoise grazing for 11 months in enclosures before the tortoises were released and, compared the cost of using tortoises as weeders with the cost of using manual labor. At the end of this period, plant biomass; vegetation height and cover; and adult, seedling, flower, and seed abundance were 3-136 times greater in adjacent control plots than in the tortoise enclosures. 
After their release, the free-roaming tortoises grazed on most non-native plants and significantly reduced vegetation cover, height, and seed production, reflecting findings from the enclosure study. The tortoises generally did not eat native species, although they consumed those native species that increased in abundance following the eradication of mammalian herbivores. Our results suggest that introduced non-native tortoises are a more cost-effective approach to control non-native vegetation than manual weeding. Numerous long-term outcomes (e.g., change in species composition and soil seed bank) are possible following tortoise releases. Monitoring and adaptive management are needed to ensure that the replacement herbivores promote the recovery of native plants." }, { "instance_id": "R56945xR56634", "comparison_id": "R56945", "paper_id": "R56634", "text": "The effect of herbivory on seedling survival of the invasive exotic species Pinus radiata and Eucalyptus globulus in a Mediterranean ecosystem of central Chile Herbivory may be an important factor affecting seedling survival of exotic species invading new habitats. We evaluated the effect of vertebrate herbivory on the seedling survival of two widely planted and invasive tree species (Pinus radiata and Eucalyptus globulus), in a Mediterranean-type ecosystem of central Chile. An important role of herbivory on seedling survival of these two species in their introduced ranges has previously been documented. However, this has mainly been evaluated in forest plantations where habitat and vegetation conditions differ from wild habitats in which invasion occurs. We planted seedlings with and without protection against vertebrate herbivores in different aspects (a mesic south-facing slope and a xeric north-facing slope) and vegetation cover (open sites and sites with patchy tree cover). 
We found that regardless of aspect or vegetation cover, herbivory, in this case mainly caused by exotic vertebrates, significantly and negatively affected seedling survival of both species. However, while the effect of herbivory on P. radiata was significant in every vegetation and habitat condition, for E. globulus, the effect of herbivory was significant only for open sites in the mesic habitat. These results suggest that, as observed in forestry plantations, vertebrate herbivory may constrain seedling establishment of these two exotic trees and potentially impede the invasion. However, the importance of herbivory in controlling exotic species may vary depending on the vegetation and habitat conditions in some species such as E. globulus." }, { "instance_id": "R56945xR56553", "comparison_id": "R56945", "paper_id": "R56553", "text": "Interaction and impacts of two introduced species on a soft-sediment marine assemblage in SE Tasmania Introduced species are having major impacts in terrestrial, freshwater and marine ecosystems world-wide. It is increasingly recognised that effects of multiple species often cannot be predicted from the effect of each species alone, due to complex interactions, but most investigations of invasion impacts have examined only one non-native species at a time and have not addressed the interactive effects of multiple species. We conducted a field experiment to compare the individual and combined effects of two introduced marine predators, the northern Pacific seastar Asterias amurensis and the European green crab Carcinus maenas, on a soft-sediment invertebrate assemblage in Tasmania. Spatial overlap in the distribution of these invaders is just beginning in Tasmania, and appears imminent as their respective ranges expand, suggesting a strong overlap in food resources will result from the shared proclivity for bivalve prey. A. amurensis and C. 
maenas provide good models to test the interaction between multiple introduced predators, because they leave clear predator-specific traces of their predatory activity for a number of common prey taxa (bivalves and gastropods). Our experiments demonstrate that both predators had a major effect on the abundance of bivalves, reducing populations of the commercial bivalves Fulvia tenuicostata and Katelysia rhytiphora. The interaction between C. maenas and A. amurensis appears to be one of resource competition, resulting in partitioning of bivalves according to size between predators, with A. amurensis consuming the large and C. maenas the small bivalves. At a large spatial scale, we predict that the combined effect on bivalves may be greater than that due to each predator alone simply because their combined distribution is likely to cover a broader range of habitats. At a smaller scale, in the shallow subtidal, where spatial overlap is expected to be most extensive, our results indicate the individual effects of each predator are likely to be modified in the presence of the other as densities increase. These results further highlight the need to consider the interactive effects of introduced species, especially with continued increases in the number of established invasions." }, { "instance_id": "R56945xR56561", "comparison_id": "R56945", "paper_id": "R56561", "text": "Relationship between alien plants and an alien bird species on Reunion island Many studies have shown that plant or bird invasions can be facilitated by native species, but few have demonstrated the possibility of a positive interaction between introduced species. We analysed the relationships between four invasive alien fleshy-fruited plants, Clidemia hirta, Rubus alceifolius, Lantana camara, Schinus terebinthifolius, and an invasive alien bird, the red-whiskered bulbul Pycnonotus jocosus introduced to Reunion Island (Indian Ocean). 
We compared the distribution of food items in the bulbul diet according to seasons and to abundance classes of this bird. Pycnonotus jocosus is mostly frugivorous and frequently eats the main alien plants (more than 80% frequency of food items). Sites with alien species, such as Clidemia hirta, providing fruits throughout the year supported more birds than sites providing fruits, such as Schinus terebinthifolius, seasonally. The birds facilitated seed germination by removing the pulp of fruit: the final per cent germination (FG) of cleaned seeds was higher than those within the fruit for three of the four plant species and in some cases passage through birds significantly increased FG (Schinus terebinthifolius) or Coefficient of Velocity (CV) (Lantana camara)." }, { "instance_id": "R56945xR56857", "comparison_id": "R56945", "paper_id": "R56857", "text": "Historical anthropogenic disturbances influence patterns of non-native earthworm and plant invasions in a temperate primary forest Time lags are of potentially great importance during biological invasions. For example, significant delays can occur between the human activities permitting the arrival of an invader, the establishment of this new species, and the manifestation of its impacts. In this context, to assess the influence of anthropogenic disturbances, it may become necessary to include a historical perspective. In this study, we reconstructed the history of human activities in a temperate forest now protected as a nature reserve to evaluate the magnitude and duration of the impact of human disturbances (e.g. trails, old quarries), as well as environmental factors, in explaining the probability of occurrence and the intensity of invasion by non-native earthworms and plants. 
The present-day patterns of distribution and intensity of earthworms and plants were better explained by proximity to the oldest human disturbances (initiated more than a century ago) than by proximity to more recent disturbances or to all disturbances combined. We conclude that understanding present-day patterns of non-native species invasions may often require reconstructing the history of human disturbances that occurred decades or even centuries in the past." }, { "instance_id": "R56945xR56879", "comparison_id": "R56945", "paper_id": "R56879", "text": "Invader-invader mutualism influences land snail community composition and alters invasion success of alien species in tropical rainforest Mutualism between invaders may alter a key characteristic of the recipient community, leading to the entry or in situ release of other exotic species. We considered whether mutualism between invasive yellow crazy ant Anoplolepis gracilipes and exotic honeydew-producing scale insects indirectly facilitated land snails (exotic and native) via the removal of a native omnivore, the red land crab Gecarcoidea natalis. In plateau rainforest on Christmas Island, Indian Ocean, the land snail community was surveyed at 28 sites representing four forest states that differed in the density of red crabs, the abundance of yellow crazy ants and management history. One-way ANOVAs and multivariate analyses were used to determine differences in land snail species abundance and composition between forest states. Sample-based rarefaction was used to determine differences in species richness. The removal of the red land crab by supercolonies of yellow crazy ants was associated with a significant increase in the abundance of both invasive (14 species) and native (four species) land snails. Compositional differences in the land snail community were driven most strongly by the significantly greater abundance of a few common species in forest states devoid of red crabs. 
In forest where the crab population had recovered following management for ants, the land snail assemblage did not differ from intact, uninvaded forest. The land snail community was dominated by exotic species that can coexist alongside red crabs in rainforest uninvaded by exotic ants and scale insects. However, the ant\u2013scale mutualism significantly increased land snail abundance and altered their composition indirectly though the alteration of the recipient community. We suggest these constitute \u2018population-release\u2019 secondary invasion in which the impacts of previously successful invaders facilitate a significant increase in abundance of other exotic species already established at low density within the community. Understanding facilitative interactions between invaders and indirect consequences of impacts will provide invaluable insights for conservation in heavily invaded ecosystems." }, { "instance_id": "R56945xR56559", "comparison_id": "R56945", "paper_id": "R56559", "text": "Exotic species replacement: shifting dominance of dreissenid mussels in the Soulanges Canal, upper St. Lawrence River, Canada Abstract During the early 1990s, 2 Eurasian macrofouling mollusks, the zebra mussel Dreissena polymorpha and the quagga mussel D. bugensis, colonized the freshwater section of the St. Lawrence River and decimated native mussel populations through competitive interference. For several years, zebra mussels dominated molluscan biomass in the river; however, quagga mussels have increased in abundance and are apparently displacing zebra mussels from the Soulanges Canal, west of the Island of Montreal. The ratio of quagga mussel biomass to zebra mussel biomass on the canal wall is correlated with depth, and quagga mussels constitute >99% of dreissenid biomass on bottom sediments. This dominance shift did not substantially affect the total dreissenid biomass, which has remained at 3 to 5 kg fresh mass /m2 on the canal walls for nearly a decade. 
The mechanism for this shift is unknown, but may be related to a greater bioenergetic efficiency for quaggas, which attained larger shell sizes than zebra mussels at all depths. Similar events have occurred in the lower Great Lakes where zebra mussels once dominated littoral macroinvertebrate biomass, demonstrating that a well-established and prolific invader can be replaced by another introduced species without prior extinction." }, { "instance_id": "R56945xR56877", "comparison_id": "R56945", "paper_id": "R56877", "text": "Single and interactive effects of deer and earthworms on non-native plants Abstract Understanding drivers of plant invasions is essential to predict and successfully manage invasions. Across forests in North America, increased white-tailed deer (Odocoileus virginianus) abundance and non-native earthworms may facilitate non-native plant invasions. While each agent may exert independent effects, earthworms and deer often co-occur and their combined effects are difficult to predict based solely on knowledge of their individual effects. Using a network of twelve forested sites that differ in earthworm density, we evaluated deer exclusion effects (30 \u00d7 30 m; with an adjacent similar sized unfenced control plot) on cover, growth and reproduction of three non-native plant species: Alliaria petiolata, Berberis thunbergii and Microstegium vimineum. In addition, we assessed interactive effects of deer exclusion and earthworm invasions on B. thunbergii ring-growth. Five years after fence construction, A. petiolata frequency and density, B. thunbergii height, and M. vimineum cover were all significantly lower in fenced compared to open plots. In addition, B. thunbergii ring-growth was significantly lower in fenced compared to open plots, and ring-growth was positively correlated with earthworm density. Moreover, deer access and earthworm density synergistically interacted resulting in highest B. 
thunbergii ring-growth in open plots at sites with higher earthworm density. Results indicate facilitative effects of deer on non-native plant species and highlight the importance of understanding interactions among co-occurring factors in order to understand non-native species success. Successful long-term control of invasive plants may require a reduction in deer abundance, rather than just removing invasive plant species." }, { "instance_id": "R56945xR56702", "comparison_id": "R56945", "paper_id": "R56702", "text": "Invasive leaf resources alleviate density dependence in the invasive mosquito, Aedes albopictus Interactions between invasive species can have important consequences for the speed and impact of biological invasions. Containers occupied by the invasive mosquito, Aedes albopictus Skuse, may be sensitive to invasive plants whose leaves fall into this larval habitat. To examine the potential for interactions between invasive leaf species and larval A. albopictus, we conducted a field survey of leaf material found with A. albopictus in containers in Palm Beach County, Florida and measured density dependent responses of A. albopictus larvae to two invasive and one native leaf species in laboratory experiments. We found increased diversity of leaf species, particularly invasive species, in areas further from the urbanized coast, and a significant positive association between the presence of Schinus terebinthifolious (Brazilian pepper) and the abundance of A. albopictus. In laboratory experiments, we determined that larval growth and survivorship were significantly affected by both larval density and leaf species which, in turn, resulted in higher population performance on the most abundant invasive species (Brazilian pepper) relative to the most abundant native species, Quercus virginiana (live oak). These results suggest invasive leaf species can alleviate density dependent reductions in population performance in A. 
albopictus, and may contribute to its invasion success and potential to spread infectious disease." }, { "instance_id": "R56945xR56688", "comparison_id": "R56945", "paper_id": "R56688", "text": "Combining data-driven methods and lab studies to analyse the ecology of Dikerogammarus villosus Abstract The spread of aquatic invasive species is a worldwide problem. In the aquatic environment, especially exotic Crustacea are affecting biodiversity. The amphipod Dikerogammarus villosus is such an exotic species in Flanders, which is rapidly spreading and probably has a serious impact on aquatic communities. The purpose of the present study was to make use of lab results, field data and modelling techniques to investigate the potential impact of this species on other macroinvertebrates. All types of prey that were used in predator\u2013prey experiments (Gammarus pulex, Gammarus tigrinus, Crangonyx pseudogracilis, Asellus aquaticus, Cloeon dipterum and Chironomus species) were consumed by D. villosus, especially species that were less mobile such as the Chironomus species. The presence of gravel somewhat reduced predation by providing shelter to the prey. Substrate preference experiments indicated that D. villosus preferred a stony substrate. Using decision trees to construct habitat suitability models based on field observations, it could be concluded that D. villosus is mainly found in habitats with an artificial bank structure, a high oxygen saturation and a low conductivity, which corresponds with canals with a good chemical water quality. Moreover, a synecological classification tree, based on the abundance of the taxa present in the macroinvertebrate communities, indicated that the presence of D. villosus negatively affected the presence of the indigenous G. pulex. When the laboratory experiments and the field observations are combined, it can be concluded that D. villosus can seriously affect macroinvertebrate communities in Flanders."
}, { "instance_id": "R56945xR56783", "comparison_id": "R56945", "paper_id": "R56783", "text": "Do exotic pine plantations favour the spread of invasive herbivorous mammals in Patagonia? Changes in land use patterns and vegetation can trigger ecological change in occupancy and community composition. Among the potential ecological consequences of land use change is altered susceptibility to occupancy by invasive species. We investigated the responses of three introduced mammals (red deer, Cervus elaphus; wild boar, Sus scrofa; and European hare, Lepus europaeus) to replacement of native vegetation by exotic pine plantations in the Patagonian forest-steppe ecotone using camera-trap surveys (8633 trap-days). We used logistic regression models to relate species presence with habitat variables at stand and landscape scales. Red deer and wild boar used pine plantations significantly more frequently than native vegetation. In contrast, occurrence of European hares did not differ between pine plantations and native vegetation, although hares were recorded more frequently in firebreaks than in plantations or native vegetation. Presence of red deer and wild boar was positively associated with cover of pine plantations at the landscape scale, and negatively associated with mid-storey cover and diversity at the stand scale. European hares preferred sites with low arboreal and mid-storey cover. Our results suggest that pine plantations promote increased abundances of invasive species whose original distributions are associated with woodlands (red deer and wild boar), and could act as source or pathways for invasive species to new areas." 
}, { "instance_id": "R56945xR56628", "comparison_id": "R56945", "paper_id": "R56628", "text": "Effects of the invasive Barbary ground squirrel (Atlantoxerus getulus) on seed dispersal systems of insular xeric environments The interaction of native and introduced fruit consumers (especially the squirrel Atlantoxerus getulus) with native and non-native fleshy-fruited plant species was studied in the semi-desertic Fuerteventura Island (Canary Islands). The ecological effect of the A. getulus squirrel was compared to that of another introduced mammal (the rabbit Oryctolagus cuniculus) and a native seed disperser (the lizard Gallotia atlantica). Fleshy fruits were an essential food and water resource in this xeric island. Coinciding with maximum fruit availability, consumption of native plant fruits occurred mainly in the spring while introduced plants were ingested in autumn. A significant number of Rubia fruticosa fruits were consumed by lizards, whereas squirrels ate a large amount of Lycium intricatum fruits. Asparagus pastorianus was consumed in similar quantities by each of the three fruit consumers. Fruits from Opuntia were mainly eaten by the squirrels. Lizards should be considered as legitimate seed dispersers for the three native species, while the two mammals are illegitimate dispersal agents. However, in the case of the non-native Opuntia, squirrels produce an invasional meltdown effect in the colonization of this cactus on Fuerteventura Island. While this invasive squirrel plays a significant negative predatory role on native seed plants, it is an effective disperser of some introduced plants. Thus, it constitutes an appropriate example from which to elucidate the mechanisms underlying the disruption impacts of introduced species in island ecosystems." }, { "instance_id": "R56945xR56805", "comparison_id": "R56945", "paper_id": "R56805", "text": "Two co-occorring invasive woody shrubs alter soil properies and promote subdominant invasive species Summary 1. 
Though co-occurrence of invasive plant species is common, few studies have compared the community and ecosystem impacts of invaders when they occur alone and when they co-occur. Prioritization of invasive species management efforts requires sufficient knowledge of impacts \u2013 both among individual invasive species and among different sets of co-occurring invaders \u2013 to target resources towards management of sites expected to undergo the largest change. 2. Here, we observed differences in above- and below-ground impacts of two invasive woody shrubs, Lonicera maackii and Ligustrum sinense, among plots containing both shrubs (mixed), each species singly or lacking both species (control). 3. We found additive and non-additive effects of these co-occurring invasives on plant communities and soil processes. Mixed plots contained two times more subdominant invasive plant species than L. maackii or L. sinense plots. Compared to control plots, mixed plots had three times the potential activity of \u03b2-glucosidase, a carbon-degrading extracellular soil enzyme. L. maackii plots and mixed plots had less acidic soils, while L. sinense plots had higher soil moisture than control plot soils. Differences in soil properties among plots explained plant- and ground-dwelling arthropod community composition as well as the potential microbial function in soils. 4. Synthesis and applications. Our study highlights the importance of explicitly studying the impacts of co-occurring invasive plant species singly and together. Though Lonicera maackii and Ligustrum sinense have similar effects on ecosystem structure and function when growing alone, our data show that two functionally similar invaders can have non-additive impacts on ecosystems. These results suggest that sites with both species should be prioritized for invasive plant management over sites containing only one of these species.
Furthermore, this study provides a valuable template for future studies exploring how and when invasion by co-occurring species alters above- and below-ground function in ecosystems with different traits." }, { "instance_id": "R56945xR56803", "comparison_id": "R56945", "paper_id": "R56803", "text": "Experimental evidence for indirect facilitation among invasive plants Summary Facilitation among species may promote non-native plant invasions through alteration of environmental conditions, enemies or mutualists. However, the role of non-trophic indirect facilitation in invasions has rarely been examined. We used a long-term field experiment to test for indirect facilitation by invasions of Microstegium vimineum (stiltgrass) on a secondary invasion of Alliaria petiolata (garlic mustard) by introducing Alliaria seed into replicated plots previously invaded experimentally by Microstegium. Alliaria more readily colonized control plots without Microstegium but produced almost seven times more biomass and nearly four times as many siliques per plant in Microstegium-invaded plots. Improved performance of Alliaria in Microstegium-invaded plots compared to control plots overwhelmed differences in total number of plants such that, on average, invaded plots contained 327% greater total Alliaria biomass and 234% more total siliques compared to control plots. The facilitation of Alliaria in Microstegium-invaded plots was associated with an 85% reduction in the biomass of resident species at the peak of the growing season and significantly greater light availability in Microstegium-invaded than control plots early in the growing season. Synthesis. Our results demonstrate that an initial plant invasion associated with suppression of resident species and increased resource availability can facilitate a secondary plant invasion. 
Such positive interactions among species with similar habitat requirements, but offset phenologies, may exacerbate invasions and their impacts on native ecosystems." }, { "instance_id": "R56945xR56535", "comparison_id": "R56945", "paper_id": "R56535", "text": "Invasion of pollination networks on oceanic islands: importance of invader complexes and endemic super generalists Abstract. The structure of pollination networks is described for two oceanic islands, the Azorean Flores and the Mauritian Ile aux Aigrettes. At each island site, all interactions between endemic, non-endemic native and introduced plants and pollinators were mapped. Linkage level, i.e. number of species interactions per species, was significantly higher for endemic species than for non-endemic native and introduced species. Linkage levels of the two latter categories were similar. Nine types of interaction may be recognized among endemic, non-endemic native and introduced plants and pollinators. Similar types had similar frequencies in the two networks. Specifically, we looked for the presence of \u2018invader complexes\u2019 of mutualists, defined as groups of introduced species interacting more with each other than expected by chance and thus facilitating each other\u2019s establishment. On both islands, observed frequencies of interactions between native (endemic and non-endemic) and introduced pollinators and plants differed from random. Introduced pollinators and plants interacted less than expected by chance. Thus, the data did not support the existence of invader complexes. Instead, our study suggested that endemic super-generalist species, i.e. pollinators or plant species with a very wide pollination niche, include new invaders in their set of food plants or pollinators and thereby improve establishment success of the invaders. Reviewing other studies, super generalists seem to be a widespread island phenomenon, i.e. 
island pollination networks include one or a few species with a very high generalization level compared to co-occurring species. Low density of island species may lead to low interspecific competition, high abundance and ultimately wide niches and super generalization." }, { "instance_id": "R56945xR56758", "comparison_id": "R56945", "paper_id": "R56758", "text": "Herbivory by an introduced Asian weevil negatively affects population growth of an invasive Brazilian shrub in Florida The enemy release hypothesis (ERH) is often cited to explain why some plants successfully invade natural communities while others do not. This hypothesis maintains that plant populations are regulated by coevolved enemies in their native range but are relieved of this pressure where their enemies have not been co-introduced. Some studies have shown that invasive plants sustain lower levels of herbivore damage when compared to native species, but how damage affects fitness and population dynamics remains unclear. We used a system of co-occurring native and invasive Eugenia congeners in south Florida (USA) to experimentally test the ERH, addressing deficiencies in our understanding of the role of natural enemies in plant invasion at the population level. Insecticide was used to experimentally exclude insect herbivores from invasive Eugenia uniflora and its native co-occurring congeners in the field for two years. Herbivore damage, plant growth, survival, and population growth rates for the three species were then compared for control and insecticide-treated plants. Our results contradict the ERH, indicating that E. uniflora sustains more herbivore damage than its native congeners and that this damage negatively impacts stem height, survival, and population growth. In addition, most damage to E. uniflora, a native of Brazil, is carried out by Myllocerus undatus, a recently introduced weevil from Sri Lanka, and M. undatus attacks a significantly greater proportion of E.
uniflora leaves than those of its native congeners. This interaction is particularly interesting because M. undatus and E. uniflora share no coevolutionary history, having arisen on two separate continents and come into contact on a third. Our study is the first to document negative population-level effects for an invasive plant as a result of the introduction of a novel herbivore. Such inhibitory interactions are likely to become more prevalent as suites of previously noninteracting species continue to accumulate and new communities assemble worldwide." }, { "instance_id": "R56945xR56597", "comparison_id": "R56945", "paper_id": "R56597", "text": "Strong below-ground competition shapes tree regeneration in invasive Cinnamomum verum forests Summary 1 Plant species invading nutrient-poor ecosystems are likely to have their greatest impact on the native plant community by competing for resources below-ground. We investigated how root competition by an invasive tree, Cinnamomum verum, affects regeneration in nutrient-poor tropical secondary forests, in the Seychelles. 2 We performed three trenching experiments to investigate the effects of severing the root systems of mature trees on the growth of juveniles. These experiments had the following objectives: (i) to compare the responses of native and invasive saplings to release from root competition; (ii) to compare how seedlings ( 50 cm tall) of C. verum respond to trenching; and (iii) to compare the response of C. verum seedlings to trenching in forest stands with and without C. verum as the dominant species. 3 The results indicate that the dense topsoil root mat produced by mature C. verum trees suppresses the growth of young trees, mainly by increasing competition for scarce nutrients. Growth responses to trenching were stronger for seedlings than saplings, and stronger for juveniles of invasive than of native species. We conclude that stands of C. 
verum exert a strong below-ground filtering effect on seedling regeneration. This effect is likely to influence secondary forest succession by selectively reducing the establishment of invasive and small-seeded species. 4 Because of the bias in invasion biology towards relatively nutrient-rich, productive ecosystems, few studies have investigated the role of below-ground resource competition in plant invasions. Our results for an infertile, phosphorus-poor ecosystem show that root competition by an alien species can exert a strong influence on forest regeneration. We suggest that this mechanism may be of general importance in nutrient-poor tropical forests invaded by alien tree species." }, { "instance_id": "R56945xR56656", "comparison_id": "R56945", "paper_id": "R56656", "text": "Impact of alien plant invaders on pollination networks in two archipelagos Mutualistic interactions between plants and animals promote integration of invasive species into native communities. In turn, the integrated invaders may alter existing patterns of mutualistic interactions. Here we simultaneously map in detail effects of invaders on parameters describing the topology of both plant-pollinator (bi-modal) and plant-plant (uni-modal) networks. We focus on the invader Opuntia spp., a cosmopolitan alien cactus. We compare two island systems: Tenerife (Canary Islands) and Menorca (Balearic Islands). Opuntia was found to modify the number of links between plants and pollinators, and was integrated into the new communities via the most generalist pollinators, but did not affect the general network pattern. The plant uni-modal networks showed disassortative linkage, i.e. species with many links tended to connect to species with few links. Thus, by linking to generalist natives, Opuntia remained peripheral to network topology, and this is probably why native network properties were not affected at least in one of the islands. 
We conclude that the network analytical approach is indeed a valuable tool to evaluate the effect of invaders on native communities." }, { "instance_id": "R56945xR56929", "comparison_id": "R56945", "paper_id": "R56929", "text": "Asiatic Callosciurus squirrels as seed dispersers of exotic plants in the Pampas Abstract Seed dispersal by exotic mammals exemplifies mutualistic interactions that can modify the habitat by facilitating the establishment of certain species. We examined the potential for endozoochoric dispersal of exotic plants by Callosciurus erythraeus introduced in the Pampas Region of Argentina. We identified and characterized entire and damaged seeds found in squirrel faeces and evaluated the germination capacity and viability of entire seeds in laboratory assays. We collected 120 samples of squirrel faeces that contained 883 pellets in seasonal surveys conducted between July 2011 and June 2012 at 3 study sites within the main invasion focus of C. erythraeus in Argentina. We found 226 entire seeds in 21% of the samples belonging to 4 species of exotic trees and shrubs. Germination in laboratory assays was recorded for Morus alba and Casuarina sp.; however, germination percentage and rate was higher for seeds obtained from the fruits than for seeds obtained from the faeces. The largest size of entire seeds found in the faeces was 4.2 \u00d7 4.0 mm, whereas the damaged seeds had at least 1 dimension \u2265 4.7 mm. Our results indicated that C. erythraeus can disperse viable seeds of at least 2 species of exotic trees. C. erythraeus predated seeds of other naturalized species in the region. The morphometric description suggested a restriction on the maximum size for the passage of entire seeds through the digestive tract of squirrels, which provides useful information to predict its role as a potential disperser or predator of other species in other invaded communities." 
}, { "instance_id": "R56945xR56569", "comparison_id": "R56945", "paper_id": "R56569", "text": "Recent biological invasion may hasten invasional meltdown by accelerating historical introductions Biological invasions are rapidly producing planet-wide changes in biodiversity and ecosystem function. In coastal waters of the U.S., >500 invaders have become established, and new introductions continue at an increasing rate. Although most species have little impact on native communities, some initially benign introductions may occasionally turn into damaging invasions, although such introductions are rarely documented. Here, I demonstrate that a recently introduced crab has resulted in the rapid spread and increase of an introduced bivalve that had been rare in the system for nearly 50 yr. This increase has occurred through the positive indirect effects of predation by the introduced crab on native bivalves. I used field and laboratory experiments to show that the mechanism is size-specific predation interacting with the different reproductive life histories of the native (protandrous hermaphrodite) and the introduced (dioecious) bivalves. These results suggest that positive interactions among the hundreds of introduced species that are accumulating in coastal systems could result in the rapid transformation of previously benign introductions into aggressively expanding invasions. Even if future management efforts reduce the number of new introductions, given the large number of species already present, there is a high potential for positive interactions to produce many future management problems. Given that invasional meltdown is now being documented in natural systems, I suggest that coastal systems may be closer to this threshold than currently believed." 
}, { "instance_id": "R56945xR56589", "comparison_id": "R56945", "paper_id": "R56589", "text": "A null model of temporal trends in biological invasion records Biological invasions are a growing aspect of global biodiversity change. In many regions, introduced species richness increases supralinearly over time. This does not, however, necessarily indicate increasing introduction rates or invasion success. We develop a simple null model to identify the expected trend in invasion records over time. For constant introduction rates and success, the expected trend is exponentially increasing. Model extensions with varying introduction rate and success can also generate exponential distributions. We then analyse temporal trends in aquatic, marine and terrestrial invasion records. Most data sets support an exponential distribution (15/16) and the null invasion model (12/16). Thus, our model shows that no change in introduction rate or success need be invoked to explain the majority of observed trends. Further, an exponential trend does not necessarily indicate increasing invasion success or 'invasional meltdown', and a saturating trend does not necessarily indicate decreasing success or biotic resistance." }, { "instance_id": "R56945xR56919", "comparison_id": "R56945", "paper_id": "R56919", "text": "Strong invaders are strong defenders - implications for the resistance of invaded communities Many ecosystems receive a steady stream of non-native species. How biotic resistance develops over time in these ecosystems will depend on how established invaders contribute to subsequent resistance. If invasion success and defence capacity (i.e. contribution to resistance) are correlated, then community resistance should increase as species accumulate. If successful invaders also cause most impact (through replacing native species with low defence capacity) then the effect will be even stronger. 
If successful invaders instead have weak defence capacity or even facilitative attributes, then resistance should decrease with time, as proposed by the invasional meltdown hypothesis. We analysed 1157 introductions of freshwater fish in Swedish lakes and found that species' invasion success was positively correlated with their defence capacity and impact, suggesting that these communities will develop stronger resistance over time. These insights can be used to identify scenarios where invading species are expected to cause large impact." }, { "instance_id": "R56945xR56789", "comparison_id": "R56945", "paper_id": "R56789", "text": "Exotic mammals disperse exotic fungi that promote invasion by exotic trees Biological invasions are often complex phenomena because many factors influence their outcome. One key aspect is how non-natives interact with the local biota. Interaction with local species may be especially important for exotic species that require an obligatory mutualist, such as Pinaceae species that need ectomycorrhizal (EM) fungi. EM fungi and seeds of Pinaceae disperse independently, so they may use different vectors. We studied the role of exotic mammals as dispersal agents of EM fungi on Isla Victoria, Argentina, where many Pinaceae species have been introduced. Only a few of these tree species have become invasive, and they are found in high densities only near plantations, partly because these Pinaceae trees lack proper EM fungi when their seeds land far from plantations. Native mammals (a dwarf deer and rodents) are rare around plantations and do not appear to play a role in these invasions. With greenhouse experiments using animal feces as inoculum, plus observational and molecular studies, we found that wild boar and deer, both non-native, are dispersing EM fungi. Approximately 30% of the Pinaceae seedlings growing with feces of wild boar and 15% of the seedlings growing with deer feces were colonized by non-native EM fungi. 
Seedlings growing in control pots were not colonized by EM fungi. We found a low diversity of fungi colonizing the seedlings, with the hypogeous Rhizopogon as the most abundant genus. Wild boar, a recent introduction to the island, appear to be the main animal dispersing the fungi and may be playing a key role in facilitating the invasion of pine trees and even triggering their spread. These results show that interactions among non-natives help explain pine invasions in our study area." }, { "instance_id": "R56945xR56895", "comparison_id": "R56945", "paper_id": "R56895", "text": "Differential benthic community response to increased habitat complexity mediated by an invasive barnacle Invasive species threaten native ecosystems worldwide. However, these species can interact positively with local communities, increasing their richness, or the abundance of some species. Many invasive species are capable of influencing the habitat itself, by ameliorating physical stress and facilitating the colonization and survival of other organisms. Barnacles are common engineer species that can change the physical structure of the environment, its complexity, and heterogeneity through their own structure. Balanus glandula is a native barnacle of the rocky shores of the west coast of North America. In Argentina, this invasive species not only colonizes rocky shores but it also has successfully colonized soft-bottom salt marshes, where hard substrata are a limiting resource. In these environments, barnacles form three-dimensional structures that increase the structural complexity of the invaded salt marshes. In this work, we compared the composition, density, richness, and diversity of the macroinvertebrate assemblages associated with habitats of different structural complexity in two Patagonian salt marshes where B. glandula is well established.
Our results showed differences in the relative distribution and abundances of the invertebrate species between habitats of different complexities. Furthermore, the response of the communities to the changes in the structural complexity generated by B. glandula was different in the two marshes studied. This highlights the fact that B. glandula facilitates other invertebrates and affects community structure, mainly where the settlement substrata (Spartina vs. mussels) are not functionally similar to the barnacle. Thus, our work shows that the rocky shore B. glandula is currently a critical structuring component of the native invertebrate community of soft-bottom environments where this species was introduced along the coast of southern South America." }, { "instance_id": "R56945xR56885", "comparison_id": "R56945", "paper_id": "R56885", "text": "Feeding preferences of an invasive Ponto-Caspian goby for native and non-native gammarid prey SUMMARY 1. When an invasive predator encounters native and invasive prey, two scenarios are possible: the predator may benefit from the presence of na\u00efve native prey or choose prey from its region of origin, reflecting their common evolutionary history. 2. To determine interactions between an invasive predator and native and invasive prey, we used the Ponto-Caspian racer goby Babka gymnotrachelus as predator and gammarids as prey: native Gammarus fossarum and Ponto-Caspian Dikerogammarus villosus and Pontogammarus robustoides. We hypothesised that prey origin would affect fish preferences and growth rate and conducted a series of laboratory experiments on fish predation and growth and estimated profitability of prey of different origin. 3. The goby preferred native prey to the Ponto-Caspian gammarids, irrespective of prey motility, the presence of shelters or waterborne chemical cues. Moreover, fish grew better when fed native prey. 4.
Thus, we suggest that fish selectivity was based on the assessment of prey quality during direct contact with gammarids. A diet consisting of Ponto-Caspian gammarids did not facilitate an invader originating from the same region, which benefited more from the presence of a local prey species. 5. Ponto-Caspian gammarids and gobies are successful invaders in inland waters, usually main rivers. The gobies, in contrast to the invasive gammarids, enter smaller tributaries that serve as refugia for native gammarids. We show that the gobies may benefit from the presence of native prey species in such locations. Keywords: Babka gymnotrachelus, invasional meltdown, invasive species, predator\u2013prey relationship, prey quality" }, { "instance_id": "R56945xR56839", "comparison_id": "R56945", "paper_id": "R56839", "text": "Novel interactions between non-native mammals and fungi facilitate establishment of invasive pines Summary The role of novel ecological interactions between mammals, fungi and plants in invaded ecosystems remains unresolved, but may play a key role in the widespread successful invasion of pines and their ectomycorrhizal fungal associates, even where mammal faunas originate from different continents to trees and fungi as in New Zealand. We examine the role of novel mammal associations in dispersal of ectomycorrhizal fungal inoculum of North American pines (Pinus contorta, Pseudotsuga menziesii), and native beech trees (Lophozonia menziesii) using faecal analyses, video monitoring and a bioassay experiment. Both European red deer (Cervus elaphus) and Australian brushtail possum (Trichosurus vulpecula) pellets contained spores and DNA from a range of native and non-native ectomycorrhizal fungi. Faecal pellets from both animals resulted in ectomycorrhizal infection of pine seedlings with fungal genera Rhizopogon and Suillus, but not with native fungi or the invasive fungus Amanita muscaria, despite video and DNA evidence of consumption of these fungi. 
Native L. menziesii seedlings never developed any ectomycorrhizal infection from faecal pellet inoculation. Synthesis. Our results show that introduced mammals from Australia and Europe facilitate the co-invasion of invasive North American trees and Northern Hemisphere fungi in New Zealand, while we find no evidence that introduced mammals benefit native trees or fungi. This novel tripartite \u2018invasional meltdown\u2019, comprising taxa from three kingdoms and three continents, highlights unforeseen consequences of global biotic homogenization." }, { "instance_id": "R56945xR56811", "comparison_id": "R56945", "paper_id": "R56811", "text": "Niche differentiation among invasive crayfish and their impacts on ecosystem structure and functioning 1. Many aquatic ecosystems sustain multiple invasive species and interactions among them have important implications for ecosystem structure and functioning. Here, we examine interactions among two pairs of invasive crayfish species because of their close proximity and thus chance of sympatric populations in the near future within the Thames catchment, U.K. (signal, Pacifastacus leniusculus and virile crayfish, Orconectes virilis within a river system; red swamp, Procambarus clarkii and Turkish crayfish, Astacus leptodactylus found within a suite of ponds). We address two questions: do sympatric invasive crayfish occupy a smaller niche than their allopatric counterparts due to potential resource competition? and do interactions among invasive species amplify or mitigate one another's impacts on the ecosystem? 2. Two fully factorial mesocosm experiments (one for each crayfish pair) were used to investigate crayfish diet and their impact on benthic invertebrate community structure, benthic algal standing stock and leaf litter decomposition rates in allopatric and sympatric populations, compared with a crayfish-free control. 
We used stable isotope analysis to examine crayfish diet in the mesocosms and in allopatric populations of each species in the Thames catchment. 3. Isotopic niche width did not vary significantly between allopatric and sympatric populations of crayfish in the mesocosm experiments, and isotopic niche partitioning in all the wild populations suggests the invaders can coexist. 4. All four species altered benthic invertebrate community structure but with differing functional effects, often mediated via trophic cascades. Red swamp crayfish predation upon snails evidently promoted benthic algal standing stock via reduction in grazing pressure. However, a trophic cascade whereby the crayfish consumed native invertebrate shredders, causing a reduction in net leaf litter decomposition, was decoupled by red swamp and signal crayfish since they consumed leaf litter directly and thus moderated the cascade to a trickle when in sympatry with Turkish or virile crayfish, respectively. 5. Benthic invertebrate predator abundance was significantly reduced by sympatric red swamp and Turkish crayfish but not independently when in allopatry, indicating an amplified effect overall when in sympatry. 6. Our results suggest that the combined effect of multiple invasions on the ecosystem can reflect either an additive effect of their independent impacts or an amplified effect, which is greater than the sum of their independent impacts. A lack of general pattern in their effects makes any potential management strategy more complex." 
}, { "instance_id": "R56945xR56787", "comparison_id": "R56945", "paper_id": "R56787", "text": "Feeding ecology and ecological impact of an alien 'warm-water' omnivore in cold lakes Abstract The present study attempted to investigate the feeding ecology and ecological impact of Procambarus clarkii, the world's worst invasive crayfish and a recent invader in colder climates, by linking stomach-content analysis with an in situ enclosure experiment in lakes in southern Germany. The stomach-content analysis showed that P. clarkii is a polytrophic omnivore that feeds on macrophytes, detritus and macroinvertebrates. The trophic diversity of its diet was highest in mid-summer and in smaller crayfish. Chironomidae larvae and Dreissena polymorpha were the most preferred prey, whereas sediment-dwelling taxa were rarely consumed. The number of consumed small and agile prey negatively correlated with the crayfish size, suggesting an ontogenetic shift in diet. A five-week enclosure experiment was used to determine the impact of P. clarkii on the basal levels of a typical littoral food web of cold lakes at different crayfish densities (0, 2.5, and 5 crayfish m\u22122). The abundance of aquatic snails sharply decreased with increasing crayfish density and conditioned leaf breakdown was up to five times higher in the presence of crayfish than in the control treatment without crayfish. Crayfish also had a negative effect on macrophyte biomass, resulting from both consumption and uprooting. However, the impact mechanisms and outcomes differed among macrophyte species. In the crayfish treatments, the final biomass of the indigenous Myriophyllum spicatum and Chara sp. was significantly reduced relative to the initially stocked biomass, whereas the alien Elodea nuttallii was able to gain biomass. This finding is consistent with an invasional meltdown scenario, in that P. clarkii indirectly facilitated a dominance of E. nuttallii. Overall, the results concordantly suggest that P. 
clarkii is a keystone species that can profoundly alter recipient communities via direct trophic links and non-consumptive destruction, and may indirectly facilitate other invasive alien species." }, { "instance_id": "R56945xR56668", "comparison_id": "R56945", "paper_id": "R56668", "text": "Destruction without extinction: long-term impacts of an invasive tree species on Gal\u00e1pagos highland vegetation Summary 1. A common belief in invasion ecology is that invasive species are a major threat to biodiversity, but there is little evidence yet that competition from an exotic plant species has led to the extinction of any native plant species at the landscape scale. However, effects of invasive species at community and ecosystem levels can severely compromise conservation goals. 2. Our model species, the red quinine tree (Cinchona pubescens), was introduced to the Galapagos Islands in the 1940s and today extends over at least 11 000 ha in the highlands of Santa Cruz Island. It is also invasive on other oceanic islands. 3. We adopted a long-term approach, analysing permanent plots in the Fern-Sedge vegetation zone over 7 years, to test for impacts of C. pubescens density on resident plant species composition and on microclimate variables. We also tested whether the C. pubescens invasion facilitated the invasion of other species. 4. The rapid pace of the C. pubescens invasion was indicated by a more than doubling of percentage cover, a 4.6-fold increase in mean stand basal area and a 4-fold increase in the number of stems ha\u22121 in 7 years. 5. Photosynthetically active radiation was reduced by 87% under the C. pubescens canopy while precipitation increased because of enhanced fog interception. 6. Cinchona pubescens significantly decreased species diversity and the cover of most species by at least 50%. Endemic herbaceous species were more adversely affected than non-endemic native species. 
Stachys agraria, another invasive species, colonized bare ground that developed under the C. pubescens canopy. 7. The numbers of native, endemic and introduced species in the study area remained constant throughout the 7-year period. 8. Synthesis. This study clearly established C. pubescens as a habitat transformer, although its average cover did not exceed 20%. Despite the fact that no plant species has been lost completely from the study area so far, the introduction of the novel tree life form to a formerly treeless environment led to significant changes in stand structure and environmental conditions and to decreases in species diversity and cover. Such changes clearly conflict with conservation goals as set by the Convention on Biological Diversity." }, { "instance_id": "R56945xR56606", "comparison_id": "R56945", "paper_id": "R56606", "text": "Predicting habitat use and trophic interactions of Eurasian ruffe, round gobies, and zebra mussels in nearshore areas of Great Lakes The Laurentian Great Lakes have been subject to numerous introductions of nonindigenous species, including two recent benthic fish invaders, Eurasian ruffe (Gymnocephalus cernuus) and round gobies (Neogobius melanostomus), as well as the benthic bivalve, zebra mussel (Dreissena polymorpha). These three exotic species, or \u201cexotic triad,\u201d may impact nearshore benthic communities due to their locally high abundances and expanding distributions. Laboratory experiments were conducted to determine (1) whether ruffe and gobies may compete for habitat and invertebrate food in benthic environments, and (2) if zebra mussels can alter those competitive relationships by serving as an alternate food source for gobies. In laboratory mesocosms, both gobies and ruffe preferred cobble and macrophyte areas to open sand either when alone or in sympatry. 
In a 9-week goby\u2013ruffe competition experiment simulating an invasion scenario with a limited food base, gobies grew faster than did ruffe, suggesting that gobies may be competitively superior at low resource levels. When zebra mussels were added in a short-term experiment, the presence or absence of mussels did not affect goby or ruffe growth, as few zebra mussels were consumed. This finding, along with other laboratory evidence, suggests that gobies may prefer soft-bodied invertebrate prey over zebra mussels. Studies of interactions among the \u201cexotic triad\u201d, combined with continued surveillance, may help Great Lakes fisheries managers to predict future population sizes and distributions of these invasive fish, evaluate their impacts on native food webs, and direct possible control measures to appropriate species." }, { "instance_id": "R56945xR56676", "comparison_id": "R56945", "paper_id": "R56676", "text": "Co-invasion by Pinus and its mycorrhizal fungi SUMMARY *The absence of co-evolved mutualists of plants invading a novel habitat is the logical corollary of the more widely recognized 'enemy escape'. To avoid or overcome the loss of mutualists, plants may co-invade with nonnative mutualists, form novel associations with native mutualists or form associations with native cosmopolitan mutualists, which are native but not novel to the invading plant. *We tested these hypotheses by contrasting the ectomycorrhizal fungal communities associated with invasive Pinus contorta in New Zealand with co-occurring endemic Nothofagus solandri var. cliffortioides. *Fungal communities on Pinus were species poor (14 ectomycorrhizal species) and dominated by nonnative (93%) and cosmopolitan fungi (7%). Nothofagus had a species-rich (98 species) fungal community dominated by native Cortinarius and two cosmopolitan fungi. 
*These results support co-invasion by mutualists rather than novel associations as an important mechanism by which plants avoid or overcome the loss of mutualists, consistent with invasional meltdown." }, { "instance_id": "R56945xR56873", "comparison_id": "R56945", "paper_id": "R56873", "text": "Invasive Scotch broom (Cytisus scoparius, Fabaceae) and the pollination success of three Garry oak-associated plant species A growing number of studies have reported an effect of invasive species on the pollination and reproductive success of co-flowering plants, over and above direct competition for resources. In this study, we investigate the effect of the invader Scotch broom (Cytisus scoparius) on the visitation, pollen deposition, and female reproductive output of three co-flowering species (two native, one exotic) of the critically endangered Garry oak grassland ecosystem on the Saanich peninsula of Vancouver Island. The presence of C. scoparius was largely neutral, with the exception of some facilitation of pollen deposition to the native Camassia leichtlinii, the one species exhibiting pollinator overlap with Scotch broom. Yet, this pattern occurred despite a decreased visitation rate from pollinators. There was little observed effect of the invader on the native Collinsia parviflora or the exotic Geranium molle. Because broom was not favoured by any of the observed pollinators, this study provides evidence that the spread of Scotch broom is not due to the reduction of pollination success of natives nor is C. scoparius likely to be facilitating the pollination of other exotics in Garry oak ecosystem remnants." }, { "instance_id": "R56945xR56710", "comparison_id": "R56945", "paper_id": "R56710", "text": "Contrasting effects of an invasive ant on a native and an invasive plant When invasive species establish in new environments, they may disrupt existing or create new interactions with resident species. 
Understanding of the functioning of invaded ecosystems will benefit from careful investigation of resulting species-level interactions. We manipulated ant visitation to compare how invasive ant mutualisms affect two common plants, one native and one invasive, on a sub-tropical Indian Ocean island. Technomyrmex albipes, an introduced species, was the most common and abundant ant visitor to the plants. T. albipes were attracted to extrafloral nectaries on the invasive tree (Leucaena leucocephala) and deterred the plant\u2019s primary herbivore, the Leucaena psyllid (Heteropsylla cubana). Ant exclusion from L. leucocephala resulted in decreased plant growth and seed production by 22% and 35%, respectively. In contrast, on the native shrub (Scaevola taccada), T. albipes frequently tended sap-sucking hemipterans, and ant exclusion resulted in 30% and 23% increases in growth and fruit production, respectively. Stable isotope analysis confirmed the more predacious and herbivorous diets of T. albipes on the invasive and native plants, respectively. Thus the ants\u2019 interactions protect the invasive plant from its main herbivore while also exacerbating the effects of herbivores on the native plant. Ultimately, the negative effects on the native plant and positive effects on the invasive plant may work in concert to facilitate invasion by the invasive plant. Our findings underscore the importance of investigating facilitative interactions in a community context and the multiple and diverse interactions shaping novel ecosystems." }, { "instance_id": "R56945xR56905", "comparison_id": "R56945", "paper_id": "R56905", "text": "Introduced blackbirds and song thrushes: useful substitutes for lost mid-sized native frugivores, or weed vectors The New Zealand avifauna has declined from human impacts, which might leave some larger-seeded native plants vulnerable to dispersal failure. 
We studied fruit dispersal in a lowland secondary forest near Kaikoura, where the only remaining native frugivores are relatively small (silvereye Zosterops lateralis, and bellbird Anthornis melanura). We tested whether two larger exotic frugivores (blackbird Turdus merula and song thrush T. philomelos) dispersed native plants with seeds too large for the two smaller native frugivores. Diet breadth was measured by identifying seeds in the faeces of 221 mist-netted birds, and by observations of birds foraging. We then compared the plant species dispersed to the range of locally available fruits. All four bird species had varied diets (6\u20139 plant species per bird species) that differed significantly, although Coprosma robusta was always the most-eaten fruit. As predicted, the maximum fruit size eaten was larger for exotic birds (11.3 mm diameter) than natives (7.4\u20137.7 mm diameter), but all birds ate mainly smaller fruits. However, 7/21 fruiting plant species were not seen to be dispersed by any species, and the chance of being undispersed was independent of fruit size. Blackbirds and song thrushes jointly dispersed all four woody weeds with fruits >7.5 mm diameter, but neither of the two similar-sized native plants. Although the two species of exotic birds dispersed some native plants, our study suggests that their net effect is negative through facilitating the spread of invasive weeds. Studies evaluating the contribution of exotic frugivores to novel plant communities need to distinguish potential effects (what the frugivores might be capable of doing) from actual effects (what the frugivores are observed doing)." }, { "instance_id": "R56945xR56861", "comparison_id": "R56945", "paper_id": "R56861", "text": "Host-plant stickiness disrupts novel ant-mealybug association Abstract Ants commonly engage in facultative mutualisms with honeydew-excreting homopterans such as mealybugs and other scale insects. 
Attendant ants obtain a high-energy carbohydrate of predictable availability, while the homopteran trophobiont gains protection from natural enemies and potential benefits of sanitation (honeydew removal), maintenance of host-plant quality, and transport. In a California, USA, arboretum, we observed large numbers of dead and dying Argentine ants (Linepithema humile) that had become entrapped on viscid flower buds and flowers of South African species of Erica as they attempted to tend a South African mealybug (Delottococcus confusus). Mealybugs on viscid ericas were found on clusters of small buds before they had become sticky (and later in other areas that minimized exposure to stickiness). As buds developed, they became viscid, enclosing mealybugs within sticky flower parts and precluding further attendance by ants. Ants, however, were able to tend mealybugs without disruption on nonsticky ericas. Counts (n = 118) of haphazardly chosen stems of sticky ericas showed that significantly more dead ants were present on mealybug-infested stems. We suggest that evolutionary histories help explain the disparate outcomes for ants and mealybugs on sticky ericas. The Argentine ant lacks an evolutionary history with sticky ericas, whereas the native South African mealybug presumably shares an evolutionary relationship with species of Erica in South Africa\u2019s Cape Floristic Region. We propose that the mealybug\u2019s behavior and waxy coating are adaptations for circumventing plant stickiness. Our observations might represent the first documentation of plant stickiness disrupting an ant\u2013homopteran association." 
}, { "instance_id": "R56945xR56831", "comparison_id": "R56945", "paper_id": "R56831", "text": "Habitat degradation and introduction of exotic plants favor persistence of invasive species and population growth of native polyphagous fruit fly pests in a Northwestern Argentinean mosaic Expansion of agricultural land is one of the most significant human alterations to the global environment because it entails not only native habitat loss but also introduction of exotic species. These alterations affect habitat structure and arthropod dynamics, such as those among host plants, tephritid fruit flies, and their natural enemies. We compared abundance and dynamics of pest and non-pest tephritids and their natural enemies over a mosaic of habitats differing in structure, diversity and disturbance history on the Sierra de San Javier in Tucuman, Argentina. Our prediction was that conserved habitats would be more resistant to the establishment and spread of invasive tephritid species due in part to a greater abundance of natural enemies, a greater diversity of native species in the same family and trophic level, and a greater wealth of biotic interactions. We further predicted that native species with broad host ranges should be more sensitive to habitat loss yet more competitive in less disturbed habitats than generalist native and exotic species. We found that environmental degradation, and introduction and spread of exotic host plants strongly affected distribution patterns, abundance, and phenology of native and exotic tephritids. Monophagous tephritid species and several specialized parasitoids were more sensitive to habitat loss than polyphagous species and parasitoids exhibiting a wide host range. In contrast, native monophagous species and native parasitoids appeared to exclude the invasive Mediterranean fruit fly from conserved patches of native vegetation. 
Nevertheless, the Mediterranean fruit fly persisted in uncontested exotic host plants and thrived in highly degraded urban landscapes." }, { "instance_id": "R56945xR56817", "comparison_id": "R56945", "paper_id": "R56817", "text": "Can differential predation of native and alien corixids explain the success of Trichocorixa verticalis verticalis (Hemiptera, Corixidae) in the Iberian Peninsula? Invasive species represent an increasing fraction of aquatic biota. However, studies on the role and consequences of facilitative interactions among aliens remain scarce. Here, we investigated whether the spread of the alien water boatman Trichocorixa verticalis verticalis in the Iberian Peninsula is related to reduced mortality from predation compared with native Corixidae, especially since Trichocorixa co-occurs with the invasive fishes Gambusia holbrooki and Fundulus heteroclitus. All three invaders have a common native range in North America and are widespread in and around Do\u00f1ana in SW Spain. Using laboratory experiments, we compared the predation rates by the two exotic fish and native Odonata larvae on Trichocorixa and the native Sigara lateralis. We found no evidence to suggest that Trichocorixa suffers lower predation rates. However, when both corixids were mixed together, predation of Trichocorixa by Odonata larvae was higher. Odonata larvae were size-limited predators and the proportion of corixids ingested was positively correlated with mask length. Since Trichocorixa is smaller than its native competitors, this may explain their higher susceptibility to predation by Odonata. This may be one of various factors explaining why Trichocorixa is particularly dominant in saline habitats where Odonata are rare, while it is still scarce in fresh waters." 
}, { "instance_id": "R56945xR56541", "comparison_id": "R56945", "paper_id": "R56541", "text": "Seed and seedling demography of invasive and native trees of subtropical Pacific islands Abstract Bischofia javanica is an invasive tree of the Bonin Islands in the western Pacific, Japan. This species has aggressive growth, competitively replacing native trees in the natural forest of the islands. The aim of this study was to examine seed and seedling factors which might confer an advantage to the establishment of Bischofia over native trees. During a 5-yr period we compared the demographic parameters of early life history of Bischofia and Elaeocarpus photiniaefolius, a native canopy dominant, in actively invaded forests. Predation of Elaeocarpus seeds by introduced rodents was much higher before (27.9\u201332.9%) and after (41.3\u2013100%) dispersal of seeds than that of B. javanica. Most Elaeocarpus seeds lost viability ca. 6 mo after burial in forest soil while some seeds of Bischofia remained viable for more than 2 yr. Seedling survival in the first 2 yr was much higher in Bischofia (16%) than in Elaeocarpus (1.3%). The high persistence of Bischofia in the shade, coupled to its rapid acclimation to high light levels, is an unusual combination because in forest tree species there is generally a trade-off between seedling survival in the shade and response to canopy opening. Compared with a native canopy dominant, greater seed longevity, lower seed predation by introduced rodents, longer fruiting periods and the ability to form seedling banks under closed canopy appear to have contributed to the invasive success of Bischofia on the Bonin Islands. Nomenclature: Satake et al. (1989)." 
}, { "instance_id": "R56945xR56889", "comparison_id": "R56945", "paper_id": "R56889", "text": "Effects of precipitation change and neighboring plants on population dynamics of Bromus tectorum Shifting precipitation patterns resulting from global climate change will influence the success of invasive plant species. In the Front Range of Colorado, Bromus tectorum (cheatgrass) and other non-native winter annuals have invaded grassland communities and are becoming more abundant. As the global climate warms, more precipitation may fall as rain rather than snow in winter, and an increase in winter rain could benefit early-growing winter annuals, such as B. tectorum, to the detriment of native species. In this study we measured the effects of simulated changes in seasonal precipitation and presence of other plant species on population growth of B. tectorum in a grassland ecosystem near Boulder, Colorado, USA. We also performed elasticity analyses to identify life transitions that were most sensitive to precipitation differences. In both study years, population growth rates were highest for B. tectorum growing in treatments receiving supplemental winter precipitation and lowest for those receiving the summer drought treatment. Survival of seedlings to flowering and seed production contributed most to population growth in all treatments. Biomass of neighboring native plants was positively correlated with reduced population growth rates of B. tectorum. However, exotic plant biomass had no effect on population growth rates. This study demonstrates how interacting effects of climate change and presence of native plants can influence the population growth of an invasive species. Overall, our results suggest that B. tectorum will become more invasive in grasslands if the seasonality of precipitation shifts towards wetter winters and allows B. tectorum to grow when competition from native species is low." 
}, { "instance_id": "R56945xR56563", "comparison_id": "R56945", "paper_id": "R56563", "text": "Positive interactions between nonindigenous species facilitate transport by human vectors Numerous studies have shown how interactions between nonindigenous species (NIS) can accelerate the rate at which they establish and spread in invaded habitats, leading to an \"invasional meltdown.\" We investigated facilitation at an earlier stage in the invasion process: during entrainment of propagules in a transport pathway. The introduced bryozoan Watersipora subtorquata is tolerant of several antifouling biocides and a common component of hull-fouling assemblages, a major transport pathway for aquatic NIS. We predicted that colonies of W. subtorquata act as nontoxic refugia for other, less tolerant species to settle on. We compared rates of recruitment of W. subtorquata and other fouling organisms to surfaces coated with three antifouling paints and a nontoxic primer in coastal marinas in Queensland, Australia. Diversity and abundance of fouling taxa were compared between bryozoan colonies and adjacent toxic or nontoxic paint surfaces. After 16 weeks immersion, W. subtorquata covered up to 64% of the tile surfaces coated in antifouling paint. Twenty-two taxa occurred exclusively on W. subtorquata and were not found on toxic surfaces. Other fouling taxa present on toxic surfaces were up to 248 times more abundant on W. subtorquata. Because biocides leach from the paint surface, we expected a positive relationship between the size of W. subtorquata colonies and the abundance and diversity of epibionts. To test this, we compared recruitment of fouling organisms to mimic W. subtorquata colonies of three different sizes that had the same total surface area. Secondary recruitment to mimic colonies was greater when the surrounding paint surface contained biocides. Contrary to our predictions, epibionts were most abundant on small mimic colonies with a large total perimeter. 
This pattern was observed in encrusting and erect bryozoans, tubicolous amphipods, and serpulid and sabellid polychaetes, but only in the presence of toxic paint. Our results show that W. subtorquata acts as a foundation species for fouling assemblages on ship hulls and facilitates the transport of other species at greater abundance and frequency than would otherwise be possible. Invasion success may be increased by positive interactions between NIS that enhance the delivery of propagules by human transport vectors." }, { "instance_id": "R56945xR56937", "comparison_id": "R56945", "paper_id": "R56937", "text": "Do high-impact invaders have the strongest negative effects on abundant and functionally similar resident species? Summary Although invasive plants may out-compete and cause local-scale extirpations of resident species, it is widely observed that they cause few extinctions at larger spatial scales. One possible explanation is that highly successful invaders tend to be functionally similar to, and therefore compete most strongly with, resident species that are also relatively successful and widespread. High abundance may then protect these functionally similar residents from complete extinction over large areas despite their stronger competition with the invader. We tested this idea in a native-rich grassland where a novel invader, Aegilops triuncialis (barb goatgrass), strongly affects community diversity at the local scale. We compared resident species abundances in paired invaded and uninvaded plots with similar soil and community characteristics and invasibility as assayed by experimentally planted Aegilops. We found that the negative effects of the invader on abundance were strongest on resident species belonging to the same functional group as the invader (annual grasses). Within grasses, multivariate functional similarity to the invader also predicted decline in abundance under invasion. 
However, we did not find the predicted general relationship between abundance, functional similarity to the invader, and tendency to decline under invasion. Additional factors, such as spatial heterogeneity in the invaded community, must contribute to the relative scarcity of large-scale extinctions under invasion." }, { "instance_id": "R56945xR56887", "comparison_id": "R56945", "paper_id": "R56887", "text": "Impact of an alien invasive shrub on ecology of native and alien invasive mosquito species (Diptera: Culicidae) ABSTRACT We examined how leaf litter of alien invasive honeysuckle (Lonicera maackii Rupr.) either alone or in combination with leaf litter of one of two native tree species, sugar maple (Acer saccharum Marshall) and northern red oak (Quercus rubra L.), affects the ecology of Culex restuans Theobald, Ochlerotatus triseriatus Say, and Ochlerotatus japonicus Theobald. Experimental mesocosms containing single species litter or a mixture of honeysuckle and one of two native tree species litter were established at South Farms and Trelease Woods study sites in Urbana, IL, and examined for their effect on 1) oviposition site selection by the three mosquito species, and 2) adult production and body size of Oc. triseriatus and Oc. japonicus. There were no significant effects of study site and leaf treatment on Oc. japonicus and Oc. triseriatus oviposition preference and adult production. In contrast, significantly more Cx. restuans egg rafts were collected at South Farms relative to Trelease Woods and in honeysuckle litter relative to native tree species litter. Significantly larger adult females of Oc. japonicus and Oc. triseriatus were collected at South Farms relative to Trelease Woods and in honeysuckle litter relative to native tree species litter. Combining honeysuckle litter with native tree species litter had additive effects on Cx. restuans oviposition preference and Oc. japonicus and Oc. 
triseriatus body size, with the exception of honeysuckle and northern red oak litter combination, which had antagonistic effects on Oc. triseriatus body size. We conclude that input of honeysuckle litter into container aquatic habitats may alter the life history traits of vector mosquito species." }, { "instance_id": "R56945xR55099", "comparison_id": "R56945", "paper_id": "R55099", "text": "Invasive alien plants infiltrate bird-mediated shrub nucleation processes in arid savanna Summary 1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird-dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy-fruited species than indigenous species, we mapped the distribution of alien fleshy-fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy-fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy-fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy-fruited plants were found both beneath host trees and in the open, alien fleshy-fruited plants were found only beneath trees. 4 Abundance of fleshy-fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy-fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy-fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. 
Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy-fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi-arid African savanna by alien fleshy-fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem-level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy-fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions." }, { "instance_id": "R56945xR56686", "comparison_id": "R56945", "paper_id": "R56686", "text": "Treatment-based Markov chain models clarify mechanisms of invasion in an invaded grassland community What are the relative roles of mechanisms underlying plant responses in grassland communities invaded by both plants and mammals? What type of community can we expect in the future given current or novel conditions? We address these questions by comparing Markov chain community models among treatments from a field experiment on invasive species on Robinson Crusoe Island, Chile. Because of seed dispersal, grazing and disturbance, we predicted that the exotic European rabbit ( Oryctolagus cuniculus ) facilitates epizoochorous exotic plants (plants with seeds that stick to the skin of an animal) at the expense of native plants. To test our hypothesis, we crossed rabbit exclosure treatments with disturbance treatments, and sampled the plant community in permanent plots over 3 years.
We then estimated Markov chain model transition probabilities and found significant differences among treatments. As hypothesized, this modelling revealed that exotic plants survive better in disturbed areas, while natives prefer no rabbits or disturbance. Surprisingly, rabbits negatively affect epizoochorous plants. Markov chain dynamics indicate that an overall replacement of native plants by exotic plants is underway. Using a treatment-based approach to multi-species Markov chain models allowed us to examine the changes in the importance of mechanisms in response to experimental impacts on communities." }, { "instance_id": "R56945xR56897", "comparison_id": "R56945", "paper_id": "R56897", "text": "Plantation of coniferous trees modifies risk and size of Padus serotina Borkh. Invasion - Evidence from a Rog\u00f3w Arboretum case study Abstract Density of natural regeneration of black cherry ( Padus serotina ) depends on distance from the propagule source. Ecological success of this species is higher in coniferous than deciduous forests. The main aim of this study was to assess the interaction between the distance from propagule source and tree stand type (coniferous, deciduous and mixed) on occurrence and density of natural regeneration of black cherry. The study was conducted on 202 experimental plots in Rogow Arboretum (Central Poland), consisting of plantations of alien and native tree species, growing on potential habitats of fertile deciduous forest. The density of natural regeneration was measured in four height classes: 0\u20130.5 m, 0.5\u20132 m, 2\u20135 m and over 5 m. Natural regeneration of black cherry occurred on 79 of the 202 plots, and its density varied from 0 to 25,660 ind. ha \u22121 . The mean density of black cherry was statistically significantly higher ( p \u22121 ) than in deciduous (138.3 \u00b1 48.3 ind. ha \u22121 ) and mixed (29.3 \u00b1 12.3 ind. ha \u22121 ) stands.
There was also a negative relationship between distance from propagule source (stand of P. serotina established in 1932 in the central part of the Arboretum) and density of natural black cherry regeneration ( R 2 = 0.19, p p p P. serotina invasion." }, { "instance_id": "R56945xR56585", "comparison_id": "R56945", "paper_id": "R56585", "text": "Consumption rates and prey preference of the invasive gastropod Rapana venosa in the Northern Adriatic Sea The alien Asian gastropod Rapana venosa (Valenciennes 1846) was first recorded in 1973 along the Italian coast of the Northern Adriatic Sea. Recently, this predator of bivalves has been spreading all around the world oceans, probably helped by ship traffic and aquaculture trade. A caging experiment in natural environment was performed during the summer of 2002 in Cesenatico (Emilia-Romagna, Italy) in order to estimate consumption rates and prey preference of R. venosa. The prey items chosen were the Mediterranean mussel Mytilus galloprovincialis (Lamarck 1819), the introduced carpet clam Tapes philippinarum (Adams and Reeve 1850), both supporting the local fisheries, and the Indo-Pacific invasive clam Anadara (Scapharca) inaequivalvis (Brugui\u00e8re 1789). Results showed an average consumption of about 1 bivalve prey per day (or 1.2 g wet weight per day). Predation was species and size selective towards small specimens of A. inaequivalvis; consumption of the two commercial species was lower. These results might reduce the concern about the economical impact on the local bivalve fishery due to the presence of the predatory gastropod. On the other hand, selective predation might probably alter local community structure, influencing competition amongst filter feeder/suspension feeder bivalve species and causing long-term ecological impact. The large availability of food resource and the habitat characteristics of the Emilia-Romagna littoral makes this area an important breeding ground for R. 
venosa in the Mediterranean Sea, thus worthy of consideration in order to understand the bioinvasion ecology of this species and to control its likely further dispersal." }, { "instance_id": "R56945xR56567", "comparison_id": "R56945", "paper_id": "R56567", "text": "Positive effects of a dominant invader on introduced and native mudflat species Many introduced species have negative impacts on native species, but some develop positive interactions with both native species and other invaders. Facilitation between invaders may lead to an overall acceleration in invasion success and impacts. Mechanisms of facilitation include habitat alteration, or ecosystem engineering, and trophic interactions. In marine systems, only a handful of positive effects have been reported for invading species. In an unusual NE Pacific marine assemblage dominated by 5 conspicuous invaders and 2 native species, we identified positive effects of the most abundant invader, the Asian hornsnail Batillaria attramentaria, on all other species. B. attramentaria reached densities >1400 m -2 , providing an average of 600 cm of hard substrate per m 2 on this mudflat. Its shells were used as habitat almost exclusively by the introduced Atlantic slipper shell Crepidula convexa, the introduced Asian anemone Diadumene lineata, and 2 native hermit crabs Pagurus hirsutiusculus and P. granosimanus. In addition, manipulative experiments showed that the abundance of the mudsnail Nassarius fraterculus and percentage cover of the eelgrass Zostera japonica, both introduced from the NW Pacific, increased significantly in the presence of B. attramentaria. The most likely mechanisms for these facilitations are indirect grazing effects and bioturbation, respectively. Since the precise arrival dates of all these invaders are unknown, the role of B. attramentaria's positive interactions in their initial invasion success is unknown. 
Nevertheless, by providing habitat for 2 non-native epibionts and 2 native species, and by facilitating 2 other invaders, the non-native B. attramentaria enhances the level of invasion by all 6 species." }, { "instance_id": "R56945xR56891", "comparison_id": "R56945", "paper_id": "R56891", "text": "Ecological impacts of the austral-most population of Crassostrea gigas in South America: a matter of time? Abstract The Pacific oyster Crassostrea gigas is one of the most invasive species worldwide. This oyster has a preponderant ecological role in the invaded environments, for example structuring the benthic community through the provision of micro-habitats. Twenty-five years after its introduction in Argentina, the species is colonizing new areas along the coast, extending northwards and southwards its local distribution. In this study, we provide the first ecological characterization of the southern-most population of C. gigas; where the composition, density, richness and diversity of the macroinvertebrate assemblages associated with zones with oysters were compared with zones where it is absent at four different times of the year. Additionally, the main epibionts taxa settled on the oyster shells were studied. Our results showed differences in the assemblage composition between zones. However, these differences were not consistent throughout the year. Furthermore, density, richness and diversity were higher in the zones with oysters only in one of the surveys and the parameters did not differ between zones in the remaining months. Moreover, the majority of oysters were used as settlement substrate by the sessile common species present in the area. Thus, our work provides new information about the ecology of C. gigas in recently invaded areas that enhance our understanding of the role that facilitation plays in physically stressful ecosystems and the importance that density and time since the invasion may have in the engineering effects of the species." 
}, { "instance_id": "R56945xR56770", "comparison_id": "R56945", "paper_id": "R56770", "text": "The complex interaction network among multiple invasive bird species in a cavity-nesting community Alien invasive species have detrimental effects on invaded communities. Aliens do not invade a vacuum, but rather a community consisting of native and often other alien species. Our current understanding of the pathways and network of interactions among multiple invasive species within whole communities is limited. Eradication efforts often focus on a single target species, potentially leading to unexpected outcomes on interacting non-target species. We aimed to examine the interaction network in a cavity-nesting community consisting of native and invasive birds. We studied the nesting cavities in the largest urban park in Israel over two breeding seasons. We found evidence for a complex interaction network that includes negative, neutral and positive interactions, but no synergistic positive interactions among aliens. Three major factors shaped the interaction network: breeding timing, nesting preferences and the ability to excavate or widen the cavities, which were found to be a limited resource. Cavity enlargement by the early-breeding invasive rose-ringed parakeet may enhance breeding of the invasive common myna in previously unavailable holes. The myna excludes the smaller invasive vinous-breasted starling, a direct competitor of the primary nest excavator, the native Syrian woodpecker. Therefore, management and eradication efforts directed towards the common myna alone may actually release the vinous-breasted starling from competitive exclusion by the common myna, increasing the negative impact of the vinous-breasted starling on the native community. As found here, interactions among multiple alien species can be crucial in shaping invasion success and should be carefully considered when aiming to effectively manage biological invasions." 
}, { "instance_id": "R56945xR56722", "comparison_id": "R56945", "paper_id": "R56722", "text": "Native herbivores and plant facilitation mediate the performance and distribution of an exotic grass Summary 1. Exotic plant species have become increasingly prominent features of ecological landscapes throughout the world, and their interactions with native and exotic taxa in these novel environments may play critical roles in mediating the dynamics of such invasions. 2. Here, we summarize results from comparative and experimental studies that explore the effects of two factors \u2013 herbivory and facilitation \u2013 on the performance and distribution of an invasive South African grass, Ehrharta calycina, in a coastal foredune system in northern California, USA. 3. Using a 2-year exclosure experiment, we show that a native herbivore, black-tailed jackrabbits (Lepus californicus), significantly reduced the height, shoot production, fecundity and aboveground biomass of this exotic grass. 4. Data from two comparative studies and a neighbour-removal experiment revealed that Ehrharta frequently escaped herbivores by associating with three neighbouring plant species \u2013 an exotic perennial grass, Ammophila arenaria, an exotic perennial succulent, Carpobrotus edulis, and a native perennial shrub, Baccharis pilularis. Ehrharta growing in association with neighbours was taller, had fewer grazed shoots, produced greater numbers of spikelets and had greater above-ground biomass than unassociated individuals. Furthermore, removing neighbours generally eliminated these benefits in 7 months, although effects differed among neighbour species. 5. An additional neighbour-removal experiment conducted in the absence of jackrabbits indicated that neighbour removals did not have significant impacts on Ehrharta height, shoot production, spikelet production or above-ground dry biomass.
These results suggest that the primary means by which Ehrharta benefits from neighbouring plants is protection from herbivores \u2013 either because they are less apparent to herbivores or less accessible \u2013 and that Ehrharta likely incurred minimal costs from associating with neighbours. 6. Ehrharta was more frequently associated with neighbours than expected due to chance, and less frequently found in open dune habitat. These results are consistent with the hypothesis that the effects of herbivory and facilitation have been sufficiently strong to shape the local distribution of this invader in the landscape. 7. Synthesis. Our research has demonstrated that herbivory and facilitation have jointly influenced the dynamics of a biological invasion, and highlights the importance of evaluating the effects of multiple interactions on invasions in a single system." }, { "instance_id": "R56945xR56575", "comparison_id": "R56945", "paper_id": "R56575", "text": "Invasion by a N2-fixing tree alters function and structure in wet lowland forests of Hawaii Invasive species pose major threats to the integrity and functioning of ecosystems. When such species alter ecosystem processes, they have the potential to change the environmental context in which other species survive and reproduce and may also facilitate the invasion of additional species. We describe impacts of an invasive N2-fixing tree, Falcataria moluccana, on some of the last intact remnants of native wet lowland forest undergoing primary succession on 48-, 213-, and 300-yr-old lava flows of Kilauea Volcano on the island of Hawai\u2018i. We measured litterfall, soil nitrogen (N) and phosphorus (P) availability, light availability, species composition, and forest structure in native-dominated stands and in stands invaded by Falcataria.
Litter inputs increased 1.3\u20138.6 times, N mass of litterfall increased 4\u201355 times, and P mass of litterfall increased 2\u201328 times in invaded stands relative to native stands. C:N and C:P ratios of litterfall were lower, and N:P ratios higher, in invaded stands relative to native stands. Resin-captured soil N and P values were 17\u2013121 and 2\u201324 times greater, respectively, in invaded stands relative to native stands on each of the three lava flows. Native species accounted for nearly 100% of total basal area and stem density in native stands, while alien species accounted for 68\u2013 99% of total basal area, and 82\u201391% of total stem density, in invaded stands. Compositional changes following Falcataria invasion were due both to increases in alien species, particularly Psidium cattleianum, and decreases in native species, particularly Metrosideros polymorpha. Results provide a clear example of how invasive tree species, by modifying the function and structure of the ecosystems that they invade, can facilitate invasion by additional nonnative species and eliminate dominant native species. Given the rarity and limited extent of remaining native-dominated wet lowland forests in Hawaii, and the degree to which Falcataria invasion alters them, we expect that the continued existence of these unique ecosystems will be determined, in large part, by the spread of this invasive species." }, { "instance_id": "R56945xR56785", "comparison_id": "R56945", "paper_id": "R56785", "text": "Strategies of the invasive macrophyte Ludwigia grandiflora in its introduced range: Competition, facilitation or coexistence with native and exotic species? Abstract The success of invasive species is due to their ability to displace other species by direct competition. Our hypothesis is that the strategy of the invasive L. 
grandiflora differs according to the growth form of this plant (submerged/emergent) and to its density, and the presence and the density of neighbouring species, during the first step of introduction phase. Moreover, we also suppose that the invasive species L. grandiflora affects the European native aquatic macrophyte and that invasive species can facilitate the establishment, growth and spread of exotic species coming from the same biogeographical area (the \u201cInvasional Meltdown Hypothesis\u201d). We studied the relationships between three exotic species coming from South America ( Ludwigia grandiflora , Egeria densa and Myriophyllum aquaticum ) and two European macrophyte species ( Ceratophyllum demersum , Mentha aquatica ) in monocultures and in mixed cultures. The experiments were carried out in containers placed in a greenhouse for one month in spring 2011. We measured six morphological traits to test the intraspecific and interspecific interferences. In accordance with our hypothesis, the strategy of L. grandiflora differed between its emergent growth form and its submerged growth form, whereas the establishment and the growth of L. grandiflora did not seem to be facilitated by other exotic species (i.e. E. densa and M. aquaticum ). The interspecific effect between C. demersum or E. densa on submerged L. grandiflora was stronger in inhibiting plant growth than the intraspecific interferences of L. grandiflora on itself. Mutual inhibition of root production and growth was observed between L. grandiflora and M. aquatica . However, L. grandiflora seemed to have little impact on native species, which may coexist with L. grandiflora during the early stages of L. grandiflora establishment in the introduction area. L. grandiflora stimulated the growth and the vegetative reproduction of E. densa . L. grandiflora facilitated the establishment of E.densa in accordance with the \u201cInvasional Meltdown Hypothesis\u201d. L. 
grandiflora stimulated the root production and the growth of M. aquaticum at low densities and inhibited it at high densities. L. grandiflora , the first introduced plant in France, could slightly facilitate the growth of E. densa . However, spatial heterogeneity or differential use of resources could explain the coexistence of L. grandiflora and M. aquaticum in the same environment." }, { "instance_id": "R56945xR56764", "comparison_id": "R56945", "paper_id": "R56764", "text": "Cane toads on cowpats: commercial livestock production facilitates toad invasion in tropical Australia Habitat disturbance and the spread of invasive organisms are major threats to biodiversity, but the interactions between these two factors remain poorly understood in many systems. Grazing activities may facilitate the spread of invasive cane toads (Rhinella marina) through tropical Australia by providing year-round access to otherwise-seasonal resources. We quantified the cane toad\u2019s use of cowpats (feces piles) in the field, and conducted experimental trials to assess the potential role of cowpats as sources of prey, water, and warmth for toads. Our field surveys show that cane toads are found on or near cowpats more often than expected by chance. Field-enclosure experiments show that cowpats facilitate toad feeding by providing access to dung beetles. Cowpats also offer moist surfaces that can reduce dehydration rates of toads and are warmer than other nearby substrates. Livestock grazing is the primary form of land use over vast areas of Australia, and pastoral activities may have contributed substantially to the cane toad\u2019s successful invasion of that continent." }, { "instance_id": "R56945xR56591", "comparison_id": "R56945", "paper_id": "R56591", "text": "Avian seed dispersal of an invasive shrub The incorporation of an animal-dispersed exotic plant species into the diet of native frugivores can be an important step to that species becoming invasive. 
We investigated bird dispersal of Lonicera maackii, an Asian shrub invasive in eastern North America. We (i) determined which species of birds disperse viable L. maackii seeds, (ii) tested the effect of gut passage on L. maackii seeds, and (iii) projected the seed shadow based on habitat use by a major disperser. We found that four native and one exotic bird species dispersed viable L. maackii seeds. Gut passage through American robins did not inhibit germination, but gut passage through cedar waxwings did. American robins moved mostly along woodlot edges and fencerows, leading us to project that most viable seeds would be defecated in such habitats, which are very suitable for L. maackii. We conclude that L. maackii has been successfully incorporated into the diets of native and exotic birds and that American robins preferentially disperse seeds to suitable habitat." }, { "instance_id": "R56945xR56644", "comparison_id": "R56945", "paper_id": "R56644", "text": "Epiphytic macroinvertebrate communities on Eurasian watermilfoil (Myriophyllum spicatum) and native milfoils Myriophyllum sibericum and Myriophyllum alterniflorum in eastern North America Aquatic macrophytes play an important role in the survival and proliferation of invertebrates in freshwater ecosystems. Epiphytic invertebrate communities may be altered through the replacement of native macrophytes by exotic macrophytes, even when the macrophytes are close relatives and have similar morphology. We sampled an invasive exotic macrophyte, Eurasian watermilfoil ( Myriophyllum spicatum ), and native milfoils Myriophyllum sibericum and Myriophyllum alterniflorum in four bodies of water in southern Quebec and upstate New York during the summer of 2005. Within each waterbody, we compared the abundance, diversity, and community composition of epiphytic macroinvertebrates on exotic and native Myriophyllum. In general, both M. sibericum and M. 
alterniflorum had higher invertebrate diversity and higher invertebrate biomass and supported more gastropods than the exotic M. spicatum. In late summer, invertebrate density tended to be higher on M. sibericum than on M. spicatum, but lower on M. alterniflorum than on M. spicatum. Our results demonstrate that M. spicatum supports macroinvertebrate communities that may differ from those on structurally similar native macrophytes, although these differences vary across sites and sampling dates. Thus, the replacement of native milfoils by M. spicatum may have indirect effects on aquatic food webs." }, { "instance_id": "R56945xR56823", "comparison_id": "R56945", "paper_id": "R56823", "text": "Invasive species contribute to biotic resistance: negative effect of caprellid amphipods on an invasive tunicate As the number of introductions of non-indigenous species (NIS) continues to rise, ecologists are faced with new and unique opportunities to observe interactions between species that do not naturally co-exist. These interactions can have important implications on the invasion process, potentially determining whether NIS become widespread and abundant, survive in small numbers, or fail to establish and disappear. Although many studies have naturally focused on the interactions between NIS and native species to examine their effects and the biological resistance of the recipient community to invasion, few have examined the effects that NIS have on each other. In some cases, interactions can facilitate the invasion process of one or both species (i.e., \u201cinvasional meltdowns\u201d), but competition or predation can lead to negative interactions as well. The introduction of the vase tunicate, Ciona intestinalis, in Prince Edward Island (Canada) has harmed mussel aquaculture via heavy biofouling of equipment and mussels.
Through both a broad-scale survey and small-scale field experiments, we show that Ciona recruitment is drastically reduced by caprellid amphipods, including the NIS Caprella mutica. This study provides an exciting example of how established invasive species can negatively impact the recruitment of a secondary invader, highlighting the potential for non-additive effects of multiple invasions." }, { "instance_id": "R56945xR56579", "comparison_id": "R56945", "paper_id": "R56579", "text": "Invasive mutualisms and the structure of plant-pollinator interactions in the temperate forests of north-west Patagonia, Argentina Summary 1 Alien species may form plant\u2013animal mutualistic complexes that contribute to their invasive potential. Using multivariate techniques, we examined the structure of a plant\u2013pollinator web comprising both alien and native plants and flower visitors in the temperate forests of north-west Patagonia, Argentina. Our main objective was to assess whether plant species origin (alien or native) influences the composition of flower visitor assemblages. We also examined the influence of other potential confounding intrinsic factors such as flower symmetry and colour, and extrinsic factors such as flowering time, site and habitat disturbance. 2 Flowers of alien and native plant species were visited by a similar number of species and proportion of insects from different orders, but the composition of the assemblages of flower-visiting species differed between alien and native plants. 3 The influence of plant species origin on the composition of flower visitor assemblages persisted after accounting for other significant factors such as flowering time, bearing red corollas, and habitat disturbance. This influence was at least in part determined by the fact that alien flower visitors were more closely associated with alien plants than with native plants. 
The main native flower visitors were, on average, equally associated with native and alien plant species. 4 In spite of representing a minor fraction of total species richness (3.6% of all species), alien flower visitors accounted for > 20% of all individuals recorded on flowers. Thus, their high abundance could have a significant impact in terms of pollination. 5 The mutualistic web of alien plants and flower-visiting insects is well integrated into the overall community-wide pollination web. However, in addition to their use of the native biota, invasive plants and flower visitors may benefit from differential interactions with their alien partners. The existence of these invader complexes could contribute to the spread of aliens into novel environments." }, { "instance_id": "R56945xR56654", "comparison_id": "R56945", "paper_id": "R56654", "text": "Gut passage effect of the introduced red-whiskered bulbul (Pycnonotus jocosus) on germination of invasive plant species in Mauritius In Mauritius, many of the worst invasive plant species have fleshy fruits and rely on animals for dispersal. The introduced red-whiskered bulbul (Pycnonotus jocosus) feeds on many fleshy-fruited species, and often moves from invaded and degraded habitats into higher quality native forests, thus potentially acting as a mediator of continued plant invasion into these areas. Furthermore, gut passage may influence seed germination. To investigate this, we fed fleshy fruits of two invasive plant species, Ligustrum robustum and Clidemia hirta, to red-whiskered bulbuls. Gut passage times of seeds were recorded. Gut-passed seeds were sown and their germination rate and germination success compared with that of hand-cleaned seeds, as well as that of seeds in whole fruits. Gut passage and hand-cleaning had significant positive effects on germination of both species. Gut-passed seeds of both C. hirta and L. robustum germinated faster than hand-cleaned seeds. However, for L. 
robustum, this was only true when compared with hand-cleaned seeds with intact endocarp; when compared with hand-cleaned seeds without endocarp, there was no difference. For overall germination success, there was a positive effect of gut passage for C. hirta, but not for L. robustum. For both C. hirta and L. robustum, no seeds in intact fruits germinated, suggesting that removal of pulp is essential for germination. Our results suggest that, first, the initial invasion of native forests in Mauritius may not have happened so rapidly without efficient avian seed dispersers like the red-whiskered bulbul. Second, the bulbul is likely to be a major factor in the continued re-invasion of C. hirta and L. robustum into weeded and restored conservation management areas." }, { "instance_id": "R56945xR56837", "comparison_id": "R56945", "paper_id": "R56837", "text": "Canal type affects invasiveness of the apple snail Pomacea canaliculata through its effects on animal species richness and waterweed invasion Loss of complex natural microhabitats due to human activity is a major cause of decreased biodiversity but its effects on biological invasion are not well understood. The effects of physical environmental factors, especially the type of agricultural canals, on the invasive freshwater snail Pomacea canaliculata were studied at 33 sites in the Chikugogawa River basin, Kyushu, Japan. Differences among sites in the local fauna and vegetation were also monitored. Structural equation modeling with a model selection procedure revealed that canals with a concrete lining had more snails. The effect was indirect in that the concrete lining reduced animal species richness and increased the invasive waterweed Egeria densa, which may serve as a refuge, protecting the snails from predation. A tethering experiment conducted simultaneously indicated high predation pressure on the snails: over 20 % of the tethered snails were lost within a day.
Thus, human impacts may increase biological invasion by reducing biotic resistance and increasing the risk of invasional meltdown." }, { "instance_id": "R56945xR56742", "comparison_id": "R56945", "paper_id": "R56742", "text": "Long-term coexistence of non-indigenous species in aquaculture facilities Non-indigenous species (NIS) are a growing problem globally and, in the sea, aquaculture activities are critical vectors for their introduction. Aquaculture introduces NIS, intentionally or unintentionally, and can provide substratum for the establishment of other NIS. Little is known about the co-occurrence of NIS over long periods and we document the coexistence over decades of a farmed NIS (a mussel) with an accidently introduced species (an ascidian). Both are widespread and cause serious fouling problems worldwide. We found partial habitat segregation across depth and the position of rafts within the studied farm, which suggests competitive exclusion of the mussel in dark, sheltered areas and physiological exclusion of the ascidian elsewhere. Both species exhibit massive self-recruitment, with negative effects on the industry, but critically the introduction of NIS through aquaculture facilities also has strong detrimental effects on the natural environment." }, { "instance_id": "R56945xR56801", "comparison_id": "R56945", "paper_id": "R56801", "text": "Responses to invasion and invader removal differ between native and exotic plant groups in a coastal dune The spread of exotic, invasive species is a global phenomenon that is recognized as a major source of environmental change. Although many studies have addressed the effects of exotic plants on the communities they invade, few have quantified the effects of invader removal on plant communities, or considered the degree to which different plant groups vary in response to invasion and invader removal. 
We evaluated the effects of an exotic succulent, iceplant (Carpobrotus edulis), on a coastal dune plant community in northern California, as well as the community responses to its removal. To assess possible mechanisms by which iceplant affects other plants, we also evaluated its above- and belowground influences on the germination and growth of a dominant exotic annual grass, Bromus diandrus. We found that iceplant invasion was associated with reduced native plant cover as well as increased cover and density of some exotic plants\u2014especially exotic annual grasses. However, iceplant removal did not necessarily lead to a reversal of these effects: removal increased the cover and density of both native and exotic species. We also found that B. diandrus grown in iceplant patches, or in soil where iceplant had been removed, had poorer germination and growth than B. diandrus grown in soil not influenced by iceplant. This suggests that the influence of iceplant on this dune plant community occurs, at least in part, due to belowground effects, and that these effects remain after iceplant has been removed. Our study demonstrates the importance of considering how exotic invasive plants affect not only native species, but also co-occurring exotic taxa. It also shows that combining observational studies with removal experiments can lead to important insights into the influence of invaders and the mechanisms of their effects." }, { "instance_id": "R56945xR56907", "comparison_id": "R56945", "paper_id": "R56907", "text": "Novel species interactions in a highly modified estuary: association of largemouth bass with Brazilian waterweed Egeria densa Frequent invasions in coastal ecosystems result in novel species interactions that have unknown ecological consequences. Largemouth Bass Micropterus salmoides and Brazilian waterweed Egeria densa are introduced species in the Sacramento\u2013San Joaquin River Delta (the Delta) of California, a highly modified estuary.
In this system, Brazilian waterweed and Largemouth Bass have seen marked increases in distribution and abundance in recent decades, but their association has not been specifically studied until now. We conducted a 2-year, bimonthly electrofishing survey with simultaneous sampling of water quality and submerged aquatic vegetation (SAV) biomass at 33 locations throughout the Delta. We used generalized linear mixed models to assess the relative influences of water temperature, conductivity, Secchi depth, and SAV biomass density on the abundance of both juvenile-sized and larger Largemouth Bass. Water temperature had a positive relationship with the abundance of both size-classes, but only ju..." }, { "instance_id": "R56945xR56813", "comparison_id": "R56945", "paper_id": "R56813", "text": "Structural, compositional and trait differences between native- and non-native-dominated grassland patches Summary Non-native species with growth forms that are different from the native flora may alter the physical structure of the area they invade, thereby changing the resources available to resident species. This in turn can select for species with traits suited for the new growing environment. We used adjacent uninvaded and invaded grassland patches to evaluate whether the shift in dominance from a native perennial bunchgrass, Nassella pulchra, to the early season, non-native annual grass, Bromus diandrus, affects the physical structure, available light, plant community composition and community-weighted trait means. Our field surveys revealed that the exotic grass B. diandrus alters both the vertical and horizontal structure creating more dense continuous vegetative growth and dead plant biomass than patches dominated by N. pulchra. These differences in physical structure are responsible for a threefold reduction in available light and likely contribute to the lower diversity, especially of native forbs in B. diandrus-dominated patches. 
Further, flowering time began earlier and seed size and plant height were higher in B. diandrus patches relative to N. pulchra patches. Our results suggest that species that are better suited (earlier phenology, larger seed size and taller) for low light availability are those that coexist with B. diandrus, and this is consistent with our hypothesis that change in physical structure with B. diandrus invasion is an important driver of community and trait composition. The traits of species able to coexist with invaders are rarely considered when assessing community change following invasion; however, this may be a powerful approach for predicting community change in environments with high anthropogenic pressures, such as disturbance and nutrient enrichment. It also provides a means for selecting species to introduce when trying to enhance native diversity in an otherwise invaded community." }, { "instance_id": "R56945xR56618", "comparison_id": "R56945", "paper_id": "R56618", "text": "Range expansion and population dynamics of co-occuring invasive herbivores Although a range of studies have suggested that competition plays a critical role in determining herbivore assemblages, there has been little work addressing the nature of interactions between competing invasive herbivores. We report the results of research on the hemlock woolly adelgid Adelges tsugae (\u2018HWA\u2019) and elongate hemlock scale Fiorinia externa (\u2018EHS\u2019), invasive herbivores that both feed on eastern hemlock (Tsuga canadensis). HWA has been linked to hemlock mortality throughout the East Coast of the US; the loss of hemlock threatens to permanently alter surrounding ecosystems. We assessed the spread and impact of both species by resurveying 142 hemlock stands across a 7,500 km2 latitudinal transect, running from coastal CT to northern MA, for HWA and EHS density as well as hemlock mortality. These stands had been previously surveyed in either 1997\u20131998 (CT) or 2002\u20132004 (MA). 
While the number of HWA-infested stands has increased, per-stand HWA density has substantially decreased. In contrast, EHS distribution and density has increased dramatically since 1997\u20131998. Hemlock mortality was much more strongly related to HWA density than to EHS density, and many stands remain relatively healthy despite an overall increase in hemlock mortality. There was a positive correlation between HWA and EHS densities in stands with low mean HWA densities, suggesting the potential for host-plant-mediated facilitation of EHS by HWA. Our findings underline the importance of research explicitly addressing interactions between competing invasive species, and of determining the potential consequences of these interactions for the invaded ecosystem." }, { "instance_id": "R56945xR56603", "comparison_id": "R56945", "paper_id": "R56603", "text": "Collapse of ant-scale mutualism in a rainforest on Christmas Island Positive interactions play a widespread role in facilitating biological invasions. Here we use a landscape-scale ant exclusion experiment to show that widespread invasion of tropical rainforest by honeydew-producing scale insects on Christmas Island (Indian Ocean) has been facilitated by positive interactions with the invasive ant Anoplolepis gracilipes. Toxic bait was used to exclude A. gracilipes from large (9-35 ha) forest patches. Within 11 weeks, ant activity on the ground and on trunks had been reduced by 98-100%, while activity on control plots remained unchanged. The exclusion of ants caused a 100% decline in the density of scale insects in the canopies of three rainforest trees in 12 months (Inocarpus fagifer, Syzygium nervosum and Barringtonia racemosa), but on B. racemosa densities of scale insects also declined in control plots, resulting in no effect of ant exclusion on this species. 
This study demonstrates the role of positive interactions in facilitating biological invasions, and supports recent models calling for greater recognition of the role of positive interactions in structuring ecological communities." }, { "instance_id": "R56945xR56545", "comparison_id": "R56945", "paper_id": "R56545", "text": "Ecological traits of the amphipod invader Dikerogammarus villosus on a mesohabitat scale Since 1995, Dikerogammarus villosus Sowinski, a Ponto-Caspian amphipod species, has been invading most of Western Europe's hydrosystems. D. villosus geographic extension and quickly increasing population density has enabled it to become a major component of macrobenthic assemblages in recipient ecosystems. The ecological characteristics of D. villosus on a mesohabitat scale were investigated at a station in the Moselle River. This amphipod is able to colonize a wide range of substratum types, thus posing a threat to all freshwater ecosystems. Rivers whose dominant substratum is cobbles and which have tree roots along the banks could harbour particularly high densities of D. villosus. A relationship exists between substratum particle size and the length of the individuals, and spatial segregation according to length was shown. This allows the species to limit intra-specific competition between generations while facilitating reproduction. A strong association exists between D. villosus and other Ponto-Caspian species, such as Dreissena polymorpha and Corophium curvispinum, in keeping with Invasional Meltdown Theory. Four taxa (Coenagrionidae, Calopteryx splendens, Corophium curvispinum and Gammarus pulex) exhibited spatial niches that overlap significantly that of D. villosus. According to the predatory behaviour of the newcomer, their populations may be severely impacted."
}, { "instance_id": "R56945xR56865", "comparison_id": "R56945", "paper_id": "R56865", "text": "Impacts of Celastrus-primed soil on common native and invasive woodland species Invasive plant species have been shown to alter soil environments resulting in changes in soil chemistry, biota, and nutrient cycling. Few studies have focused on how soil changes affect co-occurring native species or plants of different growth forms. This study, located in Connecticut, USA, focused on the soil effects of the liana, Celastrus orbiculatus (oriental bittersweet), a prominent invader of eastern North America, using two different approaches. In a litter addition experiment, addition of C. orbiculatus leaf litter to uninvaded field soils showed an increase in soil nutrients, pH, and nitrogen mineralization over 2 years across a range of soil and forest community types. In a complementary common garden-pot experiment, a suite of common, ecologically similar, native and invasive lianas and shrubs were grown in soils primed with C. orbiculatus. Invaded soil was compared to uninvaded field and control potting soils. The change in soil attributes was not significantly different when grown with native or invasive plants; however, soils grown with lianas had a greater decrease in nutrients than those grown with shrubs. Although soils from locations with C. orbiculatus were higher in nutrients than uninvaded soils, plant growth, as measured in root:shoot, root and stem biomass, relative growth rate of volume, and final biomass were not different in invaded and uninvaded soils for either lianas or shrubs. However, lianas had similar growth patterns in nutrient-sparse potting soil, while shrubs growing in potting soil had lower growth. Thus, negative impacts of invaded soils on plant growth are not universal, and the plant community may show a varied response to C. orbiculatus-primed soils depending on the level of resource competition."
}, { "instance_id": "R56945xR56819", "comparison_id": "R56945", "paper_id": "R56819", "text": "Over-invasion by functionally equivalent invasive species Multiple invasive species have now established at most locations around the world, and the rate of new species invasions and records of new invasive species continue to grow. Multiple invasive species interact in complex and unpredictable ways, altering their invasion success and impacts on biodiversity. Incumbent invasive species can be replaced by functionally similar invading species through competitive processes; however the generalized circumstances leading to such competitive displacement have not been well investigated. The likelihood of competitive displacement is a function of the incumbent advantage of the resident invasive species and the propagule pressure of the colonizing invasive species. We modeled interactions between populations of two functionally similar invasive species and indicated the circumstances under which dominance can be through propagule pressure and incumbent advantage. Under certain circumstances, a normally subordinate species can be incumbent and reject a colonizing dominant species, or successfully colonize in competition with a dominant species during simultaneous invasion. Our theoretical results are supported by empirical studies of the invasion of islands by three invasive Rattus species. Competitive displacement is prominent in invasive rats and explains the replacement of R. exulans on islands subsequently invaded by European populations of R. rattus and R. norvegicus. These competition outcomes between invasive species can be found in a broad range of taxa and biomes, and are likely to become more common. Conservation management must consider that removing an incumbent invasive species may facilitate invasion by another invasive species. 
Under very restricted circumstances of dominant competitive ability but lesser impact, competitive displacement may provide a novel method of biological control." }, { "instance_id": "R56945xR56730", "comparison_id": "R56945", "paper_id": "R56730", "text": "Integration of invasive Opuntia spp. by native and alien seed dispersers in the Mediterranean area and the Canary Islands The success of many alien plant species depends on mutualistic relationships with other species. We describe the assemblage of seed dispersers on three species of alien Opuntia invading Mediterranean and Macaronesian habitats, and examine the quality of such plant-animal interactions. We identified vertebrates consuming O. maxima, O. dillenii and O. stricta fruits by direct observation and collecting droppings and pellets. Phenology of the alien species, as well as that of coexisting native species, was monitored for an entire year. Germination tests of ingested and non-ingested seeds were performed both in the greenhouse and in the field. Seed coat thickness and viability were also measured for all treatments. A great variety of taxa, including reptiles, birds and mammals actively participate in the seed dispersal of Opuntia. Phenology of Opuntia fruits in Menorca and Tenerife overlaps with only a few native fleshy-fruited plants present in the study areas, which suggests an advantage for the invader. Most seeds germinated during the second year of the experiment, independently of the effect produced by the dispersers\u2019 guts. We found great variation in the germination percentage of Opuntia after gut passage and in the effects of ingestion on seed coat thickness. Seed viability was somewhat reduced after gut passage compared to manually depulped seeds. Our results show how different Opuntia species are integrated into native communities by means of mutualistic interactions, with both native and alien dispersers. 
Although with heterogeneous effects, either type of disperser potentially contributes to the spread of these alien cacti in the recipient areas." }, { "instance_id": "R56945xR56724", "comparison_id": "R56945", "paper_id": "R56724", "text": "Alien pollinator promotes invasive mutualism in an insular pollination system The alien predatory lizard, Anolis carolinensis, has reduced the insect fauna on the two main islands of the Ogasawara archipelago in Japan. As a result of this disturbance, introduced honeybees are now the dominant visitors to flowers instead of endemic bees on these islands. On the other hand, satellite islands not invaded by alien anoles have retained the native flower visitors. The effects of pollinator change on plant reproduction were surveyed on these contrasting island groups. The total visitation rates and the number of interacting visitor groups on main islands were 63% and 30% lower than that on satellite islands, respectively. On the main islands, the honeybees preferred to visit alien flowers, whereas the dominant endemic bees on satellite islands tended to visit native flowers more frequently than alien flowers. These results suggest that alien anoles destroy the endemic pollination system and caused a shift to alien mutualism. On the main islands, the natural fruit set of alien plants was significantly higher than that of native plants. In addition, the natural fruit set was positively correlated with the visitation rate of honeybees. Pollen limitation was observed in 53.3% of endemic species but only 16.7% of alien species. These data suggest that reproduction of alien plants was facilitated by the floral preference of introduced honeybees."
}, { "instance_id": "R56945xR56921", "comparison_id": "R56945", "paper_id": "R56921", "text": "Twelve years of repeated wild hog activity promotes population maintenance of an invasive clonal plant in a coastal dune ecosystem Abstract Invasive animals can facilitate the success of invasive plant populations through disturbance. We examined the relationship between the repeated foraging disturbance of an invasive animal and the population maintenance of an invasive plant in a coastal dune ecosystem. We hypothesized that feral wild hog (Sus scrofa) populations repeatedly utilized tubers of the clonal perennial, yellow nutsedge (Cyperus esculentus) as a food source and evaluated whether hog activity promoted the long\u2010term maintenance of yellow nutsedge populations on St. Catherine's Island, Georgia, United States. Using generalized linear mixed models, we tested the effect of wild hog disturbance on permanent sites for yellow nutsedge culm density, tuber density, and percent cover of native plant species over a 12\u2010year period. We found that disturbance plots had a higher number of culms and tubers and a lower percentage of native live plant cover than undisturbed control plots. Wild hogs redisturbed the disturbed plots approximately every 5 years. Our research provides demographic evidence that repeated foraging disturbances by an invasive animal promote the long\u2010term population maintenance of an invasive clonal plant. Opportunistic facultative interactions such as we demonstrate in this study are likely to become more commonplace as greater numbers of introduced species are integrated into ecological communities around the world." }, { "instance_id": "R56945xR56736", "comparison_id": "R56945", "paper_id": "R56736", "text": "Long-term impacts of invasive grasses and subsequent fire in seasonally dry Hawaiian woodlands Invasive nonnative grasses have altered the composition of seasonally dry shrublands and woodlands throughout the world.
In many areas they coexist with native woody species until fire occurs, after which they become dominant. Yet it is not clear how long their impacts persist in the absence of further fire. We evaluated the long-term impacts of grass invasions and subsequent fire in seasonally dry submontane habitats on Hawai'i, USA. We recensused transects in invaded unburned woodland and woodland that had burned in exotic grass-fueled fires in 1970 and 1987 and had last been censused in 1991. In the unburned woodlands, we found that the dominant understory grass invader, Schizachyrium condensatum, had declined by 40%, while native understory species were abundant and largely unchanged from measurements 17 years ago. In burned woodland, exotic grass cover also declined, but overall values remained high and recruitment of native species was poor. Sites that had converted to exotic grassland after a 1970 fire remained dominated by exotic grasses with no increase in native cover despite 37 years without fire. Grass-dominated sites that had burned twice also showed limited recovery despite 20 years of fire suppression. We found limited evidence for \"invasional meltdown\": Exotic richness remained low across burned sites, and the dominant species in 1991, Melinis minutiflora, is still dominant today. Twice-burned sites are, however, being invaded by the nitrogen-fixing tree Morella faya, an introduced species with the potential to greatly alter the successional trajectory on young volcanic soils. In summary, despite decades of fire suppression, native species show little recovery in burned Hawaiian woodlands. Thus, burned sites appear to be beyond a threshold for \"natural recovery\" (e.g., passive restoration)." 
}, { "instance_id": "R56945xR56809", "comparison_id": "R56945", "paper_id": "R56809", "text": "Context- and density-dependent effects of introduced oysters on biodiversity Pacific oysters, Crassostrea gigas, have been introduced throughout much of the world, become invasive in many locations and can alter native assemblage structure, biodiversity and the distribution and abundance of other species. It is not known, however, to what extent their effects on biodiversity change as their cover increases, and how these effects may differ depending on the environmental context. Experimental plots with increasing cover of oysters were established within two estuaries in two different habitats commonly inhabited by C. gigas, (mussel-beds and mud-flats) and were sampled after 4 and 15 months. Within mud-flat habitats, macroscopic species living on or in the substratum increased in richness, Shannon\u2013Wiener diversity and number of individuals with oyster cover. In mussel-bed habitats, however, these indices were unaffected by the cover of oysters except at one estuary after 15 months when species richness was significantly lower in plots with the greatest cover of oysters. Assemblage structure differed with oyster cover in mud-flats but not in mussel-beds, except at 100 % cover in one location and at one time. Within mud-flats at one location and time (of four total tests), assemblages became more homogenous with increasing cover of oysters leading to a significant decrease in \u03b2-diversity. These responses were primarily underpinned by the facilitation of several taxa including a grazing gastropod (Littorina littorea), an invasive barnacle (Austrominius modestus) and a primary producer (Fucus vesiculosus) with increasing cover of oysters. Although there were consistent positive effects of C. gigas on mud-flat biodiversity, effects were weak or negative at higher cover on mussel-beds.
This highlights the need for the impacts of invasive species to be investigated at a range of invader abundances within different environmental contexts." }, { "instance_id": "R56945xR56734", "comparison_id": "R56945", "paper_id": "R56734", "text": "Hawaiian ant-flower networks. Nectar-thieving ants prefer undefended native over introduced plants with floral defense Ants are omnipresent in most terrestrial ecosystems, and plants have responded to their dominance by evolving traits that either facilitate positive interactions with ants or reduce negative ones. Because ants are generally poor pollinators, plants often protect their floral nectar against ants. Ants were historically absent from the geographically isolated Hawaiian archipelago, which harbors one of the most endemic floras in the world. We hypothesized that native Hawaiian plants lack floral features that exclude ants and therefore would be heavily exploited by introduced, invasive ants. To test this hypothesis, ant\u2013flower interactions involving co-occurring native and introduced plants were observed in 10 sites on three Hawaiian Islands. We quantified the residual interaction strength of each pair of ant\u2013plant species as the deviation of the observed interaction frequency from a null-model prediction based on available nectar sugar in a local plant community and local ant activity at sugar baits. As pred..." }, { "instance_id": "R56945xR56807", "comparison_id": "R56945", "paper_id": "R56807", "text": "Effects of invasive cordgrass on presence of marsh grassbird in an area where it is not native The threatened Marsh Grassbird (Locustella pryeri) first appeared in the salt marsh in east China after the salt marsh was invaded by cordgrass (Spartina alterniflora), a non-native invasive species.
To understand the dependence of non-native Marsh Grassbird on the non-native cordgrass, we quantified habitat use, food source, and reproductive success of the Marsh Grassbird at the Chongming Dongtan (CMDT) salt marsh. In the breeding season, we used point counts and radio-tracking to determine habitat use by Marsh Grassbirds. We analyzed basal food sources of the Marsh Grassbirds by comparing the \u03b4(13) C isotope signatures of feather and fecal samples of birds with those of local plants. We monitored the nests through the breeding season and determined the breeding success of the Marsh Grassbirds at CMDT. Density of Marsh Grassbirds was higher where cordgrass occurred than in areas of native reed (Phragmites australis) monoculture. The breeding territory of the Marsh Grassbird was composed mainly of cordgrass stands, and nests were built exclusively against cordgrass stems. Cordgrass was the major primary producer at the base of the Marsh Grassbird food chain. Breeding success of the Marsh Grassbird at CMDT was similar to breeding success within its native range. Our results suggest non-native cordgrass provides essential habitat and food for breeding Marsh Grassbirds at CMDT and that the increase in Marsh Grassbird abundance may reflect the rapid spread of cordgrass in the coastal regions of east China. Our study provides an example of how a primary invader (i.e., cordgrass) can alter an ecosystem and thus facilitate colonization by a second non-native species." }, { "instance_id": "R56945xR56772", "comparison_id": "R56945", "paper_id": "R56772", "text": "The impact of zebra mussel (Dreissena polymorpha) periostracum and biofilm cues on habitat selection by a Ponto-Caspian amphipod Dikerogammarus haemobaphes Dikerogammarus haemobaphes is one of several Ponto-Caspian gammarids invading Europe in recent decades. Previously, it exhibited active preferences for habitats associated with another Ponto-Caspian alien, zebra mussel. 
Now we tested gammarid preferences for living mussels and their empty shells with biofilm and/or periostracum present or absent, to find the exact cues driving gammarid responses. We observed a strong preference of gammarids for biofilmed shells, even if the biofilm was relatively young (2-day old). However, the biofilm quality, related to the substratum on which it had developed (shells with or without the periostracum, or coated with nail varnish) did not affect their behaviour. In the absence of biofilm, gammarids positively responded to the shell periostracum. Furthermore, they clearly preferred living zebra mussels over old empty shells, independent of the presence or absence of biofilm, confirming the importance of a periostracum-associated cue in their substratum recognition. On the other hand, shells obtained shortly after mussels\u2019 death were preferred over living bivalves. Thus, the attractant is associated with fresh mussel shells, rather than with living mussels themselves. The ability of alien gammarids to locate sites inhabited by zebra mussels may contribute to their invasion success in novel areas inhabited by this habitat-forming bivalve." }, { "instance_id": "R56945xR56587", "comparison_id": "R56945", "paper_id": "R56587", "text": "Facilitations between the introduced nitrogen-fixing tree, Robinia pseudoacacia, and nonnative plant species in the glacial outwash upland ecosystem of Cape Cod, MA Robinia pseudoacacia, a nitrogen-fixing, clonal tree species native to the central Appalachian and Ozark Mountains, is considered to be one of the top 100 worldwide woody plant invaders. We initiated this project to determine the impact of black locust (Robinia pseudoacacia) on an upland coastal ecosystem and to estimate the spread of this species within Cape Cod National Seashore (CCNS). We censused 20 \u00d7 20 m plots for vegetation cover and environmental characteristics in the center of twenty randomly-selected Robinia pseudoacacia stands. 
Additionally, paired plots were surveyed under native overstory stands, comprised largely of pitch pine (Pinus rigida) and mixed pitch pine\u2013oak (Quercus velutina and Quercus alba) communities. These native stands were located 20 m from the edge of the sampled locust stand and had similar land use histories. To determine the historical distribution of black locust in CCNS, we digitized and georeferenced historical and current aerial photographs of randomly-selected stands. Ordination analyses revealed striking community-level differences between locust and pine\u2013oak stands in their immediate vicinity. Understory nonnative species richness and abundance values were significantly higher under Robinia stands than under the paired native stands. Additionally, animal-dispersed plant species tended to occur in closer stands, suggesting their spread between locust stands. Robinia stand area significantly decreased from the 1970\u2019s to 2002, prompting us to recommend no management action of black locust and a monitoring program and possible removal of associated animal-dispersed species. The introduction of a novel functional type (nitrogen-fixing tree) into this xeric, nutrient-poor, upland forested ecosystem resulted in \u2018islands of invasion\u2019 within this resistant system." }, { "instance_id": "R56945xR56581", "comparison_id": "R56945", "paper_id": "R56581", "text": "Interactions of an introduced shrub and introduced earthworms in an Illinois urban woodland: Impact on leaf litter decomposition Summary This study examined an \u2018invasional meltdown\u2019, where the invasion of a Midwestern woodland by an exotic shrub (Rhamnus cathartica L.P. Mill) and the invasion by Eurasian earthworms facilitated one another. Using a litterbag approach, we examined mass loss of four substrates (R.
cathartica , Acer saccharum, Quercus rubra , and Quercus alba) along a gradient of Eurasian earthworm density and biomass throughout a 40.5 ha oak woodland in Glencoe, Illinois. Earthworm densities and biomass were greatest in patches where R. cathartica prevailed, and populations were lowest in an upland forest subcommunity within the woodland. At each of three points along this earthworm gradient, we placed replicated litterbags constructed either to permit or to deny access to the litter by earthworms. The treatments were, therefore, plot treatments (low, medium and high earthworm density and biomass) and litterbag treatments (earthworm access and earthworm excluded). We found that earthworms promoted a very rapid loss of litter from R. cathartica bags. Within 3 months greater than 90% of this litter was lost from the litterbags. Earthworm impacts on other substrates followed the sequence A. saccharum>Q. alba=Q. rubra . Effects of both litterbag and plot treatments were found within 3 months for A. saccharum but Quercus species were affected only after a year. We propose that the impact of earthworms on litter breakdown creates conditions that promote and sustain invasion by R. cathartica . Previous work has demonstrated that R. cathartica may alter soil properties in a way that promotes and sustains invasion by earthworms. These findings have implications for the restoration management of these systems, since the legacy of R. cathartica on soil properties and earthworm populations may persist even after the plant has been physically removed." }, { "instance_id": "R56945xR56565", "comparison_id": "R56945", "paper_id": "R56565", "text": "Invasional meltdown potential: Facilitation between introduced plants and mammals on French Mediterranean islands ABSTRACT In the increasingly important domain of insular invasion ecology, the role of facilitation between different introduced taxa has been mentioned, but rarely studied. 
This paper outlines facilitation between introduced mammals and the invasive succulents Carpobrotus edulis and C. aff. acinaciformis on offshore islands in southeast France. Rats and rabbits are the primary seed dispersers of Carpobrotus sp. on the islands studied. No such dispersal activity was detected on the adjacent mainland. Seed digestion by rats and rabbits also enhanced percent seed germination and speed, in spite of an associated reduction in seed size. In return, Carpobrotus provides a water/energy-rich food source during the dry summer season, thus demonstrating a clear case of mutualism between invaders." }, { "instance_id": "R56945xR56555", "comparison_id": "R56945", "paper_id": "R56555", "text": "Exotic weed invasion increases the susceptibility of native plants to attack by a biocontrol herbivore Landscape change has great, yet infrequently measured, potential to influence the susceptibility of natural systems to invasive species impacts. We quantified attack by an invasive biological control weevil (Rhinocyllus conicus) on native thistles in relation to two types of landscape change: agricultural intensification and invasion by an exotic thistle, Carduus nutans, the original target of biological control. Weevil egg load was measured on native thistles in three landscape types: (1) agriculture dominated, (2) grassland dominated with exotic thistles, and, (3) grassland dominated without exotic thistles. We found no difference in egg load on native thistles within grassland landscapes without exotic thistles vs. within agricultural landscapes, suggesting that agricultural intensification per se does not influence levels of weevil attack. However, attack on the native Cirsium undulatum increased significantly (three- to fivefold) with increasing exotic thistle density. Within-patch exotic thistle density explained >50% of the variation in both the intensity and frequency of weevil attack. Since R. 
conicus feeding dramatically reduces seed production, exotic thistles likely exert a negative indirect effect on native thistles. This study provides some of the first empirical evidence that invasion by an exotic plant can increase attack of native plants by shared insect herbivores." }, { "instance_id": "R56945xR56664", "comparison_id": "R56945", "paper_id": "R56664", "text": "Rapid dispersal and establishment of a benthic Ponto-Caspian goby in Lake Erie: diel vertical migration of early juvenile round goby The round goby, Apollonia melanostoma, a molluscivore specialist, was introduced to the Great Lakes in the early 1990s and rapidly expanded its distribution, especially in Lake Erie. Adult round goby morphology suggests low dispersal and migration potential due to the lack of a swim bladder and benthic life style. Given that the larval stage occurs inside the benthic egg, and juveniles have adult morphologies, it has been suspected that dispersal and invasion potential is low for early life stages also. However, we identified early juvenile round gobies in the nocturnal pelagic in Lake Erie and thus we conducted a sampling study to determine the extent to which this life stage uses the nocturnal pelagic. Replicate ichthyoplankton samples were collected at 3-h intervals (1900\u20130700 h) at three depths (2 m, 5 m, 8 m) in western Lake Erie (water depth = 10 m) in July and August 2002 and June 2006. Early juvenile round gobies (6\u201323 mm TL) were present almost exclusively in the nocturnal samples (2200 h, 0100 h, 0400 h) with peak densities approaching 60 individuals per 100 m3 of water sampled. Nocturnal density was also significantly greater at 8-m depth versus 2-m and only the smallest fish (6\u20138 mm TL) migrated to the surface (2-m). Analyses of diet clearly demonstrated that these fish are foraging on plankton at night and thus may not be light limited for foraging in ship ballast tanks. 
In ships that take on thousands of tonnes of water for ballast, nocturnal ballasting could easily result in transport of thousands of young round gobies at a time. Additionally, within-lake dispersal at this life stage is likely common and may facilitate downstream passage across barriers designed to limit range expansion." }, { "instance_id": "R56945xR56754", "comparison_id": "R56945", "paper_id": "R56754", "text": "Invasional interference due to similar inter- and intraspecific competition between invaders may affect management As the number of biological invasions increases, the potential for invader-invader interactions also rises. The effect of multiple invaders can be superadditive (invasional meltdown), additive, or subadditive (invasional interference); which of these situations occurs has critical implications for prioritization of management efforts. Carduus nutans and C. acanthoides, two congeneric invasive weeds, have a striking, segregated distribution in central Pennsylvania, U.S.A. Possible hypotheses for this pattern include invasion history and chance, direct competition, or negative interactions mediated by other species, such as shared pollinators. To explore the role of resource competition in generating this pattern, we conducted three related experiments using a response-surface design throughout the life cycles of two cohorts. Although these species have similar niche requirements, we found no differential response to competition between conspecifics vs. congeners. The response to combined density was relatively weak for both species. While direct competitive interactions do not explain the segregated distributional patterns of these two species, we predict that invasions of either species singly, or both species together, would have similar impacts. 
When prioritizing which areas to target to prevent the spread of one of the species, it is better to focus on areas as yet unaffected by its congener; where the congener is already present, invasional interference makes it unlikely that the net effect will change." }, { "instance_id": "R56945xR56573", "comparison_id": "R56945", "paper_id": "R56573", "text": "Effects of Acer platanoides invasion on understory plant communities and tree regeneration in the northern Rocky Mountains Quantitative studies are necessary to determine whether invasive plant species displace natives and reduce local biodiversity, or if they increase local biodiversity. Here we describe the effects of invasion by Norway maple Acer platanoides on riparian plant communities and tree regeneration at two different scales (individual tree vs stand scales) in western Montana, USA, using both descriptive and experimental approaches. The three stands differed in community composition with the stand most dominated by A. platanoides invasion being more compositionally homogenous, and less species rich (-67%), species even (-40%), and diverse ( -75%) than the two other stands. This sharp decrease in community richness and diversity of the highly invaded stand, relative to the other stands, corresponded with a 28-fold increase in A. platanoides seedlings and saplings. The dramatic difference between stand 1 vs 2 and 3 suggests that A. platanoides invasion is associated with a dramatic change in community composition and local loss of species diversity; however, other unaccounted for differences between stands may be the cause. These whole-stand correlations were corroborated by community patterns under individual A. platanoides trees in a stand with intermediate levels of patchy invasion. At the scale of individual A. platanoides canopies within a matrix of native trees, diversity and richness of species beneath solitary A. platanoides trees declined as the size of the trees increased. 
These decreases in native community properties corresponded with an increase in the density of A. platanoides seedlings. The effect of A. platanoides at the stand scale was more dramatic than at the individual canopy scale; however, at this smaller scale we only collected data from the stand with intermediate levels of invasion and not from the stand with high levels of invasion. Transplant experiments with tree seedlings demonstrated that A. platanoides seedlings performed better when grown beneath conspecific canopies than under natives, but Populus and Pinus seedlings performed better when grown beneath Populus canopies, the dominant native. Our results indicate that A. platanoides trees suppress most native species, including the regeneration of the natural canopy dominants, but facilitate conspecifics in their understories." }, { "instance_id": "R56945xR56716", "comparison_id": "R56945", "paper_id": "R56716", "text": "The introduced Micropterus salmoides in an equatorial lake: a paradoxical loser in an invasion meltdown scenario? Micropterus salmoides is a North American piscivorous fish on the IUCN list of 100 of the world\u2019s worst invasive alien species. Introduced into Lake Naivasha (Kenya) in 1929, their current population abundance is significantly depressed in a lake that has recently become dominated by fishes of the Cyprinidae family; the introduced cyprinid Cyprinus carpio now dominates catches in the commercial fishery and Barbus paludinosus is now numerically dominant in the fish community. Long-term diet studies of M. salmoides based on gut contents analysis (GCA) have defined their diet spectrum, feeding preferences and ontogenetic dietary shifts. Between 1987 and 1991, diet was size-structured; fish <260 mm were mainly insectivorous and fish >260 mm fed mainly on invasive crayfish Procambarus clarkii with B. paludinosus rarely taken. 
More recent GCA data revealed that up to 2003, these size-structured trophic relationships were still evident, but there has been a subsequent shift to their feeding almost exclusively on small (<100 mm) B. paludinosus, coincident with a size-related functional switch whereby M. salmoides >120 mm were now piscivorous. However, a Bayesian stable isotope mixing model (SIAR) suggested M. salmoides diet actually remained relatively varied in 2006 and 2007; it indicated P. clarkii were still contributing more to their diet than B. paludinosus in fish <260 mm and provided only partial support for the functional shift. The consequence of the M. salmoides depressed population abundance is their predation pressure on prey fishes is limited and preventing top-down effects. This is in contrast to their invasive populations elsewhere in the world and the likely result of invasion meltdown processes in Naivasha involving the introduced C. carpio and P. clarkii that have produced sub-optimal foraging conditions for M. salmoides." }, { "instance_id": "R56945xR56853", "comparison_id": "R56945", "paper_id": "R56853", "text": "Core-satellite species hypothesis and native versus exotic species in secondary succession A number of hypotheses exist to explain species\u2019 distributions in a landscape, but these hypotheses are not frequently utilized to explain the differences in native and exotic species distributions. The core-satellite species (CSS) hypothesis predicts species occupancy will be bimodally distributed, i.e., many species will be common and many species will be rare, but does not explicitly consider exotic species distributions. The parallel dynamics (PD) hypothesis predicts that regional occurrence patterns of exotic species will be similar to native species. Together, the CSS and PD hypotheses may increase our understanding of exotic species\u2019 distribution relative to natives. 
We selected an old field undergoing secondary succession to study the CSS and PD hypotheses in conjunction with each other. The ratio of exotic to native species (richness and abundance) was observed through 17 years of secondary succession. We predicted species would be bimodally distributed and that exotic:native species ratios would remain steady or decrease through time under frequent disturbance. In contrast to the CSS and PD hypotheses, native species occupancies were not bimodally distributed at the site, but exotic species were. The exotic:native species ratios for both richness (E:N richness) and abundance (E:N cover) generally decreased or remained constant throughout, supporting the PD hypothesis. Our results suggest exotic species exhibit metapopulation structure in old field landscapes, but that metapopulation structures of native species are disrupted, perhaps because these species are dispersal limited in the fragmented landscape." }, { "instance_id": "R56945xR56756", "comparison_id": "R56945", "paper_id": "R56756", "text": "Does mutualism drive the invasion of two alien species? The case of Solenopsis invicta and Phenacoccus solenopsis Although mutualism between ants and honeydew-producing hemipterans has been extensively recognized in ecosystem biology, few attempts have been made to test the hypothesis that mutualism between two alien species leads to the facilitation of the invasion process. To address this problem, we focus on the conditional mutualism between S. invicta and P. solenopsis by field investigations and indoor experiments. In the laboratory, ant colony growth increased significantly when ants had access to P. solenopsis and animal-based food. Honeydew produced by P. solenopsis also improved the survival of ant workers. In the field, colony density of P. solenopsis was significantly greater on plots with ants than on plots without ants. 
The number of mealybug mummies on plants without fire ants was almost three times that of plants with fire ants, indicating a strong effect of fire ants on mealybug survival. In addition, the presence of S. invicta successfully contributed to the spread of P. solenopsis. The quantity of honeydew consumption by S. invicta was significantly greater than that of a presumptive native ant, Tapinoma melanocephalum. When compared with the case without ant tending, mealybugs tended by ants matured earlier and their lifespan and reproduction increased. T. melanocephalum workers arrived at honeydew more quickly than S. invicta workers, while the number of foraging S. invicta workers on plants steadily increased, eventually exceeding the number of T. melanocephalum foragers. Overall, these results suggest that the conditional mutualism between S. invicta and P. solenopsis facilitates population growth and fitness of both species. S. invicta tends to acquire much more honeydew and drive away native ants, promoting their predominance. These results suggest that the higher foraging tempo of S. invicta may provide more effective protection of P. solenopsis than native ants. Thus mutualism between these two alien species may facilitate the invasion success of both species." }, { "instance_id": "R56945xR56829", "comparison_id": "R56945", "paper_id": "R56829", "text": "Effectiveness of zebra mussels to act as shelters from fish predators differs between native and invasive amphipod prey Biological invasions cause organisms to face new predators, but also supply new anti-predator shelters provided by alien ecosystem engineers. We checked the level of anti-predator protection provided to three gammarid species by an invasive Ponto-Caspian zebra mussel Dreissena polymorpha, known for its habitat modification abilities. 
We used gammarids differing in their origin and level of association with mussels: Ponto-Caspian aliens Dikerogammarus villosus (commonly occurring in mussel beds) and Pontogammarus robustoides (not associated with mussels), as well as native European Gammarus fossarum (not co-occurring with dreissenids). The gammarids were exposed to predation of two fish species: the racer goby Babka gymnotrachelus (Ponto-Caspian) and Amur sleeper Perccottus glenii (Eastern Asian). This set of organisms allowed us to check whether the origin and level of association with mussels of both prey and predators affect the ability of gammarids to utilize zebra mussel beds as shelters. We tested gammarid survival in the presence of fish and one of five substrata: sand, macrophytes, stones, living mussels and empty mussel valves. D. villosus survived better than its congeners on all substrata, and its survival was highest in living dreissenids. The survival of the other gammarids was similar on all substrata. Both fish species exhibited similar predation efficiency. Thus, D. villosus, whose affinity to dreissenids has already been established, utilizes them as protection from fish predators, including allopatric predators, more efficiently than other amphipods. Therefore, the presence of dreissenids in areas invaded by D. villosus is likely to help the invader establish itself in a new place." }, { "instance_id": "R56945xR56549", "comparison_id": "R56945", "paper_id": "R56549", "text": "Apparent facilitation of an invasive mealybug by an invasive ant Summary In the southeast United States, the invasive ant Solenopsis invicta is known to derive important carbohydrate (honeydew) resources from mealybugs utilizing grasses. Most important appears to be an invasive mealybug, Antonina graminis. We studied whether this mealybug and a similar native species also benefit from association with S. invicta. We found that mealybug occurrence increases significantly with increasing proximity to S. 
invicta mounds, suggesting that mealybugs benefit as well. Mutual benefits derived by S. invicta and A. graminis are consistent with a hypothesis proposing that associations among invasive species can be important in their success at introduced locations." }, { "instance_id": "R56945xR56700", "comparison_id": "R56945", "paper_id": "R56700", "text": "Contrasting patterns of spread in interacting invasive species: Membranipora membranacea and Codium fragile off Nova Scotia In the Northwest Atlantic, overgrowth of the competitively dominant, native kelps by an invasive bryozoan Membranipora membranacea increases frond erosion, which has facilitated the establishment and spread of the invasive macroalga Codium fragile ssp fragile. To document the spread of both introduced species along the Atlantic coast of Nova Scotia from initial introduction points (the \u2018epicentre\u2019) southwest of Halifax, we conducted video-surveys of shallow rocky habitats along the southwestern shore of Nova Scotia (100 km linear distance, encompassing the range of M. membranacea) in 2000, and then along the entire Atlantic coast in 2007 (650 km). Membranipora membranacea was observed continuously throughout the surveyed ranges in 2000 and 2007, wherever kelps were present, suggesting natural dispersal via planktonic larvae. Codium fragile was observed along 95 km of the surveyed range in 2000 and along 445 km in 2007, with a relatively patchy distribution beyond the epicentre, suggesting a combination of natural and anthropogenic dispersal mechanisms. Rockweed-dominated (Fucus spp.) or mixed algal assemblages common outside the epicentre may alter the interaction between M. membranacea and C. fragile, since seaweeds other than kelp are not subject to defoliation by the bryozoan. Percent cover of kelp at the epicentre generally increased from 2000 to 2007, while that of C. fragile generally decreased. 
Codium fragile was the dominant canopy alga at 54% of sites in 2000 and at only 15% of sites in 2007. These findings indicate that, at near decadal timescales, C. fragile does not prevent re-colonization by native kelps." }, { "instance_id": "R56945xR56835", "comparison_id": "R56945", "paper_id": "R56835", "text": "The targeting of large-sized benthic macrofauna by an invasive portunid predator: evidence from a caging study The Portunid crab Charybdis japonica was first found in Waitemata Harbour, New Zealand, in 2000. It has established breeding populations and has been spreading, yet information on its dietary preferences in New Zealand is unknown. We conducted field caging experiments to elucidate prey choices and potential impacts of Charybdis on benthic communities. We tested the hypothesis that Charybdis would reduce the previously demonstrated positive influence of native pinnid bivalves, Atrina zelandica, on the abundance and richness of surrounding soft-sediment macrofauna. Adult male Charybdis were introduced to cages with and without Atrina that included soft-sediment macrofaunal communities of ambient composition and abundance. After leaving the crabs to feed overnight, changes in community structure (relative to sediments without crabs) were determined by coring the sediment and analysing the resident macrofauna. Prey choices were verified by extracting taxa from the stomachs of crabs collected from the cages in which they had been feeding. The abundance of large taxa including burrowing urchins, bivalves and native crabs was lower in the presence of Charybdis compared to areas without this invader. The stomach contents of Charybdis were dominated by these same three taxa, constituting 85% of the prey abundance when using stomach fullness as a weighting factor. Our hypothesis was supported with the greatest net losses occurring in cages with Charybdis and Atrina. 
Reduction in the abundance of Echinocardium cordatum by Charybdis could have cascading ecological effects, as these urchins play a critical role in benthic soft-sediment ecosystems in New Zealand via bioturbation and biogenic disturbance." }, { "instance_id": "R56945xR56577", "comparison_id": "R56945", "paper_id": "R56577", "text": "Functional diversity of mammalian predators and extinction in island birds The probability of a bird species going extinct on oceanic islands in the period since European colonization is predicted by the number of introduced predatory mammal species, but the exact mechanism driving this relationship is unknown. One possibility is that larger exotic predator communities include a wider array of predator functional types. These predator communities may target native bird species with a wider range of behavioral or life history characteristics. We explored the hypothesis that the functional diversity of the exotic predators drives bird species extinctions. We also tested how different combinations of functionally important traits of the predators explain variation in extinction probability. Our results suggest a unique impact of each introduced mammal species on native bird populations, as opposed to a situation where predators exhibit functional redundancy. Further, the impact of each additional predator may be facilitated by those already present, suggesting the possibility of \u201cinvasional meltdown.\u201d" }, { "instance_id": "R56945xR56825", "comparison_id": "R56945", "paper_id": "R56825", "text": "Biotic resistance and invasional meltdown: consequences of acquired interspecific interactions for an invasive orchid, Spathoglottis plicata in Puerto Rico Invasiveness of non-native species often depends on acquired interactions with either native or naturalized species. 
A natural colonizer, the autogamous, invasive orchid Spathoglottis plicata has acquired at least three interspecific interactions in Puerto Rico: a mycorrhizal fungus essential for seed germination and early development; a native, orchid-specialist weevil, Stethobaris polita, which eats perianth parts and oviposits in developing fruits; and ants, primarily invasive Solenopsis invicta, that forage at extrafloral nectaries. We tested in field experiments and from observational data whether weevils affect reproductive success in the orchid; and whether this interaction is density-dependent. We also examined the effectiveness of extrafloral nectaries in attracting ants that ward off weevils. Only at small spatial scales were weevil abundance and flower damage correlated with flower densities. Plants protected from weevils had less floral damage and higher fruit set than those accessible to weevils. The more abundant ants were on inflorescences, the less accessible fruits were to weevils, resulting in reduced fruit loss from larval infections. Ants did not exclude weevils, but they affected weevil activity. Native herbivores generally provide some biotic resistance to plant invasions yet Spathoglottis plicata remains an aggressive colonizer despite the acquisition of a herbivore/seed predator partly because invasive ants attracted to extrafloral nectaries inhibited weevil behavior. Thus, the invasion of one species facilitates the success of another as in invasional meltdowns. For invasive plant species of disturbed habitats, having ant-tended extrafloral nectaries and producing copious quantities of seed, biotic resistance to plant invasions can be minimal." }, { "instance_id": "R56945xR56666", "comparison_id": "R56945", "paper_id": "R56666", "text": "Preferences of the Ponto-Caspian amphipod Dikerogammarus haemobaphes for living zebra mussels A Ponto-Caspian amphipod Dikerogammarus haemobaphes has recently invaded European waters. 
In the recipient area, it encountered Dreissena polymorpha, a habitat-forming bivalve, co-occurring with the gammarids in their native range. We assumed that interspecific interactions between these two species, which could develop during their long-term co-evolution, may affect the gammarid behaviour in novel areas. We examined the gammarid ability to select a habitat containing living mussels and searched for cues used in that selection. We hypothesized that they may respond to such traits of a living mussel as byssal threads, activity (e.g. valve movements, filtration) and/or shell surface properties. We conducted the pairwise habitat-choice experiments in which we offered various objects to single gammarids in the following combinations: (1) living mussels versus empty shells (the general effect of living Dreissena); (2) living mussels versus shells with added byssal threads and shells with byssus versus shells without it (the effect of byssus); (3) living mussels versus shells, both coated with nail varnish to neutralize the shell surface (the effect of mussel activity); (4) varnished versus clean living mussels (the effect of shell surface); (5) varnished versus clean stones (the effect of varnish). We checked the gammarid positions in the experimental tanks after 24 h. The gammarids preferred clean living mussels over clean shells, regardless of the presence of byssal threads under the latter. They responded to the shell surface, exhibiting preferences for clean mussels over varnished individuals. They were neither affected by the presence of byssus nor by mussel activity. The ability to detect and actively select zebra mussel habitats may be beneficial for D. haemobaphes and help it establish stable populations in newly invaded areas." 
}, { "instance_id": "R56945xR56696", "comparison_id": "R56945", "paper_id": "R56696", "text": "The composition and density of fauna utilizing burrow microhabitats created by a non-native burrowing crustacean (Sphaeroma quoianum) The non-native isopod, Sphaeroma quoianum, has invaded many estuaries of the Pacific coast of North America. It creates extensive burrow microhabitats in intertidal and subtidal substrata that provide habitat for estuarine organisms. We sampled burrows to determine the effects of substratum type on the community of inquilines (burrow inhabitants). The density of inquilines was higher in wood and sandstone than marsh banks. Inquilines, representing 58 species from seven phyla, were present in 86% of samples. Inquilines equaled or outnumbered S. quoianum in 49% of the samples. Non-native fauna comprised 29% of the species and 35% of the abundance of inquilines, which is higher than other estuarine habitats in Coos Bay. Sessile non-native species were found living within burrows at tidal heights higher than their typical range. Thus, the novel habitat provided by burrows of S. quoianum may alter the densities and intertidal distribution of both native and non-native estuarine fauna." }, { "instance_id": "R56945xR56694", "comparison_id": "R56945", "paper_id": "R56694", "text": "Potential and realized interactions between two aquatic invasive species: Eurasian watermilfoil (Myriophyllum spicatum) and rusty crayfish (Orconectes rusticus) With multiple invasions, the potential arises for interactions between invasives inhibiting or promoting spread. Our goal was to investigate the interaction between two invasives, Eurasian watermilfoil (Myriophyllum spicatum) and rusty crayfish (Orconectes rusticus), which co-occur in several lakes in western Quebec, Canada, and to determine their overlap with littoral fish communities. 
Crayfish potentially aid milfoil dispersal by fragmentation or, alternatively, inhibit its proliferation through destruction and direct consumption. With a mesocosm experiment, we quantified milfoil fragment production versus biomass reduction by crayfish. More fragments were produced at medium to high crayfish densities, with a significant reduction of milfoil only at the highest densities, demonstrating the potential for both positive and negative interactions. Second, we determined the habitat preferences of each species by conducting a survey in the same lake. There was little overlap in the species\u2019 distributions, with each preferring different habitat features, indicating either a low probability of interaction or that interaction occurred historically, resulting in a contemporary exclusion pattern. While our experiment showed a potential for significant interaction, the low natural co-occurrence of these species suggests that they do not currently influence each other or that they previously excluded each other." }, { "instance_id": "R56945xR56855", "comparison_id": "R56945", "paper_id": "R56855", "text": "Patterns of adult abundance vary with recruitment of an invasive barnacle species in Hawaii Abstract The ability of non-native species to establish and spread is an interplay between the characteristics of the recipient ecosystem (e.g., biotic resistance) and the characteristics of the invading species (e.g., if it is a habitat generalist, fast-growing, highly fecund). In the Hawaiian Islands, the successful establishment and high impacts of many non-native species have been attributed to a disjunct flora and fauna that results in resources that can be readily exploited. Along these lines, the speed with which the Caribbean-Atlantic barnacle Chthamalus proteus became widespread in Hawaii has been attributed to the availability of settlement space in the intertidal zone. 
I investigated the relative importance of competition and recruitment in regulating the abundance of C. proteus at three sites on the island of Oahu. Recruitment of C. proteus and other barnacle species mirrored adult abundance. Competition with adult barnacles did not appear to be a factor at two sites. At a third site, where recruitment was the highest, competition between C. proteus and an early invader, Amphibalanus reticulatus, in the form of space pre-emption was occurring, apparently mediated at least in part by the preference of A. reticulatus to settle near conspecifics. C. proteus and the native barnacle Nesochthamalus intertextus displayed no such settlement preference. Earlier studies linked differences in recruitment rates of barnacles and other invertebrate species on Oahu to differences in circulation patterns that bring offshore waters into shore at some sites and hold near-shore water flowing out of lagoons and harbors close to shore in others. The present study adds to a growing body of work that suggests that while the relative importance of pre-settlement vs. post-settlement factors to invasion success varies with location, knowledge of local oceanographic conditions could help predict the spread of non-native marine species." }, { "instance_id": "R56945xR56692", "comparison_id": "R56945", "paper_id": "R56692", "text": "Disruption of an exotic mutualism can improve management of an invasive plant: varroa mite, honeybees and biological control of Scotch broom Cytisus scoparius in New Zealand Summary 1. A seed-feeding biocontrol agent Bruchidius villosus was released in New Zealand (NZ) to control the invasive European shrub, broom Cytisus scoparius, in 1988 but it was subsequently considered unable to destroy sufficient seed to suppress broom populations. We hypothesized that an invasive mite Varroa destructor, which has caused honeybee decline in NZ, may cause pollinator limitation, so that the additional impact of B. 
villosus might now reach thresholds for population suppression. 2. We performed manipulative pollination treatments and broad-scale surveys of pollination, seed rain and seed destruction by B. villosus to investigate how pollinator limitation and biocontrol interact throughout the NZ range of broom. 3. The effect of reduced pollination in combination with seed-destruction was explored using a population model parameterized for NZ populations. 4. Broom seed rain ranged from 59 to 21 416 seeds m\u22122 from 2004 to 2008, and was closely correlated with visitation frequency of honeybees and bumblebees. Infestation of broom seeds by B. villosus is expected to eventually reach 73% (the average rate observed at the localities adjacent to early release sites). 5. The model demonstrated that 73% seed destruction, combined with an absence of honeybee pollination, could cause broom extinction at many sites and, where broom persists, reduce the intensity of treatment required to control broom by conventional means. 6. Nevertheless, seed rain was predicted to be sufficient to maintain broom invasions over many sites in NZ, even in the presence of the varroa mite and B. villosus, largely due to the continued presence of commercial beehives that are treated for varroa mite infestation. 7. Synthesis and applications. Reduced pollination through absence of honeybees can reduce broom seed set to levels at which biocontrol can be more effective. To capitalize on the impact of the varroa mite on feral honeybees, improved management of commercial beehives (for example, withdrawal of licences for beekeepers to locate hives on Department of Conservation land) could be used as part of a successful integrated broom management programme at many sites in NZ." 
}, { "instance_id": "R56945xR56799", "comparison_id": "R56945", "paper_id": "R56799", "text": "Linking the pattern to the mechanism: how na introduced mammal facilitates plant invasions Non-native mammals that are disturbance agents can promote non-native plant invasions, but to date there is scant evidence on the mechanisms behind this pattern. We used wild boar (Sus scrofa) as a model species to evaluate the role of non-native mammals in promoting plant invasion by identifying the degree to which soil disturbance and endozoochorous seed dispersal drive plant invasions. To test if soil disturbance promotes plant invasion, we conducted an exclosure experiment in which we recorded emergence, establishment and biomass of seedlings of seven non-native plant species planted in no-rooting, boar-rooting and artificial rooting patches in Patagonia, Argentina. To examine the role of boar in dispersing seeds we germinated viable seeds from 181 boar droppings and compared this collection to the soil seed bank by collecting a soil sample adjacent to each dropping. We found that both establishment and biomass of non-native seedlings in boar-rooting patches were double those in no-rooting patches. Values in artificial rooting patches were intermediate between those in boar-rooting and no-rooting treatments. By contrast, we found that the proportion of non-native seedlings in the soil samples was double that in the droppings, and over 80% of the germinated seeds were native species in both samples. Lastly, an effect size test showed that soil disturbance by wild boar rather than endozoochorous dispersal facilitates plant invasions. These results have implications for both the native and introduced ranges of wild boar, where rooting disturbance may facilitate community composition shifts." 
}, { "instance_id": "R56945xR56547", "comparison_id": "R56945", "paper_id": "R56547", "text": "Invasional 'meltdown' on an oceanic island Islands can serve as model systems for understanding how biological invasions affect community structure and ecosystem function. Here we show invasion by the alien crazy ant Anoplolepis gracilipes causes a rapid, catastrophic shift in the rain forest ecosystem of a tropical oceanic island, affecting at least three trophic levels. In invaded areas, crazy ants extirpate the red land crab, the dominant endemic consumer on the forest floor. In doing so, crazy ants indirectly release seedling recruitment, enhance species richness of seedlings, and slow litter breakdown. In the forest canopy, new associations between this invasive ant and honeydew-secreting scale insects accelerate and diversify impacts. Sustained high densities of foraging ants on canopy trees result in high population densities of hostgeneralist scale insects and growth of sooty moulds, leading to canopy dieback and even deaths of canopy trees. The indirect fallout from the displacement of a native keystone species by an ant invader, itself abetted by introduced/cryptogenic mutualists, produces synergism in impacts to precipitate invasional meltdown in this system." }, { "instance_id": "R56945xR56622", "comparison_id": "R56945", "paper_id": "R56622", "text": "Canopy effects of the invasive shrub Pyracantha angustifolia on seed bank composition, richness and density in a montane shrubland (Cordoba, Argentina) Abstract Invasive woody species frequently change the composition of the established vegetation and the properties of the soil under their canopies. Accordingly, invasion may well affect regenerative phases of the community, especially at the seed bank level, likely influencing community restoration. 
Pyracantha angustifolia (Rosaceae) is an invasive shrub in central Argentina that affects woody recruitment, particularly enhancing the recruitment of other exotic woody species. There is, though, no information regarding its effect on the soil seed bank within the invaded community. The present study was set up to gain further insight into the canopy effects of P. angustifolia. We aimed to assess whether the invasive shrub affects seed bank composition, richness and seed density as compared with the dominant native shrub Condalia montana (Rhamnaceae), and to relate the observed seed bank patterns with those of the established vegetation. We evaluated the composition of the germinable seed bank and the established vegetation under the canopy of 16 shrubs of P. angustifolia, 16 shrubs of C. montana, and in 16 control plots (10 m2) without shrub cover. The floristic composition of the seed bank differed among canopy treatments. However, seed bank richness did not differ significantly. There was an overall high seed density of exotic species throughout the study site, though exotic forbs showed significantly lower seed densities under the invasive shrub. Pyracantha angustifolia would not promote the incorporation of new species into the seed bank of the invaded community but rather favour the establishment of woody species that do not depend on seed banks. The absence of dominant woody species in the seed bank, the dominance of exotic forbs, and the high similarity between established exotic species and those present in the seed bank may surely affect community restoration following the main disturbance events observed in the region." }, { "instance_id": "R56945xR56843", "comparison_id": "R56945", "paper_id": "R56843", "text": "Does whirling disease mediate hybridization between a native and nonnative trout? Abstract The spread of nonnative species over the last century has profoundly altered freshwater ecosystems, resulting in novel species assemblages. 
Interactions between nonnative species may alter their impacts on native species, yet few studies have addressed multispecies interactions. The spread of whirling disease, caused by the nonnative parasite Myxobolus cerebralis, has generated declines in wild trout populations across western North America. Westslope Cutthroat Trout Oncorhynchus clarkii lewisi in the northern Rocky Mountains are threatened by hybridization with introduced Rainbow Trout O. mykiss. Rainbow Trout are more susceptible to whirling disease than Cutthroat Trout and may be more vulnerable due to differences in spawning location. We hypothesized that the presence of whirling disease in a stream would (1) reduce levels of introgressive hybridization at the site scale and (2) limit the size of the hybrid zone at the whole-stream scale. We measured levels of introgression and the spatial ext..." }, { "instance_id": "R56945xR56646", "comparison_id": "R56945", "paper_id": "R56646", "text": "Exotic herbivores directly facilitate the exotic grasses they graze: mechanisms for an unexpected positive feedback between invaders The ability of an exotic species to establish in a system may depend not only on the invasibility of the native community, but also on its interactions with other exotic species. Though examples of mutually beneficial interactions between exotic species are known, few studies have quantified these effects or identified specific mechanisms. We used the co-invasion of an endangered island ecosystem by exotic Canada geese (Branta canadensis) and nine exotic annual grasses to study the effects of an invading herbivore on the success of invading grasses. On our study islands in southwestern Canada, we found that geese fed selectively on the exotic grasses and avoided native forbs. Counter to current theory suggesting that the grasses should be limited by a selective enemy, however, the grasses increased in proportional abundance under grazing whereas forbs showed declining abundance. 
Testing potential mechanisms for the effects of grazing on grasses, we found that the grasses produced more stems per area when grazing reduced vegetation height and prevented litter accumulation. Forming dense mats of short stems appeared to be an efficient reproductive and competitive strategy that the Eurasian grasses have evolved in the presence of grazers, conferring a competitive advantage in a system where the native species pool has very few annual grasses and no grazers. Germination trials further demonstrated that selective herbivory by geese enables their dispersal of exotic grass seed between heavily invaded feeding areas and the small islands used for nesting. In summary, the exotic geese facilitated both the local increase and the spatial spread of exotic grasses, which in turn provided the majority of their diet. This unexpected case of positive feedback between exotic species suggests that invasion success may depend on the overall differences between the evolutionary histories of the invaders and the evolutionary history of the native community they enter." }, { "instance_id": "R56945xR56714", "comparison_id": "R56945", "paper_id": "R56714", "text": "Experimental test of biotic resistance to an invasive herbivore provided by potential plant mutualists Understanding the influence of resident species on the success of invaders is a core objective in the study and management of biological invasions. We asked whether facultative food-for-protection mutualism between resident, nectar-feeding ants and extrafloral nectar-bearing plants confers biotic resistance to invasion by a specialist herbivore. Our research focused on the South American cactus-feeding moth Cactoblastis cactorum Berg (Lepidoptera: Pyralidae) in the panhandle region of Florida. This species has been widely and intentionally redistributed as a biological control agent against weedy cacti (Opuntia spp.) 
but arrived unintentionally in the southeast US, where it attacks native, non-target cacti and is considered a noxious invader. The acquired host-plants of C. cactorum in Florida secrete extrafloral nectar, especially on young, vegetative structures, and this attracts ants. We conducted ant-exclusion experiments over 2 years (2008 and 2009) at two sites using potted plants of two vulnerable host species (O. stricta and O. ficus-indica) to evaluate the influence of cactus-visiting ants (total of eight species) at multiple points in the moth life cycle (oviposition, egg survival, and larval survival). We found that the presence of ants often increased the mortality of lab-reared C. cactorum eggsticks (stacks of cohered eggs) and larvae that we introduced onto plants in the field, although these effects were variable across sites, years, host-plant species, ant species, and/or between old and young plant structures. In contrast to these \u201cstaged\u201d encounters, we found that ants had little influence on the survival of cactus moths that occurred naturally at our field sites, or on moth damage and plant growth. In total, our experimental results suggest that the influence of cactus-visiting ants on C. cactorum invasion dynamics is weak and highly variable." }, { "instance_id": "R56945xR56791", "comparison_id": "R56945", "paper_id": "R56791", "text": "Anthropogenic subsidies mitigate environmental variability for insular rodents The exogenous input of nutrients and energy into island systems fuels a large array of consumers and drives bottom-up trophic cascades in island communities. The input of anthropogenic resources has increased on islands and particularly supplemented non-native consumers with extra resources. We test the hypothesis that the anthropogenic establishments of super-abundant gulls and invasive iceplants Carpobrotus spp. have both altered the dynamics of an introduced black rat Rattus rattus population. 
On Bagaud Island, two habitats have been substantially modified by the anthropogenic subsidies of gulls and iceplants, in contrast to the native Mediterranean scrubland with no anthropogenic inputs. Rats were trapped in all three habitats over two contrasting years of rainfall patterns to investigate: (1) the effect of anthropogenic subsidies on rat density, age-ratio and growth rates, and (2) the role of rainfall variability in modulating the effects of subsidies between years. We found that the growth rates of rats dwelling in the non-subsidized habitat varied with environmental fluctuation, whereas rats dwelling in the gull colony maintained high growth rates during both dry and rainy years. The presence of anthropogenic subsidies apparently mitigated environmental stress. Age ratio and rat density varied significantly and predictably among years, seasons, and habitats. While rat densities always peaked higher in the gull colony, especially after rat breeding in spring, higher captures of immature rats were recorded during the second year in all habitats, associated with higher rainfall. The potential for non-native rats to benefit from anthropogenic resources has important implications for the management of similar species on islands." }, { "instance_id": "R56945xR56925", "comparison_id": "R56945", "paper_id": "R56925", "text": "Estuarine fouling communities are dominated by nonindigenous species in the presence of an invasive crab Interactions between anthropogenic disturbances and introduced and native species can shift ecological communities, potentially leading to the successful establishment of additional invaders. Since its discovery in New Jersey in 1988, the Asian shore crab (Hemigrapsus sanguineus) has continued to expand its range, invading estuarine and coastal habitats in eastern North America. In estuarine environments, H. sanguineus occupies similar habitats to native, panopeid mud crabs. 
These crabs, and a variety of fouling organisms (both NIS and native), often inhabit man-made substrates (like piers and riprap) and anthropogenic debris. In a series of in situ experiments at a closed dock in southwestern Long Island (New York, USA), we documented the impacts of these native and introduced crabs on hard-substrate fouling communities. We found that while the presence of native mud crabs did not significantly influence the succession of fouling communities compared to caged and uncaged controls, the presence of introduced H. sanguineus reduced the biomass of native tunicates (particularly Molgula manhattensis), relative to caged controls. Moreover, the presence of H. sanguineus favored fouling communities dominated by introduced tunicates (especially Botrylloides violaceus and Diplosoma listerianum). Altogether, our results suggest that H. sanguineus could help facilitate introduced fouling tunicates in the region, particularly in locations where additional solid substrates have created novel habitats." }, { "instance_id": "R56945xR56927", "comparison_id": "R56945", "paper_id": "R56927", "text": "Species diversity, phenology, and temporal flight patterns of Hypothenemus Pygmy Borers (Coleoptera: Curculionidae: Scolytinae) in South Florida Abstract Hypothenemus are some of the most common and diverse bark beetles in natural as well as urban habitats, particularly in tropical and subtropical regions. Despite their ecological success and ubiquitous presence, very little is known about the habits of this genus. This study aimed to understand species diversity and daily and seasonal trends in host-seeking flight patterns of Hypothenemus in a suburban environment by systematic collections with ethanol baiting over a 15-mo period in South Florida. A total of 481 specimens were collected and identified as eight species, most of them nonnative. 
Hypothenemus formed the overwhelming majority of bark beetles (Scolytinae) collected, confirming the dominance of the genus in urban environments. Hypothenemus brunneus (Hopkins) and Hypothenemus seriatus (Eichhoff) were most abundant, comprising 74% of the capture. Rarefaction showed that the sample was sufficient to characterize the local diversity and composition. The seasonal pattern in Hypothenemus capture was positively correlated to day-time temperature, not to season as in most temperate Scolytinae. Another significant observation in the community dynamics was the synchronized occurrence of two common species (H. birmanus and H. javanus), unrelated to season. Hypothenemus were predominantly diurnal with a broad flight window. Females flew as early as 11:00 hours (EDST), with peak flight occurring at 15:00 hours, significantly earlier than flight patterns of most other Scolytinae. Surprisingly, male Hypothenemus were frequently collected, despite their lack of functional wings. Several potential explanations are discussed. This is the first study into the ecology of an entire community of the twig-feeding Hypothenemus." }, { "instance_id": "R56945xR56680", "comparison_id": "R56945", "paper_id": "R56680", "text": "Intra-regional transportation of a tugboat fouling community between the ports of Recife and Natal, northeast Brazil This study aimed to identify the incrusting and sedentary animals associated with the hull of a tugboat active in the ports of Pernambuco and later loaned to the port of Natal, Rio Grande do Norte. Thus, areas with dense biofouling were scraped and the species then classified in terms of their bioinvasive status for the Brazilian coast. 
Six were native to Brazil, two were cryptogenic and 16 nonindigenous; nine of the latter were classified as established (Musculus lateralis, Sphenia fragilis, Balanus trigonus, Biflustra savartii, Botrylloides nigrum, Didemnum psammatodes, Herdmania pallida, Microcosmus exasperatus, and Symplegma rubra) and three as invasive (Mytilopsis leucophaeata, Amphibalanus reticulatus, and Striatobalanus amaryllis). The presence of M. leucophaeata, Amphibalanus eburneus and A. reticulatus on the boat's hull propitiated their introduction onto the Natal coast. The occurrence of a great number of tunicate species in Natal reflected the port area's benthic diversity and facilitated the inclusion of two bivalves - Musculus lateralis and Sphenia fragilis - found in their siphons and in the interstices between colonies or individuals, respectively. The results show the role of biofouling on boat hulls in the introduction of nonindigenous species and that the port of Recife acts as a source of some species." }, { "instance_id": "R56945xR56712", "comparison_id": "R56945", "paper_id": "R56712", "text": "Strong feeding preference of an exotic generalist herbivore for an exotic forb: a case of invasional antagonism Many hypotheses dealing with the success of invasive plant species concern plant\u2013herbivore interactions. The invasional meltdown and enemy inversion hypotheses suggest that non-native herbivores may indirectly facilitate the invasion of a non-native plant species by either favorably changing environmental conditions or reducing competition from native plant species. Our objective was to determine the role of herbivory by the non-native snail Otala lactea in structuring California grassland communities. We conducted two experiments to examine the feeding preferences of O. lactea for eight representative grassland species. Overall, O. lactea preferred Brassica nigra, a non-native forb, over all other species tested. Field monocultures of B. 
nigra supported significantly higher snail densities than monocultures of any of the other species tested. O. lactea also preferred B. nigra over all other species tested in controlled laboratory feeding trials. However, based on trait comparisons of each of the eight grassland species, we cannot pinpoint the preference for B. nigra to a basic nutritional requirement on the part of the herbivore or an allocation to defense on the part of the plants. Our study provides evidence for an antagonistic relationship between a non-native herbivore and a non-native plant species in their invasive range. We term this relationship \u201cinvasional antagonism\u201d." }, { "instance_id": "R56945xR56539", "comparison_id": "R56945", "paper_id": "R56539", "text": "Community-wide effects of nonindigenous species on temperate rocky reefs Ecological interactions among invading species are common and may often be important in facilitating invasions. Indeed, the presence of one nonindigenous species can act as an agent of disturbance that facilitates the invasion of a second species. However, most studies of nonindigenous species are anecdotal and do not provide substantive evidence that interactions among nonindigenous species have any community-level effects. Here, using a combination of field experiments and observations we examine interactions among introduced species in New England kelp forests and ask whether these interactions have altered paradigms describing subtidal communities in the Gulf of Maine. The green alga Codium fragile was observed at the Isles of Shoals, Maine, USA, in 1983 and has since replaced the native kelp as the dominant seaweed on leeward shores. Experiments manipulating kelp and Codium reveal that Codium does not directly inhibit growth or survival of kelp. Codium does, however, successfully recruit to gaps in the kelp bed and, once established, inhibits recruitment of kelp. 
A second nonindigenous species, Membranipora membranacea, grows epiphytically on kelp, and experiments reveal that the presence of Membranipora reduces growth and survival of kelp, resulting in defoliation of kelp plants and gap formation in kelp beds. In the absence of Codium, kelp recolonizes these gaps, but when present, Codium colonizes and prevents kelp recolonization. Manipulations of herbivores demonstrate that herbivory will reinforce Codium dominance. Thus, the demise of New England kelp beds appears to result from one invasive species facilitating the spread of a second nonindigenous species." }, { "instance_id": "R56945xR56883", "comparison_id": "R56945", "paper_id": "R56883", "text": "The positive interaction between two nonindigenous species, Casuarina (Casuarina equisetifolia) and Acacia (Acacia mangium), in the tropical coastal zone of south China: stand dynamics and soil nutrients The role of mixed forests in tropical coastal South China is unclear due to a long history of afforestation with a Casuarina (Casuarina equisetifolia) monoculture. In this study, we determined how the stand dynamics and soil nutrients in monoculture stands of Casuarina equisetifolia were influenced by Acacia (Acacia mangium), a fast-growing pioneer species, when the two tree species were combined in two initial proportions. We also compared the canopy conditions of mixed and monoculture stands of C. equisetifolia at the young stage. Over a period of ten years, the density of stems was relatively low in C. equisetifolia \u00d7 Acacia mangium mixed stands compared to C. equisetifolia monoculture stands. By contrast, the aboveground biomass, understory diversity and soil nutrients were relatively high in C. equisetifolia \u00d7 A. mangium mixed stands, particularly when the initial mixing proportion of A. mangium was greater. Moreover, C. equisetifolia can protect A. 
mangium in the windy coastal environment by ensuring evenly distributed crown growth, intact canopy conditions, and high leaf area index (LAI) during the young stage. In conclusion, the two species had a positive interaction in the mixed forests, which suggests that coastal conservation managers need to shift from their traditional focus on C. equisetifolia single-species afforestation to multi-tree species mixed afforestation." }, { "instance_id": "R56945xR56543", "comparison_id": "R56945", "paper_id": "R56543", "text": "Indirect facilitation of an anuran invasion by non-native fishes Positive interactions among non-native species could greatly exacerbate the problem of invasions, but are poorly studied and our knowledge of their occurrence is mostly limited to plant-pollinator and dispersal interactions. We found that invasion of bullfrogs is facilitated by the presence of co-evolved non-native fish, which increase tadpole survival by reducing predatory macroinvertebrate densities. Native dragonfly nymphs in Oregon, USA caused zero survival of bullfrog tadpoles in a replicated field experiment unless a non-native sunfish was present to reduce dragonfly density. This pattern was also evident in pond surveys where the best predictors of bullfrog abundance were the presence of non-native fish and bathymetry. This is the first experimental evidence of facilitation between two non-native vertebrates and supports the invasional meltdown hypothesis. Such positive interactions among non-native species have the potential to disrupt ecosystems by amplifying invasions, and our study shows they can occur via indirect mechanisms." 
}, { "instance_id": "R56945xR56624", "comparison_id": "R56945", "paper_id": "R56624", "text": "Plant resources and colony growth in na invasive ant: the importance of honeydew-producing hemiptera in Carbohydrate transfer across trophic levels Abstract Studies have suggested that plant-based nutritional resources are important in promoting high densities of omnivorous and invasive ants, but there have been no direct tests of the effects of these resources on colony productivity. We conducted an experiment designed to determine the relative importance of plants and honeydew-producing insects feeding on plants to the growth of colonies of the invasive ant Solenopsis invicta (Buren). We found that colonies of S. invicta grew substantially when they only had access to unlimited insect prey; however, colonies that also had access to plants colonized by honeydew-producing Hemiptera grew significantly and substantially (\u224850%) larger. Our experiment also showed that S. invicta was unable to acquire significant nutritional resources directly from the Hemiptera host plant but acquired them indirectly from honeydew. Honeydew alone is unlikely to be sufficient for colony growth, however, and both carbohydrates abundant in plants and proteins abundant in animals are likely to be necessary for optimal growth. Our experiment provides important insight into the effects of a common tritrophic interaction among an invasive mealybug, Antonina graminis (Maskell), an invasive host grass, Cynodon dactylon L. Pers., and S. invicta in the southeastern United States, suggesting that interactions among these species can be important in promoting extremely high population densities of S. invicta." 
}, { "instance_id": "R56945xR56760", "comparison_id": "R56945", "paper_id": "R56760", "text": "Facilitation and competition among invasive plants: A field experiment with alligatorweed and water hyacinth Ecosystems that are heavily invaded by an exotic species often contain abundant populations of other invasive species. This may reflect shared responses to a common factor, but may also reflect positive interactions among these exotic species. Armand Bayou (Pasadena, TX) is one such ecosystem where multiple species of invasive aquatic plants are common. We used this system to investigate whether presence of one exotic species made subsequent invasions by other exotic species more likely, less likely, or if it had no effect. We performed an experiment in which we selectively removed exotic rooted and/or floating aquatic plant species and tracked subsequent colonization and growth of native and invasive species. This allowed us to quantify how presence or absence of one plant functional group influenced the likelihood of successful invasion by members of the other functional group. We found that presence of alligatorweed (rooted plant) decreased establishment of new water hyacinth (free-floating plant) patches but increased growth of hyacinth in established patches, with an overall net positive effect on success of water hyacinth. Water hyacinth presence had no effect on establishment of alligatorweed but decreased growth of existing alligatorweed patches, with an overall net negative effect on success of alligatorweed. Moreover, observational data showed positive correlations between hyacinth and alligatorweed with hyacinth, on average, more abundant. The negative effect of hyacinth on alligatorweed growth implies competition, not strong mutual facilitation (invasional meltdown), is occurring in this system. Removal of hyacinth may increase alligatorweed invasion through release from competition. 
However, removal of alligatorweed may have more complex effects on hyacinth patch dynamics because there were strong opposing effects on establishment versus growth. The mix of positive and negative interactions between floating and rooted aquatic plants may influence local population dynamics of each group and thus overall invasion pressure in this watershed." }, { "instance_id": "R56945xR56660", "comparison_id": "R56945", "paper_id": "R56660", "text": "Are interactions among Ponto-Caspian invaders driving amphipod species replacement in the St. Lawrence River? ABSTRACT In Lake Erie and Lake Ontario, the Ponto-Caspian amphipod Echinogammarus ischnus has replaced the native amphipod Gammarus fasciatus on rocky substrates colonized by dreissenid mussels, which provide interstitial refugia for small invertebrates. Based on the premise that an invader's vulnerability to predation is influenced by its evolutionary experience with the predator and its ability to compete for refugia, we hypothesized that amphipod species replacement is facilitated through selective predation by the round goby Neogobius melanostomus, a Ponto-Caspian fish that invaded the Great Lakes in the early 1990s and is now colonizing the St. Lawrence River. In laboratory experiments, we determined if E. ischnus excludes G. fasciatus from mussel patches, and if the vulnerability of G. fasciatus to predation by gobies is increased in the presence of the invasive amphipod. E. ischnus and G. fasciatus did not differ in their use of mussel patches, either when alone or in each other's presence. Both species were equally vulnerable to predation by the round goby. In field experiments, we determined if the round goby exerts a stronger impact than native predators on the relative abundance of amphipod species. Our results suggest that E. ischnus is more vulnerable to native predators, but the round goby does not have a differential impact on the native amphipod. We conclude that competition with E. 
ischnus does not increase the vulnerability of G. fasciatus to goby predation, and that the round goby does not promote the replacement of G. fasciatus by E. ischnus in the St. Lawrence River. The outcome of antagonistic interactions between exotic and native amphipods is mediated more by abiotic factors than by shared evolutionary history with other co-occurring exotic species." }, { "instance_id": "R56945xR56815", "comparison_id": "R56945", "paper_id": "R56815", "text": "Replacement of nonnative rainbow trout by nonnative brown trout in the Chitose River system, Hokkaido, northern Japan In this study, evidence for interspecific interaction was provided by comparing distribution patterns of nonnative rainbow trout Onchorhynchus mykiss and brown trout Salmo trutta between the past and present in the Chitose River system, Hokkaido, northern Japan. O. mykiss was first introduced in 1920 in the Chitose River system and has since successfully established a population. Subsequently, another nonnative salmonid species, S. trutta have expanded the Chitose River system since the early 1980s. At present, S. trutta have replaced O. mykiss in the majority of the Chitose River, although O. mykiss have persisted in areas above migration barriers that prevent S. trutta expansion. In conclusion, the results of this study highlight the role of interspecific interactions between sympatric nonnative species on the establishment and persistence of populations of nonnative species." }, { "instance_id": "R56945xR56847", "comparison_id": "R56945", "paper_id": "R56847", "text": "Cascading ecological effects caused by the establishment of the emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) in European Russia Emerald ash borer, Agrilus planipennis, is a destructive invasive forest pest in North America and European Russia. This pest species is rapidly spreading in European Russia and is likely to arrive in other countries soon. 
The aim is to analyze the ecological consequences of the establishment of this pest in European Russia and investigate (1) what other xylophagous beetles develop on trees affected by A. planipennis, (2) how common is the parasitoid of the emerald ash borer Spathius polonicus (Hymenoptera: Braconidae: Doryctinae) and what is the level of parasitism by this species, and (3) how susceptible is the native European ash species Fraxinus excelsior to A. planipennis. A survey of approximately 1000 Fraxinus pennsylvanica trees damaged by A. planipennis in 13 localities has shown that Hylesinus varius (Coleoptera: Curculionidae: Scolytinae), Tetrops starkii (Coleoptera: Cerambycidae) and Agrilus convexicollis (Coleoptera: Buprestidae) were common on these trees. Spathius polonicus is frequently recorded. About 50 percent of late instar larvae of A. planipennis sampled were parasitized by S. polonicus. Maps of the distributions of T. starkii, A. convexicollis and S. polonicus before and after the establishment of A. planipennis in European Russia were compiled. It is hypothesized that these species, which are native to the West Palaearctic, spread into central European Russia after A. planipennis became established there. Current observations confirm those of previous authors that native European ash Fraxinus excelsior is susceptible to A. planipennis, increasing the threat posed by this pest. The establishment of A. planipennis has resulted in a cascade of ecological effects, such as outbreaks of other xylophagous beetles in A. planipennis-infested trees. It is likely that the propagation of S. polonicus will reduce the incidence of outbreaks of A. planipennis." 
}, { "instance_id": "R56945xR56620", "comparison_id": "R56945", "paper_id": "R56620", "text": "The role of exotic plants in the invasion of Seychelles by the polyphagous insect Aleurodicus dispersus: a phylogenetic controlled analysis The accidental introduction of the spiralling whitefly, Aleurodicus dispersus Russell (Homoptera: Aleyrodidae) to Seychelles in late 2003 was exploited during early 2005 to study interactions between A. dispersus, native and exotic host plants and their associated arthropod fauna. The numbers of A. dispersus egg spirals and pupae, predator and herbivore taxa were recorded for eight related native/exotic pairs of host plants found on Mah\u00e9, the largest island in Seychelles. Our data revealed no significant difference in herbivore density (excluding A. dispersus) between related native and exotic plants, which suggests that the exotic plants do not benefit from \u2018enemy release\u2019. There were also no differences in predator density, or combined species richness between native and exotic plants. Together these data suggest that \u2018biotic resistance\u2019 to invasion is also unlikely. Despite the apparent lack of differences in community structure, significantly fewer A. dispersus egg spirals and pupae were found on the native plants than on the exotic plants. Additional data on A. dispersus density were collected on Cousin Island, a managed nature reserve in which exotic plants are carefully controlled. Significantly higher densities of A. dispersus were observed on Mah\u00e9, where exotic plants are abundant, than on Cousin. These data suggest that the rapid invasion of Seychelles by A. dispersus may largely be due to the high proportion of plant species that are both exotic and hosts of A. dispersus; no support was found for either the \u2018enemy release\u2019 or the \u2018biotic resistance\u2019 hypotheses." 
}, { "instance_id": "R56945xR56762", "comparison_id": "R56945", "paper_id": "R56762", "text": "An invasive tree alters the structure of seed dispersal networks between birds and plants in French Polynesia Aim We studied how the abundance of the highly invasive fruit-bearing tree Miconia calvescens DC. influences seed dispersal networks and the foraging patterns of three avian frugivores. Location Tahiti and Moorea, French Polynesia. Methods Our study was conducted at six sites which vary in the abundance of M. calvescens. We used dietary data from three frugivores (two introduced, one endemic) to determine whether patterns of fruit consumption are related to invasive tree abundance. We constructed seed dispersal networks for each island to evaluate how patterns of interaction between frugivores and plants shift at highly invaded sites. Results Two frugivores increased consumption of M. calvescens fruit at highly invaded sites and decreased consumption of other dietary items. The endemic fruit dove, Ptilinopus purpuratus, consumed more native fruit than either of the two introduced frugivores (the red-vented bulbul, Pycnonotus cafer, and the silvereye, Zosterops lateralis), and introduced frugivores showed a low potential to act as dispersers of native plants. Network patterns on the highly invaded island of Tahiti were dominated by introduced plants and birds, which were responsible for the majority of plant\u2010frugivore interactions. Main conclusions Shifts in the diet of introduced birds, coupled with reduced populations of endemic frugivores, caused differences in properties of the seed dispersal network on the island of Tahiti compared to the less invaded island of Moorea. These results demonstrate that the presence of invasive fruit-bearing plants and introduced frugivores can alter seed dispersal networks, and that the patterns of alteration depend both on the frugivore community and on the relative abundance of available fruit." 
}, { "instance_id": "R56945xR56684", "comparison_id": "R56945", "paper_id": "R56684", "text": "Introduced deer reduce native plant cover and facilitate invasion of non-native tree species: evidence for invasional meltdown Invasive species are a major threat to native communities and ecosystems worldwide. One factor frequently invoked to explain the invasiveness of exotic species is their release in the new habitat from control by natural enemies (enemy-release hypothesis). More recently, interactions between exotic species have been proposed as a potential mechanism to facilitate invasions (invasional meltdown hypothesis). We studied the effects of introduced deer on native plant communities and exotic plant species on an island in Patagonia, Argentina using five 400 m2 exclosures paired with control areas in an Austrocedrus chilensis native forest stand. We hypothesized that introduced deer modify native understory composition and abundance and facilitate invasion of introduced tree species that have been widely planted in the region. After 4 years of deer exclusion, native Austrocedrus and exotic Pseudotsuga menziesii tree sapling abundances are not different inside and outside exclosures. However, deer browsing has strongly inhibited growth of native tree saplings (relative height growth is 77% lower with deer present), while exotic tree sapling growth is less affected (relative height growth is 3.3% lower). Deer significantly change abundance and composition of native understory plants. Cover of native plants in exclosures increased while cover in controls remained constant. Understory composition in exclosures after only 4 years differs greatly from that in controls, mainly owing to the abundance of highly-browsed native species. This study shows that introduced deer can aid the invasion of non-native tree species through negatively affecting native plant species." 
}, { "instance_id": "R56945xR56533", "comparison_id": "R56945", "paper_id": "R56533", "text": "Interactions among aliens: apparent replacement of one exotic species by another Although many studies have documented the impact of invasive species on indigenous flora and fauna, few have rigorously examined interactions among invaders and the potential for one exotic species to replace another. European green crabs (Carcinus maenas), once common in rocky intertidal habitats of southern New England, have recently declined in abundance coincident with the invasion of the Asian shore crab (Hemigrapsus sanguineus). Over a four-year period in the late 1990s we documented a significant (40-90%) decline in green crab abundance and a sharp (10-fold) increase in H. sanguineus at three sites in southern New England. Small, newly recruited green crabs had a significant risk of predation when paired with larger H. sanguineus in the laboratory, and recruitment of 0-yr C. maenas was reduced by H. sanguineus as well as by larger conspecifics in field-deployed cages (via predation and cannibalism, respectively). In contrast, recruitment of 0-yr H. sanguineus was not affected by larger individuals of either crab species during the same experiments. The differential susceptibility of C. maenas and H. sanguineus recruits to predation and cannibalism likely contributed to the observed decrease in C. maenas abundance and the almost exponential increase in H. sanguineus abundance during the period of study. While the Asian shore crab is primarily restricted to rocky intertidal habitats, C. maenas is found intertidally, subtidally, and in a range of substrate types in New England. Thus, the apparent replacement of C. maenas by H. sanguineus in rocky intertidal habitats of southern New England may not ameliorate the economic and ecological impacts attributed to green crab populations in other habitats of this region. 
For example, field experiments indicate that predation pressure on a native bivalve species (Mytilus edulis) has not necessarily decreased with the declines of C. maenas. While H. sanguineus has weaker per capita effects than C. maenas, its densities greatly exceed those of C. maenas at present and its population-level effects are likely comparable to the past effects of C. maenas. The Carcinus-Hemigrapsus interactions documented here are relevant in other parts of the world where green crabs and grapsid crabs interact, particularly on the west coast of North America where C. maenas has recently invaded and co-occurs with two native Hemigrapsus species." }, { "instance_id": "R56945xR56531", "comparison_id": "R56945", "paper_id": "R56531", "text": "Promotion of seed set in yellow star-thistle by honey bees: evidence of an invasive mutualism We examined the role of nonnative honey bees (Apis mellifera) as pollinators of the invasive, nonnative plant species yellow star-thistle (Centaurea solstitialis), both introduced to the western United States in the early to middle 1800s. Using four different treatments (three exclosure types) at flower heads, we observed visitation rates of different pollinators. Honey bees were the most common visitors at each of three transects established at three study locales in California: University of California at Davis, Cosumnes River Preserve, and Santa Cruz Island. A significant correlation existed between honey bee visitation levels monitored in all these transects and the average number of viable seeds per seed head for the same transects. Selective exclusion of honey bees at flower heads using a 3 mm diameter mesh significantly reduced seed set per seed head at all locales. Seed set depression was less dramatic at the island locale because of high visitation rates by generalist halictid bees Augochlorella ..." 
}, { "instance_id": "R56945xR56537", "comparison_id": "R56945", "paper_id": "R56537", "text": "Widespread association of the invasive ant Solenopsis invicta with an invasive mealybug Factors such as aggressiveness and adaptation to disturbed environments have been suggested as important characteristics of invasive ant species, but diet has rarely been considered. However, because invasive ants reach extraordinary densities at introduced locations, increased feeding efficiency or increased exploitation of new foods should be important in their success. Earlier studies suggest that honeydew produced by Homoptera (e.g., aphids, mealybugs, scale insects) may be important in the diet of the invasive ant species Solenopsis invicta. To determine if this is the case, we studied associations of S. invicta and Homoptera in east Texas and conducted a regional survey for such associations throughout the species' range in the southeast United States. In east Texas, we found that S. invicta tended Homoptera extensively and actively constructed shelters around them. The shelters housed a variety of Homoptera whose frequency differed according to either site location or season, presumably because of differences in host plant availability and temperature. Overall, we estimate that the honeydew produced in Homoptera shelters at study sites in east Texas could supply nearly one-half of the daily energetic requirements of an S. invicta colony. Of that, 70% may come from a single species of invasive Homoptera, the mealybug Antonina graminis. Homoptera shelters were also common at regional survey sites and A. graminis occurred in shelters at nine of 11 survey sites. A comparison of shelter densities at survey sites and in east Texas suggests that our results from east Texas could apply throughout the range of S. invicta in the southeast United States. Antonina graminis may be an exceptionally important nutritional resource for S. invicta in the southeast United States. 
While it remains largely unstudied, the tending of introduced or invasive Homoptera also appears important to other, and perhaps all, invasive ant species. Exploitative or mutually beneficial associations that occur between these insects may be an important, previously unrecognized factor promoting their success." }, { "instance_id": "R56945xR56728", "comparison_id": "R56945", "paper_id": "R56728", "text": "Are introduced rats (Rattus rattus) both seed predators and dispersers in Hawaii? Invasive rodents are among the most ubiquitous and problematic species introduced to islands; more than 80% of the world\u2019s island groups have been invaded. Introduced rats (black rat, Rattus rattus; Norway rat, R. norvegicus; Pacific rat, R. exulans) are well known as seed predators but are often overlooked as potential seed dispersers despite their common habit of transporting fruits and seeds prior to consumption. The relative likelihood of seed predation and dispersal by the black rat, which is the most common rat in Hawaiian forest, was tested with field and laboratory experiments. In the field, fruits of eight native and four non-native common woody plant species were arranged individually on the forest floor in four treatments that excluded vertebrates of different sizes. Eleven species had a portion (3\u2013100%) of their fruits removed from vertebrate-accessible treatments, and automated cameras photographed only black rats removing fruit. In the laboratory, black rats were offered fruits of all 12 species to assess consumption and seed fate. Seeds of two species (non-native Clidemia hirta and native Kadua affinis) passed intact through the digestive tracts of rats. Most of the remaining larger-seeded species had their seeds chewed and destroyed, but for several of these, some partly damaged or undamaged seeds survived rat exposure. 
The combined field and laboratory findings indicate that many interactions between black rats and seeds of native and non-native plants may result in dispersal. Rats are likely to be affecting plant communities through both seed predation and dispersal." }, { "instance_id": "R56945xR56827", "comparison_id": "R56945", "paper_id": "R56827", "text": "Removal of an invasive shrub (Chinese privet: Ligustrum sinense Lour) reduces exotic earthworm abundance and promotes recovery of native North American earthworms This study investigated the possibility of a facilitative relationship between Chinese privet (Ligustrum sinense) and exotic earthworms, in the southeastern region of the USA. Earthworms and selected soil properties were sampled five years after experimental removal of privet from flood plain forests of the Georgia Piedmont region. The earthworm communities and soil properties were compared between sites with privet, privet removal sites, and reference sites where privet had never established. Results showed that introduced European earthworms (Aporrectodea caliginosa, Lumbricus rubellus, and Octolasion tyrtaeum) were more prevalent under privet cover, and privet removal reduced their relative abundance (from >90% to \u223c70%) in the community. Conversely, the relative abundance of native species (Diplocardia michaelsenii) increased fourfold with privet removal and was highest in reference sites. Soils under privet were characterized by significantly higher pH relative to reference plots and privet removal facilitated a significant reduction in pH. These results suggest that privet-mediated effects on soil pH may confer a competitive advantage to European lumbricid earthworms. Furthermore, removal of the invasive shrub appears to reverse the changes in soil pH, and may allow for recovery of native earthworm fauna. Published by Elsevier B.V." 
}, { "instance_id": "R56945xR56593", "comparison_id": "R56945", "paper_id": "R56593", "text": "Interactions between two co-dominant, invasive plants in the understory of a temperate deciduous forest Negative interactions between non-indigenous and native species have been an important research topic of invasion biology. However, interactions between two or more invasive species may be as important in understanding biological invasions, but they have rarely been studied. In this paper, we describe three field experiments that investigated interactions between two non-indigenous plant species invasive in the eastern United States, Lonicera japonica (a perennial vine) and Microstegium vimineum (an annual grass). A press removal experiment conducted within a deciduous forest understory community indicated that M. vimineum was a superior competitor to L. japonica. We tested the hypothesis that the competitive success of M. vimineum was because it overgrew, and reduced light available to, L. japonica, by conducting a separate light gradient experiment within the same community. Shade cloth that simulated the M. vimineum canopy reduced the performance of L. japonica. In a third complementary experiment, we added experimental support hosts to test the hypothesis that the competitive ability of L. japonica is limited by support hosts, onto which L. japonica climbs to access light. We found that the abundance of climbing branches increased with the number of support hosts. Results of this experiment indicate that these two invasive species compete asymmetrically for resources, particularly light." }, { "instance_id": "R56945xR56901", "comparison_id": "R56945", "paper_id": "R56901", "text": "Regeneration response of Brazilian Atlantic Forest woody species to four years of Megathyrsus maximus removal Guinea-grass (Megathyrsus maximus (Jacq.) B.K. Simon & S.W.L. Jacobs \u2013 Poaceae) is an invasive C4 grass that is known to slow ecological succession in restoration sites. 
This study aimed to evaluate the responses of the woody species and M. maximus itself to manual removal in a 20-year-old reforestation site. Forty-five 5 \u00d7 5-m plots were established in 2008 and evaluated until 2012. The plots were divided into three treatments: control (CON), manual removal for one year (MR1), and manual removal for two years (MR2). All individuals of woody species >10 cm were sampled. Canopy cover, grass cover, and the number of seedlings of M. maximus were also recorded. After the first weeding, M. maximus seedlings were manually uprooted every four months, for a total of three times in MR1 (2008\u20132009) and six times in MR2 (2008\u20132010). After the manual removal was stopped, the seedlings of M. maximus were counted and were allowed to grow in order to assess its reestablishment potential. The number of new grass seedlings decreased drastically after the first year with repeated removals and did not differ between removal treatments at any time, even after removals halted, indicating that one year of removal was sufficient to displace the grass. With the exclusion of Guinea-grass, an increase in the abundance and richness of other exotic species was observed until the second year. However, the increase in canopy cover during the four years likely benefited native woody species and impaired M. maximus and other exotics. In addition, after four years, although the woody species richness did not differ between the three treatments, the total abundance and pioneer species richness were higher in both removal treatments. These findings suggest that the grass slows ecological succession. However, both the increase in canopy cover and decrease in grass cover indicate that, in the absence of fire, native vegetation will suppress Guinea-grass. 
Furthermore, grass removal from the understory of early secondary forests and reforestations can be used to accelerate succession, reducing both fire risk and failures of ecological restoration initiatives." }, { "instance_id": "R56945xR56881", "comparison_id": "R56945", "paper_id": "R56881", "text": "Persistence of a soil legacy following removal of a nitrogen-fixing invader Vast effort and resources are spent to control invasive plants, often with the assumption that once these resources are spent and the invader is successfully removed, the impact of that species on the community is also eliminated. However, invasive species may change the environment in ways that persist, as legacy effects, after the species itself is gone. Here we evaluate the persistence of soil legacy effects following the death of Cytisus scoparius, an invasive nitrogen-fixer. In a field experiment, we periodically killed C. scoparius with herbicide, so that by the end of 2 years the invader had been absent from plots for different durations of time (22, 10, and 1 month). After the final C. scoparius removal treatment, we measured available soil nitrogen and phosphorus as well as the abundance of native and exotic vegetation. We planted Douglas-fir seedlings into the removal plots and tracked seedling success. One month after C. scoparius removal, there was a soil legacy effect in the form of a large initial pulse of inorganic N, presumably as a result of rapid decomposition of N-rich C. scoparius biomass. In the 10-month removal plots, this initial pulse of N had declined dramatically and was 70 % less than the invaded state. However, over the following year, there was little additional decline of N. Time since C. scoparius removal also affected Douglas-fir seedling growth, where seedlings planted into areas where C. scoparius had been removed for 22 months were smaller than seedlings planted into areas where C. scoparius had been removed for 1 and 10 months. 
This pattern may be caused by competition from a second wave of exotic invaders, whose cover increased with time following C. scoparius removal. Rather than providing a lasting positive fertilization effect on native vegetation, our results suggest that increased N availability instead favors the invasion of fast-growing, nitrophilic exotic grasses and forbs, and that these species limit colonization and growth of native vegetation including the locally dominant tree Douglas-fir." }, { "instance_id": "R56945xR56875", "comparison_id": "R56945", "paper_id": "R56875", "text": "Mutualism between fire ants and mealybugs reduces lady beetle predation Solenopsis invicta Buren is an important invasive pest that has a negative impact on biodiversity. However, current knowledge regarding the ecological effects of its interaction with honeydew-producing hemipteran insects is inadequate. To partially address this problem, we assessed whether the interaction between the two invasive species S. invicta and Phenacoccus solenopsis Tinsley mediated predation of P. solenopsis by Propylaea japonica Thunberg lady beetles using field investigations and indoor experiments. S. invicta tending significantly reduced predation by the Pr. japonica lady beetle, and this response was more pronounced for lady beetle larvae than for adults. A field investigation showed that the species richness and quantity of lady beetle species in plots with fire ants were much lower than in those without fire ants. In an olfaction bioassay, lady beetles preferred to move toward untended rather than tended mealybugs. Overall, these results suggest that mutualism between S. invicta and P. solenopsis may have a serious impact on predation of P. solenopsis by lady beetles, which could promote growth of P. solenopsis populations." }, { "instance_id": "R56945xR56601", "comparison_id": "R56945", "paper_id": "R56601", "text": "Competition between two invasive Hydrocharitaceae (Hydrilla verticillata (L.f.) 
(Royle) and Egeria densa (Planch)) as influenced by sediment fertility and season Competition between two invasive plants of similar growth form, Hydrilla verticillata (L.f.) (Royle) and Egeria densa (Planch), was studied in response to season and sediment fertility. These two invasive species were grown in outdoor concrete tanks in monocultures and mixtures. Five fertilization rates were tested for monocultures and two for mixtures where six combinations of planting densities were used in two seasons (spring and fall). Monitoring of plant biomass was made at the end of each of these 2-month experiments. In contrast to E. densa, clear seasonal patterns in biomass production and in reproductive allocations of H. verticillata were evident. Competitive pressure for both species was lower during the fall experiment. Biomass production increased with fertilization for H. verticillata in monocultures and changes either in allocative ratios or in tuber production patterns were shown in response to nutrient availability. However, E. densa growth was not affected by fertilization. In most cases, H. verticillata was a better competitor than E. densa except when sediment was pure sand. Competition occurred mainly for nutrient uptake rather than for light harvesting. These results suggest that despite the similar ecology, H. verticillata may outcompete E. densa in many situations, probably due to its higher plasticity." }, { "instance_id": "R56945xR56726", "comparison_id": "R56945", "paper_id": "R56726", "text": "Seed dispersal of alien and native plants by vertebrate herbivores Seed dispersal is crucial for the success and spread of alien plants. Herbivores often establish a dual relationship with plants: antagonist, through herbivory, and mutualist, through seed dispersal. By consuming plants, herbivores may disperse large amounts of seeds, and can facilitate the spread of alien plants. However, seed dispersal of alien plants by herbivores has been largely uninvestigated. 
I studied factors associated with dispersal of alien and native seeds by the three most important vertebrate herbivores in SW Australia: emus (Dromaius novaehollandiae), western grey kangaroos (Macropus fuliginosus) and European rabbits (Oryctolagus cuniculus). Overall frequencies of alien and native seeds dispersed by these herbivores were determined by differences among them in (1) the plant groups they predominantly disperse, that differed in frequencies of aliens versus natives, and (2) the predominant dispersal of aliens or natives within those plant groups. Emus and kangaroos (natives) tended to disperse predominantly alien seeds within plant groups (defined by life forms, dispersal syndromes, and diaspore size), whereas rabbits (alien) tended to disperse predominantly natives. This agrees with the hypothesis that herbivores will use predominantly plants that have evolved in different areas, because of less effective defences against new enemies. Overall frequencies were consistent with this pattern in kangaroos and rabbits, but not in emus. Kangaroos dispersed mostly plant groups that were mainly aliens (herbaceous species and small and medium sized dispersal units and seeds), which together with their predominant use of aliens over natives within groups resulted in the highest overall frequency of alien seeds (73%). Rabbits were similar to kangaroos in the type of plants dispersed, but their predominant use of natives over aliens within groups contributed to an overall predominance of native seeds in their pellets (88%). Emus dispersed mostly plant groups that were mainly natives (e.g. woody species with big diaspores), resulting in low overall frequency of alien seeds (11%), despite their predominant use of aliens over natives within plant groups. 
Thus, the within-groups trend pointed to a facilitative role of native herbivores of plant invasions through seed dispersal, but was obscured by the different use by herbivores of plant groups with different frequency of aliens." }, { "instance_id": "R56945xR56849", "comparison_id": "R56945", "paper_id": "R56849", "text": "The effects of mice on stoats in southern beech forests Introduced stoats (Mustela erminea) are important invasive predators in southern beech (Nothofagus sp.) forests in New Zealand. In these forests, one of their primary prey species \u2013 introduced house mice (Mus musculus), fluctuate dramatically between years, driven by the irregular heavy seed-fall (masting) of the beech trees. We examined the effects of mice on stoats in this system by comparing the weights, age structure and population densities of stoats caught on two large islands in Fiordland, New Zealand \u2013 one that has mice (Resolution Island) and one that does not (Secretary Island). On Resolution Island, the stoat population showed a history of recruitment spikes and troughs linked to beech masting, whereas the Secretary Island population had more constant recruitment, indicating that rodents are probably the primary cause for the \u2018boom and bust\u2019 population cycle of stoats in beech forests. Resolution Island stoats were 10% heavier on average than Secretary Island stoats, supporting the hypothesis that the availability of larger prey (mice versus w\u0113t\u0101) leads to larger stoats. Beech masting years on this island were also correlated with a higher weight for stoats born in the year of the masting event. The detailed demographic information on the stoat populations of these two islands supports previously suggested interactions among mice, stoats and beech masting. These interactions may have important consequences for the endemic species that interact with fluctuating populations of mice and stoats." 
}, { "instance_id": "R57101xR56977", "comparison_id": "R57101", "paper_id": "R56977", "text": "Little, but increasing evidence of impacts by alien bryophytes Abstract Based on data of bryophyte invasions into 82 regions on five continents of both hemispheres, we aim here at a first comprehensive overview of the impacts that bryophytes may have on biodiversity and socio-economy. Of the 139 bryophyte species which are alien in the study regions, seven cause negative impacts on biodiversity in 26 regions, whereas three species cause negative impacts on socio-economic sectors in five regions. The vast majority of impacts stem from anecdotal observations, whereas only 14 field or experimental studies (mostly on Campylopus introflexus in Europe) have quantitatively assessed the impacts of an alien bryophyte. The main documented type of impact on biodiversity is competition (8 alien bryophytes), with native cryptogams being most affected. In particular, C. introflexus (9 regions) and Pseudoscleropodium purum (7 regions) affect resident species composition. The few socio-economic impacts are caused by alien bryophytes which form dense mats in lawns and are then considered a nuisance. Most negative impacts on biodiversity have been recorded in natural grasslands, forests, and wetlands. Impacts of alien bryophytes on biodiversity and socio-economy are a recent phenomenon, with >85 % of impacts on biodiversity, and 80 % of impacts on socio-economy recorded since 1990. On average, 40 years (impacts on biodiversity) and 25 years (impacts on socio-economy) elapsed between the year a bryophyte species has been first recorded as alien in a region and the year impacts have been recorded first. Taking into account the substantial time lag between first record and first recorded impact in a region, it seems likely that the currently moderate impacts of alien bryophytes will continue to increase.
As quantitative studies on impacts of alien bryophytes are rare and restricted to few environments and biogeographic regions, there is a need for addressing potential impacts of alien bryophytes in yet understudied settings." }, { "instance_id": "R57101xR56961", "comparison_id": "R57101", "paper_id": "R56961", "text": "Patterns of success in passeriform bird introductions on Saint Helena Ecologists have long attempted to predict the success of species that are introduced into foreign environments. Some have emphasized qualities intrinsic to the species themselves, whereas others have argued that extrinsic forces such as competition may be more important. We test some of the predictions made by both the extrinsic and intrinsic hypotheses using passeriform birds introduced onto the island of Saint Helena. We found direct evidence that extrinsic forces are more important predictors of successful invasion. Species introduced when fewer other species were present were more likely to be successful. In a direct test of the alternative hypothesis that intrinsic forces play a more prominent role in success or failure, we found a tendency for species which successfully established on Saint Helena to be also successful when introduced elsewhere. However, the vast majority of species unsuccessful at establishing on Saint Helena had probabilities of success outside Saint Helena of 50% or greater, making this result somewhat equivocal. Finally, we found no evidence to support the hypothesis that species that are successful early are those that are intrinsically superior invaders. These results are consistent with similar analyses of the introduced avian communities on Oahu, Tahiti, and Bermuda." }, { "instance_id": "R57101xR57016", "comparison_id": "R57101", "paper_id": "R57016", "text": "Determinants for the successful establishment of exotic ants in New Zealand Biological invasions can dramatically alter ecosystems. 
An ability to predict the establishment success for exotic species is important for biosecurity and conservation purposes. I examine the exotic New Zealand ant fauna for characteristics that predict or determine an exotic species\u2019 ability to establish. Quarantine records show interceptions of 66 ant species: 17 of which have established, 43 have failed to establish, whereas nests of another six are periodically observed but have failed to establish permanently (called \u2018ephemeral\u2019 establishment). Mean temperature at the highest latitude and interception variables were the only factors significantly different between established, failed or ephemeral groups. Aspects of life history, such as competitive behaviour and morphology, were not different between groups. However, in a stepwise discriminant analysis, small size was a key factor influencing establishment success. Interception rate and climate were also secondarily important. The resulting classification table predicted establishment success with 71% accuracy. Because not all exotic species are represented in quarantine records, a further discriminant model is described without interception data. Though with less accuracy (65%) than the full model, it still correctly predicted the success or failure of four species not used in the previous analysis. Techniques for improving the prediction accuracy are discussed. Predicting which species will establish in a new area appears an achievable goal, which will be a valuable tool for conservation biology." 
}, { "instance_id": "R57101xR55034", "comparison_id": "R57101", "paper_id": "R55034", "text": "Human activities, ecosystem disturbance and plant invasions in subantarctic Crozet, Kerguelen and Amsterdam Islands Abstract Recent floristic surveys of the French islands of the southern Indian Ocean (Ile de la Possession, in the Crozet archipelago, Iles Kerguelen and Ile Amsterdam) allow a comparison of the status of the alien vascular plant species in contrasted environmental and historical situations. Four points are established: (1) the current numbers of alien plant species are almost the same on Amsterdam (56) and La Possession (58), slightly higher on Kerguelen (68); (2) some of these species are common to two or three islands but a high number of them are confined to only one island (18, 28 and 28 on La Possession, Kerguelen and Amsterdam, respectively); (3) all the alien plant species are very common species in the temperate regions of the northern hemisphere and belong to the European flora; and (4) a high proportion of the introduced species are present on the research stations or their surroundings (100, 72 and 84% on La Possession, Kerguelen and Amsterdam, respectively). These results are discussed in terms of propagule pressure (mainly attributed to ships visiting these islands), invasibility of such ecosystems (in relation to climatic conditions and degree of disturbance by previous or current human activities such as sheep farming or waste deposits) and invasion potential of alien plant species." }, { "instance_id": "R57101xR56098", "comparison_id": "R57101", "paper_id": "R56098", "text": "Establishment success across convergent Mediterranean ecosystems: an analysis of bird introductions Concern over the impact of invaders on biodiversity and on the functioning of ecosystems has generated a rising tide of comparative analyses aiming to unveil the factors that shape the success of introduced species across different regions.
One limitation of these studies is that they often compare geographically rather than ecologically defined regions. We propose an approach that can help address this limitation: comparison of invasions across convergent ecosystems that share similar climates. We compared avian invasions in five convergent mediterranean climate systems around the globe. Based on a database of 180 introductions representing 121 avian species, we found that the proportion of bird species successfully established was high in all mediterranean systems (more than 40% for all five regions). Species differed in their likelihood to become established, although success was not higher for those originating from mediterranean systems than for those from nonmediterranean regions. Controlling for this taxonomic effect with generalized linear mixed models, species introduced into mediterranean islands did not show higher establishment success than those introduced to the mainland. Susceptibility to avian invaders, however, differed substantially among the different mediterranean regions. The probability that a species will become established was highest in the Mediterranean Basin and lowest in mediterranean Australia and the South African Cape. Our results suggest that many of the birds recently introduced into mediterranean systems, and especially into the Mediterranean Basin, have a high potential to establish self-sustaining populations. This finding has important implications for conservation in these biologically diverse hotspots." }, { "instance_id": "R57101xR57053", "comparison_id": "R57101", "paper_id": "R57053", "text": "Non-indigenous species as stressors in estuarine and marine communities: Assessing invasion impacts and interactions Invasions by non-indigenous species (NIS) are recognized as important stressors of many communities throughout the world.
Here, we evaluated available data on the role of NIS in marine and estuarine communities and their interactions with other anthropogenic stressors, using an intensive analysis of the Chesapeake Bay region as a case study. First, we reviewed the reported ecological impacts of 196 species that occur in tidal waters of the bay, including species that are known invaders as well as some that are cryptogenic (i.e., of uncertain origin). Second, we compared the impacts reported in and out of the bay region for the same 54 species of plants and fish from this group that regularly occur in the region\u2019s tidal waters. Third, we assessed the evidence for interaction in the distribution or performance of these 54 plant and fish species within the bay and other stressors. Of the 196 known and possible NIS, 39 (20%) were thought to have some significant impact on a resident population, community, habitat, or process within the bay region. However, quantitative data on impacts were found for only 12 of the 39, representing 31% of this group and 6% of all 196 species surveyed. The patterns of reported impacts in the bay for plants and fish were nearly identical: 29% were reported to have significant impacts, but quantitative impact data existed for only 7% (4/54) of these species. In contrast, 74% of the same species were reported to have significant impacts outside of the bay, and some quantitative impact data were found for 44% (24/54) of them. Although it appears that 20% of the plant and fish species in our analysis may have significant impacts in the bay region based upon impacts measured elsewhere, we suggest that studies outside the region cannot reliably predict such impacts. We surmise that quantitative impact measures for individual bays or estuaries generally exist for <5% of the NIS present, and many of these measures are not particularly informative.
Despite the increasing knowledge of marine invasions at many sites, it is evident that we understand little about the full extent and variety of the impacts they create\u2014singly and cumulatively. Given the multiple anthropogenic stressors that overlap with NIS in estuaries, we predict NIS\u2010stressor interactions play an important role in the pattern and impact of invasions." }, { "instance_id": "R57101xR57020", "comparison_id": "R57101", "paper_id": "R57020", "text": "Differentiating successful and failed molluscan invaders in estuarine ecosystems ABSTRACT: Despite mounting evidence of invasive species\u2019 impacts on the environment and society, our ability to predict invasion establishment, spread, and impact is inadequate. Efforts to explain and predict invasion outcomes have been limited primarily to terrestrial and freshwater ecosystems. Invasions are also common in coastal marine ecosystems, yet to date predictive marine invasion models are absent. Here we present a model based on biological attributes associated with invasion success (establishment) of marine molluscs that compares successful and failed invasions from a group of 93 species introduced to San Francisco Bay (SFB) in association with commercial oyster transfers from eastern North America (ca. 1869 to 1940). A multiple logistic regression model correctly classified 83% of successful and 80% of failed invaders according to their source region abundance at the time of oyster transfers, tolerance of low salinity, and developmental mode. We tested the generality of the SFB invasion model by applying it to 3 coastal locations (2 in North America and 1 in Europe) that received oyster transfers from the same source and during the same time as SFB.
The model correctly predicted 100, 75, and 86% of successful invaders in these locations, indicating that abundance, environmental tolerance (ability to withstand low salinity), and developmental mode not only explain patterns of invasion success in SFB, but more importantly, predict invasion success in geographically disparate marine ecosystems. Finally, we demonstrate that the proportion of marine molluscs that succeeded in the latter stages of invasion (i.e. that establish self-sustaining populations, spread and become pests) is much greater than has been previously predicted or shown for other animals and plants. KEY WORDS: Invasion \u00b7 Bivalve \u00b7 Gastropod \u00b7 Mollusc \u00b7 Marine \u00b7 Oyster \u00b7 Vector \u00b7 Risk assessment" }, { "instance_id": "R57101xR56965", "comparison_id": "R57101", "paper_id": "R56965", "text": "Mistakes in the analysis of exotic species establishment: source pool designation and correlates of introduction success among parrots (Aves: Psittaciformes) of the world Aim To evaluate the effect of mis-specifying the correct comparison of species pools in the study of species characteristics associated with the biological introduction of exotic species. Methods We use a high quality data set on biological introductions of parrots (Aves: Psittaciformes). These data allow us to examine relationships between life history traits and probability of successful transition through an introduction stage when the species pool is both correctly and incorrectly specified. Results For the establishment of introduced parrot species, nearly half of the predictor variables showed different patterns of significance when an incorrect pool was specified. Multivariate analysis identified entirely different sets of variables as independent predictors of establishment success, depending on the species pool used.
Correct pool specification identified that introduced parrot species have been more likely to establish if they have broader diets and are more sedentary. Main conclusions Conclusions from the analysis of biological introductions are likely to depend on the specification of the species pool for such analyses. In the analysis of parrot introductions, this was particularly apparent in establishment success following release. Further studies that analyse the introduction pathway need to examine the effects of pool mis-specification so that the generality of our results can be assessed." }, { "instance_id": "R57101xR55002", "comparison_id": "R57101", "paper_id": "R55002", "text": "Factors explaining alien plant invasion success in a tropical ecosystem differ at each stage of invasion Summary 1. Understanding why some alien plant species become invasive when others fail is a fundamental goal in invasion ecology. We used detailed historical planting records of alien plant species introduced to Amani Botanical Garden, Tanzania and contemporary surveys of their invasion status to assess the relative ability of phylogeny, propagule pressure, residence time, plant traits and other factors to explain the success of alien plant species at different stages of the invasion process. 2. Species with native ranges centred in the tropics and with larger seeds were more likely to regenerate, whereas naturalization success was explained by longer residence time, faster growth rate, fewer seeds per fruit, smaller seed mass and shade tolerance. 3. Naturalized species spreading greater distances from original plantings tended to have more seeds per fruit, whereas species dispersed by canopy-feeding animals and with native ranges centred on the tropics tended to have spread more widely in the botanical garden. Species dispersed by canopyfeeding animals and with greater seed mass were more likely to be established in closed forest. 4. 
Phylogeny alone made a relatively minor contribution to the explanatory power of statistical models, but a greater proportion of variation in spread within the botanical garden and in forest establishment was explained by phylogeny alone than for other models. Phylogeny jointly with variables also explained a greater proportion of variation in forest establishment than in other models. Phylogenetic correction weakened the importance of dispersal syndrome in explaining compartmental spread, seed mass in the forest establishment model, and all factors except for growth rate and residence time in the naturalization model. 5. Synthesis . This study demonstrates that it matters considerably how invasive species are defined when trying to understand the relative ability of multiple variables to explain invasion success. By disentangling different invasion stages and using relatively objective criteria to assess species status, this study highlights that relatively simple models can help to explain why some alien plants are able to naturalize, spread and even establish in closed tropical forests." }, { "instance_id": "R57101xR57042", "comparison_id": "R57101", "paper_id": "R57042", "text": "Comprehensive review of the records of the biota of the Indian Seas and introduction of non-indigenous species 1. Comparison of the pre-1960 faunal survey data for the Indian Seas with that for the post-1960 period showed that 205 non-indigenous taxa were introduced in the post-1960 period; shipping activity is considered a plausible major vector for many of these introductions. 2. Of the non-indigenous taxa, 21% were fish, followed by Polychaeta (<11%), Algae (10%), Crustacea (10%), Mollusca (10%), Ciliata (8%), Fungi (7%), Ascidians (6%) and minor invertebrates (17%). 3. An analysis of the data suggests a correspondence between the shipping routes between India and various regions. 
There were 75 species common to the Indian Seas and the coastal seas of China and Japan, 63 to the Indo-Malaysian region, 42 to the Mediterranean, 40 and 34 to western and eastern Atlantic respectively, and 41 to Australia and New Zealand. A further 33 species were common to the Caribbean region, 32 to the eastern Pacific, 14 and 24 to the west and east coasts of Africa respectively, 18 to the Baltic, 15 to the middle Arabian Gulf and Red Sea, and 10 to the Brazilian coast. 4. The Indo-Malaysian region can be identified as a centre of xenodiversity for biota from Southeast Asia, China, Japan, Philippines and Australian regions. 5. Of the introduced species, the bivalve Mytilopsis sallei and the serpulid Ficopomatus enigmaticus have become pests in the Indian Seas, consistent with the Williamson and Fitter \u2018tens rule\u2019. Included amongst the biota with economic impact are nine fouling and six wood-destroying organisms. 6. Novel occurrences of the human pathogenic vibrios, e.g. Vibrio parahaemolyticus, non-01 Vibrio cholerae, Vibrio vulnificus and Vibrio mimicus and the harmful algal bloom species Alexandrium spp. and Gymnodinium nagasakiense in the Indian coastal waters could be attributed to ballast water introductions. 7. Introductions of alien biota could pose a threat to the highly productive tropical coastal waters, estuaries and mariculture sites and could cause economic impacts and ecological surprises. 8. In addition to strict enforcement of a national quarantine policy on ballast water discharges, long-term multidisciplinary research on ballast water invaders is crucial to enhance our understanding of the biodiversity and functioning of the ecosystem. Copyright \u00a9 2005 John Wiley & Sons, Ltd." 
}, { "instance_id": "R57101xR57018", "comparison_id": "R57101", "paper_id": "R57018", "text": "Sexual selection and the risk of extinction of introduced birds on oceanic islands We test the hypothesis that response to sexual selection increases the risk of extinction by examining the fate of plumage-monomorphic versus plumage-dimorphic bird species introduced to the tropical islands of Oahu and Tahiti. We assume that plumage dimorphism is a response to sexual selection and we assume that the males of plumage-dimorphic species experience stronger sexual selection pressures than males of monomorphic species. On Oahu, the extinction rate for dimorphic species, 59%, is significantly greater than for monomorphic species, 23%. On Tahiti, only 7% of the introduced dimorphic species have persisted compared to 22% for the introduced monomorphic species. For the combined Oahu and Tahiti data sets, addition of plumage-by-fate interaction significantly improves the fit of the log-linear model, fate+island+plumage+(fate-by-island)+(island-by-plumage). To control for phylogenetic constraint, a logistic regression model is analyzed using a data subset consisting of only the two best represented families, Fringillidae and Passeridae. Here, plumage and the plumage-by-family interaction are significant. Plumage is significantly associated with increased risk of extinction for passerids but insignificantly associated for fringillids. Thus, the hypothesis that response to sexual selection increases the risk of extinction is supported for passerids and for the data set as a whole. The probability of extinction was correlated with the number of species already introduced. Thus, species that have responded to sexual selection may be poorer interspecific competitors when their communities contain many other species." }, { "instance_id": "R57101xR57092", "comparison_id": "R57101", "paper_id": "R57092", "text": "The varying success of invaders . ........ ... .. . . ....... ..... . ......... 
. ---------- . .... ........ .... . .... . ..... . ------------. ........ .... . ... ..... . . .. . .... . . ... . . .. . ........ .. . . ... . ..... .. . . .... .... .... ... ..... . . . .... .. . .. ... . . . . .. .. ...... . ....... ... ... . ... .. . . . . . .... ....... .... ......... ... ........ ........ ..." }, { "instance_id": "R57101xR56972", "comparison_id": "R57101", "paper_id": "R56972", "text": "Parental investment and fecundity, but not brain size, are associated with establishment success in introduced fishes Summary 1 Classical theory predicts that colonizing ability should increase with fecundity. Additionally, it has recently been shown that successful establishment of birds was correlated with relative brain size, which was suggested as possibly universal among vertebrates. 2 I conducted a comparative study of establishment success in global fish introductions, controlling for regional geographic differences, to test these hypothesized correlates. 3 In 133 introductions of 17 fish species, establishment success was negatively associated with fecundity while there was no evidence for an effect of relative brain size. In analysis of partially overlapping data, there was no evidence of a correlation between relative brain size and establishment rate across 39 species. 4 One explanation for the negative association with fecundity is that parental investment might be more important to establishment than fecundity. In 126 introductions of 14 species, reproductive behaviours associated with parental investment were significantly associated with establishment success. These results suggest that the correlation between brain size and establishment success is not universal." 
}, { "instance_id": "R57101xR57061", "comparison_id": "R57101", "paper_id": "R57061", "text": "Patterns of extinction in the introduced Hawaiian avifauna: a reexamination of the role of competition Among introduced passeriform and columbiform birds of the six major Hawaiian islands, some species (including most of those introduced early) may have an intrinsically high probability of successful invasion, whereas others (including many of those introduced from 1900 through 1936) may be intrinsically less likely to succeed. This hypothesis accords well with the observation that, of the 41 species introduced on more than one of the Hawaiian islands, all but four either succeeded everywhere they were introduced or failed everywhere they were introduced, no matter what other species or how many other species were present. Other hypotheses, including competitive ones, are possible. However, most other patterns that have been claimed to support the hypothesis that competitive interactions have been key to which species survived are ambiguous. We propose that the following patterns are true: (1) Extinction rate as a function of number of species present (S) is not better fit by addition of an S\u00b2 term. (2) Bill-length differences between pairs of species that invaded together may tend to be less for pairs in which at least one species became extinct, but the result is easily changed by use of one reasonable set of conventions rather than another. In any event, the relationship of bill-length differences to resource overlap has not been established for these species. (3) Surviving forest passeriforms on Oahu may be overdispersed in morphological space, although the species pool used to construct the space may not have been the correct one. (4) Densities of surviving species on species-poor islands have not been shown to exceed those on species-rich islands."
}, { "instance_id": "R57101xR56102", "comparison_id": "R57101", "paper_id": "R56102", "text": "Are islands more susceptible to be invaded than continents? Birds say no Island communities are generally viewed as being more susceptible to invasion than those of mainland areas, yet empirical evidence is almost lacking. A species-by-species examination of introduced birds in two independent island-mainland comparisons is not consistent with this hypothesis. In the New Zealand-mainland Australia comparison, 16 species were successful in both regions, 19 always failed and only eight had mixed outcomes. Mixed results were observed less often than expected by chance, and in only 5 cases was the relationship in the predicted direction. This result is not biased by differences in introduction effort because, within species, the number of individuals released in New Zealand did not differ significantly from those released in mainland Australia. A similar result emerged in the Hawaiian islands-mainland USA comparison: among the 35 species considered, 15 were successful in both regions, seven always failed and 13 had mixed outcomes. In this occasion, the results fit well to those expected by chance, and in only seven cases was the relationship in the direction predicted. I therefore conclude that, if true, the view that islands are less resistant than continents to invasions is far from universal." }, { "instance_id": "R57101xR55039", "comparison_id": "R57101", "paper_id": "R55039", "text": "The Influence of Numbers Released on the Outcome of Attempts to Introduce Exotic Bird Species to New Zealand 1. Information on the approximate number of individuals released is available for 47 of the 133 exotic bird species introduced to New Zealand in the late 19th and early 20th centuries. Of these, 21 species had populations surviving in the wild in 1969-79. 
The long interval between introduction and assessment of outcome provides a rare opportunity to examine the factors correlated with successful establishment without the uncertainty of long-term population persistence associated with studies of short duration. 2. The probability of successful establishment was strongly influenced by the number of individuals released during the main period of introductions. Eighty-three per cent of species that had more than 100 individuals released within a 10-year period became established, compared with 21% of species that had less than 100 birds released. The relationship between the probability of establishment and number of birds released was similar to that found in a previous study of introductions of exotic birds to Australia. 3. It was possible to look for a within-family influence on the success of introduction of the number of birds released in nine bird families. A positive influence was found within seven families and no effect in two families. This preponderance of families with a positive effect was statistically significant. 4. A significant effect of body weight on the probability of successful establishment was found, and negative effects of clutch size and latitude of origin. However, the statistical significance of these effects varied according to whether comparison was or was not restricted to within-family variation. After applying the Bonferroni adjustment to significance levels, to allow for the large number of variables and factors being considered, only the effect of the number of birds released was statistically significant. 5. No significant effects on the probability of successful establishment were apparent for the mean date of release, the minimum number of years in which birds were released, the hemisphere of origin (northern or southern) and the size and diversity of latitudinal distribution of the natural geographical range.
}, { "instance_id": "R57101xR57072", "comparison_id": "R57101", "paper_id": "R57072", "text": "Environmental and economic impact assessment of alien and invasive fish species in Europe using the generic impact scoring system Invasions by alien species are one of the major threats to the native environment. There are multifold attempts to counter alien species, but limited resources for mitigation or eradication programmes makes prioritisation indispensable. We used the generic impact scoring system to assess the impact of alien fish species in Europe. It prioritises species, but also offers the possibility to compare the impact of alien invasive species between different taxonomic groups. For alien fish in Europe, we compiled a list of 40 established species. By literature research, we assessed the environmental impact (through herbivory, predation, competition, disease transmission, hybridisation and ecosystem alteration) and economic impact (on agriculture, animal production, forestry, human infrastructure, human health and human social life) of each species. The goldfish/gibel complex Carassius auratus/C. gibelio scored the highest impact points, followed by the grass carp Ctenopharyngodon idella and the topmouth gudgeon Pseudorasbora parva. According to our analyses, alien fish species have the strongest impact on the environment through predation, followed by competition with native species. Besides negatively affecting animal production (mainly in aquaculture), alien fish have no pronounced economic impact. At the species level, C. auratus/C. gibelio show similar impact scores to the worst alien mammals in Europe. This study indicates that the generic impact scoring system is useful to investigate the impact of alien fish, also allowing cross-taxa comparisons. Our results are therefore of major relevance for stakeholders and decision-makers involved in management and eradication of alien fish species." 
}, { "instance_id": "R57101xR57008", "comparison_id": "R57101", "paper_id": "R57008", "text": "Some alien birds have as severe an impact as the most effectual alien mammals in Europe Abstract Invasive alien species cause considerable economic and environmental damage. Nevertheless which species should be targeted first and exact control strategies are controversial matters. As no categorization of the impact of alien bird species is available so far, we adopted an impact scoring system for mammals to birds and scored the impact of the alien birds established in Europe. We investigated 26 established alien birds in Europe and compiled all known impact data for these species. The species with highest environmental impact were the Canada goose ( Branta canadensis ), sacred ibis ( Threskiornis aethiopicus ) and ruddy duck ( Oxyura jamaicensis ). The most severe impact on economy was exerted again by the Canada goose. Also the ring-necked parakeet ( Psittacula krameri ) and monk parakeet ( Myiopsitta monachus ) had high impact in this category. Combining these potential impact data with the current distribution generates a list of alien birds with highest actual impact. These two values can be used to prioritise preventive and control measures. In comparison to birds, mammals in general have higher potential and actual impact in Europe, but some bird species reach impact values as high as some of the worst mammal species. Still, these bird species \u2013 in contrast to mammals with high impact \u2013 are hardly targeted by control programmes. This study shows that there is no scientific reason for this. With the here presented scoring system we offer a decision tool to practitioners which supports them in finding an appropriate reaction to invasive birds." 
}, { "instance_id": "R57101xR57024", "comparison_id": "R57101", "paper_id": "R57024", "text": "Exotic species in the Great Lakes: a history of biotic crises and anthropogenic introductions Through literature review, we documented introductions of non-indigenous aquatic flora and fauna into the Great Lakes basin since the early 1800s. We focused on the origin, probable mechanism(s) of introduction, and the date and locality of first discovery of Great Lakes exotic species. The Laurentian Great Lakes have been subject to invasion by exotic species since settlement of the region by Europeans. Since the 1800s, 139 non-indigenous aquatic organisms have become established in the Great Lakes. The bulk of these organisms has been represented by plants (59), fishes (25), algae (24), and mollusks (14). Most species are native to Eurasia (55%) and the Atlantic Coast (13%). As human activity has increased in the Great Lakes watershed, the rate of introduction of exotic species has increased. Almost one-third of the organisms have been introduced in the past 30 years, a surge coinciding with the opening of the St. Lawrence Seaway in 1959. Five categories of entry mechanisms were identified: unintentional releases, ship-related introductions, deliberate releases, entry through or along canals, and movement along railroads and highways. Entry mechanisms were dominated by unintentional releases (29%) and ships (29%). Unintentional releases included escapees from cultivation and aquaculture, bait, aquarium, and other accidental releases. Ship-related introductions included ballast water (63%), solid ballast (31%), and fouling. Introductions via canals represent a small percentage of entries into the Great Lakes. We have identified 13 non-indigenous species (9%) that have substantially influenced the Great Lakes ecosystem, both economically and ecologically. 
The apparent lack of effects of 91 % of the exotic species in the Great Lakes does not mean that they have had little or no ecological impact. Alterations in community structure may predate modern investigations by decades or centuries, and the effects of many species have simply not been studied. As long as human activities provide the means through which future species can be transported into the Great Lakes basin, the largest freshwater resource in the world will continue to be at risk from the invasion of exotic organisms." }, { "instance_id": "R57101xR57097", "comparison_id": "R57101", "paper_id": "R57097", "text": "Fish and ships: relating dispersal frequency to success in biological invasions Abstract Most studies characterizing successful biological invaders emphasize those traits that help a species establish a new population. Invasions are, however, multi-phase processes with at least two phases, dispersal and introduction, that occur before establishment. Characteristics that enhance survival at any of these three phases will contribute to invasion success. Here, we synthesize information on the dispersal, introduction, and establishment of fishes mediated by ship ballast-water transport. We synthesize 54 reports of at least 31 fish species collected from ballast tanks (Phase 1), including 28 new reports from our recent studies (1986 to 1996). Our literature survey revealed 40 reports of 32 fish species whose introductions have been attributed to ballast transport (Phase 2), of which at least 24 survived to establish persistent populations (Phase 3). We detected little overlap at the species level between these two data sets (Phase 1 vs Phases 2 and 3), but patterns emerged at the family level. The Gobiidae (6 species), Clupeidae (4 species), and Gasterosteidae (1 species) were the most commonly found fish families in ballast tanks (Phase 1). 
The Gobiidae (13 species), Blenniidae (6 species) and Pleuronectidae (2 species) dominated the list of ballast-mediated introductions (Phase 2); gobies and blennies were the families most frequently established (Phase 3). The invasive success of gobies and blennies may be explained in part by their crevicolous nature: both groups seek refuge and lay eggs in small holes, and may take advantage of the ballast-intake holes on ship hulls. This behavior, not typically associated with invasive ability, may contribute to successful introduction and establishment by facilitating the dispersal phase of invasion. The failure of the pleuronectids to invade may reflect poor salinity match between donor and recipient regions. To develop a predictive framework of invasion success, organisms must be sampled at all three phases of the invasion process. Our comparison of two ballast sampling methods suggests that fishes have been undersampled in ballast-water studies, including our own, and that the role of ballast transport in promoting fish invasions has been underestimated." }, { "instance_id": "R57101xR55011", "comparison_id": "R57101", "paper_id": "R55011", "text": "The role of competition and introduction effort in the success of passeriform birds introduced to New Zealand The finding that passeriform birds introduced to the islands of Hawaii and Saint Helena were more likely to successfully invade when fewer other introduced species were present has been interpreted as strong support for the hypothesis that interspecific competition influences invasion success. I tested whether invasions were more likely to succeed when fewer species were present using the records of passeriform birds introduced to four acclimatization districts in New Zealand. I also tested whether introduction effort, measured as the number of introductions and the total number of birds released, could predict invasion outcomes, a result previously established for all birds introduced to New Zealand. 
I found patterns consistent with both competition and introduction effort as explanations for invasion success. However, data supporting the two explanations were confounded such that the greater success of invaders arriving when fewer other species were present could have been due to a causal relationship between invasion success and introduction effort. Hence, without data on introduction effort, previous studies may have overestimated the degree to which the number of potential competitors could independently explain invasion outcomes and may therefore have overstated the importance of competition in structuring introduced avian assemblages. Furthermore, I suggest that a second pattern in avian invasion success previously attributed to competition, the morphological overdispersion of successful invaders, could also arise as an artifact of variation in introduction effort." }, { "instance_id": "R57101xR56992", "comparison_id": "R57101", "paper_id": "R56992", "text": "From first reports to successful control: a plea for improved management of alien aquatic plant species in Germany Alien aquatic plant species can strongly affect all types of freshwater ecosystems. Their number has more than doubled between 1980 and 2009 in Germany, and currently 27 are known and their number is still increasing. Eleven have been classified as invasive, but only four are managed yet, mainly by weed cutting. Most of the alien aquatic plant species were probably introduced as aquarium and pond waste. Despite this fact, 18 of the 27 known alien species are traded as ornamentals for aquaria or garden ponds in German shops. Alien species can most successfully be controlled when their management starts as soon as possible after their introduction. In Germany, the delay between first records and start of management actions seems too long for successful control. 
The public awareness of alien aquatic plants and problems they can cause in Germany is still limited despite a number of recent projects. At present, Black lists are developed that help nature conservationists, stakeholders and politicians to select those alien species for which prevention measures should be implemented. These, however, are not legally binding and laws regulating trade in Black listed plant species are strongly needed to reduce their impact on the environment and economy." }, { "instance_id": "R57101xR57030", "comparison_id": "R57101", "paper_id": "R57030", "text": "A generic impact-scoring system applied to alien mammals in Europe We present a generic scoring system that compares the impact of alien species among members of large taxonomic groups. This scoring can be used to identify the most harmful alien species so that conservation measures to ameliorate their negative effects can be prioritized. For all alien mammals in Europe, we assessed impact reports as completely as possible. Impact was classified as either environmental or economic. We subdivided each of these categories into five subcategories (environmental: impact through competition, predation, hybridization, transmission of disease, and herbivory; economic: impact on agriculture, livestock, forestry, human health, and infrastructure). We assigned all impact reports to one of these 10 categories. All categories had impact scores that ranged from zero (minimal) to five (maximal possible impact at a location). We summed all impact scores for a species to calculate \"potential impact\" scores. We obtained \"actual impact\" scores by multiplying potential impact scores by the percentage of area occupied by the respective species in Europe. Finally, we correlated species' ecological traits with the derived impact scores. Alien mammals from the orders Rodentia, Artiodactyla, and Carnivora caused the highest impact. 
In particular, the brown rat (Rattus norvegicus), muskrat (Ondatra zibethicus), and sika deer (Cervus nippon) had the highest overall scores. Species with a high potential environmental impact also had a strong potential economic impact. Potential impact also correlated with the distribution of a species in Europe. Ecological flexibility (measured as number of different habitats a species occupies) was strongly related to impact. The scoring system was robust to uncertainty in knowledge of impact and could be adjusted with weight scores to account for specific value systems of particular stakeholder groups (e.g., agronomists or environmentalists). Finally, the scoring system is easily applicable and adaptable to other taxonomic groups." }, { "instance_id": "R57101xR57046", "comparison_id": "R57101", "paper_id": "R57046", "text": "Life-history traits of non-native fishes in Iberian watersheds across several invasion stages: a first approach Freshwater ecosystems are seriously imperiled by the spread of non-native fishes thus establishing profiles of their life-history characteristics is an emerging tool for developing conservation and management strategies. We did a first approach to determine characteristics of successful and failed non-native fishes in a Mediterranean-climate area, the Iberian Peninsula, for three stages of the invasion process: establishment, spread and integration. Using general linear models, we established which characteristics are most important for success at each invasion stage. Prior invasion success was a good predictor for all the stages of the invasion process. Biological variables relevant for more than one invasion stage were maximum adult size and size of native range. Despite these common variables, all models produced a different set of variables important for a successful invasion, demonstrating that successful invaders have a combination of biological traits that may favor success at all invasion stages. 
However, some differences were found in relation to published studies on fish invasions in other Mediterranean-climate areas, suggesting that characteristics of the recipient ecosystem are as relevant as the characteristics of the invading species." }, { "instance_id": "R57101xR57035", "comparison_id": "R57101", "paper_id": "R57035", "text": "Marketing time predicts naturalization of horticultural plants Horticulture is an important source of naturalized plants, but our knowledge about naturalization frequencies and potential patterns of naturalization in horticultural plants is limited. We analyzed a unique set of data derived from the detailed sales catalogs (1887-1930) of the most important early Florida, USA, plant nursery (Royal Palm Nursery) to detect naturalization patterns of these horticultural plants in the state. Of the 1903 nonnative species sold by the nursery, 15% naturalized. The probability of plants becoming naturalized increases significantly with the number of years the plants were marketed. Plants that became invasive and naturalized were sold for an average of 19.6 and 14.8 years, respectively, compared to 6.8 years for non-naturalized plants, and the naturalization of plants sold for 30 years or more is 70%. Unexpectedly, plants that were sold earlier were less likely to naturalize than those sold later. The nursery's inexperience, which caused them to grow and market many plants unsuited to Florida during their early period, may account for this pattern. Plants with pantropical distributions and those native to both Africa and Asia were more likely to naturalize (42%), than were plants native to other smaller regions, suggesting that plants with large native ranges were more likely to naturalize. Naturalization percentages also differed according to plant life form, with the most naturalization occurring in aquatic herbs (36.8%) and vines (30.8%). 
Plants belonging to the families Araceae, Apocynaceae, Convolvulaceae, Moraceae, Oleaceae, and Verbenaceae had higher than expected naturalization. Information theoretic model selection indicated that the number of years a plant was sold, alone or together with the first year a plant was sold, was the strongest predictor of naturalization. Because continued importation and marketing of nonnative horticultural plants will lead to additional plant naturalization and invasion, a comprehensive approach to address this problem, including research to identify and select noninvasive forms and types of horticultural plants, is urgently needed." }, { "instance_id": "R57101xR57032", "comparison_id": "R57101", "paper_id": "R57032", "text": "The importance of human mediation in species establishment: analysis of the alien flora in Estonia In order to analyse the mechanisms of the crossing of invasion phases by alien species, a comprehensive 787-species database of all alien neophytes ever recorded in the Estonian flora was compiled. The invasiveness (invasive status, abundance type, introduction mode, residence time, etc.) of each species was estimated and analysed. Our analysis shows that humans have played a more profound role in fostering plant naturalisations than by acting simply as dispersers - the percentage of naturalisation among the deliberately introduced species is considerably higher than among the accidentally introduced taxa. Cultivation has preferred long-lived species that have advantages for reaching greater abundance and naturalised status in the area, especially in (semi-)natural communities. Invasion success also increases with alien species residence time in the study area. There is definitely a need, in the future, to regulate introductions, especially to control the ornamental plant trade." 
}, { "instance_id": "R57101xR56951", "comparison_id": "R57101", "paper_id": "R56951", "text": "The potential impact of the New Zealand flatworm, a predator of earthworms, in western Europe The New Zealand flatworm Arthurdendyus triangulatus (=Artioposthia triangulata) is an example of an invasive organism that, by reducing lumbricid earthworm populations, could have a major impact on soil ecosystems in Britain and the Faroe Islands. How it was introduced into the British Isles is not known, but like many invasive species, it is suspected that it was introduced by humans and was associated with the trade between New Zealand and Britain. Once established in Britain it found in the large, readily available earthworm population a niche that it could exploit. The microclimate of the forests in the center and south of the South Island of New Zealand from whence the flatworm came is similar to that in parts of the British Isles and consequently conducive to its survival. Although when compared with many other invertebrate introductions (e.g., insects) the flatworm's rate of increase has been slow, a retrospective study strongly suggested that, in Scotland, they spread from botanic gardens to horti..." }, { "instance_id": "R57101xR55013", "comparison_id": "R57101", "paper_id": "R55013", "text": "High predictability in introduction outcomes and the geographical range size of introduced Australian birds: a role for climate Summary 1 We investigated factors hypothesized to influence introduction success and subsequent geographical range size in 52 species of bird that have been introduced to mainland Australia. 2 The 19 successful species had been introduced more times, at more sites and in greater overall numbers. Relative to failed species, successfully introduced species also had a greater area of climatically suitable habitat available in Australia, a larger overseas range size and were more likely to have been introduced successfully outside Australia. 
After controlling for phylogeny these relationships held, except that with overseas range size and, in addition, larger-bodied species had a higher probability of introduction success. There was also a marked taxonomic bias: gamebirds had a much lower probability of success than other species. A model including five of these variables explained perfectly the patterns in introduction success across-species. 3 Of the successful species, those with larger geographical ranges in Australia had a greater area of climatically suitable habitat, traits associated with a faster population growth rate (small body size, short incubation period and more broods per season) and a larger overseas range size. The relationships between range size in Australia, the extent of climatically suitable habitat and overseas range size held after controlling for phylogeny. 4 We discuss the probable causes underlying these relationships and why, in retrospect, the outcome of bird introductions to Australia are highly predictable." }, { "instance_id": "R57101xR56996", "comparison_id": "R57101", "paper_id": "R56996", "text": "Invasion success of vertebrates in Europe and North America Species become invasive if they (i) are introduced to a new range, (ii) establish themselves, and (iii) spread. To address the global problems caused by invasive species, several studies investigated steps ii and iii of this invasion process. However, only one previous study looked at step i and examined the proportion of species that have been introduced beyond their native range. We extend this research by investigating all three steps for all freshwater fish, mammals, and birds native to Europe or North America. A higher proportion of European species entered North America than vice versa. However, the introduction rate from Europe to North America peaked in the late 19th century, whereas it is still rising in the other direction. 
There is no clear difference in invasion success between the two directions, so neither the imperialism dogma (that Eurasian species are exceptionally successful invaders) is supported, nor is the contradictory hypothesis that North America offers more biotic resistance to invaders than Europe because of its less disturbed and richer biota. Our results do not support the tens rule either: that approximately 10% of all introduced species establish themselves and that approximately 10% of established species spread. We find a success of approximately 50% at each step. In comparison, only approximately 5% of native vertebrates were introduced in either direction. These figures show that, once a vertebrate is introduced, it has a high potential to become invasive. Thus, it is crucial to minimize the number of species introductions to effectively control invasive vertebrates." }, { "instance_id": "R57101xR56959", "comparison_id": "R57101", "paper_id": "R56959", "text": "Interception frequency of exotic bark and ambrosia beetles (Coleoptera: Scolytinae) and relationship with establishment in New Zealand and worldwide Scolytinae species are among the most damaging forest pests, and many of them are invasive. Over 1500 Scolytinae interceptions were recorded at New Zealand's borders between 1950 and 2000. Among the 103 species were Dendroctonus ponderosae, Ips typographus, and other high-risk species, but actual arrivals probably included many more species. Interceptions were primarily associated with dunnage, casewood (crating), and sawn timber, and originated from 59 countries, mainly from Europe, Australasia, northern Asia, and North America. New Zealand and United States interception data were highly correlated, and 7 of the 10 most intercepted species were shared. Interception frequency and establishment in New Zealand were not clearly related. 
By combining New Zealand and United States interceptions of true bark beetles we obtained data on species found in shipments from around the world. Logistic regression analysis showed that frequently intercepted species were about four times as likely as rarely intercepted species to be established somewhere. Interception records of wood and bark borers are valuable for the prediction of invaders and for our general understanding of invasions. The use of alternatives to solid wood packaging, such as processed wood, should be encouraged to reduce the spread of invasive wood and bark borers." }, { "instance_id": "R57101xR56096", "comparison_id": "R57101", "paper_id": "R56096", "text": "Across islands and continents, mammals are more successful invaders than birds Many invasive species cause ecological or economic damage, and the fraction of introduced species that become invasive is an important determinant of the overall costs caused by invaders. According to the widely quoted tens rule, about 10% of all introduced species establish themselves and about 10% of these established species become invasive. Global taxonomic differences in the fraction of species becoming invasive have not been described. In a global analysis of mammal and bird introductions, I show that both mammals and birds have a much higher invasion success than predicted by the tens rule, and that mammals have a significantly higher success than birds. Averaged across islands and continents, 79% of mammals and 50% of birds introduced have established themselves and 63% of mammals and 34% of birds established have become invasive. My analysis also does not support the hypothesis that islands are more susceptible to invaders than continents, as I did not find a significant relationship between invasion success and the size of the island or continent to which the species were introduced. The data set used in this study has a number of limitations, e.g. 
information on propagule pressure was not available at this global scale, so understanding the mechanisms behind the observed patterns has to be postponed to future studies." }, { "instance_id": "R57101xR57088", "comparison_id": "R57101", "paper_id": "R57088", "text": "The analysis and modelling of British invasions The SCOPE programme on the ecology of biological invasions addresses three questions: What are the factors that determine whether a species will become an invader or not? What are the site properties which determine whether an ecological system will be relatively prone to or resistant to invasion? How should management systems be developed to best advantage, given the knowledge gained by attempting to answer the first two questions? The answers that have been offered to these questions earlier, and during the course of the programme, are reviewed. The consensus is that, although certain habitat and biological features increase the probability of invasion and establishment, these features are neither necessary nor sufficient, and that the prediction of invasion is not yet feasible. These points are illustrated by examples and generalizations from a survey of British invaders. The probability that an established invader will be a pest in Britain seems to be around 10% . Mathematical modelling may help in understanding and, later, in predicting invasions. Models indicate that establishment may be more critical than spread, and that a successful invader will spread at a constant linear speed. Models and data suggest that both an accelerating rate of spread and occasional major jumps can be expected; consequently, efforts to eliminate an invader at an early stage will be the most effective." 
}, { "instance_id": "R57101xR56953", "comparison_id": "R57101", "paper_id": "R56953", "text": "Determinants of establishment success for introducted exotic mammals We conducted comparisons for exotic mammal species introduced to New Zealand (28 successful, 4 failed), Australia (24, 17) and Britain (15, 16). Modelling of variables associated with establishment success was constrained by small sample sizes and phylogenetic dependence, so our results should be interpreted with caution. Successful species were subject to more release events, had higher climate matches between their overseas geographic range and their country of introduction, had larger overseas geographic range sizes and were more likely to have established an exotic population elsewhere than was the case for failed species. Of the mammals introduced to New Zealand, successful species also had larger areas of suitable habitat than did failed species. Our findings may guide risk assessments for the import of live mammals to reduce the rate new species establish in the wild." }, { "instance_id": "R57101xR57075", "comparison_id": "R57101", "paper_id": "R57075", "text": "How well do we understand the impacts of alien species on ecosystem services? A pan-European, cross-taxa assessment Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates \u2013 in terrestrial, freshwater, and marine environments \u2013 on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. 
Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect \u201csupporting\u201d, \u201cprovisioning\u201d, \u201cregulating\u201d, and \u201ccultural\u201d services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe." }, { "instance_id": "R57101xR56947", "comparison_id": "R57101", "paper_id": "R56947", "text": "Predictors of introduction success in the South Florida avifauna Biological invasions are an increasing global challenge, for which single-species studies and analyses focused on testing single hypotheses of causation in isolation are unlikely to provide much additional insight. Species interact with other species to create communities, which derive from species interactions and from the interactions of species with the scale specific elements of the landscape that provide suitable habitat and exploitable resources. I used logistic regression analysis to sort among potential intrinsic, community and landscape variables that theoretically influence introduction success. I utilized the avian fauna of the Everglades of South Florida, and the variables body mass, distance to nearest neighbor (in terms of body mass), year of introduction, presence of congeners, guild membership, continent of origin, distribution in a body mass aggregation or gap, and distance to body-mass aggregation edge (in terms of body mass). Two variables were significant predictors of introduction success. Introduced avian species whose body mass placed them nearer to a body-mass aggregation edge and further from their neighbor were more likely to become successfully established. 
This suggests that community interactions, and community level phenomena, may be better understood by explicitly incorporating scale." }, { "instance_id": "R57101xR55136", "comparison_id": "R57101", "paper_id": "R55136", "text": "Correlates of Introduction Success in Exotic New Zealand Birds Whether or not a bird species will establish a new population after invasion of uncolonized habitat depends, from theory, on its life-history attributes and initial population size. Data about initial population sizes are often unobtainable for natural and deliberate avian invasions. In New Zealand, however, contemporary documentation of introduction efforts allowed us to systematically compare unsuccessful and successful invaders without bias. We obtained data for 79 species involved in 496 introduction events and used the present-day status of each species as the dependent variable in fitting multiple logistic regression models. We found that introduction efforts for species that migrated within their endemic ranges were significantly less likely to be successful than those for nonmigratory species with similar introduction efforts. Initial population size, measured as number of releases and as the minimum number of propagules liberated in New Zealand, significantly increased the probability of translocation success. A null model showed that species released more times had a higher probability per release of successful establishment. Among 36 species for which data were available, successful invaders had significantly higher natality/mortality ratios. Successful invaders were also liberated at significantly more sites. Invasion of New Zealand by exotic birds was therefore primarily related to management, an outcome that has implications for conservation biology." 
}, { "instance_id": "R57101xR57083", "comparison_id": "R57101", "paper_id": "R57083", "text": "Invaders, weeds and the risk from genetically manipulated organisms Invaders, weeds and colonizers comprise different but overlapping sets of species. The probability of successful invasion is low. The 10:10 rule state that 10% of introduced speices (those with feral individuals) become established, 10% of established species (those with self-sustaining populations) become pests. The rule gives an adequate fit to British plant data. The rule predicts that invaders will be rarer than natives. This is shown for British Anatidae. There is a continuous spectrum of perceived weediness. Although this spectrum is significantly related to Baker characters, neither those characters or any others can usefully predict which species will be weeds over a wide range of species. Characters tuned to sets of closely related species shown more promise. A study of BritishImpatiens shows that the characters responsible for critical ecological behaviour are still obscure. Small genetic changes can cause large ecological changes. GMOs will have characters entirely new to that species' evolutionary history. While most will have little ecological effect, a few may be ecologically and economically damaging. A sensible programme of field trials and monitoring is justified to minimize the risk." }, { "instance_id": "R57101xR57028", "comparison_id": "R57101", "paper_id": "R57028", "text": "Accounting for differential success in the biological control of homopteran and lepidopteran pests One of the strongest patterns in the historical record of biological control is that programmes targeted against lepidopteran pests have been far less successful than those targeted against homopteran pests. 
Despite fueling considerable interest in the theory of host-parasitoid interactions, biological control has few unifying principles and no theoretical basis for understanding the differential pattern of success against these two pest groups. Potential explanations considered here include competitive limitation of natural enemy establishment, the influence of antagonistic parasitoid interactions, generation time ratio, and gregarious parasitoid development. An analysis of the biological control record showed that on average six natural enemies have been introduced per pest for both pest groups, providing no evidence of a differential intensity of competition. Similarly, use of a discrete time host-parasitoid model showed that antagonistic interactions that are common among parasitoids of Lepidoptera should not limit the success of biological control as such interactions can readily be counteracted by host refuge breaking. A similar model showed that a small generation time ratio (coupled with a broad window of host attack) and gregarious development can facilitate the suppression of pest abundance by parasitoids, and both were found to be positively associated with success in the biological control record. Of the four explanations considered here, generation time ratio coupled with a broad window of host attack appears to provide the best explanation for the differential pattern of success." }, { "instance_id": "R57101xR55125", "comparison_id": "R57101", "paper_id": "R55125", "text": "Behavioural flexibility predicts invasion success in birds introduced to New Zealand A fundamental question in ecology is whether there are evolutionary characteristics of species that make some better than others at invading new communities. In birds, nesting habits, sexually selected traits, migration, clutch size and body mass have been suggested as important variables, but behavioural flexibility is another obvious trait that has received little attention. 
Behavioural flexibility allows animals to respond more rapidly to environmental changes and can therefore be advantageous when invading novel habitats. Behavioural flexibility is linked to relative brain size and, for foraging, has been operationalised as the number of innovations per taxon reported in the short note sections of ornithology journals. Here, we use data on avian species introduced to New Zealand and test the link between forebrain size, feeding innovation frequency and invasion success. Relative brain size was, as expected, a significant predictor of introduction success, after removing the effect of introduction effort. Species with relatively larger brains tended to be better invaders than species with smaller ones. Introduction effort, migratory strategy and mode of juvenile development were also significant in the models. Pair-wise comparisons of closely related species indicate that successful invaders also showed a higher frequency of foraging innovations in their region of origin. This study provides the first evidence in vertebrates of a general set of traits, behavioural flexibility, that can potentially favour invasion success." }, { "instance_id": "R57101xR57048", "comparison_id": "R57101", "paper_id": "R57048", "text": "Predicting the number of ecologically harmful exotic species in an aquatic system Most introduced species apparently have little impact on native biodiversity, but the proliferation of human vectors that transport species worldwide increases the probability of a region being affected by high-impact invaders \u2010 i.e. those that cause severe declines in native species populations. Our study determined whether the number of high-impact invaders can be predicted from the total number of invaders in an area, after controlling for species\u2010area effects. These two variables are positively correlated in a set of 16 invaded freshwater and marine systems from around the world. 
The relationship is a simple linear function; there is no evidence of synergistic or antagonistic effects of invaders across systems. A similar relationship is found for introduced freshwater fishes across 149 regions. In both data sets, high-impact invaders comprise approximately 10% of the total number of invaders. Although the mechanism driving this correlation is likely a sampling effect, it is not simply the proportional sampling of a constant number of repeat-offenders; in most cases, an invader is not reported to have strong impacts on native species in the majority of regions it invades. These findings link vector activity and the negative impacts of introduced species on biodiversity, and thus justify management efforts to reduce invasion rates even where numerous invasions have already occurred." }, { "instance_id": "R57101xR56963", "comparison_id": "R57101", "paper_id": "R56963", "text": "Global patterns in the establishment and distribution of exotic birds Abstract I use three separate data bases to examine recipient community and site factors that might be influencing the establishment, persistence, and distribution of avian exotics. All in all, about half the variance between islands/regions in their numbers of successfully and unsuccessfully introduced species can be accounted for by recipient site-specific variables; the most important correlate of success is the number of native species extinctions over about the last 3000 years, which reflects the degree of human activity and habitat destruction and deterioration through intrusions of exotic predators, herbivores, and parasites. Consequently, the number of exotic species gained is close to the number of species lost through extinction. Even after controlling for avian extinctions, island area correlates positively with introduced species number. 
Invasion success does not decline significantly with the richness of the native avifauna (after controlling for the effects of extinctions and island area) nor the variety of potential mammalian predators. The relative proportion of extinct native species across islands/regions is negatively correlated with area and positively correlated with introduced species number and the number of endemic species. A strong correlation exists between the number of successes and the number of failures, attesting to the role of persistent acclimatization societies in increasing species numbers despite high failure rates. The relative success to failure rate increases with the number of extinct native species. The correlation between introductions and native extinctions seems to arise because native birds are usually more common, if not restricted, to native habitats while introduced birds are primary occupants of disturbed and open habitats. As more of an island's area is converted to urban, agricultural and disturbed habitats or altered through the introduction of herbivores and exotic predators, most natives lose good living space while most introduced birds, that frequent open and disturbed areas and have evolved in predator-rich areas, gain habitat. I find little support for the notion that rich avifaunas in themselves repel the establishment of avian invaders at the level of whole islands or archipelagoes. However, interactions between established exotics and natives may be influencing habitat distributions of species in both sets within islands. In both man-made habitats and native forest habitats, exotic species number and the relative abundance of exotic birds is negatively related to the number of native species. After accounting for this local variation, exotic species number is positively related to exotic species number for the entire island/region. 
In local surveys the relative abundance of exotic birds compared to native birds is affected by habitat (non-native habitats have more exotics) and also by the numbers of species of exotics and natives on the island. The relative importance of biotic interactions like competition, apparent competition through differential disease transmission or susceptibility, and predation in shaping the abundance and habitat affinities of exotics and native species can be difficult to unravel when regional effects are so important." }, { "instance_id": "R57101xR56990", "comparison_id": "R57101", "paper_id": "R56990", "text": "Alien aquatic plant species in European countries Hussner A (2012). Alien aquatic plant species in European countries. Weed Research 52, 297\u2013306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region.
As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe." }, { "instance_id": "R57501xR57109", "comparison_id": "R57501", "paper_id": "R57109", "text": "The abiotic and biotic factors limiting establishment of predatory fishes at their expanding northern range boundaries in Ontario, Canada There is a poor understanding of the importance of biotic interactions in determining species distributions with climate change. Theory from invasion biology suggests that the success of species introductions outside of their historical ranges may be either positively (biotic acceptance) or negatively (biotic resistance) related to native biodiversity. Using data on fish community composition from two survey periods separated by approximately 28 years during which climate was warming, we examined the factors influencing the establishment of three predatory centrarchids: Smallmouth Bass (Micropterus dolomieu), Largemouth Bass (M. salmoides), and Rock Bass (Ambloplites rupestris) in lakes at their expanding northern range boundaries in Ontario. Variance partitioning demonstrated that, at a regional scale, abiotic factors play a stronger role in determining the establishment of these species than biotic factors. Pairing lakes within watersheds where each species had established with lakes sharing similar abiotic conditions where the species had not established revealed both positive and negative relationships between the establishment of centrarchids and the historical presence of other predatory species. The establishment of these species near their northern range boundaries is primarily determined by abiotic factors at a regional scale; however, biotic factors become important at the lake-to-lake scale. 
Studies of exotic species invasions have previously highlighted how spatial scale mediates the importance of abiotic vs. biotic factors on species establishment. Our study demonstrates how concepts from invasion biology can inform our understanding of the factors controlling species distributions with changing climate." }, { "instance_id": "R57501xR57171", "comparison_id": "R57501", "paper_id": "R57171", "text": "Invasion of exotic plant species in tallgrass prairie fragments The tallgrass prairie is one of the most severely affected ecosystems in North America. As a result of extensive conversion to agriculture during the last century, as little as 1% of the original tallgrass prairie remains. The remaining fragments of tallgrass prairie communities have conservation significance, but questions remain about their viability and importance to conservation. We investigated the effects of fragment size, native plant species diversity, and location on invasion by exotic plant species at 25 tallgrass prairie sites in central North America at various geographic scales. We used exotic species richness and relative cover as measures of invasion. Exotic species richness and cover were not related to area for all sites considered together. There were no significant relationships between native species richness and exotic species richness at the cluster and regional scale or for all sites considered together. At the local scale, exotic species richness was positively related to native species richness at four sites and negatively related at one. The 10 most frequently occurring and abundant exotic plant species in the prairie fragments were cool-season, or C3, species, in contrast to the native plant community, which was dominated by warm-season, or C4, species. This suggests that timing is important to the success of exotic species in the tallgrass prairie.
Our study indicates that some small fragments of tallgrass prairie are relatively intact and should not be overlooked as long-term refuges for prairie species, sources of genetic variability, and material for restoration." }, { "instance_id": "R57501xR57146", "comparison_id": "R57501", "paper_id": "R57146", "text": "Aquatic plant community invasibility and scale-dependent patterns in native and invasive species richness Invasive species richness often is negatively correlated with native species richness at the small spatial scale of sampling plots, but positively correlated in larger areas. The pattern at small scales has been interpreted as evidence that native plants can competitively exclude invasive species. Large-scale patterns have been understood to result from environmental heterogeneity, among other causes. We investigated species richness patterns among submerged and floating-leaved aquatic plants (87 native species and eight invasives) in 103 temperate lakes in Connecticut (northeastern USA) and found neither a consistently negative relationship at small (3-m2) scales, nor a positive relationship at large scales. Native species richness at sampling locations was uncorrelated with invasive species richness in 37 of the 60 lakes where invasive plants occurred; richness was negatively correlated in 16 lakes and positively correlated in seven. No correlation between native and invasive species richness was found at larger spatial scales (whole lakes and counties). Increases in richness with area were uncorrelated with abiotic heterogeneity. Logistic regression showed that the probability of occurrence of five invasive species increased in sampling locations (3 m2, n = 2980 samples) where native plants occurred, indicating that native plant species richness provided no resistance against invasion.
However, the probability of three invasive species' occurrence declined as native plant density increased, indicating that density, if not species richness, provided some resistance with these species. Density had no effect on occurrence of three other invasive species. Based on these results, native species may resist invasion at small spatial scales only in communities where density is high (i.e., in communities where competition among individuals contributes to community structure). Most hydrophyte communities, however, appear to be maintained in a nonequilibrial condition by stress and/or disturbance. Therefore, most aquatic plant communities in temperate lakes are likely to be vulnerable to invasion." }, { "instance_id": "R57501xR57317", "comparison_id": "R57501", "paper_id": "R57317", "text": "Exotic plants on Lord Howe Island: distribution in space and time, 1853-1981 One hundred and seventy-three exotic angiosperms form 48.2% of the angiosperm flora of Lord Howe Island (310 35'S, 1590 05'E) in the south Pacific Ocean. The families Poaceae (23%) and Asteraceae (13%) dominate the exotic flora. Some 30% are native to the Old World, 26% from the New World and 14% from Eurasia. Exotics primarily occur on heavily disturbed areas but c. 10% are widely distributed in undisturbed vegetation. Analysis of historical records, eleven species lists over the 128 years 1853-1981, shows that invasion has been a continuous process at an exponential rate. Exotics have been naturalized at the overall rate of 1.3 species y-1. Most exotics were deliberately introduced as pasture species or accidentally as contaminants although ornamental plants are increasing. Exotics show some evidence of invading progressively less disturbed habitats but the response of each species is individualistic. As introduction of exotics is a social rather than an ecological problem, the present pattern will continue." 
}, { "instance_id": "R57501xR52120", "comparison_id": "R57501", "paper_id": "R52120", "text": "Plant functional group diversity as a mechanism for invasion resistance A commonly cited mechanism for invasion resistance is more complete resource use by diverse plant assemblages with maximum niche complementarity. We investigated the invasion resistance of several plant functional groups against the nonindigenous forb Spotted knapweed (Centaurea maculosa). The study consisted of a factorial combination of seven functional group removals (groups singularly or in combination) and two C. maculosa treatments (addition vs. no addition) applied in a randomized complete block design replicated four times at each of two sites. We quantified aboveground plant material nutrient concentration and uptake (concentration \u00d7 biomass) by indigenous functional groups: grasses, shallow-rooted forbs, deep-rooted forbs, spikemoss, and the nonindigenous invader C. maculosa. In 2001, C. maculosa density depended upon which functional groups were removed. The highest C. maculosa densities occurred where all vegetation or all forbs were removed. Centaurea maculosa densities were the lowest in plots where nothing, shallow-rooted forbs, deep-rooted forbs, grasses, or spikemoss were removed. Functional group biomass was also collected and analyzed for nitrogen, phosphorus, potassium, and sulphur. Based on covariate analyses, postremoval indigenous plot biomass did not relate to invasion by C. maculosa. Analysis of variance indicated that C. maculosa tissue nutrient percentage and net nutrient uptake were most similar to indigenous forb functional groups. Our study suggests that establishing and maintaining a diversity of plant functional groups within the plant community enhances resistance to invasion. Indigenous plants of functionally similar groups as an invader may be particularly important in invasion resistance."
}, { "instance_id": "R57501xR57128", "comparison_id": "R57501", "paper_id": "R57128", "text": "How does Reynoutria invasion fit the various theories of invasibility? Abstract Questions: 1. How does species richness of recipient communities affect Reynoutria invasion? 2. How does Reynoutria invasion change host community structure? 3. Are there any differences in habitat preferences among three closely related Reynoutria taxa? 4. How does the genetic structure of Reynoutria populations change along the course of a river? Location: River Jizera basin, north Bohemia, Czech Republic. Methods: Nine 0.25 km2 plots were chosen along the river. Within each plot all main habitat types were determined and sampled using the Braun-Blanquet scale to determine the invasibility of various communities. The patches invaded by Reynoutria taxa and surrounding Reynoutria-free vegetation in the same habitat type were sampled as relev\u00e9 pairs to compare the composition of invaded and non-invaded vegetation. In addition, to characterize the genetic structure of Reynoutria populations along the river, 30 samples from different clones were collected. Results and conclusions: 1. The species richness of communities has no influence on the success of Reynoutria invasion in the area studied. The combination of environmental conditions and propagule spread is more important to the invasion success than the number of species in the host community. 2. Reynoutria invasion greatly reduces species diversity. 3. R. japonica invaded more habitat types than R. sachalinensis and R. \u00d7 bohemica. The hybrid R. \u00d7 bohemica outcompetes the parental taxa at sites where both taxa co-occur. 4. Isozyme analysis revealed phenotype variability in the hybrid in contrast to the parental taxa. Different hybrid phenotypes are distributed randomly on the middle and lower reaches of the River Jizera; one of them dominates and the other three occur occasionally. 
This pattern supports the hypothesis that sexual reproduction occasionally occurs within Reynoutria taxa. Nomenclature: Ehrendorfer (1973)." }, { "instance_id": "R57501xR57179", "comparison_id": "R57501", "paper_id": "R57179", "text": "Spatial heterogeneity explains the scale dependence of the native-exotic diversity relationship While small-scale studies show that more diverse native communities are less invasible by exotics, studies at large spatial scales often find positive correlations between native and exotic diversity. This large-scale pattern is thought to arise because landscapes with favorable conditions for native species also have favorable conditions for exotic species. From theory, we proposed an alternative hypothesis: the positive relationship at large scales is driven by spatial heterogeneity in species composition, which is driven by spatial heterogeneity in the environment. Landscapes with more spatial heterogeneity in the environment can sustain more native and more exotic species, leading to a positive correlation of native and exotic diversity at large scales. In a nested data set for grassland plants, we detected negative relationships between native and exotic diversity at small spatial scales and positive relationships at large spatial scales. Supporting our hypothesis, the positive relationships between native and exotic diversity at large scales were driven by positive relationships between native and exotic beta diversity. 
Further, both native and exotic diversity were positively correlated with spatial heterogeneity in abiotic conditions (variance of soil depth, soil nitrogen, and aspect) but were uncorrelated with average abiotic conditions, supporting the spatial-heterogeneity hypothesis but not the favorable-conditions hypothesis." }, { "instance_id": "R57501xR57336", "comparison_id": "R57501", "paper_id": "R57336", "text": "Biotic and abiotic constraints to a plant invasion in vegetation communities of Tierra del Fuego The biotic resistance theory relates invader success to species richness, and predicts that, as species richness increases, invasibility decreases. The relationship between invader success and richness, however, seems to be positive at large scales of analysis, determined by abiotic constraints, and it is to be expected that it is negative at small scales, because of biotic interactions. Moreover, the negative relationship at small scales would be stronger within species of the same functional group, because of having similar resource exploitation mechanisms. We studied the relationship between the cover of a worldwide invader of grasslands, Hieracium pilosella L., and species richness, species diversity and the cover of different growth forms at two different levels of analysis in 128 sites during the initial invasion process in the Fuegian steppe, Southern Patagonia, Argentina. At regional level, the invader was positively correlated to total (r = 0.28, P = 0.003), exotic (r = 0.273, P = 0.004), and native species richness (r = 0.210, P = 0.026), and to species diversity (r = 0.193, P = 0.041). At community level, we found only a weak negative correlation between H. pilosella and total richness (r = \u22120.426, P = 0.079) and diversity (r = \u22120.658, P = 0.063). The relationship between the invader and other species of the same growth form was positive both at regional (r = 0.484, P < 0.001) and community (r = 0.593, P = 0.012) levels.
Consequently, in the period of establishment and initial expansion of this exotic species, our results support the idea that invader success is related to abiotic factors at large scales of analysis. Also, we observed a possible sign of biotic constraint at community level, although this was not related to the abundance of species of the same growth form." }, { "instance_id": "R57501xR57209", "comparison_id": "R57501", "paper_id": "R57209", "text": "Invasibility and compositional stability in a grassland community: relationships to diversity and extrinsic factors We present results from an ongoing field study conducted in Kansas grassland to examine correlates of invasibility and community stability along a natural gradient of plant diversity. Invasibility was evaluated by sowing seeds of 34 plant species into 40 experimental plots and then measuring colonization success after two growing seasons. Compositional stability, defined as resistance to change in species relative abundances over two growing seasons and in response to experimental disturbance, was measured in a separate set of 40 plots. We found that community susceptibility to invasion was greatest in high diversity microsites within this grassland. Multiple regression analyses suggested that the positive correlation between invasibility and plant diversity was due to the direct influences of the extrinsic factors that contribute to spatial variation in diversity (soil disturbances; light availability), not to any direct impact of diversity. In addition, we found that compositional stability in response to disturbance was greatest within low diversity microsites and was strongly related to the dominance (evenness) component of diversity." 
}, { "instance_id": "R57501xR57403", "comparison_id": "R57501", "paper_id": "R57403", "text": "Positive diversity-invasibility relationship in species-rich semi-natural grassland at the neighbourhood scale BACKGROUND AND AIMS Attempts to answer the old question of whether high diversity causes high invasion resistance have resulted in an invasion paradox: while large-scale studies often find a positive relationship between diversity and invasibility, small-scale experimental studies often find a negative relationship. Many of the small-scale studies are conducted in artificial communities of even-aged plants. Species in natural communities, however, do not represent one simultaneous cohort and occur at various levels of spatial aggregation at different scales. This study used natural patterns of diversity to assess the relationship between diversity and invasibility within a uniformly managed, semi-natural community. METHODS In species-rich grassland, one seed of each of ten species was added to each of 50 contiguous 16 cm2 quadrats within seven plots (8 \u00d7 100 cm). The emergence of these species was recorded in seven control plots, and establishment success was measured in relation to the species diversity of the resident vegetation at two spatial scales, quadrat (64 cm2) within plots (800 cm2) and between plots within the site (approx. 400 m2) over 46 months. KEY RESULTS Invader success was positively related to resident species diversity and richness over a range of 28-37 species per plot. This relationship emerged 7 months after seed addition and remained over time despite continuous mortality of invaders. CONCLUSIONS Biotic resistance to plant invasion may play only a subordinate role in species-rich, semi-natural grassland.
As possible alternative explanations for the positive diversity-invasibility relationship are not clear, it is recommended that future studies elaborate fine-scale environmental heterogeneity in resource supplies or potential resource flows from resident species to seedlings by means of soil biological networks established by arbuscular mycorrhizal fungi." }, { "instance_id": "R57501xR57268", "comparison_id": "R57501", "paper_id": "R57268", "text": "Local interactions, dispersal, and native and exotic plant diversity along a California stream Although the species pool, dispersal, and local interactions all influence species diversity, their relative importance is debated. I examined their importance in controlling the number of native and exotic plant species occupying tussocks formed by the sedge Carex nudata along a California stream. Of particular interest were the factors underlying a downstream increase in plant diversity and biological invasions. I conducted seed addition experiments and manipulated local diversity and cover to evaluate the degree to which tussocks saturate with species, and to examine the roles of local competitive processes, abiotic factors, and seed supply in controlling the system-wide patterns. Seeds of three native and three exotic plants sown onto experimentally assembled tussock communities less successfully established on tussocks with a greater richness of resident plants. Nonetheless, even the most diverse tussocks were somewhat colonized, suggesting that tussocks are not completely saturated with species. Similarly, in an experiment where I sowed seeds onto natural tussocks along the river, colonization increased two- to three-fold when I removed the resident species. Even on intact tussocks, however, seed addition increased diversity, indicating that the tussock assemblages are seed limited. 
Colonization success on cleared and uncleared tussocks increased downstream from km 0 to km 3 of the study site, but showed no trends from km 3 to km 8. This suggests that while abiotic and biotic features of the tussocks may control the increase in diversity and invasions from km 0 to km 3, similar increases from km 3 to km 8 are more likely explained by potential downstream increases in seed supply. The effective water dispersal of seed mimics and prevailingly downstream winds indicated that dispersal most likely occurs in a downstream direction. These results suggest that resident species diversity, competitive interactions, and seed supply similarly influence the colonization of native and exotic species." }, { "instance_id": "R57501xR57211", "comparison_id": "R57501", "paper_id": "R57211", "text": "Biological invasions of Southern Ocean islands: the Collembola of Marion Island as a test of generalities It has been suggested previously that the presence and abundance of indigenous species have a marked influence on the likelihood of invasion of a community. It has also been suggested that such biotic resistance has a negligible influence on the outcome of an invasion, but that the abiotic characteristics of the environment being invaded are more important. The latter has been claimed to be especially important on the islands of the Southern Ocean. In order to test these competing hypotheses we examined the distribution and abundance of indigenous and introduced springtails across 13 habitats, which differ considerably in the properties of their soils, and soil temperature, on the eastern quarter of sub-Antarctic Marion Island. There was no evidence of negative abundance covariation or species associations within habitats, nor were there significant relationships between species richness or abundance of the indigenous as opposed to the introduced collembolans across habitats. 
Interspecific interactions thus seem to have played no readily identifiable role in the outcome of invasions by Collembola on Marion island. In contrast, the indigenous and introduced species responded very differently to abiotic variables. The indigenous Collembola prefer drier, more mineral soils with a low organic carbon content, and species richness tends to be highest in cold, fellfield areas. On the other hand, the introduced springtails prefer moist, warm sites, with organically enriched soils, introduced species richness was negligible in cold, fellfield areas. Disturbance also appeared to influence positively the species richness and abundance of introduced species at a site. These results provide independent support for the idea that abiotic factors, especially temperature, significantly influence the likelihood of biological invasions on Southern Ocean islands. They also suggest that predicting the outcome of climate change on community structure in this region is likely to be problematic, especially in the case of the Collembola." }, { "instance_id": "R57501xR55021", "comparison_id": "R57501", "paper_id": "R55021", "text": "Assessing the Relative Importance of Disturbance, Herbivory, Diversity, and Propagule Pressure in Exotic Plant Invasion The current rate of invasive species introductions is unprecedented, and the dramatic impacts of exotic invasive plants on community and ecosystem properties have been well documented. Despite the pressing management implications, the mechanisms that control exotic plant invasion remain poorly understood. Several factors, such as disturbance, propagule pressure, species diversity, and herbivory, are widely believed to play a critical role in exotic plant invasions. However, few studies have examined the relative importance of these factors, and little is known about how propagule pressure interacts with various mechanisms of ecological resistance to determine invasion success. 
We quantified the relative importance of canopy disturbance, propagule pressure, species diversity, and herbivory in determining exotic plant invasion in 10 eastern hemlock forests in Pennsylvania and New Jersey (USA). Use of a maximum-likelihood estimation framework and information theoretics allowed us to quantify the strength of evidence for alternative models of the influence of these factors on changes in exotic plant abundance. In addition, we developed models to determine the importance of interactions between ecosystem properties and propagule pressure. These analyses were conducted for three abundant, aggressive exotic species that represent a range of life histories: Alliaria petiolata, Berberis thunbergii, and Microstegium vimineum. Of the four hypothesized determinants of exotic plant invasion considered in this study, canopy disturbance and propagule pressure appear to be the most important predictors of A. petiolata, B. thunbergii, and M. vimineum invasion. Herbivory was also found to be important in contributing to the invasion of some species. In addition, we found compelling evidence of an important interaction between propagule pressure and canopy disturbance. This is the first study to demonstrate the dominant role of the interaction between canopy disturbance and propagule pressure in determining forest invasibility relative to other potential controlling factors. The importance of the disturbance-propagule supply interaction, and its nonlinear functional form, has profound implications for the management of exotic plant species populations. Improving our ability to predict exotic plant invasions will require enhanced understanding of the interaction between propagule pressure and ecological resistance mechanisms." 
}, { "instance_id": "R57501xR57389", "comparison_id": "R57501", "paper_id": "R57389", "text": "Plant richness patterns in agricultural and urban landscapes in Central Germany - spatial gradients of species richness Urban areas are generally inhabited by greater numbers of plant species than rural areas of the same size. Though this phenomenon is well documented, scientists seem to be drawn to opposing views when it comes to explaining the high ratio of alien to native plants. Several ecological concepts claim that in cities, alien species displace native species. However, several studies show that both species groups increase proportionally. Another view tries to correlate the high species number in urban areas to the heterogeneity of the urban landscape. This correlation seems to be evident but still needs to be tested. Most of these findings stem from studies performed on large or intermediate scales using data from official databases. We wanted to confront existing findings and opinions with our study comparing a typical urban with an agricultural landscape section on a local scale. Our results support the view that plant species richness is higher in cities than in surrounding rural areas, partly because of a high rate of alien species brought into cities by humans. However, this species richness stems from an increase in alien as well as native species. Higher species richness is supported by a highly varying landscape structure mainly caused by anthropogenic land use." }, { "instance_id": "R57501xR57119", "comparison_id": "R57501", "paper_id": "R57119", "text": "Deconstructing the native-exotic richness relationship in plants Aim Classic theory suggests that species-rich communities should be more resistant to the establishment of exotic species than species-poor communities. 
Although this theory predicts that exotic species should be less diverse in regions that contain more native species, macroecological analyses often find that the correlation between exotic and native species richness is positive rather than negative. To reconcile results with theory, we explore to what extent climatic conditions, landscape heterogeneity and anthropogenic disturbance may explain the positive relationship between native and exotic plant richness. Location Catalonia (western Mediterranean region). Methods We integrated floristic records and GIS-based environmental measures to make spatially explicit 10-km grid cells. We asked whether the observed positive relationship between native and exotic plant richness (R2 = 0.11) resulted from the addition of several negative correlations corresponding to different environmental conditions identified with cluster analysis. Moreover, we directly quantified the importance of common causal effects with a structural equation modelling framework. Results We found no evidence that the relationship between native and exotic plant richness was negative when the comparison was made within environmentally homogeneous groups. Although there were common factors explaining both native and exotic richness, mainly associated with landscape heterogeneity and human pressure, these factors only explained 17.2% of the total correlation. Nevertheless, when the comparison was restricted to native plants associated with human-disturbed (i.e. ruderal) ecosystems, the relationship was stronger (R2 = 0.52) and the fraction explained by common factors increased substantially (58.3%). 
Main conclusions While our results confirm that the positive correlation between exotic and native plant richness is in part explained by common extrinsic factors, they also highlight the great importance of anthropic factors that \u2013 by reducing biotic resistance \u2013 facilitate the establishment and spread of both exotic and native plants that tolerate disturbed environments." }, { "instance_id": "R57501xR57165", "comparison_id": "R57501", "paper_id": "R57165", "text": "Species richness and exotic species invasion in middle Tennessee cedar glades in relation to abiotic and biotic factors Abstract Abiotic factors, particularly area, and biotic factors play important roles in determining species richness of continental islands such as cedar glades. We examined the relationship between environmental parameters and species richness on glades and the influence of native species richness on exotic invasion. Field surveys of vascular plants on 40 cedar glades in Rutherford County, Tennessee were conducted during the 2001\u20132003 growing seasons. Glades were geo-referenced to obtain area, perimeter, distance from autotour road, and degree of isolation. Amount of disturbance also was recorded. Two-hundred thirty two taxa were found with Andropogon virginicus, Croton monanthogynus, Juniperus virginiana, Panicum flexile, and Ulmus alata present on all glades. The exotics Ligustrum sinense, Leucanthemum vulgare, and Taraxacum officinale occurred on the majority of glades. Lobelia appendiculata var. gattingeri, Leavenworthia stylosa, and Pediomelum subacaule were the most frequent endemics. Richness of native, exotic and endemic species increased with increasing area and perimeter and decreased with increasing isolation (P \u2264 0.03); richness was unrelated to distance to road (P \u2265 0.20). Perimeter explained a greater amount of variation than area for native and exotic species, whereas area accounted for greater variation for endemic species. 
Slope of the relationship between area and total richness (0.17) was within the range reported for continental islands. Disturbed glades contained a higher number of exotic and native species than nondisturbed ones, but they were larger (P \u2264 0.03). Invasion of exotic species was unrelated to native species richness when glade size was statistically controlled (P = 0.88). Absence of a relationship is probably due to a lack of substantial competitive interactions. Most endemics occurred over a broad range of glade sizes emphasizing the point that glades of all sizes are worthy of protection." }, { "instance_id": "R57501xR57124", "comparison_id": "R57501", "paper_id": "R57124", "text": "Diversity-invasibility across an experimental disturbance gradient in Appalachian forests Research examining the relationship between community diversity and invasions by nonnative species has raised new questions about the theory and management of biological invasions. Ecological theory predicts, and small-scale experiments confirm, lower levels of nonnative species invasion into species-rich compared to species-poor communities, but observational studies across a wider range of scales often report positive relationships between native and nonnative species richness. This paradox has been attributed to the scale dependency of diversity-invasibility relationships and to differences between experimental and observational studies. Disturbance is widely recognized as an important factor determining invasibility of communities, but few studies have investigated the relative and interactive roles of diversity and disturbance on nonnative species invasion. Here, we report how the relationship between native and nonnative plant species richness responded to an experimentally applied disturbance gradient (from no disturbance up to clearcut) in oak-dominated forests. 
We consider whether results are consistent with various explanations of diversity-invasibility relationships including biotic resistance, resource availability, and the potential effects of scale (1 m2 to 2 ha). We found no correlation between native and nonnative species richness before disturbance except at the largest spatial scale, but a positive relationship after disturbance across scales and levels of disturbance. Post-disturbance richness of both native and nonnative species was positively correlated with disturbance intensity and with variability of residual basal area of trees. These results suggest that more nonnative plants may invade species-rich communities compared to species-poor communities following disturbance." }, { "instance_id": "R57501xR57321", "comparison_id": "R57501", "paper_id": "R57321", "text": "Biotic acceptance in introduced amphibians and reptiles in Europe and North America Aim The biotic resistance hypothesis argues that complex plant and animal communities are more resistant to invasion than simpler communities. Conversely, the biotic acceptance hypothesis states that non-native and native species richness are positively related. Most tests of these hypotheses at continental scales, typically conducted on plants, have found support for biotic acceptance. We tested these hypotheses on both amphibians and reptiles across Europe and North America. Location Continental countries in Europe and states/provinces in North America. Methods We used multiple linear regression models to determine which factors predicted successful establishment of amphibians and reptiles in Europe and North America, and additional models to determine which factors predicted native species richness. Results Successful establishment of amphibians and reptiles in Europe and reptiles in North America was positively related to native species richness. We found higher numbers of successful amphibian species in Europe than in North America. 
Potential evapotranspiration (PET) was positively related to non-native species richness for amphibians and reptiles in Europe and reptiles in North America. PET was also the primary factor determining native species richness for both amphibians and reptiles in Europe and North America. Main conclusions We found support for the biotic acceptance hypothesis for amphibians and reptiles in Europe and reptiles in North America, suggesting that the presence of native amphibian and reptile species generally indicates good habitat for non-native species. Our data suggest that the greater number of established amphibians per native amphibians in Europe than in North America might be explained by more introductions in Europe or climate-matching of the invaders. Areas with high native species richness should be the focus of control and management efforts, especially considering that non-native species located in areas with a high number of natives can have a large impact on biological diversity." }, { "instance_id": "R57501xR57391", "comparison_id": "R57501", "paper_id": "R57391", "text": "The link between international trade and the global distribution of invasive alien species Invasive alien species (IAS) exact large biodiversity and economic costs and are a significant component of human-induced, global environmental change. Previous studies looking at the variation in alien species across regions have been limited geographically or taxonomically or have not considered economics. We used a global invasive species database to regress IAS per-country on a suite of socioeconomic, ecological, and biogeographical variables. We varied the countries included in the regression tree analyses, in order to explore whether certain outliers were biasing the results, and in most of the cases, merchandise imports was the most important explanatory variable. The greater the degree of international trade, the higher the number of IAS. 
We also found a positive relationship between species richness and the number of invasives, in accord with other investigations at large spatial scales. Island status (overall), country area, latitude, continental position (New World versus Old World) or other measures of human disturbance (e.g., GDP per capita, population density) were not found to be important determinants of a country\u2019s degree of biological invasion, contrary to previous studies. Our findings also provide support to the idea that more resources for combating IAS should be directed at the introduction stage and that novel trade instruments need to be explored to account for this environmental externality." }, { "instance_id": "R57501xR56104", "comparison_id": "R57501", "paper_id": "R56104", "text": "Diversity-invasibility relationships across multiple scales in disturbed forest understoreys Non-native plant species richness may be either negatively or positively correlated with native species due to differences in resource availability, propagule pressure or the scale of vegetation sampling. We investigated the relationships between these factors and both native and non-native plant species at 12 mainland and island forested sites in southeastern Ontario, Canada. In general, the presence of non-native species was limited: <20% of all species at a site were non-native and non-native species cover was <4% m\u22122 at 11 of the 12 sites. Non-native species were always positively correlated with native species, regardless of spatial scale and whether islands were sampled. Additionally, islands had a greater abundance of non-native species. Non-native species richness across mainland sites was significantly negatively correlated with mean shape index, a measure of the ratio of forest edge to area, and positively correlated with the mean distance to the nearest forest patch. 
Other factors associated with disturbance and propagule pressure in northeastern North America forests, including human land use, white-tailed deer populations, understorey light, and soil nitrogen, did not explain non-native richness nor cover better than the null models. Our results suggest that management strategies for controlling non-native plant invasions should aim to reduce the propagule pressure associated with human activities, and maximize the connectivity of forest habitats to benefit more poorly dispersed native species." }, { "instance_id": "R57501xR57292", "comparison_id": "R57501", "paper_id": "R57292", "text": "The distribution and habitat associations of non-native plant species in urban riparian habitats Questions: 1. What are the distribution and habitat associations of non-native (neophyte) species in riparian zones? 2. Are there significant differences, in terms of plant species diversity, composition, habitat condition and species attributes, between plant communities where non-natives are present or abundant and those where non-natives are absent or infrequent? 3. Are the observed differences generic to non-natives or do individual non-native species differ in their vegetation associations? Location: West Midlands Conurbation (WMC), UK. Methods: 56 sites were located randomly on four rivers across the WMC. Ten 2 m \u00d7 2 m quadrats were placed within 15 m of the river to sample vegetation within the floodplain at each site. All vascular plants were recorded along with site information such as surrounding land use and habitat types. Results: Non-native species were found in many vegetation types and on all rivers in the WMC. There were higher numbers of non-natives on more degraded, human-modified rivers. More non-native species were found in woodland, scrub and tall herb habitats than in grasslands. We distinguish two types of communities with non-natives. 
In communities colonized following disturbance, in comparison to quadrats containing no non-native species, those with non-natives had higher species diversity and more forbs, annuals and shortlived monocarpic perennials. Native species in quadrats containing non-natives were characteristic of conditions of higher fertility and pH, had a larger specific leaf area and were less stress tolerant or competitive. In later successional communities dominated by particular non-natives, native diversity declined with increasing cover of non-natives. Associated native species were characteristic of low light conditions. Conclusions: Communities containing non-natives can be associated with particular types of native species. Extrinsic factors (disturbance, eutrophication) affected both native and non-native species. In disturbed riparian habitats the key determinant of diversity is dominance by competitive invasive species regardless of their native or non-native origin." }, { "instance_id": "R57501xR57387", "comparison_id": "R57501", "paper_id": "R57387", "text": "Does fluctuating resource availability increase invasibility? Evidence from field experiments in New Zealand short tussock grassland The theory of fluctuating resource availability proposes that the susceptibility of a plant community to invasion by new species (i.e., invasibility) depends upon conditions of intermittent resource enrichment coinciding with the presence of invading propagules. We compared the response of a rapidly invading forb (Hieracium pilosella L.) between different experimental treatments in a short tussock grassland in New Zealand, over 6\u201312 years, to determine whether the theory explains differences in invasibility. The theory predicts that environments subject to periodic resource enrichment will be more invasible than those with more stable resource-supply rates. In our study, H. 
pilosella did not increase more rapidly in treatments subject to periodic resource pulses (fertiliser and water) than in those with more stable resource supplies. Also contrary to the predictions of the theory, the rate of invasion of H. pilosella did not increase following an increase in the rate of supply of water or nutrient resources, or following treatments that temporarily reduced resource uptake in the community, including grazing. H. pilosella did not increase immediately following abrupt increases in water and nutrient supply and removal of the dominant grass species with herbicide, as predicted by the theory, although temporary increases in resident exotic guilds indicated that the intensity of competition for resources was reduced. Neither H. pilosella nor resident exotic guilds showed increased cover growth rates following resumed grazing. The rate of invasion by H. pilosella was not correlated with species richness, a result consistent with one of the predictions of the theory. Therefore, short-lived events that temporarily reduced or suspended competition did not appear to determine the invasion success of this particular species in this region. In New Zealand\u2019s perennial short tussock grasslands, the characteristics of the resident plant community may be more critical than resource fluctuations in determining invasion success of H. pilosella. Invasion of H. pilosella may be most successfully controlled here by promoting a successional physiognomic shift to a taller, shrub-and-tussock-dominated canopy that competitively excludes low-growing forbs." }, { "instance_id": "R57501xR52114", "comparison_id": "R57501", "paper_id": "R52114", "text": "Using prairie restoration to curtail invasion of Canada thistle: the importance of limiting similarity and seed mix richness Theory has predicted, and many experimental studies have confirmed, that resident plant species richness is inversely related to invasibility. 
Likewise, potential invaders that are functionally similar to resident plant species are less likely to invade than are those from different functional groups. Neither of these ideas has been tested in the context of an operational prairie restoration. Here, we tested the hypotheses that within tallgrass prairie restorations (1) as seed mix species richness increased, cover of the invasive perennial forb, Canada thistle (Cirsium arvense) would decline; and (2) guilds (both planted and arising from the seedbank) most similar to Canada thistle would have a larger negative effect on it than less similar guilds. Each hypothesis was tested on six former agricultural fields restored to tallgrass prairie in 2005; all were within the tallgrass prairie biome in Minnesota, USA. A mixed-model with repeated measures (years) in a randomized block (fields) design indicated that seed mix richness had no effect on cover of Canada thistle. Structural equation models assessing effects of cover of each planted and non-planted guild on cover of Canada thistle in 2006, 2007, and 2010 revealed that planted Asteraceae never had a negative effect on Canada thistle. In contrast, planted cool-season grasses and non-Asteraceae forbs, and many non-planted guilds had negative effects on Canada thistle cover. We conclude that early, robust establishment of native species, regardless of guild, is of greater importance in resistance to Canada thistle than is similarity of guilds in new prairie restorations." }, { "instance_id": "R57501xR57384", "comparison_id": "R57501", "paper_id": "R57384", "text": "Biotic resistance to invader establishment of a southern Appalachian plant community is determined by environmental conditions Summary 1 Tests of the relationship between resident plant species richness and habitat invasibility have yielded variable results. 
I investigated the roles of experimental manipulation of understorey species richness and overstorey characteristics in resistance to invader establishment in a floodplain forest in south-western Virginia, USA. 2 I manipulated resident species richness in experimental plots along a flooding gradient, keeping plot densities at their original levels, and quantified the overstorey characteristics of each plot. 3 After manipulating the communities, I transplanted 10 randomly chosen invaders from widespread native and non-native forest species into the experimental plots. Success of an invasion was measured by survival and growth of the invader. 4 Native and non-native invader establishment trends were influenced by different aspects of the biotic community and these relationships depended on the site of invasion. The most significant influence on non-native invader survival in this system of streamside and upper terrace plots was the overstorey composition. Non-native species survival in the flooded plots after 2 years was significantly positively related to proximity to larger trees. However, light levels did not fully explain the overstorey effect and were unrelated to native survivorship. The effects of understorey richness on survivorship depended on the origin of the invaders and the sites they were transplanted into. Additionally, native species growth was significantly affected by understorey plot richness. 5 The direction and strength of interactions with both the overstorey (for non-native invaders) and understorey richness (for natives and non-natives) changed with the site of invasion and associated environmental conditions. Rather than supporting the hypothesis of biotic resistance to non-native invasion, my results suggest that native invaders experienced increased competition with the native understorey plants in the more benign upland habitat and facilitation in the stressful riparian zone." 
}, { "instance_id": "R57501xR57148", "comparison_id": "R57501", "paper_id": "R57148", "text": "INVASION RESISTANCE, SPECIES BUILDUP AND COMMUNITY COLLAPSE IN METAPOPULATION MODELS WITH INTERSPECIES COMPETITION Islands or habitat patches in a metapopulation exist as multi-species communities. Community interactions link each species' dynamics so that the colonization of one species may cause the extinction of another. In this way, community interactions may set limits to the invadability of an island and to the likelihood of resident species extinctions upon invasion. To examine the nature of these limits, I assemble stable multi-species Lotka-Volterra competition communities that differ in resident species number and the average strength (and variance) of species interactions. These are then invaded with species whose properties are drawn from the same distribution as the residents. The invader success rate and the extinction rate of resident species is determined as a function of community- and species-level properties. I show that the probability of colonization success for an invader decreases with species number and the strength and variance of interspecific interactions. Communities comprised of many strongly interacting species limit the invasion possibilities of competing species. Community interactions, even for a superior invading competitor, set up a sort of \u201cactivation barrier\u201d that repels the invader. This \u201cpriority effect\u201d for residents is not assumed a priori in my description for the individual population dynamics of these species, rather it arises because species-rich and strongly-interacting species sets have alternative stable states that tend to disfavour species at low densities. 
These models point to community-level rather than invader-level properties as the strongest determinant of differences in invasion success. If an invading species is successful it competitively displaces a greater number of resident species, on average, as community size increases. These results provide a logical framework for an island-biogeographic theory based on species interactions and invasions and for the protection of fragile native species from invading exotics." }, { "instance_id": "R57501xR57105", "comparison_id": "R57501", "paper_id": "R57105", "text": "Community Structure Affects Annual Grass Weed Invasion During Restoration of a Shrub-Steppe Ecosystem Abstract Ecological restoration of shrub\u2013steppe communities in the western United States is often hampered by invasion of exotic annual grasses during the process. An important question is how to create restored communities that can better resist reinvasion by these weeds. One hypothesis is that communities comprised of species that are functionally similar to the invader will best resist invasion, while an alternative hypothesis is that structurally more complex and diverse communities will result in more effective competitive exclusion. In this field experiment, we examined the effects of restored community structure on the invasion success of three annual grass weeds (downy brome, jointed goatgrass, and cereal rye). We created replicated community plots that varied in species composition, structural complexity and density, then seeded in annual grass weeds and measured their biomass and seed production the following year, and their cover after 1 and 3 yr. Annual grass weeds were not strongly suppressed by any of the restored communities, indicating that it was difficult for native species to completely capture available resources and exclude annual grass weeds in the first years after planting. 
Perennial grass monocultures, particularly of the early seral grass bottlebrush squirreltail, were the most highly invaded communities, while structurally complex and diverse mixtures of shrubs (big sagebrush, rubber rabbitbrush), perennial grasses (bluebunch wheatgrass and bottlebrush squirreltail) and forbs (Lewis flax, Utah sweetvetch, hairy golden aster, gooseberryleaf globemallow) were more resistant to invasion. These results suggest that restoration of sagebrush steppe communities resistant to annual grass invasion benefits from higher species diversity; significant reduction of weed propagule pressure prior to restoration may be required." }, { "instance_id": "R57501xR57315", "comparison_id": "R57501", "paper_id": "R57315", "text": "Effects of Native Herbs and Light on Garlic Mustard (Alliaria petiolata) Invasion Abstract The degree to which invasive species drive or respond to environmental change has important implications for conservation and invasion management. Often characterized as a driver of change in North American woodlands, the invasive herb garlic mustard may instead respond to declines in native plant cover and diversity. We tested effects of native herb cover, richness, and light availability on garlic mustard invasion in a Minnesota oak woodland. We planted 50 garlic mustard seeds into plots previously planted with 0 to 10 native herb species. We measured garlic mustard seedling establishment, survival to rosette and adult stages, and average (per plant) and total (per plot) biomass and silique production. With the use of structural equation models, we analyzed direct, indirect, and net effects of native cover, richness, and light on successive garlic mustard life stages. Native plant cover had a significant negative effect on all life stages. 
Species richness had a significant positive effect on native cover, resulting in indirect negative effects on all garlic mustard stages, and net negative effects on adult numbers, total biomass, and silique production. Light had a strong negative effect on garlic mustard seedling establishment and a positive effect on native herb cover, resulting in significant negative net effects on garlic mustard rosette and adult numbers. However, light's net effect on total garlic mustard biomass and silique production was positive; reproductive output was high even in low-light/high-cover conditions. Combined effects of cover, richness, and light suggest that native herbs provide biotic resistance to invasion by responding to increased light availability and suppressing garlic mustard responses, although this resistance may be overwhelmed by high propagule pressure. Garlic mustard invasion may occur, in part, in response to native plant decline. Restoring native herbs and controlling garlic mustard seed production may effectively reduce garlic mustard spread and restore woodland diversity." }, { "instance_id": "R57501xR52077", "comparison_id": "R57501", "paper_id": "R52077", "text": "Plant functional group identity and diversity determine biotic resistance to invasion by an exotic grass Summary 1. Biotic resistance, the ability of species in a community to limit invasion, is central to our understanding of how communities at risk of invasion assemble after disturbances, but it has yet to translate into guiding principles for the restoration of invasion-resistant plant communities. We combined experimental, functional, and modelling approaches to investigate processes of community assembly contributing to biotic resistance to an introduced lineage of Phragmites australis, a model invasive species in North America. We hypothesized that (i) functional group identity would be a good predictor of biotic resistance to P. 
australis, while species identity effect would be redundant within each functional group; (ii) mixtures of species would be more invasion resistant than monocultures. 2. We classified 36 resident wetland plants into four functional groups based on eight functional traits. We conducted two competition experiments based on the additive competition design with P. australis and monocultures or mixtures of wetland plants. As an indicator of biotic resistance, we calculated a relative competition index (RCIavg) based on the average performance of P. australis in competition treatment compared with control. To explain diversity effect further, we partitioned it into selection effect and complementarity effect and tested several diversity\u2010interaction models. 3. In monoculture treatments, RCIavg of wetland plants was significantly different among functional groups, but not within each functional group. We found the highest RCIavg for fast-growing annuals, suggesting priority effect. 4. RCIavg of wetland plants was significantly greater in mixture than in monoculture mainly due to complementarity\u2010diversity effect among functional groups. In diversity\u2010interaction models, species interaction patterns in mixtures were described best by interactions between functional groups when fitted to RCIavg or biomass, implying niche partitioning. 5. Synthesis. Functional group identity and diversity of resident plant communities are good indicators of biotic resistance to invasion by introduced Phragmites australis, suggesting niche preemption (priority effect) and niche partitioning (diversity effect) as underlying mechanisms. Guiding principles to understand and/or manage biological invasion could emerge from advances in community theory and the use of a functional framework. Targeting widely distributed invasive plants in different contexts and scaling up to field situations will facilitate generalization." 
}, { "instance_id": "R57501xR57233", "comparison_id": "R57501", "paper_id": "R57233", "text": "Yellow crazy ant (Anoplolepis gracilipes) invasions within undisturbed mainland Australian habitats: no support for biotic resistance hypothesis Ants are highly successful invaders, especially on islands, yet undisturbed mainland environments often do not contain invasive ants, and this observation is largely attributed to biotic resistance. An exception is the incursion of Yellow crazy ant Anoplolepis gracilipes within northeast Arnhem Land. The existence of A. gracilipes within this landscape\u2019s intact environments containing highly competitive ant communities indicates that biotic resistance is not a terminally inhibitory factor mediating this ant\u2019s distribution at the regional scale. We test whether biotic resistance may still operate at a more local scale by assessing whether ecological impacts are proportional to habitat suitability for A. gracilipes, as well as to the competitiveness of the invaded ant community. The abundance and species richness of native ants were consistently greater in uninfested than infested plots but the magnitude of the impacts did not differ between habitats. The abundance and ordinal richness of other macro-invertebrates were consistently lower in infested plots in all habitats. A significant negative relationship was found for native ant abundance and A. gracilipes abundance. No relationships were found between A. gracilipes abundance and any measure of other macro-invertebrates. The relative contribution of small ants (<2.5 mm) to total abundance and relative species richness was always greater in infested sites coinciding with a reduction of the contribution of the larger size classes. Differences in the relative abundance of ant functional groups between infested and uninfested sites reflected impacts according to ant size classes and ecology. 
The widespread scale of these incursions and non-differential level of impacts among the habitats, irrespective of native ant community competitiveness and abiotic suitability to A. gracilipes, does not support the biotic resistance hypothesis." }, { "instance_id": "R57501xR54786", "comparison_id": "R57501", "paper_id": "R54786", "text": "Pollution reduces native diversity and increases invader dominance in marine hard-substrate communities Anthropogenic disturbance is considered a risk factor in the establishment of non-indigenous species (NIS); however, few studies have investigated the role of anthropogenic disturbance in facilitating the establishment and spread of NIS in marine environments. A baseline survey of native and NIS was undertaken in conjunction with a manipulative experiment to determine the effect that heavy metal pollution had on the diversity and invasibility of marine hard-substrate assemblages. The study was repeated at two sites in each of two harbours in New South Wales, Australia. The survey sampled a total of 47 sessile invertebrate taxa, of which 15 (32%) were identified as native, 19 (40%) as NIS, and 13 (28%) as cryptogenic. Increasing pollution exposure decreased native species diversity at all study sites by between 33% and 50%. In contrast, there was no significant change in the numbers of NIS. Percentage cover was used as a measure of spatial dominance, with increased pollution exposure leading to increased NIS dominance across all sites. At three of the four study sites, assemblages that had previously been dominated by natives changed to become either extensively dominated by NIS or equally occupied by native and NIS alike. No single native or NIS was repeatedly responsible for the observed changes in native species diversity or NIS dominance at all sites. Rather, the observed effects of pollution were driven by a diverse range of taxa and species. 
These findings have important implications for both the way we assess pollution impacts, and for the management of NIS. When monitoring the response of assemblages to pollution, it is not sufficient to simply assess changes in community diversity. Rather, it is important to distinguish native from NIS components since both are expected to respond differently. In order to successfully manage current NIS, we first need to address levels of pollution within recipient systems in an effort to bolster the resilience of native communities to invasion." }, { "instance_id": "R57501xR57187", "comparison_id": "R57501", "paper_id": "R57187", "text": "Diversity of locust gut bacteria protects against pathogen invasion Diversity-invasibility relationships were explored in the novel context of the colonization resistance provided by gut bacteria of the desert locust Schistocerca gregaria against pathogenic bacteria. Germ-free insects were associated with various combinations of one to three species of locust gut bacteria and then fed an inoculum of the pathogenic bacterium Serratia marcescens. There was a significant negative relationship between the resulting density of Serratia marcescens and the number of symbiotic gut bacterial species present. Likewise there was a significant inverse relationship between community diversity and the proportion of locusts that harboured Serratia. Host mortality was not negatively correlated with resistance to gut-invasion by Serratia marcescens, although there were significantly more deaths among pathogen fed germ-free insects than tri-associated gnotobiotes. The outcome is consistent with the predictions of community ecology theory that species-rich communities are more resistant to invasion than species-poor communities." 
}, { "instance_id": "R57501xR57298", "comparison_id": "R57501", "paper_id": "R57298", "text": "Effects of human population, area, and time on non-native plant and fish diversity in the United States Non-native species diversity of plants and fishes in the contiguous 48 United States is analyzed to measure the influence of human population size, time of modern settlement, area and native species diversity. Besides exotic (from outside USA) plants, four types of non-native fishes are examined: established exotic fishes, reported exotic fishes, US fishes not native to a state, and native state fishes moved to new locations in a state. Human population size is most highly correlated with exotic plant diversity (r>70%) but is still significantly correlated with most types of non-native fish diversity. Time of modern settlement significantly increases non-native plant (but not most fish) diversity, even after the effects of current population size are removed. These patterns occur because most non-native plants are imported for landscaping, farming and other uses intimately linked to human settlements whereas, almost half of non-native fishes were released by state agencies for sport, often into large western states with relatively few humans. This also explains why state area is significantly correlated with all types of non-native fish diversity, but not non-native plant diversity where smaller eastern states have more people and more years of settlement which increase non-native plant diversity. Positive correlation of non-native plant diversity with native plant diversity is found, as humans tend to settle in states with high native species diversity. In contrast, negative correlation between non-native fish and native fish diversity is found. These findings may help predict non-native species diversity if past trends continue. 
They also imply that the most cost-effective way to slow non-native species impact may be to focus where human population is still small, because rate of establishment of non-native species decreases with increasing human population." }, { "instance_id": "R57501xR57169", "comparison_id": "R57501", "paper_id": "R57169", "text": "Effect of microbial species richness on community stability and community function in a model plant-based wastewater processing system Microorganisms will be an integral part of biologically based waste processing systems used for water purification or nutrient recycling on long-term space missions planned by the National Aeronautics and Space Administration. In this study, the function and stability of microbial inocula of different diversities were evaluated after inoculation into plant-based waste processing systems. The microbial inocula were from a constructed community of plant rhizosphere-associated bacteria and a complexity gradient of communities derived from industrial wastewater treatment plant-activated sludge. Community stability and community function were defined as the ability of the community to resist invasion by a competitor (Pseudomonas fluorescens 5RL) and the ability to degrade surfactant, respectively. Carbon source utilization was evaluated by measuring surfactant degradation and through Biolog and BD oxygen biosensor community level physiological profiling. Community profiles were obtained from a 16S\u201323S rDNA intergenic spacer region array. A wastewater treatment plant-derived community with the greatest species richness was the least susceptible to invasion and was able to degrade surfactant to a greater extent than the other complexity gradient communities. All communities resisted invasion by a competitor to a greater extent than the plant rhizosphere isolate constructed community. 
However, the constructed community degraded surfactant to a greater extent than any of the other communities and utilized the same number of carbon sources as many of the other communities. These results demonstrate that community function (carbon source utilization) and community stability (resistance to invasion) are a function of the structural composition of the community irrespective of species richness or functional richness." }, { "instance_id": "R57501xR56082", "comparison_id": "R57501", "paper_id": "R56082", "text": "Determinants of establishment success in introduced birds A major component of human-induced global change is the deliberate or accidental translocation of species from their native ranges to alien environments, where they may cause substantial environmental and economic damage. Thus we need to understand why some introductions succeed while others fail. Successful introductions tend to be concentrated in certain regions, especially islands and the temperate zone, suggesting that species-rich mainland and tropical locations are harder to invade because of greater biotic resistance. However, this pattern could also reflect variation in the suitability of the abiotic environment at introduction locations for the species introduced, coupled with known confounding effects of nonrandom selection of species and locations for introduction. Here, we test these alternative hypotheses using a global data set of historical bird introductions, employing a statistical framework that accounts for differences among species and regions in terms of introduction success. By removing these confounding effects, we show that the pattern of avian introduction success is not consistent with the biotic resistance hypothesis. Instead, success depends on the suitability of the abiotic environment for the exotic species at the introduction site." 
}, { "instance_id": "R57501xR57346", "comparison_id": "R57501", "paper_id": "R57346", "text": "Realistic plant species losses reduce invasion resistance in a California serpentine grassland Summary 1. The majority of experiments examining effects of species diversity on ecosystem functioning have randomly manipulated species richness. More recent studies demonstrate that realistic species losses have dramatically different effects on ecosystem functioning than those of randomized losses, but these results are based primarily on microcosm experiments or modelling efforts. 2. We conducted a field-based experiment directly comparing the consequences of realistic and randomized plant species losses on invasion resistance and productivity in a native-dominated serpentine grassland in California, USA. The realistic species loss order was based on nested subset analysis of long-term presence \u2044 absence data from our research site and reflects differing species sensitivities to prolonged drought. 3. Biomass of exotic invasive plant species was inversely related to native species richness in the realistic loss order. In contrast, invader biomass was consistently low across species richness levels in the randomized species loss order, with no effect of native species richness on invader biomass among randomized assemblages. Although total above-ground plant biomass increased with soil depth (a proxy for resource availability) in both realistic and randomized assemblages, soil depth influenced invader biomass only in the randomized assemblages. 4. Synthesis. Our results illustrate that the functional consequences of realistic species losses can differ distinctly from those of randomized species losses and that incorporation of realistic species loss scenarios can increase the relevance of experiments linking biodiversity and ecosystem functioning to conservation in the face of anthropogenic global change." 
}, { "instance_id": "R57501xR57265", "comparison_id": "R57501", "paper_id": "R57265", "text": "Species diversity and biological invasions: relating local process to community pattern In a California riparian system, the most diverse natural assemblages are the most invaded by exotic plants. A direct in situ manipulation of local diversity and a seed addition experiment showed that these patterns emerge despite the intrinsic negative effects of diversity on invasions. The results suggest that species loss at small scales may reduce invasion resistance. At community-wide scales, the overwhelming effects of ecological factors spatially covarying with diversity, such as propagule supply, make the most diverse communities most likely to be invaded." }, { "instance_id": "R57501xR56087", "comparison_id": "R57501", "paper_id": "R56087", "text": "Invasibility of tropical islands by introduced plants: partitioning the influence of isolation and propagule pressure All else being equal, more isolated islands should be more susceptible to invasion because their native species are derived from a smaller pool of colonists, and isolated islands may be missing key functional groups. Although some analyses seem to support this hypothesis, previous studies have not taken into account differences in the number of plant introductions made to different islands, which will affect invasibility estimates. Furthermore, previous studies have not assessed invasibility in terms of the rates at which introduced plant species attain different degrees invasion or naturalization. I compared the naturalization status of introduced plants on two pairs of Pacific island groups that are similar in most respects but that differ in their distances from a mainland. Then, to factor out differences in propagule pressure due to differing numbers of introductions, I compared the naturalization status only among shared introductions. 
In the first comparison, Hawai\u2018i (3700 km from a mainland) had three times more casual/weakly naturalized, naturalized and pest species than Taiwan (160 km from a mainland); however, roughly half (54%) of this difference can be attributed to a larger number of plant introductions to Hawai\u2018i. In the second comparison, Fiji (2500 km from a mainland) did not differ in susceptibility to invasion in comparison to New Caledonia (1000 km from a mainland); the latter two island groups appear to have experienced roughly similar propagule pressure, and they have similar invasibility. The rate at which naturalized species have become pests is similar for Hawai\u2018i and other island groups. The higher susceptibility of Hawai\u2018i to invasion is related to more species entering the earliest stages in the invasion process (more casual and weakly naturalized species), and these higher numbers are then maintained in the naturalized and pest pools. The number of indigenous (not endemic) species was significantly correlated with susceptibility to invasion across all four island groups. When islands share similar climates and habitat diversity, the number of indigenous species may be a better predictor of invasibility than indices of physical isolation because it is a composite measure of biological isolation." }, { "instance_id": "R57501xR57157", "comparison_id": "R57501", "paper_id": "R57157", "text": "Human-related processes drive the richness of exotic birds in Europe Both human-related and natural factors can affect the establishment and distribution of exotic species. Understanding the relative role of the different factors has important scientific and applied implications. Here, we examined the relative effect of human-related and natural factors in determining the richness of exotic bird species established across Europe. 
Using hierarchical partitioning, which controls for covariation among factors, we show that the most important factor is the human-related community-level propagule pressure (the number of exotic species introduced), which is often not included in invasion studies due to the lack of information for this early stage in the invasion process. Another, though less important, factor was the human footprint (an index that includes human population size, land use and infrastructure). Biotic and abiotic factors of the environment were of minor importance in shaping the number of established birds when tested at a European extent using 50\u00d750 km2 grid squares. We provide, to our knowledge, the first map of the distribution of exotic bird richness in Europe. The richest hotspot of established exotic birds is located in southeastern England, followed by areas in Belgium and The Netherlands. Community-level propagule pressure remains the major factor shaping the distribution of exotic birds also when tested for the UK separately. Thus, studies examining the patterns of establishment should aim at collecting the crucial and hard-to-find information on community-level propagule pressure or develop reliable surrogates for estimating this factor. Allowing future introductions of exotic birds into Europe should be reconsidered carefully, as the number of introduced species is basically the main factor that determines the number established." }, { "instance_id": "R57501xR57341", "comparison_id": "R57501", "paper_id": "R57341", "text": "Ecological resistance to Acer negundo invasion in a European riparian forest: relative importance of environmental and biotic drivers Question The relative importance of environmental vs. biotic resistance of recipient ecological communities remains poorly understood in invasion ecology. Acer negundo, a North American tree, has widely invaded riparian forests throughout Europe at the ecotone between early- (Salix spp. and Populus spp.)
and late-successional (Fraxinus spp.) species. However, it is not present in the upper part of the Rhone River, where native Alnus incana occurs at an intermediate position along the successional riparian gradient. Is this absence of the invasive tree due to environmental or biotic resistance of the recipient communities, and in particular due to the presence of Alnus? Location Upper Rhone River, France. Methods We undertook a transplant experiment in an Alnus-dominated community along the Upper Rhone River, where we compared Acer negundo survival and growth, with and without biotic interactions (tree and herb layer effects), to those of four native tree species from differing successional positions in the Upper Rhone communities (P. alba, S. alba, F. excelsior and Alnus incana). Results Without biotic interactions Acer negundo performed similarly to native species, suggesting that the Upper Rhone floodplain is not protected from Acer invasion by a simple abiotic barrier. In contrast, this species performed less well than F. excelsior and Alnus incana in environments with intact tree and/or herb layers. Alnus showed the best growth rate in these conditions, indicating biotic resistance of the native plant community. Conclusions We did not find evidence for an abiotic barrier to Acer negundo invasion of the Upper Rhone River floodplain communities, but our results suggest a biotic resistance. In particular, we demonstrated that (i) additive competitive effects of the tree and herb layer led to Acer negundo suppression and (ii) Alnus incana grew more rapidly than Acer negundo in this intermediate successional niche." }, { "instance_id": "R57501xR57144", "comparison_id": "R57501", "paper_id": "R57144", "text": "Interactions between abiotic constraint, propagule pressure, and biotic resistance regulate plant invasion With multiple species introductions and rapid global changes, there is a need for comprehensive invasion models that can predict community responses. 
Evidence suggests that abiotic constraint, propagule pressure, and biotic resistance of resident species each determine plant invasion success, yet their interactions are rarely tested. To understand these interactions, we conducted community assembly experiments simulating situations in which seeds of the invasive grass species Phragmites australis (Poaceae) land on bare soil along with seeds of resident wetland plant species. We used structural equation models to measure both direct abiotic constraint (here moist vs. flooded conditions) on invasion success and indirect constraint on the abundance and, therefore, biotic resistance of resident plant species. We also evaluated how propagule supply of P. australis interacts with the biotic resistance of resident species during invasion. We observed that flooding always directly reduced invasion success but had a synergistic or antagonistic effect on biotic resistance depending on the resident species involved. Biotic resistance of the most diverse resident species mixture remained strong even when abiotic conditions changed. Biotic resistance was also extremely effective under low propagule pressure of the invader. Moreover, the presence of a dense resident plant cover appeared to lower the threshold at which invasion success became stable even when propagule supply increased. Our study not only provides an analytical framework to quantify the effect of multiple interactions relevant to community assembly and species invasion, but it also proposes guidelines for innovative invasion management strategies based on a sound understanding of ecological processes." }, { "instance_id": "R57501xR57330", "comparison_id": "R57501", "paper_id": "R57330", "text": "Patterns of invasion in temperate nature reserves The extent of plant invasions was studied in 302 nature reserves located in the Czech Republic, central Europe. 
Lists of vascular plant species were obtained for each reserve, alien species were divided into archaeophytes and neophytes (introduced before and after 1500, respectively). The statistical analysis using general linear models made it possible to identify the effects of particular variables. Flora representation by neophytes decreased with altitude (explained 23.8% of variance) while, with archaeophytes, the effect of altitude depended on their interaction with native species in particular vegetation types. The proportion of neophytes increased with increasing density of human population. Both the number and proportion of alien plants significantly increased with increasing number of native species in a reserve. This relationship was affected by altitude, and after filtering out this variable, the effect remained positive for neophytes but became negative for archaeophytes in humid grasslands. The positive relationship between neophytes and native species is not a mere side effect of species\u2013area relationship of native flora, but indicates that the two groups do not directly compete. Vegetation type alone explained 14.2 and 55.5% of variation in proportion of aliens in regions of mesophilous and mountain flora, respectively. Humid grasslands were the least invaded vegetation type. Positioning the reserve within large protected sections of landscape significantly decreases probability of it being invaded by potentially invasive alien species. Within the context of SLOSS debate, a new model \u2014 several small inside single large (SSISL) \u2014 is suggested as an appropriate solution from the viewpoint of plant invasions to nature reserves." 
}, { "instance_id": "R57501xR57275", "comparison_id": "R57501", "paper_id": "R57275", "text": "Linking invasions and biogeography: Isolation differentially affects exotic and native plant diversity The role of native species diversity in providing biotic resistance to invasion remains controversial, with evidence supporting both negative and positive relationships that are often scale dependent. Across larger spatial scales, positive relationships suggest that exotic and native species respond similarly to factors other than diversity. In the case of island habitats, such factors may include island size and isolation from the mainland. However, previous island studies exploring this issue examined only a few islands or islands separated by extreme distances. In this study, we surveyed exotic and native plant diversity on 25 islands separated by <15 km in Boston Harbor. Exotic and native species richness were positively correlated. Consistent with island biogeography theory, species richness of both groups was positively related to area and negatively related to isolation. However, the isolation effect was significantly stronger for native species. This differential effect of isolation on native species translated into exotic species representing a higher proportion of all plant species on more distant islands. The community similarity of inner harbor islands vs. outer harbor islands was greater for exotic species, indicating that isolation had a weaker influence on individual exotic species. These results contrast with recent work focusing on similarities between exotic and native species and highlight the importance of studies that use an island biogeographic approach to better understand those factors influencing the ecology of invasive species." 
}, { "instance_id": "R57501xR54661", "comparison_id": "R57501", "paper_id": "R54661", "text": "Invasibility and abiotic gradients: the positive correlation between native and exotic plant diversity We sampled the understory community in an old-growth, temperate forest to test alternative hypotheses explaining the establishment of exotic plants. We quantified the individual and net importance of distance from areas of human disturbance, native plant diversity, and environmental gradients in determining exotic plant establishment. Distance from disturbed areas, both within and around the reserve, was not correlated to exotic species richness. Numbers of native and exotic species were positively correlated at large (50 m2) and small (10 m2) plot sizes, a trend that persisted when relationships to environmental gradients were controlled statistically. Both native and exotic species richness increased with soil pH and decreased along a gradient of increasing nitrate availability. Exotic species were restricted to the upper portion of the pH gradient and had individualistic responses to the availability of soil resources. These results are inconsistent with both the diversity-resistance and resource-enrichment hypotheses for invasibility. Environmental conditions favoring native species richness also favor exotic species richness, and competitive interactions with the native flora do not appear to limit the entry of additional species into the understory community at this site. It appears that exotic species with niche requirements poorly represented in the regional flora of native species may establish with relatively little resistance or consequence for native species richness." 
}, { "instance_id": "R57501xR57113", "comparison_id": "R57501", "paper_id": "R57113", "text": "Niche occupation by invasive ground-dwelling predator species in Canarian laurel forests The distribution and co-occurrence of three groups of carnivorous soil macro-invertebrates (Carabidae, Staphylinidae, Chilopoda) were examined in laurel forests of the western Canary Islands. The species numbers per site decreased from East to West (La Gomera, El Hierro, La Palma) in both, beetles and centipedes. No evidence was found for the \u2018diversity-invasibility-hypothesis\u2019 sensu Elton. The number of invasive species per site increased with that of native species in Chilopoda, and was not significant in Carabidae+Staphylinidae. Carabidae and Staphylinidae were combined to form a guild of non-specialized ground-dwelling predatory beetles. The mandible length of adults and larvae was used as an indicator of the preferred food size class to determine the food niche width and niche separation. Two invasive coleopteran species were also examined: Ocypus olens occupied the vacant top predator niche on El Hierro, and Laemostenus complanatus occupied the vacant medium size predator niche on La Palma. Neither of these species was found in laurel forests of any other island where these niches are occupied by autochthonous species, though they are introduced on these islands too. The Chilopoda occurred in the forests with seven invasive and seven native species. Autochthonous and introduced centipedes species of the same size class and group are mutually exclusive." }, { "instance_id": "R57501xR57231", "comparison_id": "R57501", "paper_id": "R57231", "text": "Predicting the landscape-scale distribution of alien plants and their threat to plant diversity Abstract: Invasive alien organisms pose a major threat to global biodiversity. The Cape Peninsula, South Africa, provides a case study of the threat of alien plants to native plant diversity. 
We sought to identify where alien plants would invade the landscape and what their threat to plant diversity could be. This information is needed to develop a strategy for managing these invasions at the landscape scale. We used logistic regression models to predict the potential distribution of six important invasive alien plants in relation to several environmental variables. The logistic regression models showed that alien plants could cover over 89% of the Cape Peninsula. Acacia cyclops and Pinus pinaster were predicted to cover the greatest area. These predictions were overlaid on the current distribution of native plant diversity for the Cape Peninsula in order to quantify the threat of alien plants to native plant diversity. We defined the threat to native plant diversity as the number of native plant species (divided into all species, rare and threatened species, and endemic species) whose entire range is covered by the predicted distribution of alien plant species. We used a null model, which assumed a random distribution of invaded sites, to assess whether area invaded is confounded with threat to native plant diversity. The null model showed that most alien species threaten more plant species than might be suggested by the area they are predicted to invade. For instance, the logistic regression model predicted that P. pinaster threatens 350 more native species, 29 more rare and threatened species, and 21 more endemic species than the null model would predict. Comparisons between the null and logistic regression models suggest that species richness and invasibility are positively correlated and that species richness is a poor indicator of invasive resistance in the study site. Our results emphasize the importance of adopting a spatially explicit approach to quantifying threats to biodiversity, and they provide the information needed to prioritize threats from alien species and the sites that need urgent management intervention." 
}, { "instance_id": "R57501xR57382", "comparison_id": "R57501", "paper_id": "R57382", "text": "Weed numbers in New Zealand's forest and scrub reserves New Zealand's protected natural areas are being increasingly threatened by weeds as the natural landscape is fragmented and surrounding land use intensifies. To assist in designing management to reduce the threat, we attempted to determine the most important reserve characteristics influencing the presence of problem weeds in forest and scrub reserves. Data on 15 reserve characteristics were derived from surveys of 234 reserves. From correlation analysis, analysis of variance and consideration of several multivariate models, it appears that the most important characteristics influencing the number of problem weeds in reserves are proximity to towns, distance from roads and railway lines, human use, reserve shape, and habitat diversity. These factors reflect principally increased proximity to source of propagules associated with intensifying land use, including urbanisation. Reserves with the most weeds are narrow remnants on fertile soils with clearings and a history of modification, and those close to towns or sites of high human activity. If these reserves are to continue to protect natural values, they will require regular attention to prevent the establishment of further weeds. Accidental spread of weeds and disturbance in reserves should be minimised." }, { "instance_id": "R57501xR57131", "comparison_id": "R57501", "paper_id": "R57131", "text": "Phosphorus addition reduces invasion of a longleaf pine savanna (Southeastern USA) by a non-indigenous grass (Imperata cylindrica) Imperata cylindrica is an invasive C4 grass, native to Asia and increasing in frequency throughout the tropics, subtropics, and southeastern USA. Such increases are associated with reduced biodiversity, altered fire regimes, and a more intense competitive environment for commercially important species. We measured rates of clonal spread by I. 
cylindrica from a roadside edge into the interior of two longleaf pine savannas. In addition, we measured the effects of fertilization with nitrogen and phosphorus on clonal invasion of one of these sites. Clonal invasion occurred at both sites and at similar rates. Older portions of an I. cylindrica sward contained fewer species of native pine-savanna plants. Clonal growth rates and aboveground mass of I. cylindrica were reduced by the addition of phosphorus relative to controls by the second growing season at one site. As a group, native species were not affected much by P-addition, although the height of legumes was increased by P addition, and the percent cover of legumes relative to native non-legumes decreased with increasing expected P limitation (i.e., going from P-fertilized to controls to N-fertilized treatments). Clonal invasion was negatively correlated with the relative abundance of legumes in control plots but not in P-fertilized plots. Species richness and percent cover of native plants (both legumes and non-legumes) were dramatically lower in N-fertilized plots than in controls or P-fertilized plots. Species richness of native plants was negatively correlated with final aboveground mass of I. cylindrica in control and P-fertilized plots, but not in N-fertilized plots. The results suggest that I. cylindrica is a better competitor for phosphorus than are native pine-savanna plants, especially legumes, and that short-lived, high-level pulses of phosphorus addition reduce this competitive advantage without negatively affecting native plant diversity. Ratios of soil P to N or native legume to non-legume plant species may provide indicators of the resistance of pristine pine savannas to clonal invasion by I. cylindrica." 
}, { "instance_id": "R57501xR57201", "comparison_id": "R57501", "paper_id": "R57201", "text": "Dominant species identity, not community evenness, regulates invasion in experimental grassland plant communities While there has been extensive interest in understanding the relationship between diversity and invasibility of communities, most studies have only focused on one component of diversity: species richness. Although the number of species can affect community invasibility, other aspects of diversity, including species identity and community evenness, may be equally important. While several field studies have examined how invasibility varies with diversity by manipulating species identity or evenness, the results are often confounded by resource heterogeneity, site history, or disturbance. We designed a mesocosm experiment to examine explicitly the role of dominant species identity and evenness on the invasibility of grassland plant communities. We found that the identity of the dominant plant species, but not community evenness, significantly impacted invasibility. Using path analysis, we found that community composition (dominant species identity) reduced invasion by reducing early-season light availability and increasing late-season plant community biomass. Nitrogen availability was an important factor for the survival of invaders in the second year of the experiment. We also found significant direct effects of certain dominant species on invasion, although the mechanisms driving these effects remain unclear. The magnitude of dominant species effects on invasibility we observed are comparable to species richness effects observed in other studies, showing that species composition and dominant species can have strong effects on the invasibility of a community." 
}, { "instance_id": "R57501xR57356", "comparison_id": "R57501", "paper_id": "R57356", "text": "Biodiversity, invasion resistance, and marine ecosystem function: Reconciling pattern and process A venerable generalization about community resistance to invasions is that more diverse communities are more resistant to invasion. However, results of experimental and observational studies often conflict, leading to vigorous debate about the mechanistic importance of diversity in determining invasion success in the field, as well as other eco- system properties, such as productivity and stability. In this study, we employed both field experiments and observational approaches to assess the effects of diversity on the invasion of a subtidal marine invertebrate community by three species of nonindigenous ascidians (sea squirts). In experimentally assembled communities, decreasing native diversity in- creased the survival and final percent cover of invaders, whereas the abundance of individual species had no effect on these measures of invasion success. Increasing native diversity also decreased the availability of open space, the limiting resource in this system, by buffering against fluctuations in the cover of individual species. This occurred because temporal patterns of abundance differed among species, so space was most consistently and completely occupied when more species were present. When we held diversity constant, but manipulated resource availability, we found that the settlement and recruitment of new invaders dramatically increased with increasing availability of open space. This suggests that the effect of diversity on invasion success is largely due to its effects on resource (space) availability. Apart from invasion resistance, the increased temporal stability found in more diverse communities may itself be considered an enhancement of ecosystem func- tion. 
In field surveys, we found a strong negative correlation between native-species richness and the number and frequency of nonnative invaders at the scale of both a single quadrat (25 3 25 cm), and an entire site (50 3 50 m). Such a pattern suggests that the means by which diversity affects invasion resistance in our experiments is important in determining the distribution of invasive species in the field. Further synthesis of mechanistic and ob- servational approaches should be encouraged, as this will increase our understanding of the conditions under which diversity does (and does not) play an important role in deter- mining the distribution of invaders in the field." }, { "instance_id": "R57501xR57359", "comparison_id": "R57501", "paper_id": "R57359", "text": "A null model of exotic plant diversity tested with exotic and native species-area relationships At large spatial scales, exotic and native plant diversity exhibit a strong positive relationship. This may occur because exotic and native species respond similarly to processes that influence diversity over large geographical areas. To test this hypothesis, we compared exotic and native species-area relationships within six North American ecoregions. We predicted and found that within ecoregions the ratio of exotic to native species richness remains constant with increasing area. Furthermore, we predicted that areas with more native species than predicted by the species-area relationship would have proportionally more exotics as well. We did find that these exotic and native deviations were highly correlated, but areas that were good (or bad) for native plants were even better (or worse) for exotics. Similar processes appear to influence exotic and native plant diversity but the degree of this influence may differ with site quality." 
}, { "instance_id": "R57501xR57223", "comparison_id": "R57501", "paper_id": "R57223", "text": "Distributions of exotic plants in eastern Asia and North America Although some plant traits have been linked to invasion success, the possible effects of regional factors, such as diversity, habitat suitability, and human activity are not well understood. Each of these mechanisms predicts a different pattern of distribution at the regional scale. Thus, where climate and soils are similar, predictions based on regional hypotheses for invasion success can be tested by comparisons of distributions in the source and receiving regions. Here, we analyse the native and alien geographic ranges of all 1567 plant species that have been introduced between eastern Asia and North America or have been introduced to both regions from elsewhere. The results reveal correlations between the spread of exotics and both the native species richness and transportation networks of recipient regions. This suggests that both species interactions and human-aided dispersal influence exotic distributions, although further work on the relative importance of these processes is needed." }, { "instance_id": "R57501xR57354", "comparison_id": "R57501", "paper_id": "R57354", "text": "Species diversity and invasion resistance in a marine ecosystem Theory predicts that systems that are more diverse should be more resistant to exotic species, but experimental tests are needed to verify this. In experimental communities of sessile marine invertebrates, increased species richness significantly decreased invasion success, apparently because species-rich communities more completely and efficiently used available space, the limiting resource in this system. Declining biodiversity thus facilitates invasion in this system, potentially accelerating the loss of biodiversity and the homogenization of the world's biota." 
}, { "instance_id": "R57501xR57369", "comparison_id": "R57501", "paper_id": "R57369", "text": "Scale and plant invasions: a theory of biotic acceptance We examined the relationship between native and alien plant species richness, cover, and estimated biomass at multiple spatial scales. The large dataset included 7051 1-m subplots, 1443 10-m subplots, and 727 100-m subplots, nested in 727 1000-m plots in 37 natural vegetation types in seven states in the central United States. We found that native and alien species richness (averaged across the vegetation types) increased significantly with plot area. Furthermore, the relationship between native and alien species richness became increasingly positive and significant from the plant neighbourhood scale (1-m) to the 10-m, 100-m, and the 1000-m scale where over 80% of the vegetation types had positive slopes between native and alien species richness. Both native and alien plant species may be responding to increased resource availability and/or habitat heterogeneity with increased area. We found significant positive relationships between the coefficient of variation of native cover in 1-m subplots in a vegetation type (i.e. a measure of habitat heterogeneity), and both the relative cover and relative biomass of alien plant species. At the 1000-m scale, we did find weak negative relationships between native species richness and the cover, biomass, and relative cover of alien plant species. However, we found very strong positive relationships between alien species richness and the cover, relative cover, and relative biomass of alien species at regional scales. These results, along with many other field studies in natural ecosystems, show that the dominant general pattern in invasion ecology at multiple spatial scales is one of \u201cbiotic acceptance\u201d where natural ecosystems tend to accommodate the establishment and coexistence of introduced species despite the presence and abundance of native species." 
}, { "instance_id": "R57501xR52096", "comparison_id": "R57501", "paper_id": "R52096", "text": "Species-rich Scandinavian grasslands are inherently open to invasion Invasion of native habitats by alien or generalist species is recognized worldwide as one of the major causes behind species decline and extinction. One mechanism determining community invasibility, i.e. the susceptibility of a community to invasion, which has been supported by recent experimental studies, is species richness and functional diversity acting as barriers to invasion. We used Scandinavian semi-natural grasslands, exceptionally species-rich at small spatial scales, to examine this mechanism, using three grassland generalists and one alien species as experimental invaders. Removal of two putative functional groups, legumes and dominant non-legume forbs, had no effect on invasibility except a marginally insignificant effect of non-legume forb removal. The amount of removed biomass and original plot species richness had no effect on invasibility. Actually, invasibility was high already in the unmanipulated community, leading us to further examine the relationship between invasion and propagule pressure, i.e. the inflow of seeds into the community. Results from an additional experiment suggested that these species-rich grasslands are effectively open to invasion and that diversity may be immigration driven. Thus, species richness is no barrier to invasion. The high species diversity is probably in itself a result of the community being highly invasible, and species have accumulated at small scales during centuries of grassland management." }, { "instance_id": "R57501xR57225", "comparison_id": "R57501", "paper_id": "R57225", "text": "Native and introduced fish species richness in Chilean Patagonian lakes: inferences on invasion mechanisms using salmonid-free lakes Aim Geographic patterns of species richness have been linked to many physical and biological drivers. 
In this study, we document and explain gradients of species richness for native and introduced freshwater fish in Chilean lakes. We focus on the role of the physical environment to explain native richness patterns. For patterns of introduced salmonid richness and dominance, we also examine the biotic resistance and human activity hypotheses. We were particularly interested in identifying the factors that best explain the persistence of salmonid-free lakes in Patagonia. Location Chile (39\u00b0 to 54\u00b0S). Methods We conducted an extensive survey of 63 lakes, over a broad latitudinal range. We tested for the importance of temperature, ecosystem size, current and historic aquatic connectivity as well as measures of human activity (road access and land use) in determining patterns of native and introduced richness. Results Introduced species richness was positively correlated with native richness. Native and introduced richness declined with latitude, increased with temperature and ecosystem size. Variation in native richness was related to historic drainage connections, while introduced richness and salmonid dominance were significantly affected by current habitat connectivity. We found a total of 15 salmonid-free lakes, all located in remote areas south of 45\u00b0S, and all upstream of major naturally occurring physical barriers. Main conclusions Temperature, as a correlate of latitude, and lake size were key determinants of native and introduced species richness in Chilean lakes and were responsible for the positive correlation between native and introduced richness. We found no evidence for biotic resistance by native species to salmonid expansion, and although the original introductions were human-mediated, current patterns of introduced richness were not related to human activity, as measured by road access or land use. 
Rather, environmental factors, especially habitat connectivity and temperature, appear to limit salmonid expansion within Chilean freshwaters." }, { "instance_id": "R57501xR57288", "comparison_id": "R57501", "paper_id": "R57288", "text": "Native plant diversity resists invasion at both low and high resource levels Human modification of the environment is causing both loss of species and changes in resource availability. While studies have examined how species loss at the local level can influence invasion resistance, interactions between species loss and other components of environmental change remain poorly studied. In particular, the manner in which native diversity interacts with resource availability to influence invasion resistance is not well understood. We created experimental plant assemblages that varied in native species (1-16 species) and/or functional richness (defined by rooting morphology and phenology; one to five functional groups). We crossed these diversity treatments with resource (water) addition to determine their interactive effects on invasion resistance to spotted knapweed (Centaurea maculosa), a potent exotic invader in the intermountain West of the United States. We also determined how native diversity and resource addition influenced plant-available soil nitrogen, soil moisture, and light. Assemblages with lower species and functional diversity were more heavily invaded than assemblages with greater species and functional diversity. In uninvaded assemblages, experimental addition of water increased soil moisture and plant-available nitrogen and decreased light availability. The availability of these resources generally declined with increasing native plant diversity. Although water addition increased susceptibility to invasion, it did not fundamentally change the negative relationship between diversity and invasibility. Thus, native diversity provided strong invasion resistance even under high resource availability. 
These results suggest that the effects of local diversity can remain robust despite enhanced resource levels that are predicted under scenarios of global change." }, { "instance_id": "R57501xR57393", "comparison_id": "R57501", "paper_id": "R57393", "text": "Diversity effects on invasion vary with life history stage in marine macroalgae Most experimental studies of diversity effects on invasibility have reported negative relationships while observational studies have often found positive correlations between the numbers of exotic and native taxa. Nearly all of these studies have been done with terrestrial plants or aquatic invertebrates. We investigated effects of native macroalgal diversity on invasion success of the introduced macroalga Sargassum muticum (Yendo) Fensholt (Phaeophyceae: Fucales) on the west coast of Vancouver Island. We conducted both observational field surveys of the correlation between native diversity and exotic cover, and experimental manipulations of native diversity in constructed 25 \u00d7 25 cm communities. Field surveys found higher cover of S. muticum in plots with low native diversity, suggesting a negative relationship between diversity and invasibility at the neighbourhood scale. The experiment found initial cover of S. muticum germlings was highest in plots with greater diversity. Over the duration of the experiment cover of settled germlings increased fastest in the low diversity plots, so that there was a weak negative effect of diversity on final cover of the invader after 77 days. The slope of the relationship reversed over time, with field patterns and experimental results converging at the end of the experiment. Our results suggest native diversity has contrasting effects on different stages of invasion. Diversity facilitates invader recruitment of S. muticum but decreases growth and/or survivorship." 
}, { "instance_id": "R57501xR54707", "comparison_id": "R57501", "paper_id": "R54707", "text": "Do biodiversity and human impact influence the introduction or establishment of alien mammals? What determines the number of alien species in a given region? \u2018Native biodiversity\u2019 and \u2018human impact\u2019 are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that correlation between number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in number of alien mammals cannot be explained by differences in establishment success. Our findings highlight importance of human activities and question, at least for mammals in Europe, importance of biotic acceptance and resistance." }, { "instance_id": "R57501xR57121", "comparison_id": "R57501", "paper_id": "R57121", "text": "Environmental productivity and biodiversity effects on invertebrate community invasibility Productivity influences the availability of resources for colonizing species. 
Biodiversity may also influence invasibility of communities because of more complete use of resource types with increasing species richness. We hypothesized that communities with higher environmental productivity and lower species richness should be more invasible by a competitor than those where productivity is low or where richness is high. We experimentally examined the invasion resistance of herbivorous meiofauna of Jamaican rock pools by a competitor crustacean (Ostracoda: Potamocypris sp. (Brady)) by contrasting three levels of nutrient input and four levels of species richness. Although relative abundance (dominance) of the invasive was largely unaffected by resource availability, increasing resources did increase the success rate of establishment. Effects of species richness on dominance were more pronounced with a trend towards the lowest species richness treatment of 2 resident species being more invasible than those with 4, 6, or 7 species. These results can be attributed to a \u2018sampling effect\u2019 associated with the introduction of Alona davidii (Richard) into the higher biodiversity treatments. Alona dominated the communities where it established and precluded dominance by the introduced ostracod. Our experimental study supports the idea that niche availability and community interactions define community invasibility and does not support the application of a neutral community model for local food web management where predictions of exotic species impacts are needed." }, { "instance_id": "R57501xR57177", "comparison_id": "R57501", "paper_id": "R57177", "text": "Productivity alters the scale dependence of the diversity-invasibility relationship At small scales, areas with high native diversity are often resistant to invasion, while at large scales, areas with more native species harbor more exotic species, suggesting that different processes control the relationship between native and exotic species diversity at different spatial scales. 
Although the small-scale negative relationship between native and exotic diversity has a satisfactory explanation, we lack a mechanistic explanation for the change in relationship to positive at large scales. We investigated the native-exotic diversity relationship at three scales (range: 1-4000 km2) in California serpentine, a system with a wide range in the productivity of sites from harsh to lush. Native and exotic diversity were positively correlated at all three scales; it is rarer to detect a positive relationship at the small scales within which interactions between individuals occur. However, although positively correlated on average, the small-scale relationship between native and exotic diversity was positive at low-productivity sites and negative at high-productivity sites. Thus, the change in the relationship between native and exotic diversity does not depend on spatial scale per se, but occurs whenever environmental conditions change to promote species coexistence rather than competitive exclusion. This occurred within a single spatial scale when the environment shifted from being locally unproductive to productive." }, { "instance_id": "R57501xR55050", "comparison_id": "R57501", "paper_id": "R55050", "text": "ECOLOGICAL RESISTANCE TO BIOLOGICAL INVASION OVERWHELMED BY PROPAGULE PRESSURE Models and observational studies have sought patterns of predictability for invasion of natural areas by nonindigenous species, but with limited success. In a field experiment using forest understory plants, we jointly manipulated three hypothesized determinants of biological invasion outcome: resident diversity, physical disturbance and abiotic conditions, and propagule pressure. The foremost constraints on net habitat invasibility were the number of propagules that arrived at a site and naturally varying resident plant density. 
The physical environment (flooding regime) and the number of established resident species had negligible impact on habitat invasibility as compared to propagule pressure, despite manipulations that forced a significant reduction in resident richness, and a gradient in flooding from no flooding to annual flooding. This is the first experimental study to demonstrate the primacy of propagule pressure as a determinant of habitat invasibility in comparison with other candidate controlling factors." }, { "instance_id": "R57501xR57137", "comparison_id": "R57501", "paper_id": "R57137", "text": "Control of plant species diversity and community invasibility by species immigration: seed richness versus seed density Brown, R. L. and Fridley, J. D. 2003. Control of plant species diversity and community invasibility by species immigration: seed richness versus seed density. \u2013 Oikos 102: 15\u201324. Immigration rates of species into communities are widely understood to influence community diversity, which in turn is widely expected to influence the susceptibility of ecosystems to species invasion. For a given community, however, immigration processes may impact diversity by means of two separable components: the number of species represented in seed inputs and the density of seed per species. The independent effects of these components on plant species diversity and consequent rates of invasion are poorly understood. We constructed experimental plant communities through repeated seed additions to independently measure the effects of seed richness and seed density on the trajectory of species diversity during the development of annual plant communities. Because we sowed species not found in the immediate study area, we were able to assess the invasibility of the resulting communities by recording the rate of establishment of species from adjacent vegetation. 
Early in community development when species only weakly interacted, seed richness had a strong effect on community diversity whereas seed density had little effect. After the plants became established, the effect of seed richness on measured diversity strongly depended on seed density, and disappeared at the highest level of seed density. The ability of surrounding vegetation to invade the experimental communities was decreased by seed density but not by seed richness, primarily because the individual effects of a few sown species could explain the observed invasion rates. These results suggest that seed density is just as important as seed richness in the control of species diversity, and perhaps a more important determinant of community invasibility than seed richness in dynamic plant assemblages." }, { "instance_id": "R57501xR57399", "comparison_id": "R57501", "paper_id": "R57399", "text": "Realistic variation in species composition affects grassland production, resource use and invasion resistance We investigated the effects of realistic variation in plant species and functional group composition, with species occurring at realistic abundances, on ecosystem processes in exotic-dominated California grassland communities. Progressive species removals from microcosm communities, designed to mimic nested variation in diversity observed in the field, reduced grassland production, resistance to intentional invasions, and resistance to natural colonization by new species. Three lines of evidence point to the particular importance of intensified competition within a single functional group\u2014late-active forbs\u2014in explaining the observed effects of realistic species loss order on community resistance. First, reduced success of naturally colonizing species in more diverse assemblages was dominated by declining colonization by late-active forbs. 
Second, increasing late-active forb biomass appeared to reduce the biomass of intentionally introduced yellow starthistle (Centaurea solstitialis, a late-season forb) both within and across diversity levels. Finally, starthistle addition reduced biomass of resident late-season forbs but not of any other functional group. Increasing diversity increased light levels and soil moisture availability in spring and summer, providing a proximate mechanism linking our realistic species loss order to decreased community resistance. Starthistle addition reduced light and soil moisture availability but not N across richness levels, mirroring the apparent effects of the additional late-active forb species present in higher diversity treatments. Species losses that entail the early loss of whole or key functional groups could, through mechanisms like those we explore, have greater ecosystem consequences than those suggested by randomized-loss experiments." }, { "instance_id": "R57501xR57111", "comparison_id": "R57501", "paper_id": "R57111", "text": "Environmental and biotic correlates to lionfish invasion success in Bahamian coral reefs Lionfish (Pterois volitans), venomous predators from the Indo-Pacific, are recent invaders of the Caribbean Basin and southeastern coast of North America. Quantification of invasive lionfish abundances, along with potentially important physical and biological environmental characteristics, permitted inferences about the invasion process of reefs on the island of San Salvador in the Bahamas. Environmental wave-exposure had a large influence on lionfish abundance, which was more than 20 and 120 times greater for density and biomass respectively at sheltered sites as compared with wave-exposed environments. Our measurements of topographic complexity of the reefs revealed that lionfish abundance was not driven by habitat rugosity. 
Lionfish abundance was not negatively affected by the abundance of large native predators (or large native groupers) and was also unrelated to the abundance of medium prey fishes (total length of 5\u201310 cm). These relationships suggest that (1) higher-energy environments may impose intrinsic resistance against lionfish invasion, (2) habitat complexity may not facilitate the lionfish invasion process, (3) predation or competition by native fishes may not provide biotic resistance against lionfish invasion, and (4) abundant prey fish might not facilitate lionfish invasion success. The relatively low biomass of large grouper on this island could explain our failure to detect suppression of lionfish abundance and we encourage continuing the preservation and restoration of potential lionfish predators in the Caribbean. In addition, energetic environments might exert direct or indirect resistance to the lionfish proliferation, providing native fish populations with essential refuges." }, { "instance_id": "R57501xR57229", "comparison_id": "R57501", "paper_id": "R57229", "text": "The introduction of Littorina littorea to British Columbia, Canada: potential impacts and the importance of biotic resistance by native predators Although the establishment and spread of non-indigenous species depends upon survival in the face of novel environmental conditions and novel biological interactions, relatively little attention has been focused on the specific role of native predators in limiting invasion success. The European common periwinkle, Littorina littorea, was recently introduced to the Pacific coast of Canada and provides a case study of an introduction into an area with an important predator guild (sea stars) that is functionally minor in the invader\u2019s native habitat. Here, we assess the likelihood of establishment, spread, and negative ecological impact of this introduced gastropod, with an emphasis on the role of native sea stars as agents of biotic resistance. 
Size frequency distributions and local market availability suggest that L. littorea was most likely introduced via the live seafood trade. Non-native hitchhikers (e.g., the trematode Cryptocotyle lingua) were found on/in both market and field specimens. Laboratory studies and field observations confirmed that L. littorea can survive seasonal low salinity in Vancouver, British Columbia. Periwinkles also readily consumed native Ulva, suggesting that periwinkles could impact native communities via herbivory or resource competition. Unlike native gastropods, however, L. littorea lacked behavioural avoidance responses to Northeast Pacific predatory sea stars (Pisaster ochraceus and Pycnopodia helianthoides), and sea star predation rates on L. littorea were much higher than predation rates on native turban snails (Chlorostoma funebralis) in common garden experiments. We therefore expect periwinkle establishment in British Columbia to be limited to areas with low predator density, as is seen in its field distribution to date. We caution that this conclusion may understate the importance of the L. littorea introduction if it also serves as a vector for additional non-indigenous species such as C. lingua." }, { "instance_id": "R57501xR57286", "comparison_id": "R57501", "paper_id": "R57286", "text": "Rare species loss alters ecosystem function - invasion resistance The imminent decline in species diversity coupled with increasing exotic species introductions has provoked investigation into the role of resident diversity in community resistance to exotic species colonization. Here we present the results of a field study using an experimental method in which diversity was altered by removal of less abundant species and the resulting disturbance was controlled for by removal of an equivalent amount of biomass of the most common species from paired plots. Following these manipulations, the exotic grass, Lolium temulentum, was introduced. 
We found that exotic species establishment was higher in plots in which diversity was successfully reduced by removal treatments and was inversely related to imposed species richness. These results demonstrate that less common species can significantly influence invasion events and highlight the potential role of less common species in the maintenance of ecosystem function." }, { "instance_id": "R57501xR57328", "comparison_id": "R57501", "paper_id": "R57328", "text": "Plant diversity, herbivory and resistance of a plant community to invasion in Mediterranean annual communities Several components of the diversity of plant communities, such as species richness, species composition, number of functional groups and functional composition, have been shown to directly affect the performance of exotic species. Exotics can also be affected by herbivores of the native plant community. However, these two possible mechanisms limiting invasion have never been investigated together. The aim of this study was to investigate the relationships between plant diversity, herbivory and performance of two annual exotics, Conyza bonariensis and C. canadensis, in Mediterranean annual communities. We wanted to test whether herbivory of these exotics was influenced either by species richness, functional-group richness or functional-group composition. We also studied the relationship between herbivory on the exotic species and their performance. Herbivory increased with increasing species and functional-group richness for both Conyza species. These patterns are interpreted as reflecting a greater number of available herbivore niches in a richer, more complex, plant community. The identities of functional groups also affected Conyza herbivory, which decreased in the presence of Asteraceae or Fabaceae and increased in the presence of Poaceae. 
Increasing herbivory had consequences for vegetative and demographic parameters of both invasive species: survival, final biomass and net fecundity decreased with increasing herbivory, leading to a loss of reproductive capacity. We conclude that communities characterised by a high number of grass species instead of Asteraceae or Fabaceae may be more resistant to invasion by the two Conyza species, in part due to predation by native herbivores." }, { "instance_id": "R57501xR57367", "comparison_id": "R57501", "paper_id": "R57367", "text": "Exotic plant species invade hot spots of native plant diversity Some theories and experimental studies suggest that areas of low plant species richness may be invaded more easily than areas of high plant species richness. We gathered nested-scale vegetation data on plant species richness, foliar cover, and frequency from 200 1-m2 subplots (20 1000-m2 modified-Whittaker plots) in the Colorado Rockies (USA), and 160 1-m2 subplots (16 1000-m2 plots) in the Central Grasslands in Colorado, Wyoming, South Dakota, and Minnesota (USA) to test the generality of this paradigm. At the 1-m2 scale, the paradigm was supported in four prairie types in the Central Grasslands, where exotic species richness declined with increasing plant species richness and cover. At the 1-m2 scale, five forest and meadow vegetation types in the Colorado Rockies contradicted the paradigm; exotic species richness increased with native-plant species richness and foliar cover. At the 1000-m2 plot scale (among vegetation types), 83% of the variance in exotic species richness in the Central Grasslands was explained by the total percentage of nitrogen in the soil and the cover of native plant species. In the Colorado Rockies, 69% of the variance in exotic species richness in 1000-m2 plots was explained by the number of native plant species and the total percentage of soil carbon. 
At landscape and biome scales, exotic species primarily invaded areas of high species richness in the four Central Grasslands sites and in the five Colorado Rockies vegetation types. For the nine vegetation types in both biomes, exotic species cover was positively correlated with mean foliar cover, mean soil percentage N, and the total number of exotic species. These patterns of invasibility depend on spatial scale, biome and vegetation type, spatial autocorrelation effects, availability of resources, and species-specific responses to grazing and other disturbances. We conclude that: (1) sites high in herbaceous foliar cover and soil fertility, and hot spots of plant diversity (and biodiversity), are invasible in many landscapes; and (2) this pattern may be more closely related to the degree resources are available in native plant communities, independent of species richness. Exotic plant invasions in rare habitats and distinctive plant communities pose a significant challenge to land managers and conservation biologists." }, { "instance_id": "R57501xR57348", "comparison_id": "R57501", "paper_id": "R57348", "text": "Exotic species in a C4-dominated grassland: invasibility, disturbance, and community structure Abstract We used data from a 15-year experiment in a C4-dominated grassland to address the effects of community structure (i.e., plant species richness, dominance) and disturbance on invasibility, as measured by abundance and richness of exotic species. Our specific objectives were to assess the temporal and spatial patterns of exotic plant species in a native grassland in Kansas (USA) and to determine the factors that control exotic species abundance and richness (i.e., invasibility). Exotic species (90% C3 plants) comprised approximately 10% of the flora, and their turnover was relatively high (30%) over the 15-year period. We found that disturbances significantly affected the abundance and richness of exotic species. 
In particular, long-term annually burned watersheds had lower cover of exotic species than unburned watersheds, and fire reduced exotic species richness by 80\u201390%. Exotic and native species richness were positively correlated across sites subjected to different fire (r = 0.72) and grazing (r = 0.67) treatments, and the number of exotic species was lowest on sites with the highest productivity of C4 grasses (i.e., high dominance). These results provide strong evidence for the role of community structure, as affected by disturbance, in determining invasibility of this grassland. Moreover, a significant positive relationship between exotic and native species richness was observed within a disturbance regime (annually burned sites, r = 0.51; unburned sites, r = 0.59). Thus, invasibility of this C4-dominated grassland can also be directly related to community structure independent of disturbance." }, { "instance_id": "R57501xR57235", "comparison_id": "R57501", "paper_id": "R57235", "text": "Factors governing rate of invasion: a natural experiment using Argentine ants Abstract Predicting the success of biological invasions is a major goal of invasion biology. Determining the causes of invasions, however, can be difficult, owing to the complexity and spatio-temporal heterogeneity of the invasion process. The purpose of this study was to assess factors influencing rate of invasion for the Argentine ant (Linepithema humile), a widespread invasive species. The rate of invasion for 20 independent Argentine ant populations was measured over 4 years in riparian woodlands in the lower Sacramento River Valley of northern California. A priori predictors of rate of invasion included stream flow (a measure of abiotic suitability), disturbance, and native ant richness. In addition, baits were used to estimate the abundance of Argentine ants and native ants at the 20 sites. 
A multiple regression model accounted for nearly half of the variation in mean rate of invasion (R2 = 0.46), but stream flow was the only significant factor in this analysis. Argentine ants spread, on average, 16 m year\u22121 at sites with permanent stream flow and retreated, on average, \u22126 m year\u22121 at sites with intermittent stream flow. Rate of invasion was independent of both disturbance and native ant richness. Argentine ants recruited to more baits in higher numbers in invaded areas than did native ants in uninvaded areas. In addition, rate of invasion was positively correlated with the proportion of baits recruited to by native ants in uninvaded areas. Together, these findings suggest that abiotic suitability is of paramount importance in determining rate of invasion for the Argentine ant." }, { "instance_id": "R57501xR57263", "comparison_id": "R57501", "paper_id": "R57263", "text": "Fish invasions in the world's river systems: When natural processes are blurred by human activities Because species invasions are a principal driver of the human-induced biodiversity crisis, the identification of the major determinants of global invasions is a prerequisite for adopting sound conservation policies. Three major hypotheses, which are not necessarily mutually exclusive, have been proposed to explain the establishment of non-native species: the \u201chuman activity\u201d hypothesis, which argues that human activities facilitate the establishment of non-native species by disturbing natural landscapes and by increasing propagule pressure; the \u201cbiotic resistance\u201d hypothesis, predicting that species-rich communities will readily impede the establishment of non-native species; and the \u201cbiotic acceptance\u201d hypothesis, predicting that environmentally suitable habitats for native species are also suitable for non-native species. 
We tested these hypotheses and report here a global map of fish invasions (i.e., the number of non-native fish species established per river basin) using an original worldwide dataset of freshwater fish occurrences, environmental variables, and human activity indicators for 1,055 river basins covering more than 80% of Earth's surface. First, we identified six major invasion hotspots where non-native species represent more than a quarter of the total number of species. According to the World Conservation Union, these areas are also characterised by the highest proportion of threatened fish species. Second, we show that the human activity indicators account for most of the global variation in non-native species richness, which is highly consistent with the \u201chuman activity\u201d hypothesis. In contrast, our results do not provide support for either the \u201cbiotic acceptance\u201d or the \u201cbiotic resistance\u201d hypothesis. We show that the biogeography of fish invasions matches the geography of human impact at the global scale, which means that natural processes are blurred by human activities in driving fish invasions in the world's river systems. In view of our findings, we fear massive invasions in developing countries with a growing economy as already experienced in developed countries. Anticipating such potential biodiversity threats should therefore be a priority." }, { "instance_id": "R57501xR57361", "comparison_id": "R57501", "paper_id": "R57361", "text": "Species richness and patterns of invasion in plants, birds, and fishes in the United States We quantified broad-scale patterns of species richness and species density (mean # species/km2) for native and non-indigenous plants, birds, and fishes in the continental USA and Hawaii. 
We hypothesized that the species density of native and non-indigenous taxa would generally decrease in northern latitudes and higher elevations following declines in potential evapotranspiration, mean temperature, and precipitation. County data on plants (n = 3004 counties) and birds (n=3074 counties), and drainage (6 HUC) data on fishes (n = 328 drainages) showed that the densities of native and non-indigenous species were strongly positively correlated for plant species (r = 0.86, P < 0.0001), bird species (r = 0.93, P<0.0001), and fish species (r = 0.41, P<0.0001). Multiple regression models showed that the densities of native plant and bird species could be strongly predicted (adj. R2 = 0.66 in both models) at county levels, but fish species densities were less predictable at drainage levels (adj. R2 = 0.31, P<0.0001). Similarly, non-indigenous plant and bird species densities were strongly predictable (adj. R2 = 0.84 and 0.91 respectively), but non-indigenous fish species density was less predictable (adj. R2 = 0.38). County level hotspots of native and non-indigenous plants, birds, and fishes were located in low elevation areas close to the coast with high precipitation and productivity (vegetation carbon). We show that (1) native species richness can be moderately well predicted with abiotic factors; (2) human populations have tended to settle in areas rich in native species; and (3) the richness and density of non-indigenous plant, bird, and fish species can be accurately predicted from biotic and abiotic factors largely because they are positively correlated to native species densities. We conclude that while humans facilitate the initial establishment, invasions of non-indigenous species, the spread and subsequent distributions of non-indigenous species may be controlled largely by environmental factors." 
}, { "instance_id": "R57501xR57273", "comparison_id": "R57501", "paper_id": "R57273", "text": "Phalaris arundinacea seedling establishment: effects of canopy complexity in fen, mesocosm, and restoration experiments Phalaris arundinacea L. (reed canary grass) is a major invader of wetlands in temperate North America; it creates monotypic stands and displaces native vegetation. In this study, the effect of plant canopies on the establishment of P. arundinacea from seed in a fen, fen-like mesocosms, and a fen restoration site was assessed. In Wingra Fen, canopies that were more resistant to P. arundinacea establishment had more species (eight or nine versus four to six species) and higher cover of Aster firmus. In mesocosms planted with Glyceria striata plus 1, 6, or 15 native species, all canopies closed rapidly and prevented P. arundinacea establishment from seed, regardless of the density of the matrix species or the number of added species. Only after gaps were created in the canopy was P. arundinacea able to establish seedlings; then, the 15-species treatment reduced establishment to 48% of that for single-species canopies. A similar experiment in the restoration site produced less cover of native plants, and P. arundinacea recruited more readily. Results suggest that, where conditions are favorable for native plant growth, even species-poor canopies can inhibit P. arundinacea establishment from seed, but when disturbances create gaps, species-rich canopies confer greater resistance to invasion.Key words: diversity, establishment, fen, invasion resistance, species richness, wetlands." }, { "instance_id": "R57501xR57181", "comparison_id": "R57501", "paper_id": "R57181", "text": "Community structure, succession and invasibility in a seasonal deciduous forest in southern Brazil Majority of invasive trees colonize grasslands, shrublands, and temperate forests. 
Hovenia dulcis is an exception, because it is one of the most pervasive invaders in Brazilian subtropical forests where it has changed their structure and composition. This study has aimed to identify the clues for its success by defining the structural and functional characteristics of plant communities in different stages of succession with and without H. dulcis. Following the general assumptions of invasion ecology, we expected that H. dulcis establishment and invasion success would be significantly higher in early successional communities, with high resource availability and low species richness and diversity, as well as low functional diversity. Contrary to this hypothesis, no differences were found between plant communities invaded and non-invaded by H. dulcis at three different succession stages. No relationship was found between species richness and diversity and functional diversity, with respect to invasibility along the successional gradient. Hovenia dulcis is strongly associated with semi-open vegetation, where the species was found in higher density. The invasion of open vegetation is more recent, providing evidence of the species\u2019s ability to invade plant communities in early successional stages. We concluded that the colonization by H. dulcis was associated with forest openness, but the species is also able to colonize semi-open vegetation, and persist in the successionally more advanced communities." }, { "instance_id": "R57501xR57238", "comparison_id": "R57501", "paper_id": "R57238", "text": "Forest invasibility in communities in southeastern New York While biological invasions have been the subject of considerable attention both historically and recently, the factors controlling the susceptibility of communities to plant invasions remain controversial. We surveyed 44 sites in southeastern New York State to examine the relationships between plant community characteristics, soil characteristics, and nonnative plant invasion. 
Soil nitrogen mineralization and nitrification rates were strongly related to the degree of site invasion (F= 30.2, P < 0.0001 and F= 11.8, P < 0.005, respectively), and leaf C : N ratios were negatively correlated with invasion (R2= 0.22, P < 0.0001). More surprisingly, there was a strong positive relationship between soil calcium levels and the degree of site invasion (partial r= 0.70, P < 0.01), and there were also positive relationships between invasion and soil magnesium and phosphorus. We found, in addition, a positive factor-ceiling relationship between native species diversity and invasive species diversity. This positive relationship between native and invasive diversity contradicts earlier hypotheses concerning the relationships between species diversity and invasion, but supports some recent findings. Cluster analysis distinguished two broad forest community types at our sites: pine barrens and mixed hardwood communities. Invaders were significantly more abundant in mixed hardwood than in pine barrens communities (Mann\u2013Whitney U = 682.5, P < 0.0001). Even when evaluating the mixed hardwood communities alone, invasion remained significantly positively correlated with soil fertility (calcium, magnesium, and net nitrogen mineralization rates). Soil texture and pH were not useful predictors of the degree to which forests were invaded. Nitrogen and calcium are critical components of plant development, and species better able to take advantage of increased nutrient availability may out-perform others at sites with higher nutrient levels. These results have implications for areas such as the eastern United States, where anthropogenic changes in the availability of nitrogen and calcium are affecting many plant communities." 
}, { "instance_id": "R57501xR57352", "comparison_id": "R57501", "paper_id": "R57352", "text": "Productivity, herbivory, and species traits rather than diversity influence invasibility of experimental phytoplankton communities Biological invasions are a major threat to natural biodiversity; hence, understanding the mechanisms underlying invasibility (i.e., the susceptibility of a community to invasions by new species) is crucial. Invasibility of a resident community may be affected by a complex but hitherto hardly understood interplay of (1) productivity of the habitat, (2) diversity, (3) herbivory, and (4) the characteristics of both invasive and resident species. Using experimental phytoplankton microcosms, we investigated the effect of nutrient supply and species diversity on the invasibility of resident communities for two functionally different invaders in the presence or absence of an herbivore. With increasing nutrient supply, increased herbivore abundance indicated enhanced phytoplankton biomass production, and the invasion success of both invaders showed a unimodal pattern. At low nutrient supply (i.e., low influence of herbivory), the invasibility depended mainly on the competitive abilities of the invaders, whereas at high nutrient supply, the susceptibility to herbivory dominated. This resulted in different optimum nutrient levels for invasion success of the two species due to their individual functional traits. To test the effect of diversity on invasibility, a species richness gradient was generated by random selection from a resident species pool at an intermediate nutrient level. Invasibility was not affected by species richness; instead, it was driven by the functional traits of the resident and/or invasive species mediated by herbivore density. 
Overall, herbivory was the driving factor for invasibility of phytoplankton communities, which implies that other factors affecting the intensity of herbivory (e.g., productivity or edibility of primary producers) indirectly influence invasions." }, { "instance_id": "R57501xR57195", "comparison_id": "R57501", "paper_id": "R57195", "text": "Food web structure provides biotic resistance against plankton invasion attempts It is generally accepted that native communities provide resistance against invaders through biotic interactions. However, much remains uncertain about the types of ecological processes and community attributes that contribute to biotic resistance. We used experimental mesocosms to examine how zooplankton community structure, invertebrate predation, and nutrient supply jointly affected the establishment of the exotic Daphnia lumholtzi. We predicted that establishment would increase with declining biomass and diversity of native zooplankton communities and that an invertebrate predator (IP) would indirectly facilitate the establishment of D. lumholtzi due to its relatively long predator-deterring spines. Furthermore, we hypothesized that elevated nutrient supply would increase algal food availability and facilitate establishment. Only when the biomass and diversity of native zooplankton were significantly reduced, was D. lumholtzi able to successfully invade mesocosms. Although invertebrate predation and resource supply modified attributes of native zooplankton communities, they did not influence the establishment of D. lumholtzi. Overall, our results are consistent with observed population dynamics in invaded reservoirs where D. lumholtzi tends to be present only during the late summer, coinciding with historic mid-summer declines in native zooplankton populations. Lakes and reservoirs may be more susceptible to invasion not only by D. 
lumholtzi, but also by other planktonic species, in the late summer when native communities exhibit characteristics associated with lower levels of biotic resistance." }, { "instance_id": "R57501xR57279", "comparison_id": "R57501", "paper_id": "R57279", "text": "Effects of a directional abiotic gradient on plant community dynamics and invasion in a coastal dune system Summary 1 Local abiotic factors are likely to play a crucial role in modifying the relative abundance of native and exotic species in plant communities. Natural gradients provide an ideal opportunity to test this hypothesis. 2 In a coastal dune system in northern California, we used comparative and experimental studies to evaluate how a wind and soil texture gradient influences the relative abundance of native and exotic plant species in this community. 3 We detected small-scale spatial variation in soil texture along a 200-m gradient from relatively sheltered to more exposed. Sand coarseness significantly increased with exposure while soil nitrate levels significantly decreased. The more extreme end of the gradient was also subject to greater wind speeds and less soil moisture. 4 The plant community consistently responded to this gradient in the 7 years censused. Species richness decreased with exposure, cover of natives decreased and cover of exotics increased at the more extreme end of the gradient. 5 A single-season wind-shelter experiment similarly shifted the balance between native and exotic species. Shelters decreased the relative density of exotic species and increased the relative density of natives regardless of position on the gradient. 6 These comparative and manipulative findings both suggest that a single factor, wind, at least partially explains the success of exotic species in a coastal dune plant community. This supports the hypothesis that local abiotic conditions can explain differences in invasibility within a plant community." 
}, { "instance_id": "R57501xR57243", "comparison_id": "R57501", "paper_id": "R57243", "text": "Linking nitrogen partitioning and species abundance to invasion resistance in the Great Basin Resource partitioning has been suggested as an important mechanism of invasion resistance. The relative importance of resource partitioning for invasion resistance, however, may depend on how species abundance is distributed in the plant community. This study had two objectives. First, we quantified the degree to which one resource, nitrogen (N), is partitioned by time, depth and chemical form among coexisting species from different functional groups by injecting 15N into soils around the study species three times during the growing season, at two soil depths and as two chemical forms. A watering treatment also was applied to evaluate the impact of soil water content on N partitioning. Second, we examined the degree to which native functional groups contributed to invasion resistance by seeding a non-native annual grass into plots where bunchgrasses, perennial forbs or annual forbs had been removed. Bunchgrasses and forbs differed in timing, depth and chemical form of N capture, and these patterns of N partitioning were not affected by soil water content. However, when we incorporated abundance (biomass) with these relative measures of N capture to determine N sequestration by the community there was no evidence suggesting that functional groups partitioned different soil N pools. Instead, dominant bunchgrasses acquired the most N from all soil N pools. Consistent with these findings we also found that bunchgrasses were the only functional group that inhibited annual grass establishment. At natural levels of species abundance, N partitioning may facilitate coexistence but may not necessarily contribute to N sequestration and invasion resistance by the plant community. This suggests that a general mechanism of invasion resistance may not be expected across systems. 
Instead, the key mechanism of invasion resistance within a system may depend on trait variation among coexisting species and on how species abundance is distributed in the system." }, { "instance_id": "R57501xR57155", "comparison_id": "R57501", "paper_id": "R57155", "text": "Native-exotic species richness relationships across spatial scales and biotic homogenization in wetland plant communities of Illinois, USA Aim To examine native-exotic species richness relationships across spatial scales and corresponding biotic homogenization in wetland plant communities. Location Illinois, USA. Methods We analysed the native-exotic species richness relationship for vascular plants at three spatial scales (small, 0.25 m2 of sample area; medium, 1 m2 of sample area; large, 5 m2 of sample area) in 103 wetlands across Illinois. At each scale, Spearman\u2019s correlation coefficient between native and exotic richness was calculated. We also investigated the potential for biotic homogenization by comparing all species surveyed in a wetland community (from the large sample area) with the species composition in all other wetlands using paired comparisons of their Jaccard\u2019s and Simpson\u2019s similarity indices. Results At large and medium scales, native richness was positively correlated with exotic richness, with the strength of the correlation decreasing from the large to the medium scale; at the smallest scale, the native-exotic richness correlation was negative. The average value for homogenization indices was 0.096 and 0.168, using Jaccard\u2019s and Simpson\u2019s indices, respectively, indicating that these wetland plant communities have been homogenized because of invasion by exotic species. Main Conclusions Our study demonstrated a clear shift from a positive to a negative native-exotic species richness relationship from larger to smaller spatial scales. 
The negative native-exotic richness relationship that we found is suggested to result from direct biotic interactions (competitive exclusion) between native and exotic species, whereas positive correlations likely reflect the more prominent influence of habitat heterogeneity on richness at larger scales. Our finding of homogenization at the community level extends conclusions from previous studies having found this pattern at much larger spatial scales. Furthermore, these results suggest that even while exhibiting a positive native-exotic richness relationship, community level biotas can/are still being homogenized because of exotic species invasion." }, { "instance_id": "R57501xR57277", "comparison_id": "R57501", "paper_id": "R57277", "text": "Functional group dominance and identity effects influence the magnitude of grassland invasion Affiliation: Longo, Maria Grisel. Consejo Nacional de Investigaciones Cientificas y Tecnicas. Oficina de Coordinacion Administrativa Parque Centenario. Instituto de Investigaciones Fisiologicas y Ecologicas Vinculadas A la Agricultura; Argentina" }, { "instance_id": "R57501xR57183", "comparison_id": "R57501", "paper_id": "R57183", "text": "The response of plant community diversity to alien invasion: evidence from a sand dune time series This study examines the process of invasion of coastal dunes in north-eastern Italy along a 60-year time series considering alien attributes (origin, residence time, invasive status, and growth form strategy) and habitat properties (species richness, diversity and evenness, proportion of aliens, and proportion of focal species). Vegetation changes through time were investigated in four sandy coastal habitats, using a fine-scale diachronic approach that compared vegetation data collected by use of the same procedure, in four time periods, from the 1950s to 2011. Our analysis revealed an overall significant decline of species richness over the last six decades. 
Further, both the average number of species per plot and the mean focal species proportion were proved to be negatively affected by the increasing proportion of alien species at plot level. The severity of the impact, however, was found to be determined by a combination of species attributes, habitat properties, and human disturbance suggesting that alien species should be referred to as \u201cpassengers\u201d and not as \u201cdrivers\u201d of ecosystem change. Passenger alien species are those which take advantage of disturbances or other changes to which they are adapted but that lead to a decline in native biodiversity. Their spread is facilitated by widespread anthropogenic environmental alterations, which create new, suitable habitats, and ensure human-assisted dispersal, reducing the distinctiveness of plant communities and inducing a process of biotic homogenization." }, { "instance_id": "R57501xR57159", "comparison_id": "R57501", "paper_id": "R57159", "text": "Ecological biogeography of southern oceanic islands: species-area relationships, human impacts, and conservation Previous studies have concluded that southern ocean islands are anomalous because past glacial extent and current temperature apparently explain most variance in their species richness. Here, the relationships between physical variables and species richness of vascular plants, insects, land and seabirds, and mammals were reexamined for these islands. Indigenous and introduced species were distinguished, and relationships between the latter and human occupancy variables were investigated. Most variance in indigenous species richness was explained by combinations of area and temperature (56%)\u2014vascular plants; distance (nearest continent) and vascular plant species richness (75%)\u2014insects; area and chlorophyll concentration (65%)\u2014seabirds; and indigenous insect species richness and age (73%)\u2014land birds. 
Indigenous insects and plants, along with distance (closest continent), explained most variance (70%) in introduced land bird species richness. A combination of area and temperature explained most variance in species richness of introduced vascular plants (73%), insects (69%), and mammals (69%). However, there was a strong relationship between area and number of human occupants. This suggested that larger islands attract more human occupants, increasing the risk of propagule transfer, while temperature increases the chance of propagule establishment. Consequently, human activities on these islands should be regulated more tightly." }, { "instance_id": "R57501xR57193", "comparison_id": "R57501", "paper_id": "R57193", "text": "\"The rich get richer\" concept in riparian woody species - A case study of the Warta River Valley (Poznan, Poland) Abstract Riparian ecosystems are one of the most invasible habitats due to frequent disturbances and high resource availability, creating conditions for pioneer species. In urban areas, richer in alien species than other areas, the risk of invasion is higher than elsewhere. The richness of both alien and native woody plants in urban riparian areas allows examination of the relationship between native and alien species richness, which has not been previously done for woody species as a separate guild. The aims of study were to compare current flora of woody species to that from the 1980s and to test whether the hypothesis of biotic acceptance can explain richness of alien woody species. The study was conducted in the Warta River Valley in Poznan city (Poland). During field work we compiled lists of woody species in 31 grid squares (1 km \u00d7 1 km). Regression analysis was used to examine the relationship between number of native and alien woody plant species. From the 1980s to 2013, the number of woody species increased from 76 to 116. The share of alien species also increased from 29.3% to 45.7%. 
We found a positive relationship between richness of alien and native woody species, which explains 46% of variance. The abundance of the most successful invaders is connected with high initial propagule pressure from the high number of alien ornamental species in urban green areas. The positive relationship between number of native and alien woody species shows the wider base of \u201cthe rich get richer\u201d concept, so treated separately, woody species may be determinants of dynamic processes in vegetation. Due to recent manifestations of invasive potential of some species, their invasion probability may be underestimated. Therefore, alien species should be avoided in urban green areas near natural and semi-natural urban forests, especially riparian forests, which are vulnerable to biological invasion." }, { "instance_id": "R57501xR57258", "comparison_id": "R57501", "paper_id": "R57258", "text": "Resource availability and plant diversity explain patterns of invasion of an exotic grass Aims In this study, we examine two common invasion biology hypotheses\u2014 biotic resistance and fluctuating resource availability\u2014to explain the patterns of invasion of an invasive grass, Microstegium vimineum. Methods We used 13-year-old deer exclosures in Great Smoky Mountains National Park, USA, to examine how chronic disturbance by deer browsing affects available resources, plant diversity, and invasion in an understory plant community. Using two replicate 1 m2 plots in each deer browsed and unbrowsed area, we recorded each plant species present, the abundance per species, and the fractional per cent cover of vegetation by the cover classes: herbaceous, woody, and graminoid. For each sample plot, we also estimated overstory canopy cover, soil moisture, total soil carbon and nitrogen, and soil pH as a measure of abiotic differences between plots. 
Important Findings We found that plant community composition between chronically browsed and unbrowsed plots differed markedly. Plant diversity was 40% lower in browsed than in unbrowsed plots. At our sites, diversity explained 48% and woody plant cover 35% of the variation in M. vimineum abundance. In addition, we found 3.3 times less M. vimineum in the unbrowsed plots due to higher woody plant cover and plant diversity than in the browsed plots. A parsimonious explanation of these results indicates that disturbances such as herbivory may elicit multiple conditions, namely releasing available resources such as open space, light, and decreasing plant diversity, which may facilitate the proliferation of an invasive species. Finally, by testing two different hypotheses, this study addresses more recent calls to incorporate multiple hypotheses into research attempting to explain plant invasion." }, { "instance_id": "R57501xR57326", "comparison_id": "R57501", "paper_id": "R57326", "text": "Plant community diversity and invasibility by exotics: invasion of Mediterranean old fields by Conyza bonariensis and Conyza canadensis A series of communities were established in situ to differentiate the effects of species richness, functional richness and functional group identity on invasibility of Mediterranean annual old fields. We monitored the demographic and vegetative parameters of two exotic annuals introduced as seedlings, Conyza bonariensis and C. canadensis. Community species richness and functional composition determined resistance to invasion by Conyza.Conyza bonariensis biomass decreased with increasing species richness. Legumes increased the biomass and consequently the net fecundity of both Conyza, while survival was favoured by Asteraceae. Communities with fewer Asteraceae and grasses increased the reproductive effort of C. bonariensis. 
A separate glasshouse experiment using the same species mixes revealed that establishment of Conyza decreased with increasing species richness or when grasses were present. Patterns of Conyza performance are interpreted in the light of measurements of ecosystem functional parameters, making it possible to formulate hypotheses about mechanisms limiting community invasibility." }, { "instance_id": "R57501xR57334", "comparison_id": "R57501", "paper_id": "R57334", "text": "Assessing the importance of disturbance, site conditions, and the biotic barrier for dandelion invasion in an Alpine habitat Several factors have been identified as relevant in determining the abundance of non-native invasive species. Nevertheless, the relative importance of these factors will vary depending on the invaded habitat and the characteristics of the invasive species. Due to their harsh environmental conditions and remoteness, high-alpine habitats are often considered to be at low risk of plant invasion. However, an increasing number of reports have shown the presence and spread of non-native plant species in alpine habitats; thus, it is important to study which factors control the invasion process in these harsh habitats. In this study, we assessed the role of disturbance, soil characteristics, biotic resistance and seed rain in the establishment and abundance of the non-native invasive species Taraxacum officinale (dandelion) in the Andes of central Chile. By focusing on human-disturbed patches, naturally disturbed patches, and undisturbed patches, we did not find that disturbance per se, or its origin, affected the establishment and abundance of T. officinale. The abundance of this non-native invasive species was not negatively related to the diversity of native species at local scales, indicating no biotic resistance to invasion; instead, some positive relationships were found. 
Our results indicate that propagule pressure (assessed by the seed rain) and the abiotic soil characteristics are the main factors related to the abundance of this non-native invasive species. Hence, in contrast to what has been found for more benign habitats, disturbance and biotic resistance have little influence on the invasibility of T. officinale in this high-alpine habitat." }, { "instance_id": "R57501xR57256", "comparison_id": "R57501", "paper_id": "R57256", "text": "Opposite relationships between invasibility and native species richness at patch versus landscape scales of native species. In 1 m2 patches, native cover was positively associated with native richness and thus cover-related competition was a likely mechanism by which richness influenced R. cathartica . At the landscape scale (comparing the aggregate stand-scale metrics among the 17 stands), native cover and richness were still positively related, but had opposite relationships with R. cathartica cover. R. cathartica cover was positively related to species richness and negatively related to native species cover. The observed switch at different scales from a positive to a negative relationship between R. cathartica cover and native richness supported the hypothesized scale dependence of these relations. Propagule pressure, which we estimated by measuring the size of nearby mature R. cathartica shrubs, had a large positive effect on R. cathartica seedling cover at the landscape scale. These results suggest that landscape patterns of invasion may be best understood in light of the combination of many factors including native diversity, native cover, and propagule pressure." }, { "instance_id": "R57501xR57185", "comparison_id": "R57501", "paper_id": "R57185", "text": "Darwin's naturalization conundrum: dissecting taxonomic patterns of species invasions Darwin acknowledged contrasting, plausible arguments for how species invasions are influenced by phylogenetic relatedness to the native community. 
These contrasting arguments persist today without clear resolution. Using data on the naturalization and abundance of exotic plants in the Auckland region, we show how different expectations can be accommodated through attention to scale, assumptions about niche overlap, and stage of invasion. Probability of naturalization was positively related to the number of native species in a genus but negatively related to native congener abundance, suggesting the importance of both niche availability and biotic resistance. Once naturalized, however, exotic abundance was not related to the number of native congeners, but positively related to native congener abundance. Changing the scale of analysis altered this outcome: within habitats exotic abundance was negatively related to native congener abundance, implying that native and exotic species respond similarly to broad scale environmental variation across habitats, with biotic resistance occurring within habitats." }, { "instance_id": "R57501xR57261", "comparison_id": "R57501", "paper_id": "R57261", "text": "Invasibility of plankton food webs along a trophic state gradient Biological invasions are becoming more common, yet the majority of introduced exotic species fail to establish viable populations in new environments. Current ecological research suggests that invasion success may be determined by properties of the native ecosystem, such as the supply rate of limiting nutrients (i.e. trophic state). We examined how trophic state influences invasion success by introducing an exotic zooplankter, Daphnia lumholtzi into native plankton communities in a series of experimental mesocosms exposed to a strong nutrient gradient. We predicted that the attributes of nutrient-enriched communities would increase the likelihood of a successful invasion attempt by D. lumholtzi. Contrary to our original predictions, we found that D. lumholtzi was often absent from mesocosms that developed under high nutrient supply rates. 
Instead, the presence of D. lumholtzi was associated with systems that had low nutrients, low zooplankton biomass, and high zooplankton species diversity. Using generalized estimating equations (GEE) and multivariate species data, we found that the presence-absence of D. lumholtzi could be explained by variations in zooplankton community structure, which was itself strongly influenced by nutrient supply rate. We argue that the apparent invasion success of D. lumholtzi was inhibited by the dominance of another cladoceran species, Chydorus sphaericus. These results suggest that the interaction between trophic state and species identity influenced the invasion success of introduced D. lumholtzi." }, { "instance_id": "R57501xR57189", "comparison_id": "R57501", "paper_id": "R57189", "text": "Short-term invasibility patterns in burnt and unburnt experimental Mediterranean grassland communities of varying diversities This paper reports the findings of a short-term natural invasibility field study in constructed Mediterranean herbaceous communities of varying diversities, under a fire treatment. Three components of invasibility, i.e. species richness, density and biomass of invaders, have been monitored in burnt and unburnt experimental plots with resident diversity ranging from monocultures to 18-species mixtures. In general, species richness, density and biomass of invaders decreased significantly with the increase of resident species richness. Furthermore, the density and biomass of invading species were significantly influenced by the species composition of resident communities. Although aboveground biomass, leaf area index, canopy height and percent bare ground of the resident communities explained a significant part of the variation in the success of invading species, these covariates did not fully explain the effects of resident species richness. Fire mainly influenced invasibility via soil nutrient levels. 
The effect of fire on observed invasibility patterns seems to be less important than the effects of resident species richness. Our results demonstrate the importance of species richness and composition in controlling the initial stages of plant invasions in Mediterranean grasslands but that there was a lack of interaction with the effects of fire disturbance." }, { "instance_id": "R57501xR57241", "comparison_id": "R57501", "paper_id": "R57241", "text": "Tree species diversity reduces the invasibility of maritime pine stands by the bast scale, Matsucoccus feytaudi (Homoptera: Margarodidae) Species-rich plant communities may be more resistant to invasive herbivores because of reduced host-plant accessibility and increased natural enemy diversity and abundance. We tested these hypotheses in Corsica, a Mediterranean island recently invaded by the maritime pine bast scale, Matsucoccus feytaudi Duc., which causes widespread tree mortality in Pinus pinaster Ait. The endemic Matsucoccus pini Green infests Corsican pine, Pinus nigra laricio Poiret, where it is controlled by the native predatory bug Elatophilus nigricornis Zetterstedt. As revealed by kairomone trapping, E. nigricornis was most abundant in pure Corsican pine in areas not yet colonized by M. feytaudi, and in pure maritime pine its density decreased with the distance from the nearest Corsican pine forest. The abundance of M. feytaudi was compared in five pairs of pure maritime pine and mixed maritime and Corsican pine stands. It was consistently higher in pure than in mixed maritime pine stands, whereas E. nigricornis showed the opposite pattern, and relative differences were correlated with the proportion of Corsican pine in the mixture. The predation by E. nigricornis was manipulated in pure maritime pine stands using synthetic attractants of the predator. Matsucoccus feytaudi density was significantly reduced in maritime pines baited with kairomone dispensers." 
}, { "instance_id": "R57501xR57332", "comparison_id": "R57501", "paper_id": "R57332", "text": "Invasion by Heracleum mantegazzianum in different habitats in the Czech Republic . Heracleum mantegazzianum, a tall forb from the western Caucasus invaded several different habitats in the Czech Republic. The relation between invasion success and type of recipient habitat was studied in the Slavkovsku les hilly ridge, Czech Republic. The vegetation of 14 habitat types occurring in an area of ca. 25 km2 was analysed using phytosociological releves, and the invasion success of Heracleum (in terms of number of localities, area covered and proportion of available area occupied) was recorded separately in each of them. Site conditions were expressed indirectly using Ellenberg indicator values. The hypothesis tested was that Heracleum spreads in the majority of vegetation types regardless of the properties of the recipient vegetation. Community invasibility appeared to be affected by site conditions and the composition of the recipient vegetation. The species is not found in acidic habitats. Disturbed habitats with good possibilities of dispersal for Heracleum seeds are more easily invaded. Communities with a higher proportion of phanerophytes and of species with CS (Competitive/Stresstolerating) strategy were more resistant to invasion. The invasion success was bigger in sites with increased possibilities of spread for Heracleum diaspores. Communities invaded by Heracleum had a lower species diversity and a higher indicator value for nitrogen than not-invaded stands. It appears that species contributing to community resistance against invasion of Heracleum, or capable of persisting in Heracleum-invaded stands, have similar ecological requirements but a different life strategy to the invader." 
}, { "instance_id": "R57501xR57401", "comparison_id": "R57501", "paper_id": "R57401", "text": "Realistic species losses disproportionately reduce grassland resistance to biological invaders Consequences of progressive biodiversity declines depend on the functional roles of individual species and the order in which species are lost. Most studies of the biodiversity\u2013ecosystem functioning relation tackle only the first of these factors. We used observed variation in grassland diversity to design an experimental test of how realistic species losses affect invasion resistance. Because entire plant functional groups disappeared faster than expected by chance, resistance declined dramatically with progressive species losses. Realistic biodiversity losses, even of rare species, can thus affect ecosystem processes far more than indicated by randomized-loss experiments." }, { "instance_id": "R57501xR57221", "comparison_id": "R57501", "paper_id": "R57221", "text": "Species diversity defends against the invasion of Nile tilapia (Oreochromis niloticus) Nile tilapia (Oreochromis niloticus ) is one of the most widely cultured species globally and has successfully colonized much of the world. Despite numerous studies of this exotic species, how differences in native communities mitigate the consequences of Nile tilapia invasion is unknown. Theory predicts that communities that are more diverse should be more resistant to exotic species, an effect that is referred to as \u201cbiotic resistance\u201d, but these effects are spatially dependent and organism-specific. Field surveys and laboratory experiments were conducted to test the theory of \u201cbiotic resistance\u201d and ascertain the relationship between native species richness and the invasion of Nile tilapia. In the field, we found that as native species richness increased, the biomass of Nile tilapia was significantly reduced. 
Consistent with results from the field, our manipulative experiment indicated that the growth of Nile tilapia was negatively related to native species richness. Thus, our study supports the theory of \u201cbiotic resistance\u201d and suggests that species biodiversity represents an important defense against the invasion of Nile tilapia." }, { "instance_id": "R57501xR57207", "comparison_id": "R57501", "paper_id": "R57207", "text": "Diversity and biomass of native macrophytes are negatively related to dominance of an invasive Poaceae in Brazilian sub-tropical streams Besides exacerbated exploitation, pollution, flow alteration and habitats degradation, freshwater biodiversity is also threatened by biological invasions. This paper addresses how native aquatic macrophyte communities are affected by the non-native species Urochloa arrecta, a current successful invader in Brazilian freshwater systems. We compared the native macrophytes colonizing patches dominated and non-dominated by this invader species. We surveyed eight streams in Northwest Paran\u00e1 State (Brazil). In each stream, we recorded native macrophytes' richness and biomass in sites where U. arrecta was dominant and in sites where it was not dominant or absent. No native species were found in seven, out of the eight investigated sites where U. arrecta was dominant. Thus, we found higher native species richness, Shannon index and native biomass values in sites without dominance of U. arrecta than in sites dominated by this invader. Although difficult to conclude about causes of such differences, we infer that the elevated biomass production by this grass might be the primary reason for alterations in invaded environments and for the consequent impacts on macrophytes' native communities. However, biotic resistance offered by native richer sites could be an alternative explanation for our results. 
To mitigate potential impacts and to prevent future environmental perturbations, we propose mechanical removal of the invasive species and maintenance or restoration of riparian vegetation, for freshwater ecosystems have vital importance for the maintenance of ecological services and biodiversity and should be preserved." }, { "instance_id": "R57501xR57197", "comparison_id": "R57501", "paper_id": "R57197", "text": "Invasibility of experimental grassland communities: the role of earthworms, plant functional group identity and seed size Invasions of natural communities by non-indigenous species threaten native biodiversity and are currently rated as one of the most important global-scale environmental problems. The mechanisms that make communities resistant to invasions and drive the establishment success of seedlings are essential both for management and for understanding community assembly and structure. Especially in grasslands, anecic earthworms are known to function as ecosystem engineers, however, their direct effects on plant community composition and on the invasibility of plant communities via plant seed burial, ingestion and digestion are poorly understood. In a greenhouse experiment we investigated the impact of Lumbricus terrestris, plant functional group identity and seed size of plant invader species and plant functional group of the established plant community on the number and biomass of plant invaders. We set up 120 microcosms comprising four plant community treatments, two earthworm treatments and three plant invader treatments containing three seed size classes. Earthworm performance was influenced by an interaction between plant functional group identity of the established plant community and that of invader species. The established plant community and invader seed size affected the number of invader plants significantly, while invader biomass was only affected by the established community. 
Since earthworm effects on the number and biomass of invader plants varied with seed size and plant functional group identity they probably play a key role in seedling establishment and plant community composition. Seeds and germinating seedlings in earthworm burrows may significantly contribute to earthworm nutrition, but this deserves further attention. Lumbricus terrestris likely behaves like a \u2018farmer\u2019 by collecting plant seeds which cannot directly be swallowed or digested. Presumably, these seeds are left in middens and become eatable after partial microbial decay. Increased earthworm numbers in more diverse plant communities likely contribute to the positive relationship between plant species diversity and resistance against invaders." }, { "instance_id": "R57501xR54588", "comparison_id": "R57501", "paper_id": "R54588", "text": "Invasion patterns of ground-dwelling arthropods in Canarian laurel forests Patterns of invasive species in four different functional groups of ground-dwelling arthropods (Carnivorous ground dwelling beetles; Chilopoda; Diplopoda; Oniscoidea) were examined in laurel forests of the Canary Islands. The following hypotheses were tested: (A) increasing species richness is connected with decreasing invasibility as predicted by the Diversity\u2013invasibility hypothesis (DIH); (B) disturbed or anthropogenically influenced habitats are more sensitive for invasions than natural and undisturbed habitats; and (C) climatic differences between laurel forest sites do not affect the rate of invasibility. A large proportion of invasives (species and abundances) was observed in most of the studied arthropod groups. However, we did not find any support for the DIH based on the examined arthropod groups. Regarding the impact of the extrinsic factors \u2018disturbance\u2019 and \u2018climate\u2019 on invasion patterns, we found considerable differences between the studied functional groups. 
Whereas the \u2018disturbance parameters\u2019 played a minor role and only affected the relative abundances of invasive centipedes (positively) and millipedes (negatively), the \u2018climate parameters\u2019 were significantly linked with the pattern of invasive detritivores. Interactions between native and invading species have not been observed thus far, but cannot completely be excluded." }, { "instance_id": "R57501xR57397", "comparison_id": "R57501", "paper_id": "R57397", "text": "Predator-driven biotic resistance and propagule pressure regulate the invasive apple snail Pomacea canaliculata in Japan Species richness in local communities has been considered an important factor determining the success of invasion by exotic species (the biotic resistance hypothesis). However, the detailed mechanisms, especially the role of predator communities, are not well understood. We studied biotic resistance to an invasive freshwater snail, Pomacea canaliculata, at 31 sites in an urban river basin (the Yamatogawa) in western Japan. First, we studied the relationship between the richness of local animal species and the abundance of P. canaliculata, demonstrating a negative relationship, which suggests that the intensity of biotic resistance regulates local snail populations. This pattern was due to the richness of native predator communities rather than that of introduced species or non-predators (mainly competitors of the apple snail). Local snail abundance was also affected by immigration of snails from nearby rice fields (i.e. propagule pressure), where few predators occur. Second, we assessed short-term predation pressure on the snail by means of a tethering experiment. Predation pressure was positively correlated with the number of individual predators and negatively correlated with snail abundance. The introduced crayfish Procambarus clarkii was responsible for the variance in predation pressure. 
These results indicate that the predator community, composed of both native and introduced species, is responsible for resistance to a novel invader even in a polluted urban river." }, { "instance_id": "R57501xR57139", "comparison_id": "R57501", "paper_id": "R57139", "text": "Landscape-scale patterns of biological invasions in shoreline plant communities Little is known about the patterns and dynamics of exotic species invasions at landscape to regional spatial scales. We quantified the presence (identity, abundance, and richness) and characteristics of native and exotic species in estuarine strandline plant communities at 24 sites in Narragansett Bay, Rhode Island, USA. Our results do not support several fundamental predictions of invasion biology. Established exotics (79 of 147 recorded plant species) were nearly indistinguishable from the native plant species (i.e. in terms of growth form, taxonomic grouping, and patterns of spatial distribution and abundance) and essentially represent a random sub-set of the current regional species pool. The cover and richness of exotic species varied substantially among quadrats and sites but were not strongly related to any site-level physical characteristics thought to affect invasibility (i.e. the physical disturbance regime, legal status, neighboring habitat type, and substrate characteristics). Native and exotic cover or richness were not negatively related within most sites. Across sites, native and exotic richness were positively correlated and exotic cover was unrelated to native richness. The colonization and spread of exotics does not appear to have been substantially reduced at sites with high native diversity. Furthermore, despite the fact that the Rhode Island strandline system is one of the most highly-invaded natural plant communities described to date, exotic species, both individually and as a group, currently appear to pose little threat to native plant diversity. 
Our findings are concordant with most recent, large-scale investigations that do not support the theoretical foundation of invasion biology and generally contradict small-scale experimental work." }, { "instance_id": "R57501xR57252", "comparison_id": "R57501", "paper_id": "R57252", "text": "Native and introduced gastropods in laurel forests on Tenerife, Canary Islands Abstract The introduction of non-native gastropods on islands has repetitively been related to a decline of the endemic fauna. So far, no quantitative information is available even for the native gastropod fauna from the laurel forests (the so-called Laurisilva) of the Canary Islands. Much of the original laurel forest has been logged in recent centuries. Based on vegetation studies, we hypothesized that densities and the number of introduced species decline with the age of the regrowth forests. We sampled 27 sites from which we collected thirty native and seven introduced species. Two introduced species, Milax nigricans and Oxychilus alliarius , were previously not reported from the Canary Islands. Assemblage composition was mainly structured by disturbance history and altitude. Overall species richness was correlated with slope inclination, prevalence of rocky outcrops, amounts of woody debris and leaf litter depth. Densities were correlated with the depth of the litter layer and the extent of herb layer cover and laurel canopy cover. Introduced species occurred in 22 sites but were neither related to native species richness nor to the time that elapsed since forest regrowth. One introduced slug, Lehmannia valentiana , is already wide-spread, with densities strongly related to herb cover. Overall species richness seemed to be the outcome of invasibility, thus factors enhancing species richness likely also enhance invasibility. 
Although at present introduced species contribute to diversity, the potential competition between introduced slugs and the rich native semi-slug fauna, and the effects of introduced predatory snails ( Oxychilus spp. and Testacella maugei ) warrant further monitoring." }, { "instance_id": "R57501xR57117", "comparison_id": "R57501", "paper_id": "R57117", "text": "Stress and land-use legacies alter the relationship between invasive- and native-plant richness Questions Does the driver of native richness impact invasion\u2013diversity relationships? In high stress systems, do negative invasion\u2013diversity relationships emerge from biotic resistance (as predicted by the diversity\u2013resistance hypothesis) or from stress acting as a selective driver of native richness? Do invasion\u2013diversity relationships change when abiotic and biotic legacies, rather than stress, act as non-selective drivers of richness? Location Calcareous fens, Wisconsin, USA. Methods We compared plot-level relationships between native and invasive species richness in 220 plots in 11 calcareous fens, six of which had stress alleviation and community impoverishment due to historic ploughing. We measured nutrient availability (resource stress) and saturation stress (non-resource stress), native richness and invasive richness in each plot. Results Residual maximum likelihood (REML) regression found a negative correlation between native and invasive richness in never ploughed plots, but no relationship was found in ploughed plots. REML multiple regression found that after accounting for saturation and nutrients, there was no relationship between native and invasive richness in ploughed or never ploughed plots. Saturation stress predicted low invasive richness and high native richness in never ploughed plots. Time since abandonment did not predict invasive richness. Non-metric multidimensional scaling (NMDS) found most invasive species present only in areas with the least saturation stress. 
Conclusions Negative relationships between native and invasive richness can result when the main driver of native richness favours native species (such as stress), and therefore may not support the diversity\u2013resistance hypothesis. When the main driver of native richness does not select between native and invasive richness, there may be no relationship between native and invasive richness, even at very small spatial scales." }, { "instance_id": "R57501xR57294", "comparison_id": "R57501", "paper_id": "R57294", "text": "Interactive effects of resource enrichment and resident diversity on invasion of native grassland by Lolium arundinaceum Resident diversity and resource enrichment are both recognized as potentially important determinants of community invasibility, but the effects of these biotic and abiotic factors on invasions are often investigated separately, and little work has been done to directly compare their relative effects or to examine their potential interactions. Here, we evaluate the individual and interactive effects of resident diversity and resource enrichment on plant community resistance to invasion. We factorially manipulated plant diversity and the enrichment of belowground (soil nitrogen) and aboveground (light) resources in low-fertility grassland communities invaded by Lolium arundinaceum, the most abundant invasive grass in eastern North America. Soil nitrogen enrichment enhanced L. arundinaceum performance, but increased resident diversity dampened this effect of nitrogen enrichment. Increased light availability (via clipping of aboveground vegetation) had a negligible effect on community invasibility. These results demonstrate that a community\u2019s susceptibility to invasion can be contingent upon the type of resource pulse and the diversity of resident species. 
In order to assess the generality of these results, future studies that test the effects of resident diversity and resource enrichment against a range of invasive species and in other environmental contexts (e.g., sites differing in soil fertility and light regimes) are needed. Such studies may help to resolve conflicting interpretations of the diversity\u2013invasibility relationship and provide direction for management strategies." }, { "instance_id": "R57501xR57371", "comparison_id": "R57501", "paper_id": "R57371", "text": "The myth of plant species saturation Plant species assemblages, communities or regional floras might be termed 'saturated' when additional immigrant species are unsuccessful at establishing due to competitive exclusion or other inter-specific interactions, or when the immigration of species is off-set by extirpation of species. This is clearly not the case for state, regional or national floras in the USA where colonization (i.e. invasion by exotic species) exceeds extirpation by roughly a 24 to 1 margin. We report an alarming temporal trend in plant invasions in the Pacific Northwest over the past 100 years whereby counties highest in native species richness appear increasingly invaded over time. Despite the possibility of some increased awareness and reporting of native and exotic plant species in recent decades, historical records show a significant, consistent long-term increase in exotic species (number and frequency) at county, state and regional scales in the Pacific Northwest. Here, as in other regions of the country, colonization rates by exotic species are high and extirpation rates are negligible. The rates of species accumulation in space in multi-scale vegetation plots may provide some clues to the mechanisms of the invasion process from local to national scales." 
}, { "instance_id": "R57501xR57214", "comparison_id": "R57501", "paper_id": "R57214", "text": "Plant community diversity and composition provide little resistance to Juniperus encroachment Widespread encroachment of the fire-intolerant species Juniperus virginiana L. into North American grasslands and savannahs where fire has largely been removed has prompted the need to identify mechanisms driving J. virginiana encroachment. We tested whether encroachment success of J. virginiana is related to plant species diversity and composition across three plant communities. We predicted J. virginiana encroachment success would (i) decrease with increasing diversity, and (ii) J. virginiana encroachment success would be unrelated to species composition. We simulated encroachment by planting J. virginiana seedlings in tallgrass prairie, old-field grassland, and upland oak forest. We used J. virginiana survival and growth as an index of encroachment success and evaluated success as a function of plant community traits (i.e., species richness, species diversity, and species composition). Our results indicated that J. virginiana encroachment success increased with increasing plant richness and diversity. Moreover, growth and survival of J. virginiana seedlings was associated with plant species composition only in the old-field grassland and upland oak forest. These results suggest that greater plant species richness and diversity provide little resistance to J. virginiana encroachment, and the results suggest resource availability and other biotic or abiotic factors are determinants of J. virginiana encroachment success." 
}, { "instance_id": "R58002xR57648", "comparison_id": "R58002", "paper_id": "R57648", "text": "Biogeographical comparison of the arthropod herbivore communities associated with Lepidium draba in its native, expanded and introduced ranges Aim To examine the composition and structure of the arthropod community on the invasive weed Lepidium draba in its native, expanded and introduced ranges, in order to elucidate the lack of a biotic constraint that may facilitate invasion. Location Europe and western North America. Methods Identical sampling protocols were used to collect data from a total of 35 populations of L. draba in its native (Eastern European), expanded (Western European) and introduced (western US) ranges. A bootstrapping analysis was used to compare herbivore richness, diversity and evenness among the regions. Core species groups (monophages, oligophages and polyphages) on the plant were defined and their abundances and host utilization patterns described. Results Species richness was greatest in the native range, while species diversity and evenness were similar in the native and expanded range, but significantly greater than in the introduced range of L. draba. Specialist herbivore abundance was greater in the native and expanded compared with the introduced range. Oligophagous Brassicaceae-feeders were equally abundant in all three ranges, and polyphagous herbivore abundance was significantly greater in the introduced range. Overall herbivore abundance was greater in the introduced range. Host utilization was more complete in the two European ranges due to monophagous herbivores that do not exist in the introduced range. Root feeders and gall formers were completely absent from the introduced range, which was dominated by generalist sap-sucking herbivores. However, one indigenous stem-mining weevil, Ceutorhynchus americanus, occurred on L. draba in the introduced range. 
Main conclusions This is, to our knowledge, the first study documenting greater herbivore abundance on an invasive weed in its introduced, compared with its native, range. However, greater abundance does not necessarily translate to greater impact. We argue that, despite the greater total herbivore abundance in the introduced range, differences in the herbivore community structure (specialist vs. generalist herbivory) may contribute to the invasion success of L. draba in the western USA." }, { "instance_id": "R58002xR57926", "comparison_id": "R58002", "paper_id": "R57926", "text": "Grassland fires may favor native over introduced plants by reducing pathogen loads Grasslands have been lost and degraded in the United States since Euro-American settlement due to agriculture, development, introduced invasive species, and changes in fire regimes. Fire is frequently used in prairie restoration to control invasion by trees and shrubs, but may have additional consequences. For example, fire might reduce damage by herbivore and pathogen enemies by eliminating litter, which harbors eggs and spores. Less obviously, fire might influence enemy loads differently for native and introduced plant hosts. We used a controlled burn in a Willamette Valley (Oregon) prairie to examine these questions. We expected that, without fire, introduced host plants should have less damage than native host plants because the introduced species are likely to have left many of their enemies behind when they were transported to their new range (the enemy release hypothesis, or ERH). If the ERH holds, then fire, which should temporarily reduce enemies on all species, should give an advantage to the natives because they should see greater total reduction in damage by enemies. 
Prior to the burn, we censused herbivore and pathogen attack on eight plant species (five of nonnative origin: Bromus hordeaceus, Cynosurus echinatus, Galium divaricatum, Schedonorus arundinaceus (= Festuca arundinacea), and Sherardia arvensis; and three natives: Danthonia californica, Epilobium minutum, and Lomatium nudicaule). The same plots were monitored for two years post-fire. Prior to the burn, native plants had more kinds of damage and more pathogen damage than introduced plants, consistent with the ERH. Fire reduced pathogen damage relative to the controls more for the native than the introduced species, but the effects on herbivory were negligible. Pathogen attack was correlated with plant reproductive fitness, whereas herbivory was not. These results suggest that fire may be useful for promoting some native plants in prairies due to its negative effects on their pathogens." }, { "instance_id": "R58002xR57811", "comparison_id": "R58002", "paper_id": "R57811", "text": "Influence of insects and fungal pathogens on individual and population parameters of Cirsium arvense in its native and introduced ranges Introduced weeds are hypothesized to be invasive in their exotic ranges due to release from natural enemies. Cirsium arvense (Californian, Canada, or creeping thistle) is a weed of Eurasian origin that was inadvertently introduced to New Zealand (NZ), where it is presently one of the worst invasive weeds. We tested the \u2018enemy release hypothesis\u2019 (ERH) by establishing natural enemy exclusion plots in both the native (Europe) and introduced (NZ) ranges of C. arvense. We followed the development and fate of individually labelled shoots and recorded recruitment of new shoots into the population over two years. Natural enemy exclusion had minimal impact on shoot height and relative growth rate in either range. However, natural enemies did have a significant effect on shoot population growth and development in the native range, supporting the ERH. 
In year one, exclusion of insect herbivores increased mean population growth by 2.1\u20133.6 shoots m\u22122, and in year two exclusion of pathogens increased mean population growth by 2.7\u20134.1 shoots m\u22122. Exclusion of insect herbivores in the native range also increased the probability of shoots developing from the budding to the reproductive growth stage by 4.0\u00d7 in the first year, and 13.4\u00d7 in the second year; but exclusion of pathogens had no effect on shoot development in either year. In accordance with the ERH, exclusion of insect herbivores and pathogens did not benefit shoot development or population growth in the introduced range. In either range, we found no evidence for an additive benefit of dual exclusion of insects and pathogens, and in no case was there an interaction between insect and pathogen exclusion. This study further demonstrates the value of conducting manipulative experiments in the native and introduced ranges of an invasive plant to elucidate invasion mechanisms." }, { "instance_id": "R58002xR57761", "comparison_id": "R58002", "paper_id": "R57761", "text": "Community structure of insect herbivores on introduced and native Solidago plants in Japan We compared community composition, density, and species richness of herbivorous insects on the introduced plant Solidago altissima L. (Asteraceae) and the related native species Solidago virgaurea L. in Japan. We found large differences in community composition on the two Solidago species. Five hemipteran sap feeders were found only on S. altissima. Two of them, the aphid Uroleucon nigrotuberculatum Olive (Hemiptera: Aphididae) and the scale insect Parasaissetia nigra Nietner (Hemiptera: Coccidae), were exotic species, accounting for 62% of the total individuals on S. altissima. These exotic sap feeders mostly determined the difference of community composition on the two plant species. In contrast, the herbivore community on S. 
virgaurea consisted predominately of five native insects: two lepidopteran leaf chewers and three dipteran leaf miners. Overall species richness did not differ between the plants because the increased species richness of sap feeders was offset by the decreased richness of leaf chewers and leaf miners on S. altissima. The overall density of herbivorous insects was higher on S. altissima than on S. virgaurea, because of the high density of the two exotic sap feeding species on S. altissima. We discuss the importance of analyzing community composition in terms of feeding guilds of insect herbivores for understanding how communities of insect herbivores are organized on introduced plants in novel habitats." }, { "instance_id": "R58002xR57674", "comparison_id": "R58002", "paper_id": "R57674", "text": "The interaction between soil nutrients and leaf loss during early 14 establishment in plant invasion Nitrogen availability affects both plant growth and the preferences of herbivores. We hypothesized that an interaction between these two factors could affect the early establishment of native and exotic species differently, promoting invasion in natural systems. Taxonomically paired native and invasive species (Acer platanoides, Acer rubrum, Lonicera maackii, Diervilla lonicera, Celastrus orbiculata, Celastrus scandens, Elaeagnus umbellata, Ceanothus americanus, Ampelopsis brevipedunculata, and Vitis riparia) were grown in relatively high-resource (hardwood forests) and low-resource (pine barrens) communities on Long Island, New York, for a period of 3 months. Plants were grown in ambient and nitrogen-enhanced conditions in both communities. Nitrogen additions produced an average 12% initial increase in leaf number of all plants. By the end of the experiment, invasive species outperformed native species in nitrogen-enhanced plots in hardwood forests, where all plants experienced increased damage relative to control plots. 
Native species experienced higher overall amounts of damage in hardwood forests, losing, on average, 45% more leaves than exotic species, and only native species experienced a decline in growth rates (32% compared with controls). In contrast, in pine barrens, there were no differences in damage and no differences in performance between native and invasive plants. Our results suggest that unequal damage by natural enemies may play a role in determining community composition by shifting the competitive advantage to exotic species in nitrogen-enhanced environments. FOR. SCI. 53(6):701-709." }, { "instance_id": "R58002xR57733", "comparison_id": "R58002", "paper_id": "R57733", "text": "A cross-continental test of the Enemy Release Hypothesis: leaf herbivory on Acer platanoides (L.) is three times lower in North America than in its native Europe Acer platanoides (Norway maple) is a widespread native tree species in Europe. It has been introduced to North America where it has often established dense stands in both secondary woodlands and relatively undisturbed mature woodlands. In Europe A. platanoides is also extending its original range, but generally seems to exist at much lower densities. One explanation for the \u2018aggressiveness\u2019 of invasive plants such as A. platanoides is that they have left behind pests and diseases which limit their population densities in their native lands (the enemy release hypothesis or ERH). To assess the ERH for Norway maple, a large network of collaborators assessed leaf herbivory rates in populations throughout Europe and North America. We found significantly lower total leaf herbivory (1.6% \u00b1 0.19, n = 21 vs. 7.4% \u00b1 1.94, n = 34) and lower fungal damage (1.0% \u00b1 0.35, n = 13 vs. 3.7% \u00b1 0.85, n = 34) in North America than in Europe over a 2 year period, which is consistent with the predictions of the Enemy Release Hypothesis. 
Across years, the average total leaf herbivory was significantly correlated with average annual temperature of the site (P < 0.05), although this was mostly due to sites in Europe (P < 0.001), and not sites in North America (P > 0.05). Furthermore, only populations in Europe showed very high levels of herbivory (e.g., nine sites had total leaf herbivory ranging from 10.0 to 51.2% in at least 1 year) or leaf fungal damage (only one site in North America showed high levels of fungal damage in 1 year), suggesting the possibility of more frequent episodic outbreaks in the native range. Leaf herbivory and fungal damage are only two aspects of consumer pressure and we do not know whether the differences reported here are enough to actually elicit release from top-down population control, but such large scale biogeographic differences in herbivory contribute towards understanding exotic invasions." }, { "instance_id": "R58002xR57770", "comparison_id": "R58002", "paper_id": "R57770", "text": "From seed production to seedling establishment: Important steps in an invasive process Abstract It is widely accepted that exotic invasive species are one of the most important ecological and economic problems. Reproductive and establishment traits are considered key features of a population expansion process, but few works have studied many of these simultaneously. This work examines how large the differences are in reproductive and establishment traits between two Fabaceae, the exotic invasive, Gleditsia triacanthos and the native, Acacia aroma . Gleditsia is a serious leguminous woody invader in various parts of the world and Acacia is a common native tree of Argentina. Both species have similar dispersal mechanisms and their reproductive phenology overlaps. We chose 17 plants of each species in a continuous forest of the Chaco Serrano Forest of Cordoba, Argentina. 
In each plant we measured fruit production, fruit removal (exclusion experiments), seed predation (pre- and post-dispersal), seed germination, seed bank (on each focal tree, three sampling periods during the year), and density of seedlings (around focal individuals and randomly in the study site). Gleditsia presented some traits that could favour the invasion process, such as a higher number of seeds per plant, percentage of scarified seed germination and density of seedlings around the focal individuals, than Acacia . On the other hand, Gleditsia presented a higher percentage of seed predation. The seed bank was persistent in both species and no differences were observed in fruit removal. This work highlights the importance of simultaneously studying reproductive and establishment variables involved in the spreading of an exotic invasive species. It also gives important insight into the variables to be considered when planning management strategies. The results are discussed from the perspective of some remarkable hypotheses on invasive species and may contribute to rethinking some aspects of the theory on invasive species." }, { "instance_id": "R58002xR57652", "comparison_id": "R58002", "paper_id": "R57652", "text": "Limited grazing pressure by native herbivores on the invasive seaweed Caulerpa taxifolia in a temperate Australian estuary Caulerpa taxifolia is an invasive alga threatening biodiversity in invaded regions. Its proliferation in recipient communities will be due to several factors including limited grazing effects by native herbivores. However, little is known about grazing pressure exerted by native herbivores on C. taxifolia relative to native macrophytes or its attractiveness to them as habitat. The present study determined which herbivores co-occurred with invasive C. taxifolia in a temperate Australian estuary and documented their abundance, relative grazing effects, habitat preference and survivorship on C. 
taxifolia compared with native macrophytes. Four herbivores co-occurred with C. taxifolia and their densities were often low or zero at the sites studied. Feeding experiments showed that compared with C. taxifolia: the fish, Girella tricuspidata, preferred Ulva sp.; the sea-hare, Aplysia dactylomela, preferred Laurencia sp.; whereas the mesograzers, Cymadusa setosa and Platynereis dumerilii antipoda, both consumed Cystoseira trinodus and Sargassum sp. at higher rates. The two mesograzers also showed strong habitat preference for C. trinodus and Sargassum sp. Cymadusa setosa had poor survivorship on Caulerpa taxifolia whereas P. dumerilii antipoda had 100% survivorship on C. taxifolia after 41 days. We consider that the low diversity and abundance of native herbivores, their weak grazing pressure on C. taxifolia and its low attractiveness as habitat may facilitate further local spread in this estuary, and potentially in other invaded locations." }, { "instance_id": "R58002xR57671", "comparison_id": "R58002", "paper_id": "R57671", "text": "Increased chemical resistance explains low herbivore colonization of introduced seaweed The success of introduced species is often attributed to release from co-evolved enemies in the new range and a subsequent decreased allocation to defense (EICA), but these hypotheses have rarely been evaluated for systems with low host-specificity of enemies. Here, we compare herbivore utilization of the brown seaweed, Fucus evanescens, and its coexisting competitors both in its native and new ranges, to test certain predictions derived from these hypotheses in a system dominated by generalist herbivores. While F. evanescens was shown to be a preferred host in its native range, invading populations supported a less diverse herbivore fauna and it was less preferred in laboratory choice experiments with important herbivores, when compared to co-occurring seaweeds. 
These results are consistent with the enemy release hypothesis, despite the fact that the herbivore communities in both regions were mainly composed of generalist species. However, in contrast to the prediction of EICA, analysis of anti-grazing compounds indicated a higher allocation to defense in introduced compared to native F. evanescens. The results suggest that the invader is subjected to less intense enemy control in the new range, but that this is due to an increased allocation to defense rather than release from specialized herbivores. This indicates that increased resistance to herbivory might be an important strategy for invasion success in systems dominated by generalist herbivores." }, { "instance_id": "R58002xR57924", "comparison_id": "R58002", "paper_id": "R57924", "text": "Macroparasite Fauna of Alien Grey Squirrels (Sciurus carolinensis): Composition, Variability and Implications for Native Species Introduced host populations may benefit from an \"enemy release\" through impoverishment of parasite communities made of both few imported species and few acquired local ones. Moreover, closely related competing native hosts can be affected by acquiring introduced taxa (spillover) and by increased transmission risk of native parasites (spillback). We determined the macroparasite fauna of invasive grey squirrels (Sciurus carolinensis) in Italy to detect any diversity loss, introduction of novel parasites or acquisition of local ones, and analysed variation in parasite burdens to identify factors that may increase transmission risk for native red squirrels (S. vulgaris). Based on 277 grey squirrels sampled from 7 populations characterised by different time scales in introduction events, we identified 7 gastro-intestinal helminths and 4 parasite arthropods. Parasite richness is lower than in grey squirrel's native range and independent of introduction time lags. 
The most common parasites are Nearctic nematodes Strongyloides robustus (prevalence: 56.6%) and Trichostrongylus calcaratus (6.5%), red squirrel flea Ceratophyllus sciurorum (26.0%) and Holarctic sucking louse Neohaematopinus sciuri (17.7%). All other parasites are European or cosmopolitan species with prevalence below 5%. S. robustus abundance is positively affected by host density and body mass, while C. sciurorum abundance increases with host density and varies with seasons. Overall, we show that grey squirrels in Italy may benefit from an enemy release, and both spillback and spillover processes towards native red squirrels may occur." }, { "instance_id": "R58002xR57745", "comparison_id": "R58002", "paper_id": "R57745", "text": "The effect of enemy-release and climate conditions on invasive birds: a regional test using the rose-ringed parakeet (Psittacula krameri) as a case study Aim Some invasive species succeed particularly well and manage to establish populations across a wide variety of regions and climatic conditions. Understanding how biotic and environmental factors facilitate their invasion success remains a challenge. Here, we assess the role of two major hypotheses explaining invasion success: (1) enemy-release, which argues that invasive species are freed from their native predators and parasites in the new areas; and (2) climate-matching, which argues that the climatic similarity between the exotic and native range determines the success of invasive populations. Location India, Israel and the UK. Methods We studied the reproductive success of one of the most successful avian invaders, the rose-ringed parakeet ( Psittacula krameri ), in its native range (India) and in two introduced regions, varying in their climate conditions (Israel and the UK). We combined literature and field data to evaluate the role of predation pressure and climatic conditions in explaining the differences in reproductive success between the three regions. 
Results We found significant differences in reproductive success between regions. In accordance with the enemy-release hypothesis, we discovered that while predation was the main factor responsible for the reduction of fecundity in India, it did not significantly affect the fecundities of parakeet populations in the two introduced regions. In accordance with the climate-matching hypothesis, we found that in the colder temperate UK, egg infertility was high, resulting in lower fecundities. Populations in both the warmer Mediterranean climate of Israel and in the native Indian range had significantly lower egg infertility and higher fecundities than the UK populations." }, { "instance_id": "R58002xR57623", "comparison_id": "R58002", "paper_id": "R57623", "text": "Natural-enemy release facilitates habitat expansion of the invasive tropical shrub Clidemia hirta Nonnative, invasive plant species often increase in growth, abundance, or habitat distribution in their introduced ranges. The enemy-release hypothesis, proposed to account for these changes, posits that herbivores and pathogens (natural enemies) limit growth or survival of plants in native areas, that natural enemies have less impact in the introduced than in the native range, and that the release from natural-enemy regulation in areas of introduction accounts in part for observed changes in plant abundance. We tested experimentally the enemy-release hypothesis with the invasive neotropical shrub Clidemia hirta (L.) D. Don (Melastomataceae). Clidemia hirta does not occur in forest in its native range but is a vigorous invader of tropical forest in its introduced range. Therefore, we tested the specific prediction that release from natural enemies has contributed to its expanded habitat distribution. We planted C. 
hirta into understory and open habitats where it is native (Costa Rica) and where it has been introduced (Hawaii) and applied pesticides to examine the effects of fungal pathogen and insect herbivore exclusion. In understory sites in Costa Rica, C. hirta survival increased by 12% if sprayed with insecticide, 19% with fungicide, and 41% with both insecticide and fungicide compared to control plants sprayed only with water. Exclusion of natural enemies had no effect on survival in open sites in Costa Rica or in either habitat in Hawaii. Fungicide application promoted relative growth rates of plants that survived to the end of the experiment in both habitats of Costa Rica but not in Hawaii, suggesting that fungal pathogens only limit growth of C. hirta where it is native. Galls, stem borers, weevils, and leaf rollers were prevalent in Costa Rica but absent in Hawaii. In addition, the standing percentage of leaf area missing on plants in the control (water only) treatment was five times greater on plants in Costa Rica than in Hawaii and did not differ between habitats. The results from this study suggest that significant effects of herbivores and fungal pathogens may be limited to particular habitats. For Clidemia hirta, its absence from forest understory in its native range likely results in part from the strong pressures of natural enemies. Its invasion into Hawaiian forests is apparently aided by a release from these herbivores and pathogens." }, { "instance_id": "R58002xR57755", "comparison_id": "R58002", "paper_id": "R57755", "text": "Release from foliar and floral fungal pathogen species does not explain the geographic spread of naturalized North American plants in Europe Summary 1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. 
As one of the main explanations of the spread of alien species, the enemy-release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus\u2013plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare pathogens, whose impact is restricted to a few populations. Most importantly and directly opposing the enemy-release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non-invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies." 
}, { "instance_id": "R58002xR57897", "comparison_id": "R58002", "paper_id": "R57897", "text": "Lack of enemy release for an invasive leafroller in California: temporal patterns and influence of host plant origin The enemy release hypothesis posits that the success of invasive species can be attributed to their escape from natural enemies. Invading hosts are expected to encounter an enemy assemblage consisting of fewer species, with lower representation of specialists, and to experience less mortality as a result. In this study, we examined parasitism of the Light Brown Apple Moth (LBAM), Epiphyas postvittana (Walker), in California, an exotic leafroller that is native to southeastern Australia. From 2008 to 2011 we monitored parasitoid species richness, representation of the more specialized koinobiont parasitoids, and parasitism rates of LBAM collected three times per year from four plant species of Australian origin and six plant species of non-Australian origin, at two locations in coastal California. We found the resident parasitoid assemblage of LBAM in California to have comparable levels of species richness, to have a similar representation of koinobionts versus idiobionts, and to inflict similar parasitism rates as in its native range. The two dominant parasitoids were Meteorus ictericus (Braconidae) and Enytus eureka (Ichneumonidae). Parasitoid species richness varied with season and plant origin and decreased slowly, but significantly, over the 4 year period. Parasitism rates were lowest in spring and highest on plants of Australian origin, but did not change with year. Hyperparasitism rates were higher on E. eureka (36.5 %) compared with M. ictericus and other parasitoids combined (3.3 %) and were highest on plants of Australian origin. 
We subsequently discuss the lack of both apparent enemy reduction and realized enemy release for LBAM in California and the unique finding that a shared plant origin enhanced the parasitism of this exotic leafroller by resident parasitoids." }, { "instance_id": "R58002xR57751", "comparison_id": "R58002", "paper_id": "R57751", "text": "Plant-soil feedback induces shifts in biomass allocation in the invasive plant Chromolaena odorata Summary 1. Soil communities and their interactions with plants may play a major role in determining the success of invasive species. However, rigorous investigations of this idea using cross-continental comparisons, including native and invasive plant populations, are still scarce. 2. We investigated if interactions with the soil community affect the growth and biomass allocation of the (sub)tropical invasive shrub Chromolaena odorata. We performed a cross-continental comparison with both native and non-native-range soil and native and non-native-range plant populations in two glasshouse experiments. 3. Results are interpreted in the light of three prominent hypotheses that explain the dominance of invasive plants in the non-native range: the enemy release hypothesis, the evolution of increased competitive ability hypothesis and the accumulation of local pathogens hypothesis. 4. Our results show that C. odorata performed significantly better when grown in soil pre-cultured by a plant species other than C. odorata. Soil communities from the native and non-native ranges did not differ in their effect on C. odorata performance. However, soil origin had a significant effect on plant allocation responses. 5. Non-native C. odorata plants increased relative allocation to stem biomass and height growth when confronted with soil communities from the non-native range. This is a plastic response that may allow species to be more successful when competing for light. 
This response differed between native and non-native-range populations, suggesting that selection may have taken place during the process of invasion. Whether this plastic response to soil organisms will indeed select for increased competitive ability needs further study. 6. The native grass Panicum maximum did not perform worse when grown in soil pre-cultured by C. odorata. Therefore, our results did not support the accumulation of local pathogens hypothesis. 7. Synthesis. Non-native C. odorata did not show release from soil-borne enemies compared to its native range. However, non-native plants responded to soil biota from the non-native range by enhanced allocation in stem biomass and height growth. This response can affect the competitive balance between native and invasive species. The evolutionary potential of this soil biota-induced change in plant biomass allocation needs further study." }, { "instance_id": "R58002xR57887", "comparison_id": "R58002", "paper_id": "R57887", "text": "Comparison of phototrophic shell-degrading endoliths in invasive and native populations of the intertidal mussel Mytilus galloprovincialis The intertidal mussel Mytilus galloprovincialis is a successful invader worldwide. Since its accidental introduction onto the South African west coast in the late 1970s, it has become the most successful marine invasive species in South Africa. One possible explanation for this phenomenon is that M. galloprovincialis suffers less from phototrophic shell-degrading endoliths in its invasive than in its native range. We assessed photoautotrophic endolithic pressure on M. galloprovincialis in native (Portugal) and invasive (South Africa) ranges. Invasive populations were more heavily infested than native populations. In Portugal, only the biggest/oldest mussels displayed endolithic erosion of the shell and the incidence of infestation was greater at higher shore levels where more prolonged exposure to light enhances endolith photosynthesis. 
In South Africa, even the smallest size classes of mussels were heavily infested throughout the shore. In Portugal, endolithic-induced mortality was observed at only one location, while in South Africa it occurred at all locations and at significantly higher rates than in Portugal. Important sub-lethal effects were detected in infested native mussels, confirming previous studies of invasive populations and suggesting an energy trade-off between shell repair and other physiological constraints. We observed a positive relationship between infestation rates and barnacle colonization on mussel shells, suggesting possible facilitation of barnacle settlement/survival by shell-boring pathogens. Identification of endoliths revealed common species between regions. However, two species were unique in the invasive range while another was unique in the native region. Different levels of endolithic infestation in the invasive and the native range were not explained by the effect of major environmental determinants (Photosynthetically Available Radiation and wave height). The results reject our initial hypothesis, indicating that invasion success of M. galloprovincialis is not simply explained by escape from its natural enemies but results from complex interactions between characteristics of the invaded community and properties of the invader." }, { "instance_id": "R58002xR57637", "comparison_id": "R58002", "paper_id": "R57637", "text": "Herbivory, time since introduction and the invasiveness of exotic plants Summary 1 We tested the enemy release hypothesis for invasiveness using field surveys of herbivory on 39 exotic and 30 native plant species growing in natural areas near Ottawa, Canada, and found that exotics suffered less herbivory than natives. 2 For the 39 introduced species, we also tested relationships between herbivory, invasiveness and time since introduction to North America. 
Highly invasive plants had significantly less herbivory than plants ranked as less invasive. Recently arrived plants also tended to be more invasive; however, there was no relationship between time since introduction and herbivory. 3 Release from herbivory may be key to the success of highly aggressive invaders. Low herbivory may also indicate that a plant possesses potent defensive chemicals that are novel to North America, which may confer resistance to pathogens or enable allelopathy in addition to deterring herbivorous insects." }, { "instance_id": "R58002xR57845", "comparison_id": "R58002", "paper_id": "R57845", "text": "Invasive species are less parasitized than native competitors, but for how long? The case of the round goby in the Great Lakes-St. Lawrence Basin There is increasing evidence that parasitism represents an unpredictable dimension of the ecological impacts of biological invasions. In addition to the risk of exotic pathogen transmission, other mechanisms such as parasite-release, could contribute to shaping the relationship between introduced species and native communities. In this study, we used the Eurasian round goby (Neogobius menalostomus) in the Great Lakes-St. Lawrence River ecosystem to further explore these ideas. As predicted by the parasite-release hypothesis, recently established populations of round goby were parasitized by a depauperate community of generalist helminths (8 taxa), all commonly found in the St. Lawrence River. In comparison, two native species, the logperch (Percina caprodes) and spottail shiner (Notropis hudsonius), were the hosts of 25 and 24 taxa respectively. Round gobies from each of 3 sampled localities were also less heavily infected than both indigenous species. This is in contrast to what is observed in round goby\u2019s native range where the species is often the most parasitized among gobid competitors. This relative difference in parasite pressure could enhance its competitiveness in the introduced range. 
However, our study of an older population of round goby in Lake St. Clair suggests that this advantage over native species could be of short duration. Within 15 years, the parasite abundance and richness in the round goby have more than doubled whereas the number of parasite species per fish has increased to levels typical of fish indigenous to the St. Lawrence-Great Lakes watershed." }, { "instance_id": "R58002xR57662", "comparison_id": "R58002", "paper_id": "R57662", "text": "Insect herbivore faunal diversity among invasive, non-invasive and native Eugenia species: Implications for the enemy release hypothesis Abstract The enemy release hypothesis (ERH) frequently has been invoked to explain the naturalization and spread of introduced species. One ramification of the ERH is that invasive plants sustain less herbivore pressure than do native species. Empirical studies testing the ERH have mostly involved two-way comparisons between invasive introduced plants and their native counterparts in the invaded region. Testing the ERH would be more meaningful if such studies also included introduced non-invasive species because introduced plants, regardless of their abundance or impact, may support a reduced insect herbivore fauna and experience less damage. In this study, we employed a three-way comparison, in which we compared herbivore faunas among native, introduced invasive, and introduced non-invasive plants in the genus Eugenia (Myrtaceae) which all co-occur in South Florida. We observed a total of 25 insect species in 12 families and 6 orders feeding on the six species of Eugenia. Of these insect species, the majority were native (72%), polyphagous (64%), and ectophagous (68%). We found that invasive introduced Eugenia has a similar level of herbivore richness as both the native and the non-invasive introduced Eugenia. 
However, the numbers and percentages of oligophagous insect species were greatest on the native Eugenia, but they were not different between the invasive and non-invasive introduced Eugenia. One oligophagous endophagous insect has likely shifted from the native to the invasive, but none to the non-invasive Eugenia. In summary, the invasive Eugenia encountered equal, if not greater, herbivore pressure than the non-invasive Eugenia, including from oligophagous and endophagous herbivores. Our data only provided limited support to the ERH. We would not have been able to draw this conclusion without inclusion of the non-invasive Eugenia species in the study." }, { "instance_id": "R58002xR57799", "comparison_id": "R58002", "paper_id": "R57799", "text": "Associations of leaf miners and leaf gallers with island plants of different residency histories Aim Introduced plant species are less likely to be attacked by herbivores than are native plant species. Isolated oceanic islands provide an excellent model system for comparing the associations between herbivore species and plant species of different residency histories, namely endemic, indigenous (non-endemic) or introduced (naturalized or cultivated) species. My aim was to test the prediction that, on isolated oceanic islands, introduced plant species have a lower tendency to have an association with insect herbivores than do endemic and indigenous plant species. Location Ogasawara (Bonin) Islands in the western Pacific Ocean. Methods I examined the presence/absence of leaf-mining and leaf-galling insect species on 71 endemic, 31 indigenous, 18 naturalized and 31 cultivated (introduced but not naturalized) species of woody plants from 2004 to 2008. Results Leaf-mining insect species were found on 53.5%, 35.5%, 11.1% and 16.1% and leaf-galling species were found on 14.1%, 9.7%, 5.6% and 0% of endemic, indigenous, naturalized and cultivated plant species, respectively. 
Species of Lepidoptera (moths) and Hemiptera (primarily psyllids) comprised the dominant types of leaf miners and leaf gallers, respectively. Main conclusions The incidence of leaf miners and leaf gallers differed as a function of residency history of the plant species. Introduced (naturalized and cultivated) species were less frequently associated with leaf miners and leaf gallers than were native (endemic and indigenous) species, indicating that the leaf-mining and leaf-galling insect species, most of which feed on leaves of a particular native plant genus (i.e. they show oligophagy), have not yet begun to utilize most introduced plant species." }, { "instance_id": "R58002xR57789", "comparison_id": "R58002", "paper_id": "R57789", "text": "The effects of disturbance and enemy exclusion on performance of an invasive species, common ragweed, in its native range Common ragweed (Ambrosia artemisiifolia) is an abundant weed in its native North America, despite supporting a wide range of natural enemies. Here, we tested whether these enemies have significant impacts on the performance of this plant in its native range. We excluded enemies from the three principal life-history stages (seed, seedling, and adult) of this annual in a series of field experiments; at the adult stage, we also manipulated soil disturbance and conspecific density. We then measured the consequences of these treatments for growth, survival, and reproduction. Excluding fungi and vertebrate granivores from seeds on the soil surface did not increase germination relative to control plots. Seedling survivorship was only slightly increased by the exclusion of molluscs and other herbivores. Insecticide reduced damage to leaves of adult plants, but did not improve growth or reproduction. Growth and survivorship of adults were strongly increased by disturbance, while higher conspecific density reduced performance in disturbed plots. 
These results indicate ragweed is insensitive to attack by many of its natural enemies, helping to explain its native-range success. In addition, they suggest that even though ragweed lost most of its insect folivores while invading Europe, escape from these enemies is unlikely to have provided a significant demographic advantage; instead, disturbance is likely to have been a much more important factor in its invasion. Escape from enemies should not be assumed to explain the success of exotic species unless improved performance also can be demonstrated; native-range studies can help achieve this goal." }, { "instance_id": "R58002xR57907", "comparison_id": "R58002", "paper_id": "R57907", "text": "Little evidence for release from herbivores as a driver of plant invasiveness from a multi-species herbivore-removal experiment Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. 
There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species." }, { "instance_id": "R58002xR57632", "comparison_id": "R58002", "paper_id": "R57632", "text": "Enemy release? An experiment with congeneric plant pairs and diverse above- and belowground enemies Several hypotheses proposed to explain the success of introduced species focus on altered interspecific interactions. One of the most prominent, the Enemy Release Hypothesis, posits that invading species benefit compared to their native counterparts if they lose their herbivores and pathogens during the invasion process. We previously reported on a common garden experiment (from 2002) in which we compared levels of herbivory between 30 taxonomically paired native and introduced old-field plants. In this phyloge- netically controlled comparison, herbivore damage tended to be higher on introduced than on native plants. This striking pattern, the opposite of current theory, prompted us to further investigate herbivory and several other interspecific interactions in a series of linked ex- periments with the same set of species. Here we show that, in these new experiments, introduced plants, on average, received less insect herbivory and were subject to half the negative soil microbial feedback compared to natives; attack by fungal and viral pathogens also tended to be reduced on introduced plants compared to natives. Although plant traits (foliar C:N, toughness, and water content) suggested that introduced species should be less resistant to generalist consumers, they were not consistently more heavily attacked. 
Finally, we used meta-analysis to combine data from this study with results from our previous work to show that escape generally was inconsistent among guilds of enemies: there were few instances in which escape from multiple guilds occurred for a taxonomic pair, and more cases in which the patterns of escape from different enemies canceled out. Our examination of multiple interspecific interactions demonstrates that escape from one guild of enemies does not necessarily imply escape from other guilds. Because the effects of each guild are likely to vary through space and time, the net effect of all enemies is also likely to be variable. The net effect of these interactions may create \u201cinvasion opportunity windows\u201d: times when introduced species make advances in native communities." }, { "instance_id": "R58002xR57820", "comparison_id": "R58002", "paper_id": "R57820", "text": "Island invasion by a threatened tree species: evidence for natural enemy release of mahogany (Swietenia macrophylla) on Dominica, Lesser Antilles Despite its appeal to explain plant invasions, the enemy release hypothesis (ERH) remains largely unexplored for tropical forest trees. Even scarcer are ERH studies conducted on the same host species at both the community and biogeographical scale, irrespective of the system or plant life form. In Cabrits National Park, Dominica, we observed patterns consistent with enemy release of two introduced, congeneric mahogany species, Swietenia macrophylla and S. mahagoni, planted almost 50 years ago. Swietenia populations at Cabrits have reproduced, with S. macrophylla juveniles established in and out of plantation areas at densities much higher than observed in its native range. Swietenia macrophylla juveniles also experienced significantly lower leaf-level herbivory (\u223c3.0%) than nine co-occurring species native to Dominica (8.4\u201321.8%), and far lower than conspecific herbivory observed in its native range (11%\u201343%, on average). 
These complementary findings at multiple scales support ERH, and confirm that Swietenia has naturalized at Cabrits. However, Swietenia abundance was positively correlated with native plant diversity at the seedling stage, and only marginally negatively correlated with native plant abundance for stems \u22651-cm dbh. Taken together, these descriptive patterns point to relaxed enemy pressure from specialized enemies, specifically the defoliator Steniscadia poliophaea and the shoot-borer Hypsipyla grandella, as a leading explanation for the enhanced recruitment of Swietenia trees documented at Cabrits." }, { "instance_id": "R58002xR57776", "comparison_id": "R58002", "paper_id": "R57776", "text": "Comparisons of arthropod assemblages on an invasive and native trees: abundance, diversity and damage The success of exotic plants may be due to lower herbivore loads than those on native plants (Enemies Release Hypothesis). Predictions of this hypothesis include lower herbivore abundances, diversity, and damage on introduced plant species compared to native ones. Greater density or diversity of predators and parasitoids on exotic versus native plants may also reduce regulation of exotic plants by herbivores. To test these predictions, we measured arthropod abundance, arthropod diversity, and foliar damage on invasive Chinese tallow tree (Triadica sebifera) and three native tree species: silver maple (Acer saccharinum), sycamore (Platanus occidentalis), and sweetgum (Liquidambar styraciflua). Arthropod samples were collected with canopy sweep nets from six 20-year-old monoculture plots of each species at a southeast Texas site. A total of 2,700 individuals and 285 species of arthropods were caught. Overall, the species richness and abundance of arthropods on tallow tree were similar to the natives. But, ordination (NMS) showed community composition differed on tallow tree compared to all three native trees. 
It supported an arthropod community that had relatively lower herbivore abundance but relatively more predator species compared to the native species examined. Leaves were collected to determine damage. Tallow tree experienced less mining damage than native trees. The results of this study supported the Enemies Release Hypothesis predictions that tallow tree would have low herbivore loads which may contribute to its invasive success. Moreover, a shift in the arthropod community to fewer herbivores without a reduction in predators may further limit regulation of this exotic species by herbivores in its introduced range." }, { "instance_id": "R58002xR57866", "comparison_id": "R58002", "paper_id": "R57866", "text": "Invasive Eupatorium adenophorum suffers lower enemy impact on carbon assimilation than native congeners Enemy release hypothesis predicts that alien plants that escape from their natural enemies suffer lower enemy regulation in their introduced ranges than in native ranges. An extension of this theory suggests that if enemy release plays a crucial role in invasive success, then in the introduced range, invasive plants should also suffer lower local enemy impact than native residents (local enemy release hypothesis, LERH). In order to test LERH, we compared invasive Eupatorium adenophorum with two native congeners (E. heterophyllum and E. japonicum) in terms of damage by leaf enemies at two natural field sites and two manipulated sites. We also determined enemy impact on carbon assimilation at two manipulated sites. In each site, E. adenophorum was only damaged by herbivores, while in native congeners, leaf scabs or (and) leaf rolls was found in addition to herbivory damage. In both manipulated sites, the total enemy impact on carbon assimilation was lower for E. adenophorum than for native congeners; this observation was consistent with LERH. 
The results of this study indicate that a short co-existence time with generalist enemies (behavior constraint) might be the main contributor to the lower enemy impact on E. adenophorum." }, { "instance_id": "R58002xR57933", "comparison_id": "R58002", "paper_id": "R57933", "text": "An ecological comparison of Impatiens glandulifera Royle in the native and introduced range Understanding the ecology of plant species in their whole range (native and introduced) can provide insights into those that become problematic weeds in the introduced range despite being benign components of the vegetative community in the native range. We studied the morphological traits of Impatiens glandulifera in the native (Indian Himalayas) and introduced (UK) range and evaluated what influences natural enemies and arbuscular mycorrhizal fungi (AMF) have on plant performance. We compared height, total leaf area, root: shoot ratio, natural enemy damage and the colonisation of AMF from individual plants within and between ranges twice in 2010 during the months of June and August. In addition, in August 2010, we estimated the number of reproductive units (expressed as the sum of flowers, seed capsule and seeds) at each site. We found that all morphological traits varied between populations and countries, though in general introduced populations, and the semi-natural population in India, showed higher performance compared to natural native populations. There was only an indication that natural enemy damage, which was significantly higher in the native range, negatively affected reproductive units. Within the introduced range, the percentage colonisation of AMF was negatively associated with plant performance indicating that I. glandulifera may associate with an incompatible AMF species incurring a cost to invasive populations. 
We conclude that species which are heavily regulated in the native range, though still show high levels of performance, should be considered undesirable introductions into similar ecoclimatic ranges due to the potential that these species will become highly invasive species." }, { "instance_id": "R58002xR57618", "comparison_id": "R58002", "paper_id": "R57618", "text": "Plant-soil biota interactions and spatial distribution of black cherry in its native and invasive ranges One explanation for the higher abundance of invasive species in their non-native than native ranges is the escape from natural enemies. But there are few experimental studies comparing the parallel impact of enemies (or competitors and mutualists) on a plant species in its native and invaded ranges, and release from soil pathogens has been rarely investigated. Here we present evidence showing that the invasion of black cherry (Prunus serotina) into north-western Europe is facilitated by the soil community. In the native range in the USA, the soil community that develops near black cherry inhibits the establishment of neighbouring conspecifics and reduces seedling performance in the greenhouse. In contrast, in the non-native range, black cherry readily establishes in close proximity to conspecifics, and the soil community enhances the growth of its seedlings. Understanding the effects of soil organisms on plant abundance will improve our ability to predict and counteract plant invasions." }, { "instance_id": "R58002xR57646", "comparison_id": "R58002", "paper_id": "R57646", "text": "Evidence for the enemy release hypothesis in Hypericum perforatum The enemy release hypothesis (ERH), which has been the theoretical basis for classic biological control, predicts that the success of invaders in the introduced range is due to their release from co-evolved natural enemies (i.e. herbivores, pathogens and predators) left behind in the native range. 
We tested this prediction by comparing herbivore pressure on native European and introduced North American populations of Hypericum perforatum (St John\u2019s Wort). We found that introduced populations occur at larger densities, are less damaged by insect herbivory and suffer less mortality than populations in the native range. However, overall population size was not significantly different between ranges. Moreover, on average plants were significantly smaller in the introduced range than in the native range. Our survey supports the contention that plants from the introduced range experience less herbivore damage than plants from the native range. While this may lead to denser populations, it does not result in larger plant size in the introduced versus native range as postulated by the ERH." }, { "instance_id": "R58002xR57902", "comparison_id": "R58002", "paper_id": "R57902", "text": "Herbivores on native and exotic Senecio plants: is host switching related to plant novelty and insect diet breadth under field conditions? Native herbivores can establish novel interactions with alien plants after invasion. Nevertheless, it is unclear whether these new associations are quantitatively significant compared to the assemblages with native flora under natural conditions. Herbivores associated with two exotic plants, namely Senecio inaequidens and S. pterophorus, and two coexisting natives, namely S. vulgaris and S. lividus, were surveyed in a replicated long\u2010term field study to ascertain whether the plant\u2013herbivore assemblages in mixed communities are related to plant novelty and insect diet breadth. Native herbivores used exotic Senecio as their host plants. Of the 19 species of Lepidoptera, Diptera, and Hemiptera found in this survey, 14 were associated with the exotic Senecio plants. 
Most of these species were polyphagous, yet we found a higher number of individuals with a narrow diet breadth, which is contrary to the assumption that host switching mainly occurs in generalist herbivores. The Senecio specialist Sphenella marginata (Diptera: Tephritidae) was the most abundant and widely distributed insect species (ca. 80% of the identified specimens). Sphenella was associated with S. lividus, S. vulgaris and S. inaequidens and was not found on S. pterophorus. The presence of native plant congeners in the invaded community did not ensure an instantaneous ecological fitting between insects and alien plants. We conclude that novel associations between native herbivores and introduced Senecio plants are common under natural conditions. Plant novelty is, however, not the only predictor of herbivore abundance due to the complexity of natural conditions." }, { "instance_id": "R58002xR57892", "comparison_id": "R58002", "paper_id": "R57892", "text": "Does enemy loss cause release? A biogeographical comparison of parasitoid effects on an introduced insect The loss of natural enemies is a key feature of species introductions and is assumed to facilitate the increased success of species in new locales (enemy release hypothesis; ERH). The ERH is rarely tested experimentally, however, and is often assumed from observations of enemy loss. We provide a rigorous test of the link between enemy loss and enemy release by conducting observational surveys and an in situ parasitoid exclusion experiment in multiple locations in the native and introduced ranges of a gall-forming insect, Neuroterus saltatorius, which was introduced poleward, within North America. Observational surveys revealed that the gall-former experienced increased demographic success and lower parasitoid attack in the introduced range. Also, a different composition of parasitoids attacked the gall-former in the introduced range. 
These observational results show that enemies were lost and provide support for the ERH. Experimental results, however, revealed that, while some enemy release occurred, it was not the sole driver of demographic success. This was because background mortality in the absence of enemies was higher in the native range than in the introduced range, suggesting that factors other than parasitoids limit the species in its native range and contribute to its success in its introduced range. Our study demonstrates the importance of measuring the effect of enemies in the context of other community interactions in both ranges to understand what factors cause the increased demographic success of introduced species. This case also highlights that species can experience very different dynamics when introduced into ecologically similar communities." }, { "instance_id": "R58002xR57643", "comparison_id": "R58002", "paper_id": "R57643", "text": "Invasive plants and their escape from root herbivory: a worldwide comparison of the root-feeding nematode communities of the dune grass Ammophila arenaria in natural and introduced ranges Invasive plants generally have fewer aboveground pathogens and viruses in their introduced range than in their natural range, and they also have fewer pathogens than do similar plant species native to the introduced range. However, although plant abundance is strongly controlled by root herbivores and soil pathogens, there is very little knowledge on how invasive plants escape from belowground enemies. We therefore investigated if the general pattern for aboveground pathogens also applies to root-feeding nematodes and used the natural foredune grass Ammophila arenariaas a model. In the late 1800s, the European A. arenariawas introduced into southeast Australia (Tasmania), New Zealand, South Africa, and the west coast of the USA to be used for sand stabilization. 
In most of these regions, it has become a threat to native vegetation, because its excessive capacity to stabilize wind-blown sand has changed the geomorphology of coastal dunes. In stable dunes of most introduced regions, A. arenaria is more abundant and persists longer than in stabilized dunes of the natural range. We collected soil and root samples and used additional literature data to quantify the taxon richness of root-feeding nematodes on A. arenaria in its natural range and collected samples from the four major regions where it has been introduced. In most introduced regions A. arenaria did not have fewer root-feeding nematode taxa than the average number in its natural range, and native plant species did not have more nematode taxa than the introduced species. However, in the introduced range native plants had more feeding-specialist nematode taxa than A. arenaria and major feeding specialists (the sedentary endoparasitic cyst and root knot nematodes) were not found on A. arenaria in the southern hemisphere. We conclude that invasiveness of A. arenaria correlates with escape from feeding-specialist nematodes, so that the pattern of escape from root-feeding nematodes is more like escape from aboveground insect herbivores than escape from aboveground pathogens and viruses. In the natural range of A. arenaria, the number of specialist-feeding nematode taxa declines towards the margins. Growth experiments are needed to determine the relationship between nematode taxon diversity, abundance, and invasiveness of A. arenaria." }, { "instance_id": "R58002xR57794", "comparison_id": "R58002", "paper_id": "R57794", "text": "Higher parasite richness, abundance and impact in native versus introduced cichlid fishes Empirical studies suggest that most exotic species have fewer parasite species in their introduced range relative to their native range.
However, it is less clear how, ecologically, the loss of parasite species translates into a measurable advantage for invaders relative to native species in the new community. We compared parasitism at three levels (species richness, abundance and impact) for a pair of native and introduced cichlid fishes which compete for resources in the Panama Canal watershed. The introduced Nile tilapia, Oreochromis niloticus, was infected by a single parasite species from its native range, but shared eight native parasite species with the native Vieja maculicauda. Despite acquiring new parasites in its introduced range, O. niloticus had both lower parasite species richness and lower parasite abundance compared with its native competitor. There was also a significant negative association between parasite load (abundance per individual fish) and host condition for the native fish, but no such association for the invader. The effects of parasites on the native fish varied across sites and types of parasites, suggesting that release from parasites may benefit the invader, but that the magnitude of release may depend upon interactions between the host, parasites and the environment." }, { "instance_id": "R58002xR57742", "comparison_id": "R58002", "paper_id": "R57742", "text": "Herbivory and population dynamics of invasive and native Lespedeza Some exotic plants are able to invade habitats and attain higher fitness than native species, even when the native species are closely related. One explanation for successful plant invasion is that exotic invasive plant species receive less herbivory or other enemy damage than native species, and this allows them to achieve rapid population growth. Despite many studies comparing herbivory and fitness of native and invasive congeners, none have quantified population growth rates. Here, we examined the contribution of herbivory to the population dynamics of the invasive species, Lespedeza cuneata, and its native congener, L. 
virginica, using an herbivory reduction experiment. We found that invasive L. cuneata experienced less herbivory than L. virginica. Further, in ambient conditions, the population growth rate of L. cuneata (\u03bb = 20.4) was dramatically larger than L. virginica (\u03bb = 1.7). Reducing herbivory significantly increased fitness of only the largest L. virginica plants, and this resulted in a small but significant increase in its population growth rate. Elasticity analysis showed that the growth rate of these species is most sensitive to changes in the seed production of small plants, a vital rate that is relatively unaffected by herbivory. In all, these species show dramatic differences in their population growth rates, and only 2% of that difference can be explained by their differences in herbivory incidence. Our results demonstrate that to understand the importance of consumers in explaining the relative success of invasive and native species, studies must determine how consumer effects on fitness components translate into population-level consequences." }, { "instance_id": "R58002xR57900", "comparison_id": "R58002", "paper_id": "R57900", "text": "The herbivorous arthropods associated with the invasive alien plant, Arundo donax, and the native analogous plant, Phragmites australis, in the Free State Province, South Africa The Enemy Release Hypothesis (ERH) predicts that when plant species are introduced outside their native range there is a release from natural enemies resulting in the plants becoming problematic invasive alien species (Lake & Leishman 2004; Puliafico et al. 2008). The release from natural enemies may benefit alien plants more than simply reducing herbivory because, according to the Evolution of Increased Competitive Ability (EICA) hypothesis, without pressure from herbivores more resources that were previously allocated to defence can be allocated to reproduction (Blossey & Notzold 1995). 
Alien invasive plants are therefore expected to have simpler herbivore communities with fewer specialist herbivores (Frenzel & Brandl 2003; Heleno et al. 2008; Heger & Jeschke 2014)." }, { "instance_id": "R58002xR57829", "comparison_id": "R58002", "paper_id": "R57829", "text": "Relationships among leaf damage, natural enemy release, and abundance in exotic and native prairie plants The Enemy Release hypothesis holds that exotic plants may have an advantage over native plants because their specialized natural enemies are absent. We tested this hypothesis by measuring leaf damage and plant abundance for naturally-occurring plants in prairies, and by removing natural enemies in an enemy exclusion experiment. We classified plants as invasive exotic, noninvasive exotic, or native, to determine if their degree of invasiveness influenced their relationships with natural enemies. Our field surveys showed that invasive exotic plants generally had significantly lower levels of foliar damage than native species while there was no consistent pattern for noninvasive exotics compared to natives. The relationship between damage and abundance was different for exotic and native plants: foliar damage decreased with increasing abundance for exotic plants while the trend was positive for native plants. While these results from the field surveys supported the Enemy Release Hypothesis, the enemy exclusion experiment did not. There was no relationship between a species\u2019 status as exotic or native and its degree of release from herbivory. Pastinaca sativa, the invasive exotic in this experiment, experienced gains in leaf area and vegetative biomass when treated with pesticides, indicating substantial herbivore pressure in the introduced range. These results show that foliar damage may not accurately predict the amount of herbivore pressure that plants actually experience, and that the Enemy Release hypothesis is not sufficient to explain the invasiveness of P. sativa in prairies." 
}, { "instance_id": "R58002xR57904", "comparison_id": "R58002", "paper_id": "R57904", "text": "Escape from parasitism by the invasive alien ladybird, Harmonia axyridis Alien species are often reported to perform better than functionally similar species native to the invaded range, resulting in high population densities, and a tendency to become invasive. The enemy release hypothesis (ERH) explains the success of invasive alien species (IAS) as a consequence of reduced mortality from natural enemies (predators, parasites and pathogens) compared with native species. The harlequin ladybird, Harmonia axyridis, a species alien to Britain, provides a model system for testing the ERH. Pupae of H. axyridis and the native ladybird Coccinella septempunctata were monitored for parasitism between 2008 and 2011, from populations across southern England in areas first invaded by H. axyridis between 2004 and 2009. In addition, a semi\u2010field experiment was established to investigate the incidence of parasitism of adult H. axyridis and C. septempunctata by Dinocampus coccinellae. Harmonia axyridis pupae were parasitised at a much lower rate than conspecifics in the native range, and both pupae and adults were parasitised at a considerably lower rate than C. septempunctata populations from the same place and time (H. axyridis: 1.67%; C. septempunctata: 18.02%) or in previous studies on Asian H. axyridis (2\u20137%). We found no evidence that the presence of H. axyridis affected the parasitism rate of C. septempunctata by D. coccinellae. Our results are consistent with the general prediction that the prevalence of natural enemies is lower for introduced species than for native species at early stages of invasion. This may partly explain why H. axyridis is such a successful IAS." 
}, { "instance_id": "R58002xR57992", "comparison_id": "R58002", "paper_id": "R57992", "text": "Incorporation of an invasive plant into a native insect herbivore food web The integration of invasive species into native food webs represent multifarious dynamics of ecological and evolutionary processes. We document incorporation of Prunus serotina (black cherry) into native insect food webs. We find that P. serotina harbours a herbivore community less dense but more diverse than its native relative, P. padus (bird cherry), with similar proportions of specialists and generalists. While herbivory on P. padus remained stable over the past century, that on P. serotina gradually doubled. We show that P. serotina may have evolved changes in investment in cyanogenic glycosides compared with its native range. In the leaf beetle Gonioctena quinquepunctata , recently shifted from native Sorbus aucuparia to P. serotina , we find divergent host preferences on Sorbus - versus Prunus -derived populations, and weak host-specific differentiation among 380 individuals genotyped for 119 SNP loci. We conclude that evolutionary processes may generate a specialized herbivore community on an invasive plant, allowing prognoses of reduced invasiveness over time. On the basis of the results presented here, we would like to caution that manual control might have the adverse effect of a slowing down of processes of adaptation, and a delay in the decline of the invasive character of P. serotina ." }, { "instance_id": "R58002xR57682", "comparison_id": "R58002", "paper_id": "R57682", "text": "Experimental field comparison of native and non-native maple seedlings: natural enemies, ecophysiology, growth and survival Summary 1 Acer platanoides (Norway maple) is an important non-native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. 
We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first-season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second-season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. Foliar insect herbivory and disease symptoms were similarly high for both native and non-native, and seedling biomass did not differ. Rather, A. platanoides compared well with A. saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non-native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. 
Success at reaching the canopy depends on a tree's ecology in previous life-history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non-native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non-native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy." }, { "instance_id": "R58002xR57883", "comparison_id": "R58002", "paper_id": "R57883", "text": "Exotic Lonicera species both escape and resist specialist and generalist herbivores in the introduced range in North America The enemy release hypothesis predicts that invasive plant species may benefit from a lack of top-down control by co-evolved herbivores, particularly specialists, in their new range. However, to benefit from enemy escape, invasive plants must also escape or resist specialist or generalist herbivores that attack related species in the introduced range. We compared insect herbivore damage on the exotic shrub, Lonicera maackii, the native congener Lonicera reticulata, and the native confamilial Viburnum prunifolium in North America. We also compared the laboratory preference and performance of a North American honeysuckle specialist sawfly (Zaraea inflata) and the performance of a widespread generalist caterpillar (Spodoptera frugiperda) on cut foliage from native and exotic Lonicera species. L. maackii received significantly lower amounts of foliar herbivory than L. reticulata across three seasons, while damage levels observed on V. prunifolium for two seasons was generally intermediate between L. reticulata and L. maackii. The specialist sawfly damaged L. reticulata heavily, but was not detected on L. maackii in the field. There were few statistical differences in the performance of sawfly larvae on L. reticulata and L. maackii, but the sawfly achieved higher pupal masses on L. reticulata than on L. 
maackii, and they strongly preferred L. reticulata over L. maackii when given a choice. The sawfly was unable to complete development on native L. sempervirens and non-native L. japonica. In contrast, the generalist caterpillar performed similarly on all Lonicera species. While L. maackii experienced little herbivory in the field compared to native relatives in the same habitat, laboratory assays indicate L. maackii appears to be a suitable host that escapes selection by the specialist, but L. japonica and L. sempervirens are highly resistant to it. These findings indicate that both enemy escape and resistance (to a specialist, but not a generalist herbivore) may contribute to the success of exotic Lonicera species." }, { "instance_id": "R58002xR57685", "comparison_id": "R58002", "paper_id": "R57685", "text": "When there is no escape: The effects of natural enemies on native, invasive, and noninvasive plants An important question in the study of biological invasions is the degree to which successful invasion can be explained by release from control by natural enemies. Natural enemies dominate explanations of two alternate phenomena: that most introduced plants fail to establish viable populations (biotic resistance hypothesis) and that some introduced plants become noxious invaders (natural enemies hypothesis). We used a suite of 18 phylogenetically related native and nonnative clovers (Trifolium and Medicago) and the foliar pathogens and invertebrate herbivores that attack them to answer two questions. Do native species suffer greater attack by natural enemies relative to introduced species at the same site? Are some introduced species excluded from native plant communities because they are susceptible to local natural enemies? 
We address these questions using three lines of evidence: (1) the frequency of attack and composition of fungal pathogens and herbivores for each clover species in four years of common garden experiments, as well as susceptibility to inoculation with a common pathogen; (2) the degree of leaf damage suffered by each species in common garden experiments; and (3) fitness effects estimated using correlative approaches and pathogen removal experiments. Introduced species showed no evidence of escape from pathogens, being equivalent to native species as a group in terms of infection levels, susceptibility, disease prevalence, disease severity (with more severe damage on introduced species in one year), the influence of disease on mortality, and the effect of fungicide treatment on mortality and biomass. In contrast, invertebrate herbivores caused more damage on native species in two years, although the influence of herbivore attack on mortality did not differ between native and introduced species. Within introduced species, the predictions of the biotic resistance hypothesis were not supported: the most invasive species showed greater infection, greater prevalence and severity of disease, greater prevalence of herbivory, and greater effects of fungicide on biomass and were indistinguishable from noninvasive introduced species in all other respects. Therefore, although herbivores preferred native over introduced species, escape from pest pressure cannot be used to explain why some introduced clovers are common invaders in coastal prairie while others are not." }, { "instance_id": "R58002xR57836", "comparison_id": "R58002", "paper_id": "R57836", "text": "Invertebrate community composition differs between invasive herb alligator weed and native sedges Abstract Chemical and/or architectural differences between native and exotic plants may influence invertebrate community composition. 
According to the enemy release hypothesis, invasive weeds should host fewer and less specialised invertebrates than native vegetation. Invertebrate communities were compared on invasive Alternanthera philoxeroides (alligator weed) and native sedges (Isolepis prolifer and Schoenoplectus tabernaemontani) in a New Zealand lake. A. philoxeroides is more architecturally and chemically similar to I. prolifer than to S. tabernaemontani. Lower invertebrate abundance, richness and proportionally fewer specialists were predicted on A. philoxeroides compared to native sedges, but with greatest differences between A. philoxeroides and S. tabernaemontani. Invertebrate abundance showed taxa-specific responses, rather than consistently lower abundance on A. philoxeroides. Nevertheless, as predicted, invertebrate fauna of A. philoxeroides was more similar to that of I. prolifer than to S. tabernaemontani. The prediction of a depauperate native fauna on A. philoxeroides received support from some but not all taxa. All vegetation types hosted generalist-dominated invertebrate communities with simple guild structures. The enemy release hypothesis thus had minimal ability to predict patterns in this system. Results suggest the extent of architectural and chemical differences between native and invasive vegetation may be useful in predicting the extent to which they will host different invertebrate communities. However, invertebrate ecology also affects whether invertebrate taxa respond positively or negatively to weed invasion. Thus, exotic vegetation may support distinct invertebrate communities despite similar overall invertebrate abundance to native vegetation." 
}, { "instance_id": "R58002xR57740", "comparison_id": "R58002", "paper_id": "R57740", "text": "Acceleration of Exotic Plant Invasion in a Forested Ecosystem by a Generalist Herbivore The successful invasion of exotic plants is often attributed to the absence of coevolved enemies in the introduced range (i.e., the enemy release hypothesis). Nevertheless, several components of this hypothesis, including the role of generalist herbivores, remain relatively unexplored. We used repeated censuses of exclosures and paired controls to investigate the role of a generalist herbivore, white-tailed deer (Odocoileus virginianus), in the invasion of 3 exotic plant species (Microstegium vimineum, Alliaria petiolata, and Berberis thunbergii) in eastern hemlock (Tsuga canadensis) forests in New Jersey and Pennsylvania (U.S.A.). This work was conducted in 10 eastern hemlock (T. canadensis) forests that spanned gradients in deer density and in the severity of canopy disturbance caused by an introduced insect pest, the hemlock woolly adelgid (Adelges tsugae). We used maximum likelihood estimation and information theoretics to quantify the strength of evidence for alternative models of the influence of deer density and its interaction with the severity of canopy disturbance on exotic plant abundance. Our results were consistent with the enemy release hypothesis in that exotic plants gained a competitive advantage in the presence of generalist herbivores in the introduced range. The abundance of all 3 exotic plants increased significantly more in the control plots than in the paired exclosures. For all species, the inclusion of canopy disturbance parameters resulted in models with substantially greater support than the deer density only models. Our results suggest that white-tailed deer herbivory can accelerate the invasion of exotic plants and that canopy disturbance can interact with herbivory to magnify the impact. 
In addition, our results provide compelling evidence of nonlinear relationships between deer density and the impact of herbivory on exotic species abundance. These findings highlight the important role of herbivore density in determining impacts on plant abundance and provide evidence of the operation of multiple mechanisms in exotic plant invasion." }, { "instance_id": "R58002xR57816", "comparison_id": "R58002", "paper_id": "R57816", "text": "Diversity, loss, and gain of malaria parasites in a globally invasive bird Invasive species can displace natives, and thus identifying the traits that make aliens successful is crucial for predicting and preventing biodiversity loss. Pathogens may play an important role in the invasive process, facilitating colonization of their hosts in new continents and islands. According to the Novel Weapon Hypothesis, colonizers may out-compete local native species by bringing with them novel pathogens to which native species are not adapted. In contrast, the Enemy Release Hypothesis suggests that flourishing colonizers are successful because they have left their pathogens behind. To assess the role of avian malaria and related haemosporidian parasites in the global spread of a common invasive bird, we examined the prevalence and genetic diversity of haemosporidian parasites (order Haemosporida, genera Plasmodium and Haemoproteus) infecting house sparrows (Passer domesticus). We sampled house sparrows (N = 1820) from 58 locations on 6 continents. All the samples were tested using PCR-based methods; blood films from the PCR-positive birds were examined microscopically to identify parasite species. The results show that haemosporidian parasites in the house sparrows' native range are replaced by species from local host-generalist parasite fauna in the alien environments of North and South America. Furthermore, sparrows in colonized regions displayed a lower diversity and prevalence of parasite infections. 
Because the house sparrow lost its native parasites when colonizing the American continents, the release from these natural enemies may have facilitated its invasion in the last two centuries. Our findings therefore reject the Novel Weapon Hypothesis and are concordant with the Enemy Release Hypothesis." }, { "instance_id": "R58002xR57626", "comparison_id": "R58002", "paper_id": "R57626", "text": "Variation in herbivore damage to invasive and native woody plant species in open forest vegetation on Mahe, Seychelles Enemy release of introduced plants and variation in herbivore pressure in relation to community diversity are presently discussed as factors that affect plant species invasiveness or habitat invasibility. So far few data are available on this topic and the results are inconclusive. We compared leaf herbivory between native and invasive woody plants on Mah\u00e9, the main island of the tropical Seychelles. We further investigated variation in leaf herbivory on three abundant invasive species along an altitudinal gradient (50\u2013550 m a.s.l.). The median percentage of leaves affected by herbivores was significantly higher in native species (50%) than in invasive species (27%). In addition, the species suffering from the highest leaf area loss were native to the Seychelles. These results are consistent with the enemy release hypothesis (ERH). While the invasive species showed significant and mostly consistent variation in the amount of leaf damage between sites, this variation was not related to general altitudinal trends in diversity but rather to local variation in habitat structure and diversity. Our results indicate that in the Seychelles invasive woody plants profit from herbivore release relative to the native species and that the amount of herbivory, and therefore its effect on species invasiveness or habitat invasibility, may be dependent on local community structure and composition." 
}, { "instance_id": "R58002xR57722", "comparison_id": "R58002", "paper_id": "R57722", "text": "Novel host associations and habitats for Senecio-specialist herbivorous insects in Auckland We studied the genus- and species-specialist monophagous herbivorous insects of Senecio (Asteraceae) in Auckland, New Zealand. With the exception of the widespread S. hispidulus, the eight native Senecio species in mainland Auckland (two endemic) are typically uncommon and restricted to less modified conservation land. However, 11 naturalised Senecio have established and are often widespread in urban and rural habitats. Three endemic Senecio-specialist herbivores \u2013 Nyctemera annulata, Patagoniodes farnaria, and Tephritis fascigera \u2013 formed novel host associations with naturalised Senecio species and spread into modified landscapes. Host associations for these species were not related to whether Senecio species are naturalised or native. However, the abundances of Patagoniodes farnaria and Tephritis fascigera were significantly higher in wildland habitats than rural or urban habitats, and wildland Senecio were on average 1.4 times more likely to experience >5% folivory than urban conspecifics." }, { "instance_id": "R58002xR57982", "comparison_id": "R58002", "paper_id": "R57982", "text": "Comparison of parasite diversity in native panopeid mud crabs and the invasive Asian shore crab in estuaries of northeast North America Numerous non-indigenous species (NIS) have successfully established in new locales, where they can have large impacts on community and ecosystem structure. A loss of natural enemies, such as parasites, is one mechanism proposed to contribute to that success. While several studies have shown NIS are initially less parasitized than native conspecifics, fewer studies have investigated whether parasite richness changes over time.
Moreover, evaluating the role that parasites have in invaded communities requires not only an understanding of the parasite diversity of NIS but also the species with which they interact; yet parasite diversity in native species may be inadequately quantified. In our study, we examined parasite taxonomic richness, infection prevalence, and infection intensity in the invasive Asian shore crab Hemigrapsus sanguineus De Haan, 1835 and two native mud crabs (Panopeus herbstii Milne-Edwards, 1834 and Eurypanopeus depressus Smith, 1869) in estuarine and coastal communities along the east coast of the USA. We also examined reproductive tissue allocation (i.e., the proportion of gonad weight to total body weight) in all three crabs to explore possible differences in infected versus uninfected crabs. We found three parasite taxa infecting H. sanguineus and four taxa infecting mud crabs, including a rhizocephalan castrator (Loxothylacus panopaei) parasitizing E. depressus. Moreover, we documented a significant negative relationship between parasite escape and time for H. sanguineus, including a new 2015 record of a native microphallid trematode. Altogether, there was no significant difference in taxonomic richness among the crab species. Across parasite taxa, H. sanguineus demonstrated significantly lower infection prevalence compared to P. herbstii; yet a multivariate analysis of taxa-specific prevalence demonstrated no significant differences among crabs. Finally, infected P. herbstii had the highest proportion of gonad weight to total body weight. Our study finds some evidence for lower infection prevalence in the non-native versus the native hosts. However, we also demonstrate that parasite escape can lessen with time. Our work has implications for the understanding of the potential influence parasites may have on the future success of NIS in introduced regions." 
}, { "instance_id": "R58002xR57725", "comparison_id": "R58002", "paper_id": "R57725", "text": "Test of the enemy release hypothesis: The native magpie moth prefers a native fireweed (Senecio pinnatifolius) to its introduced congener (S madagascariensis) The enemy release hypothesis predicts that native herbivores will either prefer or cause more damage to native than introduced plant species. We tested this using preference and performance experiments in the laboratory and surveys of leaf damage caused by the magpie moth Nyctemera amica on a co-occuring native and introduced species of fireweed (Senecio) in eastern Australia. In the laboratory, ovipositing females and feeding larvae preferred the native S. pinnatifolius over the introduced S. madagascariensis. Larvae performed equally well on foliage of S. pinnatifolius and S. madagascariensis: pupal weights did not differ between insects reared on the two species, but growth rates were significantly faster on S. pinnatifolius. In the field, foliage damage was significantly greater on native S. pinnatifolius than introduced S. madagascariensis. These results support the enemy release hypothesis, and suggest that the failure of native consumers to switch to introduced species contributes to their invasive success. Both plant species experienced reduced, rather than increased, levels of herbivory when growing in mixed populations, as opposed to pure stands in the field; thus, there was no evidence that apparent competition occurred." }, { "instance_id": "R58002xR57955", "comparison_id": "R58002", "paper_id": "R57955", "text": "Parasitism by water mites in native and exotic Corixidae: Are mites limiting the invasion of the water boatman Trichocorixa verticalis (Fieber, 1851)? Abstract The water boatman Trichocorixa verticalis verticalis (Fieber 1851) is originally from North America and has been introduced into the southern Iberian Peninsula, where it has become the dominant Corixidae species in saline wetlands. 
The reasons for its success in saline habitats, and low abundance in low salinity habitats, are poorly known. Here we explore the potential role of water mites, which are typical parasites of hemipterans, in the invasion dynamics of T. v. verticalis. We compared infection levels between T. v. verticalis and the natives Sigara lateralis (Leach, 1817) and S. scripta (Rambur, 1840). No mites were found in saline wetlands where T. v. verticalis is highly dominant. Larvae of two mite species were identified infecting corixids in habitats of lower salinity: Hydrachna skorikowi and Eylais infundibulifera. Total parasite prevalence and prevalence of E. infundibulifera were significantly higher in T. v. verticalis compared with S. lateralis and S. scripta. Mean abundance of total infection and of E. infundibulifera and H. skorikowi were also higher in T. v. verticalis. When infected with H. skorikowi, native species harbored only one or two parasite individuals, while the smaller T. v. verticalis carried up to seven mites. When infected with E. infundibulifera, native species harbored only one parasite individual, while T. v. verticalis carried up to six. Mite size did not differ among host species, suggesting that all are suitable for engorgement. Both mite species showed a negative correlation between prevalence and salinity. T. v. verticalis susceptibility to parasitic mites may explain its low abundance in low salinity habitats, and may contribute to the conservation of native corixids. The success of T. v. verticalis in saline wetlands may be partly explained by the absence of parasitic mites, which are less halotolerant." }, { "instance_id": "R58002xR57994", "comparison_id": "R58002", "paper_id": "R57994", "text": "Can enemy release explain the invasion success of the diploid Leucanthemum vulgare in North America? Enemy release is a commonly accepted mechanism to explain plant invasions.
Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum. We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum. Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) were comparable to those on L. ircutianum (26 and 71 %) but higher than those on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation were higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory." }, { "instance_id": "R58002xR57814", "comparison_id": "R58002", "paper_id": "R57814", "text": "Remote analysis of biological invasion and the impact of enemy release Escape from natural enemies is a widely held generalization for the success of exotic plants.
We conducted a large-scale experiment in Hawaii (USA) to quantify impacts of ungulate removal on plant growth and performance, and to test whether elimination of an exotic generalist herbivore facilitated exotic success. Assessment of impacted and control sites before and after ungulate exclusion using airborne imaging spectroscopy and LiDAR, time series satellite observations, and ground-based field studies over nine years indicated that removal of generalist herbivores facilitated exotic success, but the abundance of native species was unchanged. Vegetation cover <1 m in height increased in ungulate-free areas from 48.7% +/- 1.5% to 74.3% +/- 1.8% over 8.4 years, corresponding to an annualized growth rate of lambda = 1.05 +/- 0.01 yr(-1) (median +/- SD). Most of the change was attributable to exotic plant species, which increased from 24.4% +/- 1.4% to 49.1% +/- 2.0%, (lambda = 1.08 +/- 0.01 yr(-1)). Native plants experienced no significant change in cover (23.0% +/- 1.3% to 24.2% +/- 1.8%, lambda = 1.01 +/- 0.01 yr(-1)). Time series of satellite phenology were indistinguishable between the treatment and a 3.0-km2 control site for four years prior to ungulate removal, but they diverged immediately following exclusion of ungulates. Comparison of monthly EVI means before and after ungulate exclusion and between the managed and control areas indicates that EVI strongly increased in the managed area after ungulate exclusion. Field studies and airborne analyses show that the dominant invader was Senecio madagascariensis, an invasive annual forb that increased from < 0.01% to 14.7% fractional cover in ungulate-free areas (lambda = 1.89 +/- 0.34 yr(-1)), but which was nearly absent from the control site. A combination of canopy LAI, water, and fractional cover were expressed in satellite EVI time series and indicate that the invaded region maintained greenness during drought conditions. 
These findings demonstrate that enemy release from generalist herbivores can facilitate exotic success and suggest a plausible mechanism by which invasion occurred. They also show how novel remote-sensing technology can be integrated with conservation and management to help address exotic plant invasions." }, { "instance_id": "R58002xR57748", "comparison_id": "R58002", "paper_id": "R57748", "text": "Cryptic seedling herbivory by nocturnal introduced generalists impacts survival, performance of native and exotic plants Although much of the theory on the success of invasive species has been geared at escape from specialist enemies, the impact of introduced generalist invertebrate herbivores on both native and introduced plant species has been underappreciated. The role of nocturnal invertebrate herbivores in structuring plant communities has been examined extensively in Europe, but less so in North America. Many nocturnal generalists (slugs, snails, and earwigs) have been introduced to North America, and 96% of herbivores found during a night census at our California Central Valley site were introduced generalists. We explored the role of these herbivores in the distribution, survivorship, and growth of 12 native and introduced plant species from six families. We predicted that introduced species sharing an evolutionary history with these generalists might be less vulnerable than native plant species. We quantified plant and herbivore abundances within our heterogeneous site and also established herbivore removal experiments in 160 plots spanning the gamut of microhabitats. As 18 collaborators, we checked 2000 seedling sites every day for three weeks to assess nocturnal seedling predation. Laboratory feeding trials allowed us to quantify the palatability of plant species to the two dominant nocturnal herbivores at the site (slugs and earwigs) and allowed us to account for herbivore microhabitat preferences when analyzing attack rates on seedlings. 
The relationship between local slug abundance and percent cover of five common plant taxa at the field site was significantly negatively associated with the mean palatability of these taxa to slugs in laboratory trials. Moreover, seedling mortality of 12 species in open-field plots was positively correlated with mean palatability of these taxa to both slugs and earwigs in laboratory trials. Counter to expectations, seedlings of native species were neither more vulnerable nor more palatable to nocturnal generalists than those of introduced species. Growth comparison of plants within and outside herbivore exclosures also revealed no differences between native and introduced plant species, despite large impacts of herbivores on growth. Cryptic nocturnal predation on seedlings was common and had large effects on plant establishment at our site. Without intensive monitoring, such predation could easily be misconstrued as poor seedling emergence." }, { "instance_id": "R58002xR57616", "comparison_id": "R58002", "paper_id": "R57616", "text": "Release of invasive plants from fungal and viral pathogens Invasive plant species both threaten native biodiversity and are economically costly, but only a few naturalized species become pests. Here we report broad, quantitative support for two long-standing hypotheses that explain why only some naturalized species have large impacts. The enemy release hypothesis argues that invaders' impacts result from reduced natural enemy attack. The biotic resistance hypothesis argues that interactions with native species, including natural enemies, limit invaders' impacts. We tested these hypotheses for viruses and for rust, smut and powdery mildew fungi that infect 473 plant species naturalized to the United States from Europe. On average, 84% fewer fungi and 24% fewer virus species infect each plant species in its naturalized range than in its native range. 
In addition, invasive plant species that are more completely released from pathogens are more widely reported as harmful invaders of both agricultural and natural ecosystems. Together, these results strongly support the enemy release hypothesis. Among noxious agricultural weeds, species accumulating more pathogens in their naturalized range are less widely noxious, supporting the biotic resistance hypothesis. Our results indicate that invasive plants' impacts may be a function of both release from and accumulation of natural enemies, including pathogens." }, { "instance_id": "R58002xR57763", "comparison_id": "R58002", "paper_id": "R57763", "text": "Entomofauna of the introduced Chinese Tallow Tree Entomofauna in monospecific stands of the introduced Chinese tallow tree (Sapium sebiferum) and native mixed woodlands was sampled in 1982 along the Texas coast and compared to samples of arthropods from an earlier study of native coastal prairie and from a study of arthropods in S. sebiferum in 2004. Species diversity, richness, and abundance were highest in prairie, and were higher in mixed woodland than in S. sebiferum. Nonmetric multidimensional scaling distinguished orders and families of arthropods, and families of herbivores in S. sebiferum from mixed woodland and coastal prairie. Taxonomic similarity between S. sebiferum and mixed woodland was 51%. Fauna from S. sebiferum in 2001 was more similar to mixed woodland than to samples from S. sebiferum collected in 1982. These results indicate that the entomofauna in S. sebiferum originated from mixed prairie and that, with time, these faunas became more similar. Species richness and abundance of herbivores were lower in S. sebiferum, but the proportion of total species in all trophic groups, except herbivores, was higher in S. sebiferum than mixed woodland. Low concentration of tannin in leaves of S. sebiferum did not explain low loss of leaves to herbivores.
Lower abundance of herbivores on introduced species of plants fits the enemy release hypothesis, and low concentration of defense compounds in the face of low number of herbivores fits the evolution of increased competitive ability hypothesis." }, { "instance_id": "R58002xR57594", "comparison_id": "R58002", "paper_id": "R57594", "text": "Effects of fungal pathogens on seeds of native and exotic plants: a test using congeneric pairs Summary 1 It has previously been hypothesized that low rates of attack by natural enemies may contribute to the invasiveness of exotic plants. 2 We tested this hypothesis by investigating the influence of pathogens on survival during a critical life-history stage: the seed bank. We used fungicide treatments to estimate the impacts of soil fungi on buried seeds of a taxonomically broad suite of congeneric natives and exotics, in both upland and wetland meadows. 3 Seeds of both natives and exotics were recovered at lower rates in wetlands than in uplands. Fungicide addition reduced this difference by improving recovery in wetlands, indicating that the lower recovery was largely attributable to a higher level of fungal mortality. This suggests that fungal pathogens may contribute to the exclusion of upland species from wetlands. 4 The effects of fungicide on the recovery of buried seeds did not differ between natives and exotics. Seeds of exotics were recovered at a higher rate than seeds of natives in uplands, but this effect was not attributable to fungal pathogens. 5 Fungal seed pathogens may offer poor prospects for the management of most exotic species. The lack of consistent differences in the responses of natives vs. exotics to fungicide suggests few aliens owe their success to low seed pathogen loads, while impacts of seed-pathogenic biocontrol agents on non-target species would be frequent." 
}, { "instance_id": "R58002xR57847", "comparison_id": "R58002", "paper_id": "R57847", "text": "Coexistence between native and exotic species is facilitated by asymmetries in competitive ability and susceptibility to herbivores Differences between native and exotic species in competitive ability and susceptibility to herbivores are hypothesized to facilitate coexistence. However, little fieldwork has been conducted to determine whether these differences are present in invaded communities. Here, we experimentally examined whether asymmetries exist between native and exotic plants in a community invaded for over 200 years and whether removing competitors or herbivores influences coexistence. We found that natives and exotics exhibit pronounced asymmetries, as exotics are competitively superior to natives, but are more significantly impacted by herbivores. We also found that herbivore removal mediated the outcome of competitive interactions and altered patterns of dominance across our field sites. Collectively, these findings suggest that asymmetric biotic interactions between native and exotic plants can help to facilitate coexistence in invaded communities." }, { "instance_id": "R58002xR57614", "comparison_id": "R58002", "paper_id": "R57614", "text": "Herbivorous arthropod community of an alien weed Solanum carolinense L. Herbivorous arthropod fauna of the horse nettle Solanum carolinense L., an alien solanaceous herb of North American origin, was characterized by surveying arthropod communities in the fields and comparing them with the original community compiled from published data to infer the impact of herbivores on the weed in the introduced region. Field surveys were carried out in the central part of mainland Japan for five years including an intensive regular survey in 1992. Thirty-nine arthropod species were found feeding on the weed. The leaf, stem, flower and fruit of the weed were infested by the herbivores. 
The comparison of characteristics of the arthropod community with those of the community in the USA indicated that more sapsuckers and fewer chewers were on the weed in Japan than in the USA. The community in Japan was composed of high proportions of polyphages and exophages compared to that in the USA. Eighty-seven percent of the species are known to be pests of agricultural crops. Low species diversity of the community was also suggested. The depauperate herbivore community, in terms of feeding habit and niche on S. carolinense, suggested that the weed partly escaped from herbivory in its reproductive parts. The regular population census, however, indicated that a dominant coccinellid beetle, Epilachna vigintioctopunctata, caused noticeable damage on the leaves of the weed." }, { "instance_id": "R58002xR57885", "comparison_id": "R58002", "paper_id": "R57885", "text": "Mortality factors affecting Agrilus auroguttatus Schaeffer (Coleoptera: Buprestidae) eggs in the native and invaded ranges An absence of diverse and coevolved natural enemies may explain the high levels of oak mortality caused by an invasive wood boring beetle, Agrilus auroguttatus Schaeffer (Coleoptera: Buprestidae), in California (CA). A field study was conducted to test the enemy release hypothesis for a single guild of natural enemies by comparing mortality factors affecting A. auroguttatus sentinel eggs deployed in both native (southern Arizona [AZ]) and introduced ranges (southern CA). The percentage of eggs attacked by natural enemies did not differ between sites, which does not support the enemy release hypothesis for this life stage. Although the predominant cause of mortality to sentinel eggs deployed in CA and AZ was due to factors other than natural enemy activity, chewed, missing, and parasitized eggs contributed to as much as 16% and 24% of sentinel egg mortality in CA and AZ, respectively. In addition, the first known egg parasitoid of A.
auroguttatus was collected during this study from a single egg deployed in AZ, and was identified as Trichogramma sp. using molecular techniques. This parasitoid is a generalist, and therefore not suitable for use in a classical biological control program against A. auroguttatus in CA. A continuation of this study is needed across a larger number of field sites and over a longer period of time to optimize the potential detection of host specific egg parasitoids for potential introduction into CA as part of a future classical biological control program, and to better quantify natural enemy impacts on A. auroguttatus eggs." }, { "instance_id": "R58002xR57801", "comparison_id": "R58002", "paper_id": "R57801", "text": "Does release from natural belowground enemies help explain the invasiveness of Lygodium microphyllum? A cross-continental comparison Lygodium microphyllum (Cav.) R. Br., a climbing fern native to the Pantropics of the Old World, is aggressively colonizing natural ecosystems in the Florida Peninsula. Here, we examined soil factors that might affect the fern\u2019s invasiveness, specifically addressing the hypothesis that a release from natural belowground enemies contributes to its vigorous growth in Florida. We also investigated phenotypic differences of sporophytes raised from spores collected in Florida and the fern\u2019s native range in Australia, hypothesizing that the Florida population would possess traits resulting in faster growth and superior competitive ability than the two Australian populations. We tested our hypotheses in parallel greenhouse experiments\u2014one in Australia using soil from the fern\u2019s native habitat, and another in Florida, USA, with soil from a recently colonized ecosystem. 
Fern growth rate and its principal determinants were expressed relative to the optimal growth with a common sand culture in each experiment and compared among treatments in which soil was altered through either sterilization or nutrient amendment, or both. Contrary to the expectation, the optimal growth rates in the sand culture were higher for Australian populations than the Florida population, while the comparatively poor growth of all populations in unaltered soil was stimulated by nutrient amendment and sterilization. The overall effect of sterilization, however, was muted under high-nutrient conditions, suggesting that the effect of soil sterilization may be due to greater nutrient availability in sterilized soils. The only exception was the local population from the site where the soil was collected for the experiment in Australia, which grew significantly faster in sterilized than in non-sterilized soil, and also more rapidly in response to soil insecticide application. Our results indicate that the invasiveness of L. microphyllum in Florida is not a simple phenotypic difference in inherent growth rate as predicted by the evolution of increased competitive ability hypothesis, but it may be mediated in part by release from soil-borne enemies that vary in their effectiveness even within the native geographical range of the fern." }, { "instance_id": "R58002xR57759", "comparison_id": "R58002", "paper_id": "R57759", "text": "Host introduction and parasites: a case study on the parasite community of the peacock grouper Cephalopholis argus (Serranidae) in the Hawaiian Islands The peacock grouper (Cephalopholis argus) was intentionally introduced to the Hawaiian coastal waters 50 years ago to enhance the local fisheries. Following introduction, this species spread rapidly and became extremely abundant. A comparison of the metazoan parasite community of C. 
argus was performed between its native range (Moorea Island, French Polynesia) and its introduced range (Oahu and Big Island, Hawaii). Polynesian groupers were infected with a highly diversified parasite community whereas Hawaiian groupers exhibited a depauperate ensemble of parasite species, C. argus having lost most of the parasites common in their native range. Interestingly, the grouper has not acquired new parasites present in Hawaiian waters. This study provides the first field evidence of significant parasite release in a wild but previously introduced fish in coral reefs and is discussed in relation to the Enemy-Release Hypothesis which has never been assessed in those ecosystems." }, { "instance_id": "R58002xR57840", "comparison_id": "R58002", "paper_id": "R57840", "text": "Parasites and invasions: a biogeographic examination of parasites and hosts in native and introduced ranges Aim To use a comparative approach to understand parasite demographic patterns in native versus introduced populations, evaluating the potential roles of host invasion history and parasite life history. Location North American east and west coasts with a focus on San Francisco Bay (SFB). Methods Species richness and prevalence of trematode parasites were examined in the native and introduced ranges of two gastropod host species, Ilyanassa obsoleta and Littorina saxatilis. We divided the native range into the putative source area for introduction and areas to the north and south; we also sampled the overlapping introduced range in SFB. We dissected 14,781 snails from 103 populations and recorded the prevalence and identity of trematode parasites. We compared trematode species richness and prevalence across the hosts\u2019 introduced and native ranges, and evaluated the influence of host availability on observed patterns. Results Relative to the native range, both I. obsoleta and L. saxatilis have escaped (lost) parasites in SFB, and L. 
saxatilis demonstrated a greater reduction of trematode diversity and infection prevalence than I. obsoleta. This was not due to sampling inequalities between the hosts. Instead, rarefaction curves suggested complete capture of trematode species in native source and SFB subregions, except for L. saxatilis in SFB, where infection was extremely rare. For I. obsoleta, infection prevalence of trematodes using fish definitive hosts was significantly lower in SFB compared to the native range, unlike those using bird hosts. Host availability partly explained the presence of introduced trematodes in SFB. Main conclusions Differential losses of parasite richness and prevalence for the two gastropod host species in their introduced range is probably the result of several mechanistic factors: time since introduction, propagule pressure, vector of introduction, and host availability. Moreover, the recent occurrence of L. saxatilis\u2019 invasion and its active introduction vector suggest that its parasite diversity and distribution will probably increase over time. Our study suggests that host invasion history and parasite life history play key roles in the extent and diversity of trematodes transferred to introduced populations. Our results also provide vital information for understanding community-level influences of parasite introductions, as well as for disease ecology in general." }, { "instance_id": "R58002xR57966", "comparison_id": "R58002", "paper_id": "R57966", "text": "Invasive plant species may serve as a biological corridor for the invertebrate fauna of naturally isolated hosts The negative effects of alien invasive plants on habitats have been well-documented. However, the exchange of organisms between these and native taxa has been far less researched. 
Here we assess the exchanges of arthropod associates of a native (Virgilia divaricata) and an invasive (Acacia mearnsii) legume tree within the ecotone between forest and fynbos vegetation within the Cape Floristic Region of South Africa. Arthropod species richness, abundance, species assemblage composition and measures of beta-diversity were assessed between these two legume species where they grow sympatrically. Except for spiders and ants, arthropod species richness did not differ significantly between the two tree taxa. The overall abundance of arthropods was, however, significantly higher on the native tree species. This pattern was strongly driven by herbivores, as is consistent with predictions of the Enemy Release Hypothesis. When excluding rare taxa, over 75 % of all arthropod species collected in this study were associated with both host trees. However, arthropod community composition differed significantly between the two host plant taxa, largely due to differences between their herbivore communities. Arthropod beta diversity was high on the native host, with arthropod communities on the invasive host being much more homogenous across the sampling range. These results indicate that there are numerous exchanges of arthropods between these native and invasive plants. The invasive plant may provide arthropods with a pathway to other habitats between previously isolated native populations. This will have significant implications for biodiversity conservation at the habitat, species and population level." }, { "instance_id": "R58002xR57641", "comparison_id": "R58002", "paper_id": "R57641", "text": "Enemy release but no evolutionary loss of defence in a plant invasion: an inter-continental reciprocal transplant experiment When invading new regions exotic species may escape from some of their natural enemies. 
Reduced top\u2013down control (\u201cenemy release\u201d) following this escape is often invoked to explain demographic expansion of invasive species and also may alter the selective regime for invasive species: reduced damage can allow resources previously allocated to defence to be reallocated to other functions like growth and reproduction. This reallocation may provide invaders with an \u201cevolution of increased competitive ability\u201d over natives that defend themselves against specialist enemies. We tested for enemy release and the evolution of increased competitive ability in the North American native ragweed (Ambrosia artemisiifolia: Asteraceae), which currently is invading France. We found evidence of enemy release in natural field populations from the invaded and native ranges. Further we carried out a reciprocal transplant experiment, comparing several life history traits of plants from two North American (Ontario and South Carolina) and one French population in four common gardens on both continents. French and Canadian plants had similar flowering phenologies, flowering earlier than plants from further south in the native range. This may suggest that invasive French plants originated from similar latitudes to the Canadian population sampled. As with natural populations, experimental plants suffered far less herbivore damage in France than in Ontario. This difference in herbivory translated into increased growth but not into increased size or vigour. Moreover, we found that native genotypes were as damaged as invading ones in all experimental sites, suggesting no evolutionary loss of defence against herbivores." 
}, { "instance_id": "R58002xR57928", "comparison_id": "R58002", "paper_id": "R57928", "text": "Release from herbivory does not confer invasion success for Eugenia uniflora in Florida One of the most commonly cited hypotheses explaining invasion success is the enemy release hypothesis (ERH), which maintains that populations are regulated by coevolved natural enemies where they are native but are relieved of this pressure in the new range. However, the role of resident enemies in plant invasion remains unresolved. We conducted a field experiment to test predictions of the ERH empirically using a system of native, introduced invasive, and introduced non-invasive Eugenia congeners in south Florida. Such experiments are rarely undertaken but are particularly informative in tests of the ERH, as they simultaneously identify factors allowing invasive species to replace natives and traits determining why most introduced species are unsuccessful invaders. We excluded insect herbivores from seedlings of Eugenia congeners where the native and invasive Eugenia co-occur, and compared how herbivore exclusion affected foliar damage, growth, and survival. We found no evidence to support the ERH in this system, instead finding that the invasive E. uniflora sustained significantly more damage than the native and introduced species. Interestingly, E. uniflora performed better than, or as well as, its congeners in terms of growth and survival, in spite of higher damage incidence. Further, although herbivore exclusion positively influenced Eugenia seedling survival, there were few differences among species and no patterns in regard to invasion status or origin. We conclude that the ability of E. uniflora to outperform its native and introduced non-invasive congeners, and not release from insect herbivores, contributes to its success as an invader in Florida." 
}, { "instance_id": "R58002xR57988", "comparison_id": "R58002", "paper_id": "R57988", "text": "Alien and native plant establishment in grassland communities is more strongly affected by disturbance than above- and below-ground enemies Summary Understanding the factors that drive commonness and rarity of plant species and whether these factors differ for alien and native species are key questions in ecology. If a species is to become common in a community, incoming propagules must first be able to establish. The latter could be determined by competition with resident plants, the impacts of herbivores and soil biota, or a combination of these factors. We aimed to tease apart the roles that these factors play in determining establishment success in grassland communities of 10 alien and 10 native plant species that are either common or rare in Germany, and from four families. In a two-year multisite field experiment, we assessed the establishment success of seeds and seedlings separately, under all factorial combinations of low vs. high disturbance (mowing vs mowing and tilling of the upper soil layer), suppression or not of pathogens (biocide application) and, for seedlings only, reduction or not of herbivores (net-cages). Native species showed greater establishment success than alien species across all treatments, regardless of their commonness. Moreover, establishment success of all species was positively affected by disturbance. Aliens showed lower establishment success in undisturbed sites with biocide application. Release of the undisturbed resident community from pathogens by biocide application might explain this lower establishment success of aliens. These findings were consistent for establishment from either seeds or seedlings, although less significantly so for seedlings, suggesting a more important role of pathogens in very early stages of establishment after germination. Herbivore exclusion did play a limited role in seedling establishment success. 
Synthesis: In conclusion, we found that less disturbed grassland communities exhibited strong biotic resistance to establishment success of species, whether alien or native. However, we also found evidence that alien species may benefit weakly from soilborne enemy release, but that this advantage over native species is lost when the latter are also released by biocide application. Thus, disturbance was the major driver for plant species establishment success and effects of pathogens on alien plant establishment may only play a minor role." }, { "instance_id": "R58002xR57964", "comparison_id": "R58002", "paper_id": "R57964", "text": "Natural selection on plant resistance to herbivores in the native and introduced range Plants introduced into a new range are expected to harbour fewer specialized herbivores and to receive less damage than conspecifics in native ranges. Datura stramonium was introduced in Spain about five centuries ago. Here, we compare damage by herbivores, plant size, and leaf trichomes between plants from non-native and native ranges and perform selection analyses. Non-native plants experienced much less damage, were larger and less pubescent than plants of native populations. While plant size was related to fitness in both ranges, selection to increase resistance was only detected in the native region. We suggest this is a consequence of a release from enemies in this new environment." }, { "instance_id": "R58002xR57667", "comparison_id": "R58002", "paper_id": "R57667", "text": "Recruitment limitation, seedling performance and persistence of exotic tree monocultures Many native plant communities are replaced by exotic monocultures that may be successional stages or persistent community types. 
We surveyed a stand of Sapium sebiferum (Chinese Tallow Tree) that replaced tallgrass prairie in Texas and performed experiments with seeds and seedlings to determine the contributions of recruitment limitation and natural enemy release to allowing such a forest type to persist or to allowing native species to reduce Sapium dominance. The stand was dominated by Sapium, especially for mature trees (>99 %) and annual seed input (97 %), but less so for saplings (80 %). Field-sown Sapium seeds had lower germination and survival rates than Celtis seeds. Together with the extreme dominance of Sapium in seed rain, this suggests that native species are currently recruitment limited in this stand by seed supply but not by germination, early growth or survival. To investigate whether Sapium may benefit from low herbivory or diseases, we transplanted Sapium and Celtis seedlings into the forest and manipulated foliar fungal diseases and insect herbivores with sprays. As predicted, insect herbivores caused greater damage to Celtis seedlings than to Sapium seedlings. However, suppression of insect herbivores caused significantly greater increases in survivorship of Sapium seedlings compared to Celtis seedlings. This suggests that herbivores in the understory of this Sapium forest may significantly reduce Sapium seedling success. Such a pattern of strong herbivore impact on seedlings growing near adult conspecifics was unexpected for this invasive species. However, even with insects and fungi suppressed, Sapium seedling performance was poor in this forest. Our results point towards Sapium as a successional species in a forest that will eventually be dominated by native trees that are currently recruitment limited but outperform Sapium in the understory." 
}, { "instance_id": "R58002xR57971", "comparison_id": "R58002", "paper_id": "R57971", "text": "Insect assemblages associated with the exotic riparian shrub Russian olive (Elaeagnaceae), and co-occurring native shrubs in British Columbia, Canada Abstract Russian olive ( Elaeagnus angustifolia Linnaeus; Elaeagnaceae) is an exotic shrub/tree that has become invasive in many riparian ecosystems throughout semi-arid, western North America, including southern British Columbia, Canada. Despite its prevalence and the potentially dramatic impacts it can have on riparian and aquatic ecosystems, little is known about the insect communities associated with Russian olive within its invaded range. At six sites throughout the Okanagan valley of southern British Columbia, Canada, we compared the diversity of insects associated with Russian olive plants to that of insects associated with two commonly co-occurring native plant species: Woods\u2019 rose ( Rosa woodsii Lindley; Rosaceae) and Saskatoon ( Amelanchier alnifolia (Nuttall) Nuttall ex Roemer; Rosaceae). Total abundance did not differ significantly among plant types. Family richness and Shannon diversity differed significantly between Woods\u2019 rose and Saskatoon, but not between either of these plant types and Russian olive. An abundance of Thripidae (Thysanoptera) on Russian olive and Tingidae (Hemiptera) on Saskatoon contributed to significant compositional differences among plant types. The families Chloropidae (Diptera), Heleomyzidae (Diptera), and Gryllidae (Orthoptera) were uniquely associated with Russian olive, albeit in low abundances. Our study provides valuable and novel information about the diversity of insects associated with an emerging plant invader of western Canada." 
}, { "instance_id": "R58002xR57797", "comparison_id": "R58002", "paper_id": "R57797", "text": "Reduction in post-invasion genetic diversity in Crangonyx pseudogracilis (Amphipoda: Crustacea): a genetic bottleneck or the work of hitchhiking vertically transmitted microparasites? Parasites can strongly influence the success of biological invasions. However, as invading hosts and parasites may be derived from a small subset of genotypes in the native range, it is important to examine the distribution and invasion of parasites in the context of host population genetics. We demonstrate that invasive European populations of the North American Crangonyx pseudogracilis have experienced a reduction in post-invasion genetic diversity. We predict that vertically transmitted parasites may evade the stochastic processes and selective pressures leading to enemy release. As microsporidia may be vertically or horizontally transmitted, we compared the diversity of these microparasites in the native and invasive ranges of the host. In contrast to the reduction in host genetic diversity, we find no evidence for enemy release from microsporidian parasites in the invasive populations. Indeed, a single, vertically transmitted, microsporidian sex ratio distorter dominates the microsporidian parasite assemblage in the invasive range and appears to have invaded with the host. We propose that overproduction of female offspring as a result of parasitic sex ratio distortion may facilitate host invasion success. We also propose that a selective sweep resulting from the increase in infected individuals during the establishment may have contributed to the reduction in genetic diversity in invasive Crangonyx pseudogracilis populations." 
}, { "instance_id": "R58002xR57863", "comparison_id": "R58002", "paper_id": "R57863", "text": "Combined effects of plant competition and insect herbivory hinder invasiveness of an introduced thistle The biotic resistance hypothesis is a dominant paradigm for why some introduced species fail to become invasive in novel environments. However, predictions of this hypothesis require further empirical field tests. Here, we focus on evaluating two biotic factors known to severely limit plants, interspecific competition and insect herbivory, as mechanisms of biotic resistance. We experimentally evaluated the independent and combined effects of three levels of competition by tallgrass prairie vegetation and two levels of herbivory by native insects on seedling regeneration, size, and subsequent flowering of the Eurasian Cirsium vulgare, a known invasive species elsewhere, and compared its responses to those of the ecologically similar and co-occurring native congener C. altissimum. Seedling emergence of C. vulgare was greater than that of C. altissimum, and that emergence was reduced by the highest level of interspecific competition. Insect leaf herbivory was also greater on C. vulgare than on C. altissimum at all levels of competition. Herbivory on seedlings dramatically decreased the proportion of C. vulgare producing flower heads at all competition levels, but especially at the high competition level. Competition and herbivory interacted to significantly decrease plant survival and biomass, especially for C. vulgare. Thus, both competition and herbivory limited regeneration of both thistles, but their effects on seedling emergence, survival, size and subsequent reproduction were greater for C. vulgare than for C. altissimum. These results help explain the unexpectedly low abundance recorded for C. vulgare in western tallgrass prairie, and also provide strong support for the biotic resistance hypothesis." 
}, { "instance_id": "R58002xR57720", "comparison_id": "R58002", "paper_id": "R57720", "text": "Herbivores, but not other insects, are scarce on alien plants Abstract Understanding how the landscape-scale replacement of indigenous plants with alien plants influences ecosystem structure and functioning is critical in a world characterized by increasing biotic homogenization. An important step in this process is to assess the impact on invertebrate communities. Here we analyse insect species richness and abundance in sweep collections from indigenous and alien (Australasian) woody plant species in South Africa's Western Cape. We use phylogenetically relevant comparisons and compare one indigenous with three Australasian alien trees within each of Fabaceae: Mimosoideae, Myrtaceae, and Proteaceae: Grevilleoideae. Although some of the alien species analysed had remarkably high abundances of herbivores, even when intentionally introduced biological control agents are discounted, overall, herbivorous insect assemblages from alien plants were slightly less abundant and less diverse compared with those from indigenous plants \u2013 in accordance with predictions from the enemy release hypothesis. However, there were no clear differences in other insect feeding guilds. We conclude that insect assemblages from alien plants are generally quite diverse, and significant differences between these and assemblages from indigenous plants are only evident for herbivorous insects." }, { "instance_id": "R58002xR57874", "comparison_id": "R58002", "paper_id": "R57874", "text": "Reduced seed predation after invasion supports enemy release in a broad biogeographical survey The Enemy Release (ER) hypothesis predicts an increase in the plant invasive capacity after being released from their associated herbivores or pathogens in their area of origin. 
Despite the large number of studies on biological invasions addressing this hypothesis, tests evaluating changes in herbivory on native and introduced populations and their effects on plant reproductive potential at a biogeographical level are relatively rare. Here, we tested the ER hypothesis on the South African species Senecio pterophorus (Asteraceae), which is native to the Eastern Cape, has expanded into the Western Cape, and was introduced into Australia (>70\u2013100 years ago) and Europe (>30 years ago). Insect seed predation was evaluated to determine whether plants in the introduced areas were released from herbivores compared to plants from the native range. In South Africa, 25 % of the seedheads of sampled plants were damaged. Plants from the introduced populations suffered lower seed predation compared to those from the native populations, as expected under the ER hypothesis, and this release was more pronounced in the region with the most recent introduction (Europe 0.2 % vs. Australia 15 %). The insect communities feeding on S. pterophorus in Australia and Europe differed from those found in South Africa, suggesting that the plants were released from their associated fauna after invasion and later established new associations with local herbivore communities in the novel habitats. Our study is the first to provide strong evidence of enemy release in a biogeographical survey across the entire known distribution of a species." }, { "instance_id": "R58002xR57704", "comparison_id": "R58002", "paper_id": "R57704", "text": "Comparison of damage to native and exotic tallgrass prairie plants by natural enemies We surveyed the prevalence and amount of leaf damage related to herbivory and pathogens on 12 pairs of exotic (invasive and noninvasive) and ecologically similar native plant species in tallgrass prairie to examine whether patterns of damage match predictions from the enemy release hypothesis. 
We also assessed whether natural enemy impacts differed in response to key environmental factors in tallgrass prairie by surveying the prevalence of rust on the dominant C4 grass, Andropogon gerardii, and its congeneric invasive exotic C4 grass, A. bladhii, in response to fire and nitrogen fertilization treatments. Overall, we found that the native species sustained 56.4% more overall leaf damage and 83.6% more herbivore-related leaf damage when compared to the exotic species. Moreover, we found that the invasive exotic species sustained less damage from enemies relative to their corresponding native species than the noninvasive exotic species. Finally, we found that burning and nitrogen fertilization both significantly increased the prevalence of rust fungi in the native grass, while rust fungi rarely occurred on the exotic grass. These results indicate that reduced damage from enemies may in part explain the successful naturalization of exotic species and the spread of invasive exotic species in tallgrass prairie." }, { "instance_id": "R58002xR57889", "comparison_id": "R58002", "paper_id": "R57889", "text": "Biogeographic comparisons of herbivore attack, growth and impact of Japanese knotweed between Japan and France To shed light on the process of how exotic species become invasive, it is necessary to study them both in their native and non-native ranges. Our intent was to measure differences in herbivory, plant growth and the impact on other species in Fallopia japonica in its native and non-native ranges. We performed a cross-range, full descriptive field study in Japan (native range) and France (non-native range). We assessed DNA ploidy levels, the presence of phytophagous enemies, the amount of leaf damage, several growth parameters and the co-occurrence of Fallopia japonica with other plant species of herbaceous communities. 
Invasive Fallopia japonica plants were all octoploid, a ploidy level we did not encounter in the native range, where plants were all tetraploid. Octoploids in France harboured far fewer phytophagous enemies, suffered much lower levels of herbivory, grew larger and had a much stronger impact on plant communities than tetraploid conspecifics in the native range in Japan. Our data confirm that Fallopia japonica performs better (plant vigour and dominance in the herbaceous community) in its non-native than its native range. Because we could not find octoploids in the native range, we cannot separate the effects of differences in ploidy from other biogeographic factors. To go further, common garden experiments would now be needed to disentangle the proper role of each factor, taking into account the ploidy levels of plants in their native and non-native ranges. Synthesis. As the process by which invasive plants successfully invade ecosystems in their non-native range is probably multifactorial in most cases, examining several components (plant growth, herbivory load, impact on recipient systems) of plant invasions through biogeographic comparisons is important. Our study contributes towards filling this gap in the research, and it is hoped that this method will spread in invasion ecology, making such an approach more common." }, { "instance_id": "R58002xR57895", "comparison_id": "R58002", "paper_id": "R57895", "text": "Two tests of enemy release of commonly co-occurring bunchgrasses native in Europe and introduced in the United States The popularly cited enemy release hypothesis, which states that non-native species are released from population control by their enemies, has not been adequately tested in plants. Many empirical studies have compared damage to native versus non-native invaders only in the invaded range, which can lead to erroneous conclusions regarding enemy release. 
Biogeographical studies that have compared natural enemies in native and introduced ranges have typically focused on a small area of the plants\u2019 distributions in each range, only one plant species, and/or only one guild of natural enemies. To test enemy release, we first surveyed both pathogens and herbivores in multiple populations in both the native and naturalized ranges of three commonly co-occurring perennial bunchgrasses introduced to the United States from Europe. We then compared our field results to the number of fungal pathogens that have been documented on each species from published host-pathogen data compilations. Consistent with enemy release, our field survey showed less herbivory and denser populations in the naturalized range, but there was no evidence of release from pathogens. In contrast, the published host-pathogen data compilations produced evidence of enemy release from pathogens. The difference in results produced by the two approaches highlights the need for multiple approaches to testing mechanisms of invasions by introduced species, which can enable well supported theory to inform sound management practices." }, { "instance_id": "R58002xR57712", "comparison_id": "R58002", "paper_id": "R57712", "text": "Role of plant enemies in the forestry of indigenous vs. nonindigenous pines Plantations of rapidly growing trees are becoming increasingly common because the high productivity can enhance local economies, support improvements in educational systems, and generally improve the quality of life in rural communities. Landowners frequently choose to plant nonindigenous species; one rationalization has been that silvicultural productivity is enhanced when trees are separated from their native herbivores and pathogens. 
The expectation of enemy reduction in nonindigenous species has theoretical and empirical support from studies of the enemy release hypothesis (ERH) in the context of invasion ecology, but its relevance to forestry has not been evaluated. We evaluated ERH in the productive forests of Galicia, Spain, where there has been a profusion of pine plantations, some with the indigenous Pinus pinaster, but increasingly with the nonindigenous P. radiata. Here, one of the most important pests of pines is the indigenous bark beetle, Tomicus piniperda. In support of ERH, attacks by T. piniperda were more than twice as great in stands of P. pinaster compared to P. radiata. This differential held across a range of tree ages and beetle abundance. However, this extension of ERH to forestry failed in the broader sense because beetle attacks, although fewer on P. radiata, reduced productivity of P. radiata more than that of P. pinaster (probably because more photosynthetic tissue is lost per beetle attack in P. radiata). Productivity of the nonindigenous pine was further reduced by the pathogen, Sphaeropsis sapinea, which infected up to 28% of P. radiata but was absent in P. pinaster. This was consistent with the forestry axiom (antithetical to ERH) that trees planted \"off-site\" are more susceptible to pathogens. Fungal infections were positively correlated with beetle attacks; apparently T. piniperda facilitates S. sapinea infections by creating wounds and by carrying fungal propagules. A globally important component in the diminution of indigenous flora has been the deliberate large-scale propagation of nonnative trees for silviculture. At least for Pinus forestry in Spain, reduced losses to pests did not rationalize the planting of nonindigenous trees. There would be value in further exploration of relations between invasion ecology and the forestry of nonindigenous trees." 
}, { "instance_id": "R58002xR57757", "comparison_id": "R58002", "paper_id": "R57757", "text": "Release from native herbivores facilitates the persistence of invasive marine algae: a biogeographical comparison of the relative contribution of nutrients and herbivory to invasion success The effect of herbivory and nutrient enrichment on the growth of invasive and native macroalgal species was simultaneously studied in two biogeographic regions: the Caribbean and Hawaii. Herbivores suppressed growth of invasive algae in their native (Caribbean) and invaded range (Hawaii), but despite similar levels of herbivore biomass, the intensity of herbivory was lower in Hawaii. Algal species with a circumtropical distribution did not show a similar effect of herbivores on their growth. Nutrient enrichment did not enhance growth of any algal species in either region. The reduction in herbivore intensity experienced by invasive algae in Hawaii rather than an escape from (native) herbivores provided invasive macroalgae with \u201cenemy release\u201d sensu the Enemy Release Hypothesis (ERH). Since native, Hawaiian herbivores still feed on and even prefer invasive algae over native species, invasion scenarios that involve predation (e.g. the ERH) could be falsely dismissed when invasive species are only studied in their invasive range. We therefore argue that escape from herbivores (i.e. enemy release) can only effectively be determined with additional information on the intensity of predation experienced by an invasive species in its native range." }, { "instance_id": "R58002xR57727", "comparison_id": "R58002", "paper_id": "R57727", "text": "Diversity and abundance of arthropod floral visitor and herbivore assemblages on exotic and native Senecio species The enemy release hypothesis predicts that native herbivores prefer native, rather than exotic plants, giving invaders a competitive advantage. 
In contrast, the biotic resistance hypothesis states that many invaders are prevented from establishing because of competitive interactions, including herbivory, with native fauna and flora. Success or failure of spread and establishment might also be influenced by the presence or absence of mutualists, such as pollinators. Senecio madagascariensis (fireweed), an annual weed from South Africa, inhabits a similar range in Australia to the related native S. pinnatifolius. The aim of this study was to determine, within the context of invasion biology theory, whether the two Senecio species share insect fauna, including floral visitors and herbivores. Surveys were carried out in south-east Queensland on allopatric populations of the two Senecio species, with collected insects identified to morphospecies. Floral visitor assemblages were variable between populations. However, the two Senecio species shared the two most abundant floral visitors, honeybees and hoverflies. Herbivore assemblages, comprising mainly hemipterans of the families Cicadellidae and Miridae, were variable between sites and no patterns could be detected between Senecio species at the morphospecies level. However, when insect assemblages were pooled (i.e. community level analysis), S. pinnatifolius was shown to host a greater total abundance and richness of herbivores. Senecio madagascariensis is unlikely to be constrained by lack of pollinators in its new range and may benefit from lower levels of herbivory compared to its native congener S. pinnatifolius." }, { "instance_id": "R58002xR57710", "comparison_id": "R58002", "paper_id": "R57710", "text": "Metazoan parasites of introduced round and tubenose gobies in the Great Lakes: Support for the \"Enemy Release Hypothesis\" ABSTRACT Recent invasion theory has hypothesized that newly established exotic species may initially be free of their native parasites, augmenting their population success. 
Others have hypothesized that invaders may introduce exotic parasites to native species and/or may become hosts to native parasites in their new habitats. Our study analyzed the parasites of two exotic Eurasian gobies that were detected in the Great Lakes in 1990: the round goby Apollonia melanostoma and the tubenose goby Proterorhinus semilunaris. We compared our results from the central region of their introduced ranges in Lakes Huron, St. Clair, and Erie with other studies in the Great Lakes over the past decade, as well as Eurasian native and nonindigenous habitats. Results showed that goby-specific metazoan parasites were absent in the Great Lakes, and all but one species were represented only as larvae, suggesting that adult parasites presently are poorly-adapted to the new gobies as hosts. Seven parasitic species are known to infest the tubenose goby in the Great Lakes, including our new finding of the acanthocephalan Southwellina hispida, and all are rare. We provide the first findings of four parasite species in the round goby and clarified two others, totaling 22 in the Great Lakes\u2014with most being rare. In contrast, 72 round goby parasites occur in the Black Sea region. Trematodes are the most common parasitic group of the round goby in the Great Lakes, as in their native Black Sea range and Baltic Sea introduction. Holarctic trematode Diplostomum spathaceum larvae, which are one of two widely distributed species shared with Eurasia, were found in round goby eyes from all Great Lakes localities except Lake Huron proper. Our study and others reveal no overall increases in parasitism of the invasive gobies over the past decade after their establishment in the Great Lakes. 
In conclusion, the parasite \u201cload\u201d on the invasive gobies appears relatively low in comparison with their native habitats, lending support to the \u201cenemy release hypothesis.\u201d" }, { "instance_id": "R58002xR57609", "comparison_id": "R58002", "paper_id": "R57609", "text": "Invasiveness of Ammophila arenaria: Release from soil-borne pathogens? The Natural Enemies Hypothesis (i.e., introduced species experience release from their natural enemies) is a common explanation for why invasive species are so successful. We tested this hypothesis for Ammophila arenaria (Poaceae: European beachgrass), an aggressive plant invading the coastal dunes of California, USA, by comparing the demographic effects of belowground pathogens on A. arenaria in its introduced range to those reported in its native range. European research on A. arenaria in its native range has established that soil-borne pathogens, primarily nematodes and fungi, reduce A. arenaria's growth. In a greenhouse experiment designed to parallel European studies, seeds and 2-wk-old seedlings were planted in sterilized and nonsterilized soil collected from the A. arenaria root zone in its introduced range of California. We assessed the effects of pathogens via soil sterilization on three early performance traits: seed germination, seedling survival, and plant growth. We found that seed germinatio..." }, { "instance_id": "R58002xR57783", "comparison_id": "R58002", "paper_id": "R57783", "text": "Phylogenetically structured damage to Asteraceae: susceptibility of native and exotic species to foliar herbivores Invasive plants often lose natural enemies while moving to new regions; however, once established in a new area, these invaders may be susceptible to attack by locally occurring enemies. Such damage may be more likely for exotics with close native relatives in the invaded area, since shifts of enemies should be more likely among closely related hosts. 
In this study, we evaluated whether exotics experience less herbivore damage than natives, and whether phylogenetically novel exotics experience less damage than those that are more closely related to locally occurring family members. Foliar damage was measured on 20 native and 15 exotic Asteraceae that co-occur locally in southern Ontario, Canada. The phylogenetic structure of this damage was quantified using an eigenvector decomposition method, and the relationship between damage and phylogenetic novelty of exotics was evaluated based on phylogenetic distances to other locally occurring Asteraceae. Our results show that 32% of the variation in damage was explained by phylogenetic relationship; similarity in damage tended to be associated with tribes. As predicted, exotics experienced lower damage than native species, even when the dataset was corrected for phylogenetic nonindependence. Contrary to our prediction, however, exotics that were more phylogenetically isolated from locally occurring relatives did not experience less damage. These results suggest that, though exotic Asteraceae may escape many of their natural enemies, this is not in general more likely for species phylogenetically distant from locally occurring native confamilials." }, { "instance_id": "R58002xR57957", "comparison_id": "R58002", "paper_id": "R57957", "text": "Helminth species richness of introduced and native grey mullets (Teleostei: Mugilidae) Quantitative complex analyses of parasite communities of invaders across different native and introduced populations are largely lacking. The present study provides a comparative analysis of species richness of helminth parasites in native and invasive populations of grey mullets. The local species richness differed between regions and host species, but did not differ when compared with invasive and native hosts. 
The size of parasite assemblages of endohelminths was higher in the Mediterranean and Azov-Black Seas, while monogeneans were the most diverse in the Sea of Japan. The helminth diversity was apparently higher in the introduced population of Liza haematocheilus than in its native habitat, but this trend could not be confirmed when the size of geographic range and sampling efforts were controlled for. The parasite species richness at the infracommunity level of the invasive host population is significantly lower compared with that of the native host populations, which lends support to the enemy release hypothesis. A distribution pattern of the infracommunity richness of acquired parasites by the invasive host can be characterized as aggregated, whereas it is random in native host populations. Heterogeneity in the host susceptibility and vulnerability to acquired helminth species was assumed to be a reason for the aggregation of species numbers in the population of the invasive host." }, { "instance_id": "R58002xR57628", "comparison_id": "R58002", "paper_id": "R57628", "text": "Release from native root herbivores and biotic resistance by soil pathogens in a new habitat both affect the alien Ammophila arenaria in South Africa Many native communities contain exotic plants that pose a major threat to indigenous vegetation and ecosystem functioning. Therefore the enemy release hypothesis (ERH) and biotic resistance hypothesis (BRH) were examined in relation to the invasiveness of the introduced dune grass Ammophila arenaria in South Africa. To compare plant\u2013soil feedback from the native habitat in Europe and the new habitat in South Africa, plants were grown in their own soil from both Europe and South Africa, as well as in sterilised and non-sterilised soils from a number of indigenous South African foredune plant species. 
While the soil feedback of most plant species supports the ERH, the feedback from Sporobolus virginicus soil demonstrates that this plant species may contribute to biotic resistance against the introduced A. arenaria, through negative feedback from the soil community. Not only the local plant species diversity, but also the type of plant species present seemed to be important in determining the potential for biotic resistance. As a result, biotic resistance against invasive plant species may depend not only on plant competition, but also on the presence of plant species that are hosts of potential soil pathogens that may negatively affect the invaders. In conclusion, exotic plant species such as A. arenaria in South Africa that do not become highly invasive, may experience the ERH and BRH simultaneously, with the balance between enemy escape versus biotic resistance determining the invasiveness of a species in a new habitat." }, { "instance_id": "R58002xR57605", "comparison_id": "R58002", "paper_id": "R57605", "text": "Why Alien Invaders Succeed: Support for the Escape-from-Enemy Hypothesis Successful biological invaders often exhibit enhanced performance following introduction to a new region. The traditional explanation for this phenomenon is that natural enemies (e.g., competitors, pathogens, and predators) present in the native range are absent from the introduced range. The purpose of this study was to test the escape\u2010from\u2010enemy hypothesis using the perennial plant Silene latifolia as a model system. This European native was introduced to North America in the 1800s and subsequently spread to a large part of the continent. It is now considered a problematic weed of disturbed habitats and agricultural fields in the United States and Canada. Surveys of 86 populations in the United States and Europe revealed greater levels of attack by generalist enemies (aphids, snails, floral herbivores) in Europe compared with North America. 
Two specialists (seed predator, anther smut fungus) that had dramatic effects on plant fitness in Europe were either absent or in very low frequency in North America. Overall, plants were 17 times more likely to be damaged in Europe than in North America. Thus, S. latifolia's successful North American invasion can, at least in part, be explained by escape from specialist enemies and lower levels of damage following introduction." }, { "instance_id": "R58002xR57660", "comparison_id": "R58002", "paper_id": "R57660", "text": "Geographic patterns of herbivory and resource allocation to defense, growth, and reproduction in an invasive biennial, Alliaria petiolata We investigated geographic patterns of herbivory and resource allocation to defense, growth, and reproduction in an invasive biennial, Alliaria petiolata, to test the hypothesis that escape from herbivory in invasive species permits enhanced growth and lower production of defensive chemicals. We quantified herbivore damage, concentrations of sinigrin, and growth and reproduction inside and outside herbivore exclusion treatments, in field populations in the native and invasive ranges. As predicted, unmanipulated plants in the native range (Hungary, Europe) experienced greater herbivore damage than plants in the introduced range (Massachusetts and Connecticut, USA), providing evidence for enemy release, particularly in the first year of growth. Nevertheless, European populations had consistently larger individuals than US populations (rosettes were, for example, eightfold larger) and also had greater reproductive output, but US plants produced larger seeds at a given plant height. Moreover, flowering plants showed significant differences in concentrations of sinigrin in the invasive versus native range, although the direction of the difference was variable, suggesting the influence of environmental effects. Overall, we observed less herbivory, but not increased growth or decreased defense in the invasive range. 
Geographical differences in performance and leaf chemistry appear to be due to variation in the environment, which could have masked evolved differences in allocation." }, { "instance_id": "R58002xR57937", "comparison_id": "R58002", "paper_id": "R57937", "text": "Specialist enemies, generalist weapons and the potential spread of exotic pathogens: malaria parasites in a highly invasive bird Pathogens can influence the success of invaders. The Enemy Release Hypothesis predicts invaders encounter reduced pathogen abundance and diversity, while the Novel Weapons Hypothesis predicts invaders carry novel pathogens that spill over to competitors. We tested these hypotheses using avian malaria (haemosporidian) infections in the invasive myna (Acridotheres tristis), which was introduced to southeastern Australia from India and was secondarily expanded to the eastern Australian coast. Mynas and native Australian birds were screened in the secondary introduction range for haemosporidians (Plasmodium and Haemoproteus spp.) and results were combined with published data from the myna's primary introduction and native ranges. We compared malaria prevalence and diversity across myna populations to test for Enemy Release and used phylogeographic analyses to test for exotic strains acting as Novel Weapons. Introduced mynas carried significantly lower parasite diversity than native mynas and significantly lower Haemoproteus prevalence than native Australian birds. Despite commonly infecting native species that directly co-occur with mynas, Haemoproteus spp. were only recorded in introduced mynas in the primary introduction range and were apparently lost during secondary expansion. In contrast, Plasmodium infections were common in all ranges and prevalence was significantly higher in both introduced and native mynas than in native Australian birds. 
Introduced mynas carried several exotic Plasmodium lineages that were shared with native mynas, some of which also infected native Australian birds and two of which are highly invasive in other bioregions. Our results suggest that introduced mynas may benefit through escape from Haemoproteus spp. while acting as important reservoirs for Plasmodium spp., some of which are known exotic lineages." }, { "instance_id": "R58002xR57806", "comparison_id": "R58002", "paper_id": "R57806", "text": "Species interactions contribute to the success of a global plant invader Biological invasions are ubiquitous ecological phenomena that often impact native ecosystems. Some introduced species have evolved traits that enhance their ability to compete and dominate in recipient communities. However, it is still unknown if introduced species can evolve traits that may enhance their species interactions to fuel invasion success. We tested whether Centaurea solstitialis (yellow starthistle) from introduced populations have greater performance than native counterparts, and whether they generate more beneficial plant-soil interactions. We used common garden and plant-soil feedback experiments with soils and seeds from native Eurasian and introduced Californian populations. We found that performance of Centaurea did not differ among source genotypes, implying that the success of this invasive species is not due to evolutionary changes. However, Centaurea grew significantly larger in soils from introduced regions than from native regions, indicating a reduction in natural enemy pressure from native populations. We conclude that species interactions, not evolution, may contribute to Centaurea\u2019s invasion success in introduced populations." 
}, { "instance_id": "R58002xR57768", "comparison_id": "R58002", "paper_id": "R57768", "text": "Post-dispersal seed mortality of exotic and native species: Effects of fungal pathogens and seed predators Abstract The invasive behaviour of exotic species is assumed to be due to the reduced impact of enemies on their performance, along with other possible mechanisms. I studied whether the seeds of exotics (6 species) are less impacted by seed predators and seed fungal pathogens than the seeds of their related natives (5 species). I also explored whether the co-occurrence of related natives and the time since introduction increased the percentage of lost seeds in exotics. Seeds were either left unprotected during a period of seven months or treated with fungicide, protected by seed predator exclosures or subjected to both treatments. Both treatments improved seed survival rate. Fungicide treatment had more positive effect on seeds of native than of exotic species but the fungicide-by-origin interaction was insignificant. When exotic species only were considered, fungicide had neutral effect on survival of their seeds, irrespective of the co-occurrence of related natives in the vegetation. Time since introduction was shown not to influence the proportion of seeds lost due to fungi or seed predators. Though the results of this study did not support enemy release as a possible mechanism causing the invasiveness of exotic species, it identified fungal pathogens as an enemy group with possibly differential impacts on native and exotic seeds, which thus deserves attention in future studies." }, { "instance_id": "R58002xR57708", "comparison_id": "R58002", "paper_id": "R57708", "text": "Variable effects of large mammal herbivory on three non-native versus three native woody plants Abstract (1) The enemy release hypothesis posits that introduced species leave behind co-evolved pathogens and predators, thereby gaining an advantage over native competitors. 
On the other hand, introduced plants may encounter biotic resistance from local generalist herbivores such as large mammals. (2) We conducted a replicated, manipulative field experiment to compare the effects of large-mammal herbivory on growth and survival of three native and three invasive woody species over 2 years. Non-native Acer platanoides, Frangula alnus P. Mill. (= Rhamnus frangula L.) and Elaeagnus umbellata were each paired with a likely native competitor of similar life form and shade tolerance. Seedlings were planted with and without large-mammal exclosures, in open and understory environments. (3) In the open, E. umbellata grew taller than its paired native only when exposed to herbivory, but F. alnus grew taller than its paired native only within exclosures. The effects of exclosure on growth rate did not differ between A. platanoides and its native congener. In the understory, exposure to browsing reduced height growth rate overall in native species, but not in invasive species. (4) Browsing increased understory mortality only in the native shrub Viburnum dentatum, and did not affect mortality in the open. Within exclosures, there was a general trade-off between open growth and understory survival, but outside of exclosures, E. umbellata exhibited both greater open growth and greater understory survival than its native competitor. (5) Although large-mammal herbivory did not consistently favor non-natives, lack of browsing impact played an important facilitating role for E. umbellata in particular." }, { "instance_id": "R58002xR57818", "comparison_id": "R58002", "paper_id": "R57818", "text": "Spatial distribution and performance of native and invasive Ardisia (Myrsinaceae) species in Puerto Rico: the anatomy of an invasion Comparisons between native and invasive congeners are potentially useful approaches for identifying characteristics that promote invasiveness. 
Those traits for which an invasive exhibits superior ecological performance are likely to contribute to its invasiveness. We tested the hypothesis that invasive tree species have better ecological performance in early life cycle stages than native species in forests where they coexist. We studied locally sympatric populations of the invasive Ardisia elliptica and the native A. obovata (Myrsinaceae) in Puerto Rico. We compared spatial distribution, herbivory and growth in seedlings, seed germination in the field and under controlled conditions, and fruit production. We found the distribution of each species was aggregated in the three categories of size (seedlings, juveniles and adults) and the populations partially overlapped. The invasive species was the most abundant species in every category of size. The two species did not differ in percentage of leaf area consumed and seedlings of both species had the same relative growth rate (RGR) in the forest. However, the invasive species had higher germination success in the field, faster mean germination time in the lab and higher fruit production. It appears that the success of A. elliptica is not through escape from pathogens or herbivores, but by a better performance in fruiting and seed germination in the forest." }, { "instance_id": "R58002xR57853", "comparison_id": "R58002", "paper_id": "R57853", "text": "Invading from the garden? A comparison of leaf herbivory for exotic and native plants in natural and ornamental settings Abstract The enemy release hypothesis proposes that exotic species can become invasive by escaping from predators and parasites in their novel environment. Agrawal et al. (Enemy release? An experiment with congeneric plant pairs and diverse above\u2010 and below\u2010ground enemies. Ecology, 86, 2979\u20132989) proposed that areas or times in which damage to introduced species is low provide opportunities for the invasion of native habitat. 
We tested whether ornamental settings may provide areas with low levels of herbivory for trees and shrubs, potentially facilitating invasion success. First, we compared levels of leaf herbivory among native and exotic species in ornamental and natural settings in Cincinnati, Ohio, United States. In the second study, we compared levels of herbivory for invasive and noninvasive exotic species between natural and ornamental settings. We found lower levels of leaf damage for exotic species than for native species; however, we found no differences in the amount of leaf damage suffered in ornamental or natural settings. Our results do not provide any evidence that ornamental settings afford additional release from herbivory for exotic plant species." }, { "instance_id": "R58002xR57693", "comparison_id": "R58002", "paper_id": "R57693", "text": "Tolerance to herbivory, and not resistance, may explain differential success of invasive, naturalized, and native North American temperate vines Numerous hypotheses suggest that natural enemies can influence the dynamics of biological invasions. Here, we use a group of 12 related native, invasive, and naturalized vines to test the relative importance of resistance and tolerance to herbivory in promoting biological invasions. In a field experiment in Long Island, New York, we excluded mammal and insect herbivores and examined plant growth and foliar damage over two growing seasons. This novel approach allowed us to compare the relative damage from mammal and insect herbivores and whether damage rates were related to invasion. In a greenhouse experiment, we simulated herbivory through clipping and measured growth response. After two seasons of excluding herbivores, there was no difference in relative growth rates among invasive, naturalized, and native woody vines, and all vines were susceptible to damage from mammal and insect herbivores. 
Thus, differential attack by herbivores and plant resistance to herbivory did not explain invasion success of these species. In the field, where damage rates were high, none of the vines were able to fully compensate for damage from mammals. However, in the greenhouse, we found that invasive vines were more tolerant of simulated herbivory than native and naturalized relatives. Our results indicate that invasive vines are not escaping herbivory in the novel range, rather they are persisting despite high rates of herbivore damage in the field. While most studies of invasive plants and natural enemies have focused on resistance, this work suggests that tolerance may also play a large role in facilitating invasions." }, { "instance_id": "R58002xR57791", "comparison_id": "R58002", "paper_id": "R57791", "text": "Virulence of soil-borne pathogens and invasion by Prunus serotina *Globally, exotic invaders threaten biodiversity and ecosystem function. Studies often report that invading plants are less affected by enemies in their invaded vs home ranges, but few studies have investigated the underlying mechanisms. *Here, we investigated the variation in prevalence, species composition and virulence of soil-borne Pythium pathogens associated with the tree Prunus serotina in its native US and non-native European ranges by culturing, DNA sequencing and controlled pathogenicity trials. *Two controlled pathogenicity experiments showed that Pythium pathogens from the native range caused 38-462% more root rot and 80-583% more seedling mortality, and 19-45% less biomass production than Pythium from the non-native range. DNA sequencing indicated that the most virulent Pythium taxa were sampled only from the native range. The greater virulence of Pythium sampled from the native range therefore corresponded to shifts in species composition across ranges rather than variation within a common Pythium species. 
*Prunus serotina still encounters Pythium in its non-native range but encounters less virulent taxa. Elucidating patterns of enemy virulence in native and nonnative ranges adds to our understanding of how invasive plants escape disease. Moreover, this strategy may identify resident enemies in the non-native range that could be used to manage invasive plants." }, { "instance_id": "R58002xR57823", "comparison_id": "R58002", "paper_id": "R57823", "text": "Biotic resistance via granivory: establishment by invasive, naturalized, and native asters reflects generalist preference Escape from specialist natural enemies is frequently invoked to explain exotic plant invasions, but little attention has been paid to how generalist consumers in the recipient range may influence invasion. We examined how seed preferences of the widespread generalist granivore Peromyscus maniculatus related to recruitment of the strongly invasive exotic Centaurea stoebe and several weakly invasive exotics and natives by conducting laboratory feeding trials and seed addition experiments in the field. Laboratory feeding trials showed that P. maniculatus avoided consuming seeds of C. stoebe relative to the 12 other species tested, even when seeds of alternative species were 53-94% smaller than those of C. stoebe. Seed addition experiments conducted in and out of rodent exclosures revealed that weakly invasive exotics experienced relatively greater release from seed predation than C. stoebe, although this was not the case for natives. Seed mass explained 81% of the variation in recruitment associated with rodent exclusion for natives and weak invaders, with larger-seeded species benefiting most from protection from granivores. However, recruitment of C. stoebe was unaffected by rodent exclusion, even though the regression model predicted seeds of correspondingly large mass should experience substantial predation. 
These combined laboratory and field results suggest that generalist granivores can be an important biological filter in plant communities and that species-specific seed attributes that determine seed predation may help to explain variation in native plant recruitment and the success of exotic species invasions." }, { "instance_id": "R58002xR57772", "comparison_id": "R58002", "paper_id": "R57772", "text": "Enemy release and plant invasion: patterns of defensive traits and leaf damage in Hawaii Invasive species may be released from consumption by their native herbivores in novel habitats and thereby experience higher fitness relative to native species. However, few studies have examined release from herbivory as a mechanism of invasion in oceanic island systems, which have experienced particularly high loss of native species due to the invasion of non-native animal and plant species. We surveyed putative defensive traits and leaf damage rates in 19 pairs of taxonomically related invasive and native species in Hawaii, representing a broad taxonomic diversity. Leaf damage by insects and pathogens was monitored in both wet and dry seasons. We found that native species had higher leaf damage rates than invasive species, but only during the dry season. However, damage rates across native and invasive species averaged only 2% of leaf area. Native species generally displayed high levels of structural defense (leaf toughness and leaf thickness, but not leaf trichome density) while native and invasive species displayed similar levels of chemical defenses (total phenolics). A defense index, which integrated all putative defense traits, was significantly higher for native species, suggesting that native species may allocate fewer resources to growth and reproduction than do invasive species. Thus, our data support the idea that invasive species allocate fewer resources to defense traits, allowing them to outperform native species through increased growth and reproduction. 
While strong impacts of herbivores on invasion are not supported by the low damage rates we observed on mature plants, population-level studies that monitor how herbivores influence recruitment, mortality, and competitive outcomes are needed to accurately address how herbivores influence invasion in Hawaii." }, { "instance_id": "R58002xR57729", "comparison_id": "R58002", "paper_id": "R57729", "text": "Effects of non-native plants on the native insect community of Delaware Due to the lack of a co-evolutionary history, the novel defenses presented by introduced plants may be insurmountable to many native insects. Accordingly, non-native plants are expected to support less insect biomass than native plants. Further, native insect specialists may be more affected by introduced plants than native generalist herbivores, resulting in decreased insect diversity on non-native plants due to the loss of specialists. To test these hypotheses, we used a common garden experiment to compare native insect biomass, species richness, and the proportion of native specialist to native generalist insects supported by 45 species of woody plants. Plants were classified into three groupings, with 10 replicates of each species: 15 species native to Delaware (Natives), 15 non-native species that were congeneric with a member of the Native group (Non-native Congeners), and 15 non-native species that did not have a congener present in the United States (Aliens). Native herbivorous insects were sampled in May, June, and July of 2004 and 2005. Overall, insect biomass was greater on Natives than Non-native Congeners and Aliens, but insect biomass varied unpredictably between congeneric pair members. Counter to expectations, Aliens held more insect biomass than did Non-native Congeners. 
There was no difference in species richness or the number of specialist and generalist species collected among the three plant groupings in either year, although our protocol was biased against sampling specialists. If these results generalize to other studies, loss of native insect biomass due to introduced plants may negatively affect higher trophic levels of the ecosystem." }, { "instance_id": "R58002xR57920", "comparison_id": "R58002", "paper_id": "R57920", "text": "Invasive plants escape from suppressive soil biota at regional scales Summary A prominent hypothesis for plant invasions is escape from the inhibitory effects of soil biota. Although the strength of these inhibitory effects, measured as soil feedbacks, has been assessed between natives and exotics in non-native ranges, few studies have compared the strength of plant\u2013soil feedbacks for exotic species in soils from non-native versus native ranges. We examined whether 6 perennial European forb species that are widespread invaders in North American grasslands (Centaurea stoebe, Euphorbia esula, Hypericum perforatum, Linaria vulgaris, Potentilla recta and Leucanthemum vulgare) experienced different suppressive effects of soil biota collected from 21 sites across both ranges. Four of the six species tested exhibited substantially reduced shoot biomass in \u2018live\u2019 versus sterile soil from Europe. In contrast, North American soils produced no significant feedbacks on any of the invasive species tested indicating a broad scale escape from the inhibitory effects of soil biota. Negative feedbacks generated by European soil varied idiosyncratically among sites and species. Since this variation did not correspond with the presence of the target species at field sites, it suggests that negative feedbacks can be generated from soil biota that are widely distributed in native ranges in the absence of density-dependent effects. Synthesis. 
Our results show that for some invasives, native soils have strong suppressive potential, whereas this is not the case in soils from across the introduced range. Differences in regional-scale evolutionary history among plants and soil biota could ultimately help explain why some exotics are able to occur at higher abundance in the introduced versus native range." }, { "instance_id": "R58002xR57702", "comparison_id": "R58002", "paper_id": "R57702", "text": "Resistance of an invasive gastropod to an indigenous trematode parasite in Lake Malawi Successful establishment and spread of biological invaders may be promoted by the absence of population-regulating enemies such as pathogens, parasites or predators. This may come about when introduced taxa are missing enemies from their native habitats, or through immunity to enemies within invaded habitats. Here we provide field evidence that trematode parasites are absent in a highly invasive morph of the gastropod Melanoides tuberculata in Lake Malawi, and that the invasive morph is resistant to indigenous trematodes that castrate and induce gigantism in native M. tuberculata. Since helminth infections can strongly influence host population abundances in other host-parasite systems, this enemy release may have provided an advantage to the invasive morph in terms of reproductive capacity and survivorship." }, { "instance_id": "R58002xR57691", "comparison_id": "R58002", "paper_id": "R57691", "text": "Soil feedback of exotic savanna grass relates to pathogen absence and mycorrhizal selectivity Enemy release of exotic plants from soil pathogens has been tested by examining plant-soil feedback effects in repetitive growth cycles. However, positive soil feedback may also be due to enhanced benefit from the local arbuscular mycorrhizal fungi (AMF). Few studies actually have tested pathogen effects, and none of them did so in arid savannas. 
In the Kalahari savanna in Botswana, we compared the soil feedback of the exotic grass Cenchrus biflorus with that of two dominant native grasses, Eragrostis lehmanniana and Aristida meridionalis. The exotic grass had neutral to positive soil feedback, whereas both native grasses showed neutral to negative feedback effects. Isolation and testing of root-inhabiting fungi of E. lehmanniana yielded two host-specific pathogens that did not influence the exotic C. biflorus or the other native grass, A. meridionalis. None of the grasses was affected by the fungi that were isolated from the roots of the exotic C. biflorus. We isolated and compared the AMF community of the native and exotic grasses by polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE), targeting AMF 18S rRNA. We used roots from monospecific field stands and from plants grown in pots with mixtures of soils from the monospecific field stands. Three-quarters of the root samples of the exotic grass had two nearly identical sequences, showing 99% similarity with Glomus versiforme. The two native grasses were also associated with distinct bands, but each of these bands occurred in only a fraction of the root samples. The native grasses contained a higher diversity of AMF bands than the exotic grass. Canonical correspondence analyses of the AMF band patterns revealed almost as much difference between the native and exotic grasses as between the native grasses. In conclusion, our results support the hypothesis that release from soil-borne enemies may facilitate local abundance of exotic plants, and we provide the first evidence that these processes may occur in arid savanna ecosystems. Pathogenicity tests implicated the involvement of soil pathogens in the soil feedback responses, and further studies should reveal the functional consequences of the observed high infection with a low diversity of AMF in the roots of exotic plants." 
}, { "instance_id": "R58002xR57656", "comparison_id": "R58002", "paper_id": "R57656", "text": "Prevalence and evolutionary relationships of haematozoan parasites in native versus introduced populations of common myna Acridotheres tristis The success of introduced species is frequently explained by their escape from natural enemies in the introduced region. We tested the enemy release hypothesis with respect to two well studied blood parasite genera ( Plasmodium and Haemoproteus ) in native and six introduced populations of the common myna Acridotheres tristis . Not all comparisons of introduced populations to the native population were consistent with expectations of the enemy release hypothesis. Native populations show greater overall parasite prevalence than introduced populations, but the lower prevalence in introduced populations is driven by low prevalence in two populations on oceanic islands (Fiji and Hawaii). When these are excluded, prevalence does not differ significantly. We found a similar number of parasite lineages in native populations compared to all introduced populations. Although there is some evidence that common mynas may have carried parasite lineages from native to introduced locations, and also that introduced populations may have become infected with novel parasite lineages, it may be difficult to differentiate between parasites that are native and introduced, because malarial parasite lineages often do not show regional or host specificity." }, { "instance_id": "R58002xR57918", "comparison_id": "R58002", "paper_id": "R57918", "text": "Determining the origin of invasions and demonstrating a lack of enemy release from microsporidian pathogens in common wasps (Vespula vulgaris) Aim Understanding the role of enemy release in biological invasions requires an assessment of the invader\u2019s home range, the number of invasion events and enemy prevalence. The common wasp (Vespula vulgaris) is a widespread invader. 
We sought to determine the Eurasian origin of this wasp and examined world-wide populations for microsporidian pathogen infections to investigate enemy release. Location Argentina, Eurasia, New Zealand. Methods A haplotype network and phylogenetic tree were constructed from combined wasp COI and cytb mitochondrial markers. A morphometric study using canonical discriminant analysis was conducted on wing venation patterns. Microsporidian pathogen prevalence was also examined using small subunit rRNA microsporidia-specific primers. Results Our spatially structured haplotype network from the native range suggested a longitudinal cline of wasp haplotypes along an east to west gradient. Six haplotypes were detected from New Zealand, and two from Argentina. The populations from the introduced range were genetically similar to those from western Europe, the United Kingdom and Ireland. The morphometric analysis showed significant morphological variation between countries and supported the Western European origin for New Zealand populations, although not for Argentine samples. Microsporidian infection rates were highest in New Zealand samples (54%), but no significant differences in infection rates were observed between the invaded and native range. Nosema species included matches to N. apis (a pathogen from honey bees) and N. bombi (from bumble bees). Main conclusions Multiple introductions of the common wasp have occurred in the invaded range. A high microsporidian infection rate within the native range, combined with multiple introductions and a reservoir of pathogens in other social insects such as bees, likely contributes to the high microsporidian infection rates in the invaded range. Enemy release is likely to be more frequent when pathogens are rare in the home range, or are host specific and rare in reservoir populations of the introduced range." 
}, { "instance_id": "R58002xR57940", "comparison_id": "R58002", "paper_id": "R57940", "text": "Non-native plant invader renders suitable habitat unsuitable Coevolution between insect herbivores and their target plants often prompts an evolutionary arms race so that insects develop methods to overcome plant defenses. In doing so, some insects become specialized on host plants so that suitable habitat for these herbivorous insects may be demarcated by coevolved plant species. Non-native plant species may render suitable habitat unsuitable by replacing palatable native species with novel defenses that deter native herbivores. We investigated these dynamics in an urban forest heavily invaded by a non-native shrub (Rhamnus cathartica). We experimentally introduced native trees into a non-native dominated urban forest and found significantly higher Lepidoptera larvae abundance and diversity, and overall herbivory, on the native than non-native species. Hence, we rendered unsuitable habitat suitable by providing a key biotic niche requirement: palatable plants. These results suggest that non-native plants may simplify ecological systems by eliminating critical niche requirements for native species." }, { "instance_id": "R58002xR53306", "comparison_id": "R58002", "paper_id": "R53306", "text": "Phylogenetic structure predicts capitular damage to Asteraceae better than origin or phylogenetic distance to natives Exotic species more closely related to native species may be more susceptible to attack by native natural enemies, if host use is phylogenetically conserved. Where this is the case, the use of phylogenies that include co-occurring native and exotic species may help to explain interspecific variation in damage. In this study, we measured damage caused by pre-dispersal seed predators to common native and exotic plants in the family Asteraceae. Damage was then mapped onto a community phylogeny of this family. 
We tested the predictions that damage is phylogenetically structured, that exotic plants experience lower damage than native species after controlling for this structure, and that phylogenetically novel exotic species would experience lower damage. Consistent with our first prediction, 63% of the variability in damage was phylogenetically structured. When this structure was accounted for, exotic plants experienced significantly lower damage than native plants, but species origin only accounted for 3% of the variability of capitular damage. Finally, there was no support for the phylogenetic novelty prediction. These results suggest that interactions between exotic plants and their seed predators may be strongly influenced by their phylogenetic position, but not by their relationship to locally co-occurring native species. In addition, the influence of a species\u2019 origin on the damage it experiences often may be small relative to phylogenetically conserved traits." }, { "instance_id": "R58002xR57935", "comparison_id": "R58002", "paper_id": "R57935", "text": "Enemy release and genetic founder effects in invasive killer shrimp populations of Great Britain The predatory \u201ckiller shrimp\u201d Dikerogammarus villosus invaded Britain from mainland Europe in 2010. Originating in the Ponto-Caspian region, this invader has caused significant degradation of European freshwater ecosystems by predating and competitively excluding native invertebrate species. In contrast to continental Europe, in which invasions occurred through the migration of large numbers of individuals along rivers and canals, the invasion of Great Britain must have involved long distance dispersal across the sea. This makes the loss of genetic diversity and of debilitating parasites more likely. Analysis of nuclear microsatellite loci and mitochondrial DNA sequences of D. 
villosus samples from the four known populations in Britain reveals loss of rare alleles, in comparison to reference populations from the west coast of continental Europe. Screening of the British D. villosus populations by PCR detected no microsporidian parasites, in contrast with continental populations of D. villosus and native amphipod populations, most of which are infected with microsporidia. These findings suggest that the initial colonisation of Great Britain and subsequent long distance dispersal within Britain were associated with genetic founder effects and enemy release due to loss of parasites. Such effects are also likely to occur during future long-distance dispersal events of D. villosus to Ireland or North America." }, { "instance_id": "R58002xR57658", "comparison_id": "R58002", "paper_id": "R57658", "text": "Native parasites adopt introduced bivalves of the North Sea Introduced species may have a competitive advantage over native species due to a lack of predators or pathogens. In the North Sea region, it has been assumed that no metazoan parasites are to be found in marine introduced species. In an attempt to test this assumption, we found native parasites in the introduced bivalves Crassostrea gigas and Ensis americanus with a prevalence of 35% and 80%, respectively, dominated by the trematode Renicola roscovita. When comparing these introduced species with native bivalves from the same localities, Mytilus edulis and Cerastoderma edule, trematode intensity was always lower in the introduced species. 
These findings have three major implications: (1) introduced bivalves are not free of detrimental parasites which raises the question whether introduced species have an advantage over native species after invasion, (2) introduced bivalves may divert parasite burdens providing a relief for native species and (3) they may affect parasite populations by influencing the fate of infectious stages, ending either in dead end hosts, not being consumed by potential final hosts or by adding new hosts. Future studies should consider these implications to arrive at a better understanding of the interplay between native parasites and introduced hosts." }, { "instance_id": "R58002xR57833", "comparison_id": "R58002", "paper_id": "R57833", "text": "Exploring the potential for climatic factors, herbivory, and co-occurring vegetation to shape performance in native and introduced populations of Verbascum thapsus Biogeographic data describing performance differences in native and introduced populations of invasive species are increasingly coming to light, revealing that introduced populations often perform better than their native conspecifics. However, this pattern is not universal, nor is it well studied in species that fall on the more \u201cbenign\u201d end of the invasion spectrum. Furthermore, performance data are infrequently linked with variation in key environmental factors experienced by populations in each range, making it difficult to assess which factors are typically associated with shifts in performance. Here we assessed performance in native and introduced populations of Verbascum thapsus (common mullein), an herbaceous biennial that was initially introduced to the eastern US from Europe, but which has subsequently expanded its range into semi-arid mountainous regions of the western US, where it appears to be more problematic. 
Indeed, we found that introduced populations were larger than native populations, with over half of them comprising more than 500 individuals, a size seldom achieved in the native range. We further explored the role that abiotic factors (latitude, elevation, and precipitation) might serve in shaping performance in European and western US populations, and quantified variation in two biotic factors relevant to invasion: herbivory, and the potential for competition from co-occurring vegetation (as well as its inverse, the availability of bare ground). When the influence of abiotic factors was not considered, introduced mullein performed better than native mullein in terms of plant density and plant size (i.e., number of leaves and area covered by the basal rosette). When the influence of abiotic factors was statistically taken into account, the difference in the density of native and introduced populations remained strong, while the difference in number of leaves was reduced, though it remained significant. In contrast, controlling for abiotic factors reversed the pattern for plant area such that plants in introduced populations performed less well than natives. These results suggest that the difference in climate experienced by native and introduced populations is an important driver of mullein performance only for plant area. Thus, increased performance in western US populations likely hinges in part on shifts in biotic factors. Indeed, we found a reduction in the prevalence of several herbivore guilds on introduced relative to native mullein, accompanied by a significant decrease in chewing damage in introduced populations. We also found differences in the potential for competition: cover of vegetation is significantly higher in native mullein populations than in introduced populations, and increasing cover of vegetation is associated with declining performance (i.e., density) of native populations but not introduced populations. 
In sum, the introduced populations performed better than the native populations in several respects; thus, although mullein is considered a relatively \u2018benign\u2019 introduced species, it has the potential to differentially impact resident communities in its native and introduced ranges. Additionally, despite the disparity in abiotic conditions experienced by native and introduced populations, these factors do not appear to consistently drive differences in performance. Instead, evidence suggests that enemy escape and shifts in the competitive regime may facilitate mullein invasion. We use our data to propose hypotheses to be tested experimentally." }, { "instance_id": "R58002xR57715", "comparison_id": "R58002", "paper_id": "R57715", "text": "Parasite loss and introduced species: a comparison of the parasites of the Puerto Rican tree frog, (Eleutherodactylus coqui), in its native and introduced ranges The Puerto Rican frog, Eleutherodactylus coqui has invaded Hawaii and reached densities far exceeding those in their native range. One possible explanation for the success of E. coqui in its introduced range is that it lost its co-evolved parasites in the process of the invasion. We compared the parasites of E. coqui in its native versus introduced range. We collected parasite data on 160 individual coqui frogs collected during January-April 2006 from eight populations in Puerto Rico and Hawaii. Puerto Rican coqui frogs had higher species richness of parasites than Hawaiian coqui frogs. Parasite prevalence and intensity were significantly higher in Hawaii, however this was likely a product of the life history of the dominant parasite and its minimal harm to the host. This suggests that the scarcity of parasites may be a factor contributing to the success of Eleutherodactylus coqui in Hawaii." 
}, { "instance_id": "R58002xR54616", "comparison_id": "R58002", "paper_id": "R54616", "text": "Impact of Acroptilon repens on co-occurring native plants is greater in the invader's non-native range Concern over exotic invasions is fueled in part by the observation that some exotic species appear to be more abundant and have stronger impacts on other species in their non-native ranges than in their native ranges. Past studies have addressed biogeographic differences in abundance, productivity, biomass, density and demography between plants in their native and non-native ranges, but despite widespread observations of biogeographic differences in impact these have been virtually untested. In a comparison of three sites in each range, we found that the abundance of Acroptilon repens in North America where it is invasive was almost twice that in Uzbekistan where it is native. However, this difference in abundance translated to far greater differences between regions in the apparent impacts of Acroptilon on native species. The biomass of native species in Acroptilon stands was 25\u201330 times lower in the non-native range than in the native range. Experimental addition of native species as seeds significantly increased the abundance of natives at one North American site, but the proportion of native biomass even with seed addition remained over an order of magnitude lower than that of native species in Acroptilon stands in Uzbekistan. Experimental disturbance had no long-term effect on Acroptilon abundance or impact in North America, but Acroptilon increased slightly in abundance after disturbance in Uzbekistan. 
In a long-term experiment in Uzbekistan, suppression of invertebrate herbivores and pathogens did not result in either consistent increases in Acroptilon biomass across years or declines in the biomass of other native species, as one might expect if the low impact of Acroptilon in the native range was due to its strong top\u2013down regulation by natural enemies. Our local scale measurements do not represent all patterns of Acroptilon distribution and abundance that might exist at the scale of landscapes in either range, but they do suggest the possibility of fundamental biogeographic differences in the way a highly successful invader interacts with other species, differences that are not simply related to greater biomass or reduced top\u2013down regulation of the invader in its non-native range." }, { "instance_id": "R58002xR57607", "comparison_id": "R58002", "paper_id": "R57607", "text": "Herbivores and the success of exotic plants: a phylogenetically controlled experiment In a field experiment with 30 locally occurring old-field plant species grown in a common garden, we found that non-native plants suffer levels of attack (leaf herbivory) equal to or greater than levels suffered by congeneric native plants. This phylogenetically controlled analysis is in striking contrast to the recent findings from surveys of exotic organisms, and suggests that even if enemy release does accompany the invasion process, this may not be an important mechanism of invasion, particularly for plants with close relatives in the recipient flora." 
}, { "instance_id": "R58002xR57635", "comparison_id": "R58002", "paper_id": "R57635", "text": "Invasive exotic plants suffer less herbivory than non-invasive exotic plants We surveyed naturally occurring leaf herbivory in nine invasive and nine non-invasive exotic plant species sampled in natural areas in Ontario, New York and Massachusetts, and found that invasive plants experienced, on average, 96% less leaf damage than non-invasive species. Invasive plants were also more taxonomically isolated than non-invasive plants, belonging to families with 75% fewer native North American genera. However, the relationship between taxonomic isolation at the family level and herbivory was weak. We suggest that invasive plants may possess novel phytochemicals with anti-herbivore properties in addition to allelopathic and anti-microbial characteristics. Herbivory could be employed as an easily measured predictor of the likelihood that recently introduced exotic plants may become invasive." }, { "instance_id": "R58002xR57914", "comparison_id": "R58002", "paper_id": "R57914", "text": "High water-use efficiency and growth contribute to success of non-native Erodium cicutarium in a Sonoran Desert winter annual community Erodium cicutarium, an invasive plant, has recently increased in abundance in the Sonoran Desert. We tested hypotheses for its success, and found no evidence for a release from natural enemies. Instead, E. cicutarium was able to achieve higher growth rates while controlling leaf-level water loss, allowing it to out-compete natives." }, { "instance_id": "R58002xR57868", "comparison_id": "R58002", "paper_id": "R57868", "text": "Parasites of the fish Cichla piquiti (Cichlidae) in native and invaded Brazilian basins: release not from the enemy, but from its effects The enemy release hypothesis is frequently used to explain the success of invaders, postulating that introduced species have escaped from their native enemies, including parasites. 
Here, we tested this hypothesis for the tucunar\u00e9 (Cichla piquiti), a predatory cichlid, and its endoparasites. First, the parasites and their influence on the condition of the hosts in the native environment, the Tocantins River (TO), were compared to an environment where the fish was introduced, the Paran\u00e1 River (PR). Then, comparisons of the abundances of Diplostomidae eye flukes and Contracaecum sp. larval nematodes were made between the introduced tucunar\u00e9 and two predators native to the PR, Hoplias malabaricus and Raphiodon vulpinus. Nine species of endoparasites were recorded in total, five of which occurring in both localities. Total species richness did not differ between localities, and fish condition was negatively affected by the cestodes Sciadocephalus megalodiscus only in the TO. In the PR, abundance of Contracaecum sp. did not differ between natives and invaders; however, eye flukes were more abundant in the native fish H. malabaricus, which may represent an advantage to the invader if they were competing for prey. These results did not support the idea that the escape from parasites favoured the establishment of C. piquiti in the PR. Instead, the escape from the parasites' effects seems a better explanation, and further studies examining effects on host physiology and/or fitness in the native and introduced ranges are needed." }, { "instance_id": "R6755xR6535", "comparison_id": "R6755", "paper_id": "R6535", "text": "LinkDaViz \u2013 Automatic Binding of Linked Data to Visualizations As the Web of Data is growing steadily, the demand for user-friendly means for exploring, analyzing and visualizing Linked Data is also increasing. The key challenge for visualizing Linked Data consists in providing a clear overview of the data and supporting non-technical users in finding suitable visualizations while hiding technical details of Linked Data and visualization configuration. 
In order to accomplish this, we propose a largely automatic workflow which guides users through the process of creating visualizations by automatically categorizing and binding data to visualization parameters. The approach is based on a heuristic analysis of the structure of the input data and a comprehensive visualization model facilitating the automatic binding between data and visualization parameters. The resulting assignments are ranked and presented to the user. With LinkDaViz we provide a web-based implementation of the approach and demonstrate the feasibility by an extended user and performance evaluation." }, { "instance_id": "R6755xR6527", "comparison_id": "R6755", "paper_id": "R6527", "text": "rdf:SynopsViz \u2013 A Framework for Hierarchical Linked Data Visual Exploration and Analysis The purpose of data visualization is to offer intuitive ways for information perception and manipulation, especially for non-expert users. The Web of Data has realized the availability of a huge amount of datasets. However, the volume and heterogeneity of available information make it difficult for humans to manually explore and analyse large datasets. In this paper, we present rdf:SynopsViz, a tool for hierarchical charting and visual exploration of Linked Open Data (LOD). Hierarchical LOD exploration is based on the creation of multiple levels of hierarchically related groups of resources based on the values of one or more properties. The adopted hierarchical model provides effective information abstraction and summarization. Also, it allows efficient, on-the-fly statistical computations, using aggregations over the hierarchy levels." }, { "instance_id": "R6755xR6515", "comparison_id": "R6755", "paper_id": "R6515", "text": "Formal Linked Data Visualization Model Recently, the amount of semantic data available in the Web has increased dramatically. 
The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows one to dynamically connect data with visualizations. We report on our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyses on Linked Data." }, { "instance_id": "R6755xR6539", "comparison_id": "R6755", "paper_id": "R6539", "text": "Visual analysis of statistical data on maps using linked open data When analyzing statistical data, one of the most basic and at the same time widely used techniques is analyzing correlations. As shown in previous works, Linked Open Data is a rich resource for discovering such correlations. In this demo, we show how statistical analysis and visualization on maps can be combined to facilitate a deeper understanding of the statistical findings." }, { "instance_id": "R6755xR6507", "comparison_id": "R6755", "paper_id": "R6507", "text": "LODWheel \u2013 JavaScript-based Visualization of RDF Data Visualizing Resource Description Framework (RDF) data to support decision-making processes is an important and challenging aspect of consuming Linked Data. With the recent development of JavaScript libraries for data visualization, new opportunities for Web-based visualization of Linked Data arise. This paper presents an extensive evaluation of JavaScript-based libraries for visualizing RDF data. A set of criteria has been devised for the evaluation and 15 major JavaScript libraries have been analyzed against the criteria. 
The two JavaScript libraries with the highest score in the evaluation acted as the basis for developing LODWheel (Linked Open Data Wheel) - a prototype for visualizing Linked Open Data in graphs and charts - introduced in this paper. This way of visualizing RDF data leads to a great deal of challenges related to data-categorization and connecting data resources together in new ways, which are discussed in this paper." }, { "instance_id": "R6755xR6519", "comparison_id": "R6755", "paper_id": "R6519", "text": "Payola: Collaborative Linked Data Analysis and Visualization Framework Payola is a framework for Linked Data analysis and visualization. The goal of the project is to provide end users with a tool enabling them to analyze Linked Data in a user-friendly way and without knowledge of SPARQL query language. This goal can be achieved by populating the framework with variety of domain-specific analysis and visualization plugins. The plugins can be shared and reused among the users as well as the created analyses. The analyses can be executed using the tool and the results can be visualized using a variety of visualization plugins. The visualizations can be further customized according to ontologies used in the resulting data. The framework is highly extensible and uses modern technologies such as HTML5 and Scala. In this paper we show two use cases, one general and one from the domain of public procurement." }, { "instance_id": "R6755xR6531", "comparison_id": "R6755", "paper_id": "R6531", "text": "Using Semantics for Interactive Visual Analysis of Linked Open Data Providing easy to use methods for visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. 
This paper provides a demonstration of an interactive, Web-based visualisation tool, the \"Vis Wizard\", which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using brushing and linking methods." }, { "instance_id": "R6756xR6441", "comparison_id": "R6756", "paper_id": "R6441", "text": "Interactive Relationship Discovery via the Semantic Web This paper presents an approach for the interactive discovery of relationships between selected elements via the Semantic Web. It emphasizes the human aspect of relationship discovery by offering sophisticated interaction support. Selected elements are first semi-automatically mapped to unique objects of Semantic Web datasets. These datasets are then crawled for relationships which are presented in detail and overview. Interactive features and visual clues allow for a sophisticated exploration of the found relationships. The general process is described and the RelFinder tool as a concrete implementation and proof-of-concept is presented and evaluated in a user study. The application potentials are illustrated by a scenario that uses the RelFinder and DBpedia to assist a business analyst in decision-making. Main contributions compared to previous and related work are data aggregations on several dimensions, a graph visualization that displays and connects relationships also between more than two given objects, and an advanced implementation that is highly configurable and applicable to arbitrary RDF datasets." }, { "instance_id": "R6756xR6421", "comparison_id": "R6756", "paper_id": "R6421", "text": "Browsing Linked Data with Fenfire A wealth of information has recently become available as browsable RDF data on the Web, but the selection of client applications to interact with this Linked Data remains limited. 
We show how to browse Linked Data with Fenfire, a Free and Open Source Software RDF browser and editor that employs a graph view and focuses on an engaging and interactive browsing experience. This sets Fenfire apart from previous table- and outline-based Linked Data browsers." }, { "instance_id": "R6756xR6429", "comparison_id": "R6756", "paper_id": "R6429", "text": "Using Clusters in RDF Visualization Clustered graph visualization techniques are an easy to understand way of hiding complex parts of a visualized graph when they are not needed by the user. When visualizing RDF, there are several situations where such clusters are defined in a very natural way. Using this techniques, we can give the user optional access to some detailed information without unnecessarily occupying space in the basic view of the data. This paper describes algorithms for clustered visualization used in the Trisolda RDF visualizer. Most notable is the newly added clustered navigation technique." }, { "instance_id": "R6756xR6413", "comparison_id": "R6756", "paper_id": "R6413", "text": "NodeTrix: a Hybrid Visualization of Social Networks The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. 
These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results." }, { "instance_id": "R6756xR6473", "comparison_id": "R6756", "paper_id": "R6473", "text": "User-Oriented Visualization of Ontologies Ontologies become increasingly important as a means to structure and organize information. This requires methods and tools that enable not only ontology experts but also other user groups to work with ontologies and related data. We have developed VOWL, a comprehensive and well-specified visual language for the user-oriented representation of ontologies, and conducted a comparative study on an initial version of VOWL. Based upon results from that study, as well as an extensive review of other ontology visualizations, we have reworked many parts of VOWL. In this paper, we present the new version VOWL 2 and describe how the initial definitions were used to systematically redefine the visual notation. Besides the novelties of the visual language, which is based on a well-defined set of graphical primitives and an abstract color scheme, we briefly describe two implementations of VOWL 2. To gather some insight into the user experience with the new version of VOWL, we have conducted a qualitative user study. We report on the study and its results, which confirmed that not only the general ideas of VOWL but also most of our enhancements for VOWL 2 can be well understood by casual ontology users." }, { "instance_id": "R6756xR6453", "comparison_id": "R6756", "paper_id": "R6453", "text": "LODWheel - JavaScript-based Visualization of RDF Data. 
Visualizing Resource Description Framework (RDF) data to support decision-making processes is an important and challenging aspect of consuming Linked Data. With the recent development of JavaScript libraries for data visualization, new opportunities for Web-based visualization of Linked Data arise. This paper presents an extensive evaluation of JavaScript-based libraries for visualizing RDF data. A set of criteria has been devised for the evaluation and 15 major JavaScript libraries have been analyzed against the criteria. The two JavaScript libraries with the highest score in the evaluation acted as the basis for developing LODWheel (Linked Open Data Wheel) - a prototype for visualizing Linked Open Data in graphs and charts - introduced in this paper. This way of visualizing RDF data leads to a great deal of challenges related to data-categorization and connecting data resources together in new ways, which are discussed in this paper." }, { "instance_id": "R6756xR6409", "comparison_id": "R6756", "paper_id": "R6409", "text": "A tool for visualization and editing of OWL ontologies In an effort to optimize visualization and editing of OWL ontologies we have developed GrOWL: a browser and visual editor for OWL that accurately visualizes the underlying DL semantics of OWL ontologies while avoiding the difficulties of the verbose OWL syntax. In this paper, we discuss GrOWL visualization model and the essential visualization techniques implemented in GrOWL." }, { "instance_id": "R6756xR6477", "comparison_id": "R6756", "paper_id": "R6477", "text": "graphVizdb: A Scalable Platform for Interactive Large Graph Visualization. We present a novel platform for the interactive visualization of very large graphs. The platform enables the user to interact with the visualized graph in a way that is very similar to the exploration of maps at multiple levels. 
Our approach involves an offline preprocessing phase that builds the layout of the graph by assigning coordinates to its nodes with respect to a Euclidean plane. The respective points are indexed with a spatial data structure, i.e., an R-tree, and stored in a database. Multiple abstraction layers of the graph based on various criteria are also created offline, and they are indexed similarly so that the user can explore the dataset at different levels of granularity, depending on her particular needs. Then, our system translates user operations into simple and very efficient spatial operations (i.e., window queries) in the backend. This technique allows for a fine-grained access to very large graphs with extremely low latency and memory requirements and without compromising the functionality of the tool. Our web-based prototype supports three main operations: (1) interactive navigation, (2) multi-level exploration, and (3) keyword search on the graph metadata." }, { "instance_id": "R6756xR6461", "comparison_id": "R6756", "paper_id": "R6461", "text": "LodLive, exploring the web of data LodLive project, http://en.lodlive.it/, provides a demonstration of the use of Linked Data standard (RDF, SPARQL) to browse RDF resources. The application aims to spread linked data principles with a simple and friendly interface and reusable techniques. In this report we present an overview of the potential of LodLive, mentioning tools and methodologies that were used to create it." 
}, { "instance_id": "R6756xR6433", "comparison_id": "R6756", "paper_id": "R6433", "text": "Visualizing large-scale RDF data using Subsets, Summaries, and Sampling in Oracle The paper addresses the problem of visualizing large scale RDF data via a 3-S approach, namely, by using, 1) Subsets: to present only relevant data for visualisation; both static and dynamic subsets can be specified, 2) Summaries: to capture the essence of RDF data being viewed; summarized data can be expanded on demand thereby allowing users to create hybrid (summary-detail) fisheye views of RDF data, and 3) Sampling: to further optimize visualization of large-scale data where a representative sample suffices. The visualization scheme works with both asserted and inferred triples (generated using RDF(S) and OWL semantics). This scheme is implemented in Oracle by developing a plug-in for the Cytoscape graph visualization tool, which uses functions defined in a Oracle PL/SQL package, to provide fast and optimized access to Oracle Semantic Store containing RDF data. Interactive visualization of a synthesized RDF data set (LUBM 1 million triples), two native RDF datasets (Wikipedia 47 million triples and UniProt 700 million triples), and an OWL ontology (eClassOwl with a large class hierarchy including over 25,000 OWL classes, 5,000 properties, and 400,000 class-properties) demonstrates the effectiveness of our visualization scheme." }, { "instance_id": "R6756xR6457", "comparison_id": "R6756", "paper_id": "R6457", "text": "Using Hierarchical Edge Bundles to visualize complex ontologies in GLOW In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. 
Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Prot\u00e9g\u00e9, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Prot\u00e9g\u00e9 visualization plug-in Jambalaya." }, { "instance_id": "R6757xR6303", "comparison_id": "R6757", "paper_id": "R6303", "text": "QAKiS: an open domain QA system based on relational patterns. We present QAKiS, a system for open domain Question Answering over linked data. It addresses the problem of question interpretation as a relation-based match, where fragments of the question are matched to binary relations of the triple store, using relational textual patterns automatically collected. For the demo, the relational patterns are automatically extracted from Wikipedia, while DBpedia is the RDF data set to be queried using a natural language interface." }, { "instance_id": "R6757xR6350", "comparison_id": "R6757", "paper_id": "R6350", "text": "Description of the POMELO System for the Task 2 of QALD-2014 In this paper, we present the POMELO system developed for participating in the task 2 of the QALD-4 challenge. For translating natural language questions in SPARQL queries we exploit Natural Language Processing methods, semantic resources and RDF triples description. We designed a four-step method which pre-processes the question, performs an abstraction of the question, then builds a representation of the SPARQL query and finally generates the query. The system was ranked second out of three participating systems. 
It achieves good performance with 0.85 F-measure on the set of 25 test questions." }, { "instance_id": "R6757xR6401", "comparison_id": "R6757", "paper_id": "R6401", "text": "Natural language questions for the web of data The Linked Data initiative comprises structured databases in the Semantic-Web data model RDF. Exploring this heterogeneous data by structured query languages is tedious and error-prone even for skilled users. To ease the task, this paper presents a methodology for translating natural language questions into structured SPARQL queries over linked-data sources. Our method is based on an integer linear program to solve several disambiguation tasks jointly: the segmentation of questions into phrases; the mapping of phrases to semantic entities, classes, and relations; and the construction of SPARQL triple patterns. Our solution harnesses the rich type system provided by knowledge bases in the web of linked data, to constrain our semantic-coherence objective function. We present experiments on both the question translation and the resulting query answering." }, { "instance_id": "R6757xR6316", "comparison_id": "R6757", "paper_id": "R6316", "text": "A HMM-based approach to question answering against linked data In this paper, we present a QA system enabling NL questions against Linked Data, designed and adopted by the Tor Vergata University AI group in the QALD-3 evaluation. The system integrates lexical semantic modeling and statistical inference within a complex architecture that decomposes the NL interpretation task into a cascade of three different stages: (1) The selection of key ontological information from the question (i.e. predicate, arguments and properties), (2) the location of such salient information in the ontology through the joint disambiguation of the different candidates and (3) the compilation of the final SPARQL query. This architecture characterizes a novel approach for the task and exploits a graphical model (i.e. 
a Hidden Markov Model) to select the proper ontological triples according to the graph nature of RDF. In particular, for each query an HMM model is produced whose Viterbi solution is the comprehensive joint disambiguation across the sentence elements. The combination of these approaches achieved interesting results in the QALD competition. The RTV is in fact within the group of participants performing slightly below the best system, but with smaller requirements and on significantly poorer input information." }, { "instance_id": "R6757xR6319", "comparison_id": "R6757", "paper_id": "R6319", "text": "QAnswer-enhanced entity matching for question answering over linked data QAnswer is a question answering system that uses DBpedia as a knowledge base and converts natural language questions into a SPARQL query. In order to improve the match between entities and relations and natural language text, we make use of Wikipedia to extract lexicalizations of the DBpedia entities and then match them with the question. These entities are validated on the ontology, while missing ones can be inferred. The proposed system was tested in the QALD-5 challenge and it obtained an F1 score of 0.30, which placed QAnswer in the second position in the challenge, despite the fact that the system used only a small subset of the properties in DBpedia, due to the long extraction process." }, { "instance_id": "R6757xR6300", "comparison_id": "R6757", "paper_id": "R6300", "text": "Question answering over biomedical linked data with Grammatical Framework The blending of linked data with ontologies leverages the access to data. GFMed introduces grammars for a controlled natural language targeted towards biomedical linked data and the corresponding controlled SPARQL language. The grammars are described in Grammatical Framework and introduce linguistic and SPARQL phrases mostly about drugs, diseases and relationships between them.
The semantic and linguistic chunks correspond to Description Logic constructors. Problems and solutions for querying biomedical linked data with Romanian, besides English, are also considered in the context of GF." }, { "instance_id": "R6757xR6353", "comparison_id": "R6757", "paper_id": "R6353", "text": "PowerAqua: Supporting users in querying and exploring the Semantic Web With the continued growth of online semantic information, the processes of searching and managing this massive scale and heterogeneous content have become increasingly challenging. In this work, we present PowerAqua, an ontologybased Question Answering system that is able to answer queries by locating and integrating information, which can be distributed across heterogeneous semantic resources. We provide a complete overview of the system including: the research challenges that it addresses, its architecture, the evaluations that have been conducted to test it, and an in-depth discussion showing how PowerAqua effectively supports users in querying and exploring Semantic Web content." }, { "instance_id": "R6757xR6322", "comparison_id": "R6757", "paper_id": "R6322", "text": "ISOFT at QALD-4: semantic similarity-based question answering system over linked data We present a question answering system over linked data. We use natural language processing tools to extract slots and SPARQL templates from the question. Then, we use semantic similarity to map a natural language question to a SPARQL query. We combine important words to avoid loss of meaning, and compare combined words with uniform resource identifiers (URIs) from a knowledgebase (KB). This process is more powerful than comparing each word individually. Using our method, the problem of mapping a phrase of a user question to URIs from a KB can be more easily solved than without our method; this method improves the F-measure of the system." 
}, { "instance_id": "R6757xR6271", "comparison_id": "R6757", "paper_id": "R6271", "text": "Answering natural language questions with Intui3 Intui3 is one of the participating systems at the fourth evaluation campaign on multilingual question answering over linked data, QALD4. The system accepts as input a question formulated in natural language (in English), and uses syntactic and semantic information to construct its interpretation with respect to a given database of RDF triples (in this case DBpedia 3.9). The interpretation is mapped to the corresponding SPARQL query, which is then run against a SPARQL endpoint to retrieve the answers to the initial question. Intui3 competed in the challenge called Task 1: Multilingual question answering over linked data, which offered 200 training questions and 50 test questions in 7 different languages. It obtained an F-measure of 0.24 by providing a correct answer to 10 of the test questions and a partial answer to 4 of them." }, { "instance_id": "R6757xR6380", "comparison_id": "R6757", "paper_id": "R6380", "text": "Cross-Lingual Question Answering Using Common Semantic Space With the advent of Big Data concept, a lot of attention has been paid to structuring and giving semantic to this data. Knowledge bases like DBPedia play an important role to achieve this goal. Question answering systems are common approach to address expressivity and usability of information extraction from knowledge bases. Recent researches focused only on monolingual QA systems while cross-lingual setting has still so many barriers. In this paper we introduce a new cross-lingual approach using a unified semantic space among languages. After keyword extraction, entity linking and answer type detection, we use cross lingual semantic similarity to extract the answer from knowledge base via relation selection and type matching. We have evaluated our approach on Persian and Spanish which are typologically different languages. Our experiments are on DBPedia. 
The results are promising for both languages." }, { "instance_id": "R6757xR6364", "comparison_id": "R6757", "paper_id": "R6364", "text": " Our purpose is to hide the complexity of formulating a query expressed in a graph query language such as SPARQL. We propose a mechanism allowing queries to be expressed in a very simple pivot language, mainly composed of keywords and relations between keywords. Our system associates the keywords with the corresponding elements of the ontology (classes, relations, instances). Then it selects pre-written query patterns, and instantiates them with regard to the keywords of the initial query. Several possible queries are generated, ranked and then shown to the user. These queries are presented by means of natural language sentences. The user then selects the query he/she is interested in and the SPARQL query is built." }, { "instance_id": "R68535xR54884", "comparison_id": "R68535", "paper_id": "R54884", "text": "Past warming trend constrains future warming in CMIP6 models Strong future warming in some new climate models is less likely as their recent warming is inconsistent with observed trends. Future global warming estimates have been similar across past assessments, but several climate models of the latest Sixth Coupled Model Intercomparison Project (CMIP6) simulate much stronger warming, apparently inconsistent with past assessments. Here, we show that projected future warming is correlated with the simulated warming trend during recent decades across CMIP5 and CMIP6 models, enabling us to constrain future warming based on consistency with the observed warming. These findings carry important policy-relevant implications: The observationally constrained CMIP6 median warming in high emissions and ambitious mitigation scenarios is over 16 and 14% lower by 2050 compared to the raw CMIP6 median, respectively, and over 14 and 8% lower by 2090, relative to 1995\u20132014.
Observationally constrained CMIP6 warming is consistent with previous assessments based on CMIP5 models, and in an ambitious mitigation scenario, the likely range is consistent with reaching the Paris Agreement target." }, { "instance_id": "R68871xR23436", "comparison_id": "R68871", "paper_id": "R23436", "text": "Climate Simulations Using MRI-AGCM3.2 with 20-km Grid A new version of the atmospheric general circulation model of the Meteorological Research Institute (MRI), with a horizontal grid size of about 20 km, has been developed. The previous version of the 20-km model, MRIAGCM3.1, which was developed from an operational numerical weather-prediction model, provided information on possible climate change induced by global warming, including future changes in tropical cyclones, the East Asian monsoon, extreme events, and blockings. For the new version, MRI-AGCM3.2, we have introduced various new parameterization schemes that improve the model climate. Using the new model, we performed a present-day climate experiment using observed sea surface temperature. The model shows improvements in simulating heavy monthly-mean precipitation around the tropical Western Pacific, the global distribution of tropical cyclones, the seasonal march of East Asian summer monsoon, and blockings in the Pacific. Improvements in the model climatologies were confirmed numerically using skill scores (e.g., Taylor\u2019s skill score)." }, { "instance_id": "R68871xR23326", "comparison_id": "R68871", "paper_id": "R23326", "text": "GFDL\u2019s ESM2 Global Coupled Climate\u2013Carbon Earth System Models. Part I: Physical Formulation and Baseline Simulation Characteristics Abstract The physical climate formulation and simulation characteristics of two new global coupled carbon\u2013climate Earth System Models, ESM2M and ESM2G, are described. 
These models demonstrate similar climate fidelity as the Geophysical Fluid Dynamics Laboratory\u2019s previous Climate Model version 2.1 (CM2.1) while incorporating explicit and consistent carbon dynamics. The two models differ exclusively in the physical ocean component; ESM2M uses Modular Ocean Model version 4p1 with vertical pressure layers while ESM2G uses Generalized Ocean Layer Dynamics with a bulk mixed layer and interior isopycnal layers. Differences in the ocean mean state include the thermocline depth being relatively deep in ESM2M and relatively shallow in ESM2G compared to observations. The crucial role of ocean dynamics on climate variability is highlighted in El Ni\u00f1o\u2013Southern Oscillation being overly strong in ESM2M and overly weak in ESM2G relative to observations. Thus, while ESM2G might better represent climate changes relating to total heat content variability given its lack of long-term drift, gyre circulation, and ventilation in the North Pacific, tropical Atlantic, and Indian Oceans, and depth structure in the overturning and abyssal flows, ESM2M might better represent climate changes relating to surface circulation given its superior surface temperature, salinity, and height patterns, tropical Pacific circulation and variability, and Southern Ocean dynamics. The overall assessment is that neither model is fundamentally superior to the other, and that both models achieve sufficient fidelity to allow meaningful climate and earth system modeling applications. This affords the ability to assess the role of ocean configuration on earth system interactions in the context of two state-of-the-art coupled carbon\u2013climate models." }, { "instance_id": "R68871xR23471", "comparison_id": "R68871", "paper_id": "R23471", "text": "INGV-CMCC Carbon (ICC): A Carbon Cycle Earth System Model This document describes the CMCC Earth System Model (ESM) for the representation of the carbon cycle in the atmosphere, land, and ocean system. 
The structure of the report follows the software architecture of the full system. It is intended to give a technical description of the numerical models at the base of the ESM, and how they are coupled with each other." }, { "instance_id": "R68871xR23457", "comparison_id": "R68871", "paper_id": "R23457", "text": "Evaluation of the carbon cycle components in the Norwegian Earth System Model (NorESM) Abstract. The recently developed Norwegian Earth System Model (NorESM) is employed for simulations contributing to the CMIP5 (Coupled Model Intercomparison Project phase 5) experiments and the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC-AR5). In this manuscript, we focus on evaluating the ocean and land carbon cycle components of the NorESM, based on the control and historical simulations. Many of the observed large scale ocean biogeochemical features are reproduced satisfactorily by the NorESM. When compared to the climatological estimates from the World Ocean Atlas (WOA), the model simulated temperature, salinity, oxygen, and phosphate distributions agree reasonably well in both the surface layer and deep water structure. However, the model simulates a relatively strong overturning circulation strength that leads to noticeable model-data bias, especially within the North Atlantic Deep Water (NADW). This strong overturning circulation slightly distorts the structure of the biogeochemical tracers at depth. Advancements in simulating the oceanic mixed layer depth with respect to the previous generation model particularly improve the surface tracer distribution as well as the upper ocean biogeochemical processes, particularly in the Southern Ocean. Consequently, near surface ocean processes such as biological production and air-sea gas exchange, are in good agreement with climatological observations. 
NorESM reproduces the general pattern of land-vegetation gross primary productivity (GPP) when compared to the observationally-based values derived from the FLUXNET network of eddy covariance towers. Globally, the NorESM simulated annual mean GPP and terrestrial respiration are 129.8 and 106.6 Pg C yr\u22121, slightly larger than the observed values of 119.4 \u00b1 5.9 and 96.4 \u00b1 6.0 Pg C yr\u22121. The latitudinal distribution of GPP fluxes simulated by NorESM shows a GPP overestimation of 10% in the tropics and a substantial underestimation of GPP at high latitudes." }, { "instance_id": "R68871xR23273", "comparison_id": "R68871", "paper_id": "R23273", "text": "The ACCESS coupled model: description, control climate and evaluation The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8.
The global average annual mean surface air temperature across the 500-year preindustrial control integrations shows a warming drift of 0.35 \u00b0C in ACCESS1.0 and 0.04 \u00b0C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed." }, { "instance_id": "R68871xR23398", "comparison_id": "R68871", "paper_id": "R23398", "text": "Development and evaluation of an Earth-System model \u2013 HadGEM2 We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that addition of the extra complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight.
This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks." }, { "instance_id": "R68871xR23287", "comparison_id": "R68871", "paper_id": "R23287", "text": "A Modified Dynamic Framework for the Atmospheric Spectral Model and Its Application Abstract This paper describes a dynamic framework for an atmospheric general circulation spectral model in which a reference stratified atmospheric temperature and a reference surface pressure are introduced into the governing equations so as to improve the calculation of the pressure gradient force and gradients of surface pressure and temperature. The vertical profile of the reference atmospheric temperature approximately corresponds to that of the U.S. midlatitude standard atmosphere within the troposphere and stratosphere, and the reference surface pressure is a function of surface terrain geopotential and is close to the observed mean surface pressure. Prognostic variables for the temperature and surface pressure are replaced by their perturbations from the prescribed references. The numerical algorithms of the explicit time difference scheme for vorticity and the semi-implicit time difference scheme for divergence, perturbation temperature, and perturbation surface pressure equation are given in detail. The modified numerical framework is implemented in the Community Atmosphere Model version 3 (CAM3) developed at the National Center for Atmospheric Research (NCAR) to test its validation and impact on simulated climate. Both the original and the modified models are run with the same spectral resolution (T42), the same physical parameterizations, and the same boundary conditions corresponding to the observed monthly mean sea surface temperature and sea ice concentration from 1971 to 2000. This permits one to evaluate the performance of the new dynamic framework compared to the commonly used one. 
Results show that there is a general improvement for the simulated climate at regional and global scales, especially for temperature and wind." }, { "instance_id": "R6947xR6693", "comparison_id": "R6947", "paper_id": "R6693", "text": "TEXT2TABLE: Medical Text Summarization System Based on Named Entity Recognition and Modality Identification With the rapidly growing use of electronic health records, the possibility of large-scale clinical information extraction has drawn much attention. It is not, however, easy to extract information because these reports are written in natural language. To address this problem, this paper presents a system that converts a medical text into a table structure. This system's core technologies are (1) medical event recognition modules and (2) a negative event identification module that judges whether an event actually occurred or not. Regarding the latter module, this paper also proposes an SVM-based classifier using syntactic information. Experimental results demonstrate empirically that syntactic information can contribute to the method's accuracy." }, { "instance_id": "R6947xR6719", "comparison_id": "R6947", "paper_id": "R6719", "text": "ThemeCrowds: multiresolution summaries of twitter usage Users of social media sites, such as Twitter, rapidly generate large volumes of text content on a daily basis. Visual summaries are needed to understand what groups of people are saying collectively in this unstructured text data. Users will typically discuss a wide variety of topics, where the number of authors talking about a specific topic can quickly grow or diminish over time, and what the collective is saying about the subject can shift as a situation develops. In this paper, we present a technique that summarises what collections of Twitter users are saying about certain topics over time. 
As the correct resolution for inspecting the data is unknown in advance, the users are clustered hierarchically over a fixed time interval based on the similarity of their posts. The visualisation technique takes this data structure as its input. Given a topic, it finds the correct resolution of users at each time interval and provides tags to summarise what the collective is discussing. The technique is tested on a large microblogging corpus, consisting of millions of tweets and over a million users." }, { "instance_id": "R6947xR6736", "comparison_id": "R6947", "paper_id": "R6736", "text": "Towards Unsupervised Learning of Temporal Relations between Events Automatic extraction of temporal relations between event pairs is an important task for several natural language processing applications such as Question Answering, Information Extraction, and Summarization. Since most existing methods are supervised and require large corpora, which for many languages do not exist, we have concentrated our efforts to reduce the need for annotated data as much as possible. This paper presents two different algorithms towards this goal. The first algorithm is a weakly supervised machine learning approach for classification of temporal relations between events. In the first stage, the algorithm learns a general classifier from an annotated corpus. Then, inspired by the hypothesis of \"one type of temporal relation per discourse'', it extracts useful information from a cluster of topically related documents. We show that by combining the global information of such a cluster with local decisions of a general classifier, a bootstrapping cross-document classifier can be built to extract temporal relations between events. Our experiments show that without any additional annotated data, the accuracy of the proposed algorithm is higher than that of several previous successful systems. 
The second proposed method for temporal relation extraction is based on the expectation maximization (EM) algorithm. Within EM, we used different techniques such as a greedy best-first search and integer linear programming for temporal inconsistency removal. We think that the experimental results of our EM-based algorithm, as a first step toward a fully unsupervised temporal relation extraction method, are encouraging." }, { "instance_id": "R6947xR6747", "comparison_id": "R6947", "paper_id": "R6747", "text": "Automatic Keyword Extraction for Text Summarization in e-Newspapers Summarization is the process of reducing a text document to create a summary that retains the most important points of the original document. Extractive summarizers work on the given text to extract sentences that best convey the message hidden in the text. Most extractive summarization techniques revolve around the concept of finding keywords and extracting sentences that have more keywords than the rest. Keyword extraction usually is done by extracting relevant words having a higher frequency than others, with stress on important ones. Manual extraction or annotation of keywords is a tedious process brimming with errors involving lots of manual effort and time. In this paper, we proposed an algorithm to extract keywords automatically for text summarization in e-newspaper datasets. The proposed algorithm is compared with the experimental results of articles having a similar title in four different e-Newspapers to check the similarity and consistency in summarized results." }, { "instance_id": "R6947xR6725", "comparison_id": "R6947", "paper_id": "R6725", "text": "Using NMF-based text summarization to improve supervised and unsupervised classification This paper presents a new generic text summarization method using Non-negative Matrix Factorization (NMF) to estimate sentence relevance.
The proposed sentence relevance estimation is based on normalization of the NMF topic space and further weighting of each topic using the sentence representations in topic space. The proposed method shows better summarization quality and performance than state-of-the-art methods on the DUC 2002 standard dataset. In addition, we study how this method can improve the performance of supervised and unsupervised text classification tasks. In our experiments with the Reuters-21578 and Classic4 benchmark datasets, we apply the developed text summarization method as a preprocessing step for further multi-label classification and clustering. As a result, the quality of classification and clustering has been significantly improved." }, { "instance_id": "R6947xR6715", "comparison_id": "R6947", "paper_id": "R6715", "text": "Automatic Multi-document Summarization Based on Clustering and Nonnegative Matrix Factorization In this paper, a novel summarization method that uses nonnegative matrix factorization (NMF) and the clustering method is introduced to extract meaningful sentences relevant to a given query. The proposed method decomposes a sentence into the linear combination of sparse nonnegative semantic features so that it can represent a sentence as the sum of a few semantic features that are comprehensible intuitively. It can improve the quality of document summaries because it can avoid extracting those sentences whose similarities with the query are high but that are meaningless by using the similarity between the query and the semantic features. In addition, the proposed approach uses the clustering method to remove noise and avoid the biased inherent semantics of the documents being reflected in summaries. The method can ensure the coherence of summaries by using the rank score of sentences with respect to semantic features. 
The experimental results demonstrate that the proposed method has better performance than other methods that use the thesaurus, the latent semantic analysis (LSA), the K-means, and the NMF." }, { "instance_id": "R6947xR6722", "comparison_id": "R6947", "paper_id": "R6722", "text": "Framework for Abstractive Summarization using Text-to-Text Generation We propose a new, ambitious framework for abstractive summarization, which aims at selecting the content of a summary not from sentences, but from an abstract representation of the source documents. This abstract representation relies on the concept of Information Items (InIt), which we define as the smallest element of coherent information in a text or a sentence. Our framework differs from previous abstractive summarization models in requiring a semantic analysis of the text. We present a first attempt made at developing a system from this framework, along with evaluation results for it from TAC 2010. We also present related work, both from within and outside of the automatic summarization domain." }, { "instance_id": "R6947xR6743", "comparison_id": "R6947", "paper_id": "R6743", "text": "UWN: A Large Multilingual Lexical Knowledge Base We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names." 
}, { "instance_id": "R6947xR6733", "comparison_id": "R6947", "paper_id": "R6733", "text": "A Statistical Approach for Automatic Text Summarization by Extraction Automatic document summarization is a highly interdisciplinary research area related to computer science as well as cognitive psychology. Summarization compresses an original document into a summarized version by extracting almost all of the essential concepts with text mining techniques. This research focuses on developing a statistical automatic text summarization approach, the K-mixture probabilistic model, to enhance the quality of summaries. KSRS employs the K-mixture probabilistic model to establish term weights in a statistical sense, and further identifies the term relationships to derive the semantic relationship significance (SRS) of nouns. Sentences are ranked and extracted based on their semantic relationship significance values. The objective of this research is thus to propose a statistical approach to text summarization. We propose a K-mixture semantic relationship significance (KSRS) approach to enhancing the quality of document summary results. The K-mixture probabilistic model is used to determine the term weights. Term relationships are then investigated to develop the semantic relationship of nouns that manifests sentence semantics. Sentences with significant semantic-relationship nouns are extracted to form the summary accordingly." }, { "instance_id": "R6947xR6712", "comparison_id": "R6947", "paper_id": "R6712", "text": "Understanding Text Corpora with Multiple Facets Text visualization is becoming an increasingly important research topic as the need to understand massive-scale textual information is proven to be imperative for many people and businesses. However, it is still very challenging to design effective visual metaphors to represent large corpora of text due to the unstructured and high-dimensional nature of text. 
In this paper, we propose a data model that can be used to represent most of the text corpora. Such a data model contains four basic types of facets: time, category, content (unstructured), and structured facet. To understand the corpus with such a data model, we develop a hybrid visualization by combining the trend graph with tag-clouds. We encode the four types of data facets with four separate visual dimensions. To help people discover evolutionary and correlation patterns, we also develop several visual interaction methods that allow people to interactively analyze text by one or more facets. Finally, we present two case studies to demonstrate the effectiveness of our solution in support of multi-faceted visual analysis of text corpora." }, { "instance_id": "R6948xR6586", "comparison_id": "R6948", "paper_id": "R6586", "text": "A multilingual news summarizer Huge multilingual news articles are reported and disseminated on the Internet. How to extract the key information and save the reading time is a crucial issue. This paper proposes architecture of multilingual news summarizer, including monolingual and multilingual clustering, similarity measure among meaningful units, and presentation of summarization results. Translation among news stories, idiosyncrasy among languages, implicit information, and user preference are addressed." }, { "instance_id": "R6948xR6596", "comparison_id": "R6948", "paper_id": "R6596", "text": "WebInEssence: A Personalized Web-Based Multi-Document Summarization and Recommendation System In this paper, we present our recent work on the development of a scalable personalized web-based multi-document summarization and recommendation system: WebInEssence. WebInEssence is designed to help end users effectively search for useful information and automatically summarize selected documents based on the users\u2019 personal profiles. 
We address some of the design issues to improve the scalability and readability of our multi-document summarizer included in WebInEssence. Some evaluation results with different configurations are also presented." }, { "instance_id": "R6948xR6599", "comparison_id": "R6948", "paper_id": "R6599", "text": "Automated multi-document summarization in NeATS This paper describes the multi-document text summarization system NeATS. Using a simple algorithm, NeATS was among the top two performers of the DUC-01 evaluation." }, { "instance_id": "R6948xR6578", "comparison_id": "R6948", "paper_id": "R6578", "text": "Discourse Trees Are Good Indicators of Importance in Text Researchers in computational linguistics have long speculated that the nuclei of the rhetorical structure tree of a text form an adequate \"summary\" of the text for which that tree was built. However, to my knowledge, there has been no experiment to confirm how valid this speculation really is. In this paper, I describe a psycholinguistic experiment that shows that the concepts of discourse structure and nuclearity can be used effectively in text summarization. More precisely, I show that there is a strong correlation between the nuclei of the discourse structure of a text and what readers perceive to be the most important units in that text. In addition, I propose and evaluate the quality of an automatic, discourse-based summarization system that implements the methods that were validated by the psycholinguistic experiment. The evaluation indicates that although the system does not yet match the results that would be obtained if discourse trees had been built manually, it still significantly outperforms both a baseline algorithm and Microsoft's Office97 summarizer. 
1 Motivation Traditionally, previous approaches to automatic text summarization have assumed that the salient parts of a text can be determined by applying one or more of the following assumptions: important sentences in a text contain words that are used frequently (Luhn 1958; Edmundson 1968); important sentences contain words that are used in the title and section headings (Edmundson 1968); important sentences are located at the beginning or end of paragraphs (Baxendale 1958); important sentences are located at positions in a text that are genre dependent, and these positions can be determined automatically, through training; important sentences use bonus words such as \"greatest\" and \"significant\" or indicator phrases such as \"the main aim of this paper\" and \"the purpose of this article\", while unimportant sentences use stigma words such as \"hardly\" and \"impossible\"; important sentences and concepts are the highest connected entities in elaborate semantic structures; and important and unimportant sentences are derivable from a discourse representation of the text (Sparck Jones 1993b; Ono, Sumita, & Miike 1994). In determining the words that occur most frequently in a text or the sentences that use words that occur in the headings of sections, computers are accurate tools. Therefore, in testing the validity of using these indicators for determining the most important units in a text, it is adequate to compare the direct output of a summarization program that implements the assumption(s) under scrutiny with a human-made \u2026" }, { "instance_id": "R6948xR6590", "comparison_id": "R6948", "paper_id": "R6590", "text": "Experiments in Single and Multi-Document Summarization Using MEAD In this paper, we describe four experiments in text summarization. The first experiment involves the automatic creation of 120 multi-document summaries and 308 single-document summaries from a set of 30 clusters of related documents. 
We present official results from a multi-site manual evaluation of the quality of the summaries. The second experiment is about the identification by human subjects of cross-document structural relationships such as identity, paraphrase, elaboration, and fulfillment. The third experiment focuses on a particular cross-document structural relationship, namely subsumption. The last experiment asks human judges to determine which of the input articles in a given cluster were used to produce individual sentences of a manual summary. We present numerical evaluations of all four experiments. All automatic summaries have been produced by MEAD, a flexible summarization system under development at the University of Michigan." }, { "instance_id": "R6948xR6614", "comparison_id": "R6948", "paper_id": "R6614", "text": "Generating Indicative-Informative Summaries with SumUM We present and evaluate SumUM, a text summarization system that takes a raw technical text as input and produces an indicative informative summary. The indicative part of the summary identifies the topics of the document, and the informative part elaborates on some of these topics according to the reader's interest. SumUM motivates the topics, describes entities, and defines concepts. It is a first step for exploring the issue of dynamic summarization. This is accomplished through a process of shallow syntactic and semantic analysis, concept identification, and text regeneration. Our method was developed through the study of a corpus of abstracts written by professional abstractors. Relying on human judgment, we have evaluated indicativeness, informativeness, and text acceptability of the automatic summaries. The results thus far indicate good performance when compared with other summarization technologies." 
}, { "instance_id": "R6948xR6571", "comparison_id": "R6948", "paper_id": "R6571", "text": "Trainable, scalable summarization using robust NLP and machine learning We describe a trainable and scalable summarization system which utilizes features derived from information retrieval, information extraction, and NLP techniques and on-line resources. The system combines these features using a trainable feature combiner learned from summary examples through a machine learning algorithm. We demonstrate system scalability by reporting results on the best combination of summarization features for different document sources. We also present preliminary results from a task-based evaluation on summarization output usability." }, { "instance_id": "R6948xR6567", "comparison_id": "R6948", "paper_id": "R6567", "text": "Automated text summarization and the SUMMARIST system This paper consists of three parts: a preliminary typology of summaries in general; a description of the current and planned modules and performance of the SUMMARIST automated multilingual text summarization system being built sat ISI, and a discussion of three methods to evaluate summaries." }, { "instance_id": "R6948xR6593", "comparison_id": "R6948", "paper_id": "R6593", "text": "NewsInEssence: A System For Domain-Independent, Real-Time News Clustering and Multi-Document Summarization NEWSINESSENCE is a system for finding, visualizing and summarizing a topic-based cluster of news stories. In the generic scenario for NEWSINESSENCE, a user selects a single news story from a news Web site. Our system then searches other live sources of news for other stories related to the same event and produces summaries of a subset of the stories that it finds, according to parameters specified by the user." 
}, { "instance_id": "R69680xR69657", "comparison_id": "R69680", "paper_id": "R69657", "text": "Knowledge-based interactive postmining of association rules using ontologies In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as itemset concise representations, redundancy reduction, and postprocessing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient postprocessing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the postprocessing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the postprocessing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process." }, { "instance_id": "R69680xR69601", "comparison_id": "R69680", "paper_id": "R69601", "text": "Out of the box: Reasoning with graph convolution nets for factual visual question answering Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. 
To advance research in this direction, a novel 'fact-based' visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep network techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and use a graph convolutional network to 'reason' about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art." }, { "instance_id": "R69680xR69651", "comparison_id": "R69680", "paper_id": "R69651", "text": "Combining data mining and ontology engineering to enrich ontologies and linked data In this position paper, we claim that the need for time-consuming data preparation and result interpretation tasks in knowledge discovery, as well as for costly expert consultation and consensus building activities required for ontology building, can be reduced through exploiting the interplay of data mining and ontology engineering. The aim is to obtain in a semi-automatic way new knowledge from distributed data sources that can be used for inference and reasoning, as well as to guide the extraction of further knowledge from these data sources. The proposed approach is based on the creation of a novel knowledge discovery method relying on the combination, through an iterative \"feedback loop\", of (a) data mining techniques to make implicit models emerge from data and (b) pattern-based ontology engineering to capture these models in reusable, conceptual and inferable artefacts." 
}, { "instance_id": "R69680xR69538", "comparison_id": "R69680", "paper_id": "R69538", "text": "Ontology based complex object recognition This paper presents a new approach for object categorization involving the following aspects of cognitive vision: learning, recognition and knowledge representation. A major element of our approach is a visual concept ontology composed of several types of concepts (spatial concepts and relations, color concepts and texture concepts). Visual concepts contained in this ontology can be seen as an intermediate layer between domain knowledge and image processing procedures. Machine learning techniques are used to solve the symbol grounding problem (i.e. linking meaningfully symbols to sensory information). This paper shows, how a new object categorization system is set up by a knowledge acquisition and learning phase and then used by an object categorization phase." }, { "instance_id": "R69680xR69599", "comparison_id": "R69680", "paper_id": "R69599", "text": "Explicit knowledge-based reasoning for visual question answering We describe a method for visual question answering which is capable of reasoning about an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can explain the reasoning by which it developed its answer. It is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in testing. We also provide a dataset and a protocol by which to evaluate general visual question answering methods." 
}, { "instance_id": "R69680xR69568", "comparison_id": "R69680", "paper_id": "R69568", "text": "Zero-shot recognition via semantic embeddings and knowledge graphs We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, for which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2~3% on some metrics to a whopping 20% on a few)." }, { "instance_id": "R69680xR69626", "comparison_id": "R69680", "paper_id": "R69626", "text": "Semantic explanations of predictions The main objective of explanations is to transmit knowledge to humans. This work proposes to construct informative explanations for predictions made from machine learning models. 
Motivated by the observations from social sciences, our approach selects data points from the training sample that exhibit special characteristics crucial for explanation, for instance, ones contrastive to the classification prediction and ones representative of the models. Subsequently, semantic concepts are derived from the selected data points through the use of domain ontologies. These concepts are filtered and ranked to produce informative explanations that improve human understanding. The main features of our approach are that (1) knowledge about explanations is captured in the form of ontological concepts, (2) explanations include contrastive evidence in addition to normal evidence, and (3) explanations are user relevant." }, { "instance_id": "R69680xR69633", "comparison_id": "R69680", "paper_id": "R69633", "text": "Learning heterogeneous knowledge base embeddings for explainable recommendation Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms\u2014especially the collaborative filtering (CF)-based approaches with shallow or deep models\u2014usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedback. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users\u2019 historical behaviors, and the knowledge is helpful for providing informed explanations regarding the recommended items. 
A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) shed light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines." }, { "instance_id": "R69680xR69637", "comparison_id": "R69680", "paper_id": "R69637", "text": "Improving sequential recommendation with knowledge-enhanced memory networks With the revival of neural networks, many studies try to adapt powerful sequential neural models, i.e., Recurrent Neural Networks (RNN), to sequential recommendation. RNN-based networks encode historical interaction records into a hidden state vector. Although the state vector is able to encode sequential dependency, it still has limited representation power in capturing complicated user preference. It is difficult to capture fine-grained user preference from the interaction sequence. Furthermore, the latent vector representation is usually hard to understand and explain. To address these issues, in this paper, we propose a novel knowledge-enhanced sequential recommender. Our model integrates the RNN-based networks with a Key-Value Memory Network (KV-MN). 
We further incorporate knowledge base (KB) information to enhance the semantic representation of KV-MN. RNN-based models are good at capturing sequential user preference, while knowledge-enhanced KV-MNs are good at capturing attribute-level user preference. By using a hybrid of RNNs and KV-MNs, the model is expected to enjoy the benefits of both components. The sequential preference representation is combined with the attribute-level preference representation as the final representation of user preference. With the incorporation of KB information, our model is also highly interpretable. To our knowledge, it is the first time that a sequential recommender is integrated with external memories by leveraging large-scale KB information." }, { "instance_id": "R69680xR6539", "comparison_id": "R69680", "paper_id": "R6539", "text": "Visual analysis of statistical data on maps using linked open data When analyzing statistical data, one of the most basic and at the same time widely used techniques is analyzing correlations. As shown in previous works, Linked Open Data is a rich resource for discovering such correlations. In this demo, we show how statistical analysis and visualization on maps can be combined to facilitate a deeper understanding of the statistical findings." }, { "instance_id": "R69680xR69621", "comparison_id": "R69680", "paper_id": "R69621", "text": "Interaction Embeddings for Prediction and Explanation in Knowledge Graphs Knowledge graph embedding aims to learn distributed representations for entities and relations, and is proven to be effective in many applications. Crossover interactions -- bi-directional effects between entities and relations -- help select related information when predicting a new triple, but haven't been formally discussed before. In this paper, we propose CrossE, a novel knowledge graph embedding which explicitly simulates crossover interactions. 
It not only learns one general embedding for each entity and relation as most previous methods do, but also generates multiple triple specific embeddings for both of them, named interaction embeddings. We evaluate embeddings on typical link prediction tasks and find that CrossE achieves state-of-the-art results on complex and more challenging datasets. Furthermore, we evaluate embeddings from a new perspective -- giving explanations for predicted triples, which is important for real applications. In this work, an explanation for a triple is regarded as a reliable closed-path between the head and the tail entity. Compared to other baselines, we show experimentally that CrossE, benefiting from interaction embeddings, is more capable of generating reliable explanations to support its predictions." }, { "instance_id": "R69680xR69595", "comparison_id": "R69680", "paper_id": "R69595", "text": "A joint model for question answering over mul- tiple knowledge bases As the amount of knowledge bases (KBs) grows rapidly, the problem of question answering (QA) over multiple KBs has drawn more attention. The most significant distinction between multiple KB-QA and single KB-QA is that the former must consider the alignments between KBs. The pipeline strategy first constructs the alignments independently, and then uses the obtained alignments to construct queries. However, alignment construction is not a trivial task, and the introduced noises would be passed on to query construction. By contrast, we notice that alignment construction and query construction are interactive steps, and jointly considering them would be beneficial. To this end, we present a novel joint model based on integer linear programming (ILP), uniting these two procedures into a uniform framework. The experimental results demonstrate that the proposed approach outperforms state-of-the-art systems, and is able to improve the performance of both alignment construction and query construction." 
}, { "instance_id": "R69680xR69589", "comparison_id": "R69680", "paper_id": "R69589", "text": "Improving question answering by commonsense-based pre-training Although neural network approaches achieve remarkable success on a variety of NLP tasks, many of them struggle to answer questions that require commonsense knowledge. We believe the main reason is the lack of commonsense connections between concepts. To remedy this, we provide a simple and effective method that leverages an external commonsense knowledge base such as ConceptNet. We pre-train direct and indirect relational functions between concepts, and show that these pre-trained functions could be easily added to existing neural network models. Results show that incorporating the commonsense-based functions improves the baseline on three question answering tasks that require commonsense reasoning. Further analysis shows that our system discovers and leverages useful evidence from an external commonsense knowledge base, which is missing in existing neural network models and helps derive the correct answer." }, { "instance_id": "R69680xR69676", "comparison_id": "R69680", "paper_id": "R69676", "text": "Looking for clusters explanations in a labyrinth of linked data We present Dedalo, a framework which is able to exploit Linked Data to generate explanations for clusters. In general, any result of a Knowledge Discovery process, including clusters, is interpreted by human experts who use their background knowledge to explain them. However, for someone without such expert knowledge, those results may be difficult to understand. Obtaining a complete and satisfactory explanation becomes a laborious and time-consuming process, involving expertise in possibly different domains. That said, not only does the Web of Data contain vast amounts of such background knowledge, but it also natively connects those domains. 
While the efforts put in the interpretation process can be reduced with the support of Linked Data, how to automatically access the right piece of knowledge in such a big space remains an issue. Dedalo is a framework that dynamically traverses Linked Data to find commonalities that form explanations for items of a cluster. We have developed different strategies (or heuristics) to guide this traversal, reducing the time to get the best explanation. In our experiments, we compare those strategies and demonstrate that Dedalo finds relevant and sophisticated Linked Data explanations from different areas." }, { "instance_id": "R69680xR69547", "comparison_id": "R69680", "paper_id": "R69547", "text": "Predicting entry-level categories Entry-level categories\u2014the labels people use to name an object\u2014were originally defined and studied by psychologists in the 1970s and 1980s. In this paper we extend these ideas to study entry-level categories at a larger scale and to learn models that can automatically predict entry-level categories for images. Our models combine visual recognition predictions with linguistic resources like WordNet and proxies for word \u201cnaturalness\u201d mined from the enormous amount of text on the web. We demonstrate the usefulness of our models for predicting nouns (entry-level words) associated with images by people, and for learning mappings between concepts predicted by existing visual recognition systems and entry-level concepts. In this work we make use of recent successful efforts on convolutional network models for visual recognition by training classifiers for 7404 object categories on ConvNet activation features. Results for category mapping and entry-level category prediction for images show promise for producing more natural human-like labels. We also demonstrate the potential applicability of our results to the task of image description generation." 
}, { "instance_id": "R69680xR69587", "comparison_id": "R69680", "paper_id": "R69587", "text": "Answering science exam questions using query reformulation with background knowledge Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance. We present a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. By combining query reformulation, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset." 
}, { "instance_id": "R69680xR69667", "comparison_id": "R69680", "paper_id": "R69667", "text": "Linked data and online classifications to organise mined patterns in patient data In this paper, we investigate the use of web data resources in medicine, especially through medical classifications made available using the principles of Linked Data, to support the interpretation of patterns mined from patient care trajectories. Interpreting such patterns is naturally a challenge for an analyst, as it requires going through large amounts of results and access to sufficient background knowledge. We employ linked data, especially as exposed through the BioPortal system, to create a navigation structure within the patterns obtained form sequential pattern mining. We show how this approach provides a flexible way to explore data about trajectories of diagnoses and treatments according to different medical classifications." }, { "instance_id": "R69680xR69641", "comparison_id": "R69680", "paper_id": "R69641", "text": "Explod: a framework for explaining recommendations based on the linked open data cloud In this paper we present ExpLOD, a framework which exploits the information available in the Linked Open Data (LOD) cloud to generate a natural language explanation of the suggestions produced by a recommendation algorithm. The methodology is based on building a graph in which the items liked by a user are connected to the items recommended through the properties available in the LOD cloud. Next, given this graph, we implemented some techniques to rank those properties and we used the most relevant ones to feed a module for generating explanations in natural language. In the experimental evaluation we performed a user study with 308 subjects aiming to investigate to what extent our explanation framework can lead to more transparent, trustful and engaging recommendations. 
The preliminary results provided us with encouraging findings, since our algorithm performed better than both a non-personalized explanation baseline and a popularity-based one." }, { "instance_id": "R69680xR69581", "comparison_id": "R69680", "paper_id": "R69581", "text": "Tell me why: Computational explanation of conceptual similarity judgments In this paper we introduce a system for the computation of explanations that accompany scores in the conceptual similarity task. In this setting the problem is, given a pair of concepts, to provide a score that expresses in how far the two concepts are similar. In order to explain how explanations are automatically built, we illustrate some basic features of COVER, the lexical resource that underlies our approach, and the main traits of the MeRaLi system, that computes conceptual similarity and explanations, all in one. To assess the computed explanations, we have designed a human experimentation, that provided interesting and encouraging results, which we report and discuss in depth." }, { "instance_id": "R69680xR69577", "comparison_id": "R69680", "paper_id": "R69577", "text": "Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge We introduce a neural reading comprehension model that integrates external commonsense knowledge, encoded as a key-value memory, in a cloze-style setting. Instead of relying only on document-to-question interaction or discrete features as in prior work, our model attends to relevant external knowledge and combines this knowledge with the context representation before inferring the answer. This allows the model to attract and imply knowledge from an external knowledge source that is not explicitly stated in the text, but that is relevant for inferring the answer. Our model improves results over a very strong baseline on a hard Common Nouns dataset, making it a strong competitor of much more complex models. 
By including knowledge explicitly, our model can also provide evidence about the background knowledge used in the RC process." }, { "instance_id": "R69680xR69558", "comparison_id": "R69680", "paper_id": "R69558", "text": "A framework for explainable deep neural models using external knowledge graphs Deep neural networks (DNNs) have become the gold standard for solving challenging classification problems, especially given complex sensor inputs (e.g., images and video). While DNNs are powerful, they are also brittle, and their inner workings are not fully understood by humans, leading to their use as \u201cblack-box\u201d models. DNNs often generalize poorly when provided new data sampled from slightly shifted distributions; DNNs are easily manipulated by adversarial examples; and the decision-making process of DNNs can be difficult for humans to interpret. To address these challenges, we propose integrating DNNs with external sources of semantic knowledge. Large quantities of meaningful, formalized knowledge are available in knowledge graphs and other databases, many of which are publicly obtainable. But at present, these sources are inaccessible to deep neural methods, which can only exploit patterns in the signals they are given to classify. In this work, we conduct experiments on the ADE20K dataset, using scene classification as an example task where combining DNNs with external knowledge graphs can result in more robust and explainable models. We align the atomic concepts present in ADE20K (i.e., objects) to WordNet, a hierarchically-organized lexical database. Using this knowledge graph, we expand the concept categories which can be identified in ADE20K and relate these concepts in a hierarchical manner. The neural architecture we present performs scene classification using these concepts, illuminating a path toward DNNs which can efficiently exploit high-level knowledge in place of excessive quantities of direct sensory input. 
We hypothesize and experimentally validate that incorporating background knowledge via an external knowledge graph into a deep learning-based model should improve the explainability and robustness of the model." }, { "instance_id": "R69680xR69615", "comparison_id": "R69680", "paper_id": "R69615", "text": "An ontology-based approach to explaining artificial neural networks Explainability in Artificial Intelligence has been revived as a topic of active research by the need of conveying safety and trust to users in the `how' and `why' of automated decision-making. Whilst a plethora of approaches have been developed for post-hoc explainability, only a few focus on how to use domain knowledge, and how this influences the understandability of an explanation from the users' perspective. In this paper we show how ontologies help the understandability of interpretable machine learning models, such as decision trees. In particular, we build on Trepan, an algorithm that explains artificial neural networks by means of decision trees, and we extend it to include ontologies modeling domain knowledge in the process of generating explanations. We present the results of a user study that measures the understandability of decision trees in domains where explanations are critical, namely, in finance and medicine. Our study shows that decision trees taking into account domain knowledge during generation are more understandable than those generated without the use of ontologies." }, { "instance_id": "R69680xR69611", "comparison_id": "R69680", "paper_id": "R69611", "text": "Algorithmic transparency of conversational agents A lack of algorithmic transparency is a major barrier to the adoption of artificial intelligence technologies within contexts which require high risk and high consequence decision making. In this paper we present a framework for providing transparency of algorithmic processes. 
We include important considerations not identified in research to date for the high risk and high consequence context of defence intelligence analysis. To demonstrate the core concepts of our framework we explore an example application (a conversational agent for knowledge exploration) which demonstrates shared human-machine reasoning in a critical decision making scenario. We include new findings from interviews with a small number of analysts and recommendations for future research." }, { "instance_id": "R69680xR69630", "comparison_id": "R69680", "paper_id": "R69630", "text": "Deep knowledge-aware network for news recommendation Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. To solve the above problem, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users\u2019 diverse interests, we also design an attention module in DKN to dynamically aggregate a user\u2019s history with respect to current candidate news. 
Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN." }, { "instance_id": "R69680xR69653", "comparison_id": "R69680", "paper_id": "R69653", "text": "Using taxonomies to facilitate the analysis of the association rules The Data Mining process enables the end users to analyze, understand and use the extracted knowledge in an intelligent system or to support in the decision-making processes. However, many algorithms used in the process encounter large quantities of patterns, complicating the analysis of the patterns. This fact occurs with association rules, a Data Mining technique that tries to identify intrinsic patterns in large data sets. A method that can help the analysis of the association rules is the use of taxonomies in the step of post-processing knowledge. In this paper, the GART algorithm is proposed, which uses taxonomies to generalize association rules, and the RulEE-GAR computational module, that enables the analysis of the generalized rules." }, { "instance_id": "R69680xR69643", "comparison_id": "R69680", "paper_id": "R69643", "text": "Enhancing explanations in recommender systems with knowledge graphs, Abstract Recommender systems are becoming must-have facilities on e-commerce websites to alleviate information overload and to improve user experience. One important component of such systems is the explanations of the recommendations. Existing explanation approaches have been classified by style and the classes are aligned with the ones for recommendation approaches, such as collaborative-based and content-based. Thanks to the semantically interconnected data, knowledge graphs have been boosting the development of content-based explanation approaches. However, most approaches focus on the exploitation of the structured semantic data to which recommended items are linked (e.g. 
actor, director, genre for movies). In this paper, we address the under-studied problem of leveraging knowledge graphs to explain the recommendations with items\u2019 unstructured textual description data. We point out 3 shortcomings of the state of the art entity-based explanation approach: absence of entity filtering, lack of intelligibility and poor user-friendliness. Accordingly, 3 novel approaches are proposed to alleviate these shortcomings. The first approach leverages a DBpedia category tree for filtering out incorrect and irrelevant entities. The second approach increases the intelligibility of entities with the classes of an integrated ontology (DBpedia, schema.org and YAGO). The third approach explains the recommendations with the best sentences from the textual descriptions selected by means of the entities. We showcase our approaches within a tourist tour recommendation explanation scenario and present a thorough face-to-face user study with a real commercial dataset containing 1310 tours in 106 countries. We showed the advantages of the proposed explanation approaches on five quality aspects: intelligibility, effectiveness, efficiency, relevance and satisfaction." }, { "instance_id": "R69680xR69555", "comparison_id": "R69680", "paper_id": "R69555", "text": "Explaining trained neural networks with semantic web technologies The ever increasing prevalence of publicly available structured data on the World Wide Web enables new applications in a variety of domains. In this paper, we provide a conceptual approach that leverages such data in order to explain the input-output behavior of trained artificial neural networks. We apply existing Semantic Web technologies in order to provide an experimental proof of concept." }, { "instance_id": "R69680xR69655", "comparison_id": "R69680", "paper_id": "R69655", "text": "Ontology-enhanced association mining, The roles of ontologies in KDD are potentially manifold. 
We track them through different phases of the KDD process, from data understanding through task setting to mining result interpretation and sharing over the semantic web. The underlying KDD paradigm is association mining tailored to our 4ft-Miner tool. Experience from two different application domains-medicine and sociology-is presented throughout the paper. Envisaged software support for prior knowledge exploitation via customisation of an existing user-oriented KDD tool is also discussed." }, { "instance_id": "R69680xR69562", "comparison_id": "R69680", "paper_id": "R69562", "text": "The more you know: Using knowledge graphs for image classification One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification." }, { "instance_id": "R69680xR69584", "comparison_id": "R69680", "paper_id": "R69584", "text": "Exploring knowledge graphs in an interpretable composite approach for text entailment, Recognizing textual entailment is a key task for many semantic applications, such as Question Answering, Text Summarization, and Information Extraction, among others. 
Entailment scenarios can range from a simple syntactic variation to more complex semantic relationships between pieces of text, but most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. We propose a composite approach for recognizing text entailment which analyzes the entailment pair to decide whether it must be resolved syntactically or semantically. We also make the answer interpretable: whenever an entailment is solved semantically, we explore a knowledge base composed of structured lexical definitions to generate natural language humanlike justifications, explaining the semantic relationship holding between the pieces of text. Besides outperforming well-established entailment algorithms, our composite approach gives an important step towards Explainable AI, using world knowledge to make the semantic reasoning process explicit and understandable." }, { "instance_id": "R69680xR69635", "comparison_id": "R69680", "paper_id": "R69635", "text": "Knowledge-aware autoencoders for explainable recommender systems Recommender Systems have been widely used to help users in finding what they are looking for thus tackling the information overload problem. After several years of research and industrial findings looking after better algorithms to improve accuracy and diversity metrics, explanation services for recommendation are gaining momentum as a tool to provide a human-understandable feedback to results computed, in most of the cases, by black-box machine learning techniques. As a matter of fact, explanations may guarantee users satisfaction, trust, and loyalty in a system. In this paper, we evaluate how different information encoded in a Knowledge Graph are perceived by users when they are adopted to show them an explanation. More precisely, we compare how the use of categorical information, factual one or a mixture of them both in building explanations, affect explanatory criteria for a recommender system. 
Experimental results are validated through an A/B testing platform which uses a recommendation engine based on a Semantics-Aware Autoencoder to build users profiles which are in turn exploited to compute recommendation lists and to provide an explanation." }, { "instance_id": "R69680xR69619", "comparison_id": "R69680", "paper_id": "R69619", "text": "Knowledge-driven stock trend prediction and explanation via temporal convolutional network Deep neural networks have achieved promising results in stock trend prediction. However, most of these models have two common drawbacks, including (i) current methods are not sensitive enough to abrupt changes of stock trend, and (ii) forecasting results are not interpretable for humans. To address these two problems, we propose a novel Knowledge-Driven Temporal Convolutional Network (KDTCN) for stock trend prediction and explanation. Firstly, we extract structured events from financial news, and utilize external knowledge from knowledge graph to obtain event embeddings. Then, we combine event embeddings and price values together to forecast stock trend. We evaluate the prediction accuracy to show how knowledge-driven events work on abrupt changes. We also visualize the effect of events and linkage among events based on knowledge graph, to explain why knowledge-driven events are common sources of abrupt changes. Experiments demonstrate that KDTCN can (i) react to abrupt changes much faster and outperform state-of-the-art methods on stock datasets, as well as (ii) facilitate the explanation of prediction particularly with abrupt changes." 
}, { "instance_id": "R69680xR69543", "comparison_id": "R69680", "paper_id": "R69543", "text": "How a general-purpose common- sense ontology can improve performance of learning-based image retrieval The knowledge representation community has built general-purpose ontologies which contain large amounts of commonsense knowledge over relevant aspects of the world, including useful visual information, e.g.: \"a ball is used by a football player\", \"a tennis player is located at a tennis court\". Current state-of-the-art approaches for visual recognition do not exploit these rule-based knowledge sources. Instead, they learn recognition models directly from training examples. In this paper, we study how general-purpose ontologies\u2014specifically, MIT's ConceptNet ontology\u2014can improve the performance of state-of-the-art vision systems. As a testbed, we tackle the problem of sentence-based image retrieval. Our retrieval approach incorporates knowledge from ConceptNet on top of a large pool of object detectors derived from a deep learning technique. In our experiments, we show that ConceptNet can improve performance on a common benchmark dataset. Key to our performance is the use of the ESPGAME dataset to select visually relevant relations from ConceptNet. Consequently, a main conclusion of this work is that general-purpose commonsense ontologies improve performance on visual reasoning tasks when properly filtered to select meaningful visual relations." }, { "instance_id": "R69680xR69673", "comparison_id": "R69680", "paper_id": "R69673", "text": "Using linked data to interpret tables Vast amounts of information is available in structured forms like spreadsheets, database relations, and tables found in documents and on the Web. We describe an approach that uses linked data to interpret such tables and associate their components with nodes in a reference linked data collection. Our proposed framework assigns a class (i.e. 
type) to table columns, links table cells to entities, and inferred relations between columns to properties. The resulting interpretation can be used to annotate tables, confirm existing facts in the linked data collection, and propose new facts to be added. Our implemented prototype uses DBpedia as the linked data collection and Wikitology for background knowledge. We evaluated its performance using a collection of tables from Google Squared, Wikipedia and the Web." }, { "instance_id": "R69680xR69648", "comparison_id": "R69680", "paper_id": "R69648", "text": "Knowledge engineering tools for reasoning with scientific observations and interpretations: a neural connectivity use case Abstract Background We address the goal of curating observations from published experiments in a generalizable form; reasoning over these observations to generate interpretations and then querying this interpreted knowledge to supply the supporting evidence. We present web-application software as part of the 'BioScholar' project (R01-GM083871) that fully instantiates this process for a well-defined domain: using tract-tracing experiments to study the neural connectivity of the rat brain. Results The main contribution of this work is to provide the first instantiation of a knowledge representation for experimental observations called 'Knowledge Engineering from Experimental Design' (KEfED) based on experimental variables and their interdependencies. The software has three parts: (a) the KEfED model editor - a design editor for creating KEfED models by drawing a flow diagram of an experimental protocol; (b) the KEfED data interface - a spreadsheet-like tool that permits users to enter experimental data pertaining to a specific model; (c) a 'neural connection matrix' interface that presents neural connectivity as a table of ordinal connection strengths representing the interpretations of tract-tracing data. 
This tool also allows the user to view experimental evidence pertaining to a specific connection. BioScholar is built in Flex 3.5. It uses Persevere (a noSQL database) as a flexible data store and PowerLoom \u00ae (a mature First Order Logic reasoning system) to execute queries using spatial reasoning over the BAMS neuroanatomical ontology. Conclusions We first introduce the KEfED approach as a general approach and describe its possible role as a way of introducing structured reasoning into models of argumentation within new models of scientific publication. We then describe the design and implementation of our example application: the BioScholar software. This is presented as a possible biocuration interface and supplementary reasoning toolkit for a larger, more specialized bioinformatics system: the Brain Architecture Management System (BAMS)." }, { "instance_id": "R70287xR70054", "comparison_id": "R70287", "paper_id": "R70054", "text": "Design, synthesis and antiviral efficacy of a series of potent chloropyridyl ester-derived SARS-CoV 3CLpro inhibitors Abstract Design, synthesis and biological evaluation of a series of 5-chloropyridine ester-derived severe acute respiratory syndrome-coronavirus chymotrypsin-like protease inhibitors is described. Position of the carboxylate functionality is critical to potency. Inhibitor 10 with a 5-chloropyridinyl ester at position 4 of the indole ring is the most potent inhibitor with a SARS-CoV 3CLpro IC50 value of 30nM and an antiviral EC50 value of 6.9\u03bcM. Molecular docking studies have provided possible binding modes of these inhibitors." 
}, { "instance_id": "R70287xR51252", "comparison_id": "R70287", "paper_id": "R51252", "text": "Identification of inhibitors of SARS-CoV-2 in-vitro cellular toxicity in human (Caco-2) cells using a large scale drug repurposing collection Abstract To identify possible candidates for progression towards clinical studies against SARS-CoV-2, we screened a well-defined collection of 5632 compounds including 3488 compounds which have undergone clinical investigations (marketed drugs, phases 1 -3, and withdrawn) across 600 indications. Compounds were screened for their inhibition of viral induced cytotoxicity using the human epithelial colorectal adenocarcinoma cell line Caco-2 and a SARS-CoV-2 isolate. The primary screen of 5632 compounds gave 271 hits. A total of 64 compounds with IC50 <20 \u00b5M were identified, including 19 compounds with IC50 < 1 \u00b5M. Of this confirmed hit population, 90% have not yet been previously reported as active against SARS-CoV-2 in-vitro cell assays. Some 37 of the actives are launched drugs, 19 are in phases 1-3 and 10 pre-clinical. Several inhibitors were associated with modulation of host pathways including kinase signaling P53 activation, ubiquitin pathways and PDE activity modulation, with long chain acyl transferases were effective viral inhibitors." }, { "instance_id": "R70287xR69988", "comparison_id": "R70287", "paper_id": "R69988", "text": "Screening of electrophilic compounds yields an aziridinyl peptide as new active-site directed SARS-CoV main protease inhibitor Abstract The coronavirus main protease, Mpro, is considered a major target for drugs suitable to combat coronavirus infections including the severe acute respiratory syndrome (SARS). In this study, comprehensive HPLC- and FRET-substrate-based screenings of various electrophilic compounds were performed to identify potential Mpro inhibitors. The data revealed that the coronaviral main protease is inhibited by aziridine- and oxirane-2-carboxylates. 
Among the trans-configured aziridine-2,3-dicarboxylates the Gly-Gly-containing peptide 2c was found to be the most potent inhibitor." }, { "instance_id": "R70287xR70164", "comparison_id": "R70287", "paper_id": "R70164", "text": "2- and 3-Fluoro-3-deazaneplanocins, 2-fluoro-3-deazaaristeromycins, and 3-methyl-3-deazaneplanocin: Synthesis and antiviral properties The 3-deaza analogs of the naturally occurring adenine-based carbocyclic nucleosides aristeromycin and neplanocin possess biological properties that have not been optimized. In that direction, this paper reports the strategic placement of a fluorine atom at the C-2 and C-3 positions and a methyl at the C-3 site of the 3-deazaadenine ring of the aforementioned compounds. The synthesis and S-adenosylhomocysteine hydrolase inhibitory and antiviral properties of these targets are described. Some, but not all, compounds in this series showed significant activity toward herpes, arena, bunya, flavi, and orthomyxoviruses." }, { "instance_id": "R70287xR70079", "comparison_id": "R70287", "paper_id": "R70079", "text": "Synthesis and antiviral activity of a series of 1\u2032-substituted 4-aza-7,9-dideazaadenosine C-nucleosides Abstract A series of 1\u2032-substituted analogs of 4-aza-7,9-dideazaadenosine C-nucleoside were prepared and evaluated for the potential as antiviral agents. These compounds showed a broad range of inhibitory activity against various RNA viruses. In particular, the whole cell potency against HCV when R=CN was attributed to inhibition of HCV NS5B polymerase and intracellular concentration of the corresponding nucleoside triphosphate." }, { "instance_id": "R70287xR70030", "comparison_id": "R70287", "paper_id": "R70030", "text": "Synthesis and biological activity evaluation of 5-pyrazoline substituted 4-thiazolidinones Abstract A series of novel 5-pyrazoline substituted 4-thiazolidinones have been synthesized. 
Target compounds were evaluated for their anticancer activity in vitro within DTP NCI protocol. Among the tested compounds, the derivatives 4d and 4f were found to be the most active, which demonstrated certain sensitivity profile toward the leukemia subpanel cell lines with GI50 value ranges of 2.12\u20134.58 \u03bcM (4d) and 1.64\u20133.20 \u03bcM (4f). The screening of antitrypanosomal and antiviral activities of 5-(3-naphthalen-2-yl-5-aryl-4,5-dihydropyrazol-1-yl)-thiazolidine-2,4-diones was carried out with the promising influence of the mentioned compounds on Trypanosoma brucei, but minimal effect on SARS coronavirus and influenza types A and B viruses." }, { "instance_id": "R70287xR69983", "comparison_id": "R70287", "paper_id": "R69983", "text": "Identification, synthesis and evaluation of SARS-CoV and MERS-CoV 3C-like protease inhibitors Abstract Severe acute respiratory syndrome (SARS) led to a life-threatening form of atypical pneumonia in late 2002. Following that, Middle East Respiratory Syndrome (MERS-CoV) has recently emerged, killing about 36% of patients infected globally, mainly in Saudi Arabia and South Korea. Based on a scaffold we reported for inhibiting neuraminidase (NA), we synthesized the analogues and identified compounds with low micromolar inhibitory activity against 3CLpro of SARS-CoV and MERS-CoV. Docking studies show that a carboxylate present at either R1 or R4 destabilizes the oxyanion hole in the 3CLpro. Interestingly, 3f, 3g and 3m could inhibit both NA and 3CLpro and serve as a starting point to develop broad-spectrum antiviral agents." }, { "instance_id": "R70287xR70102", "comparison_id": "R70287", "paper_id": "R70102", "text": "Design, synthesis and evaluation of a series of acyclic fleximer nucleoside analogues with anti-coronavirus activity Abstract A series of doubly flexible nucleoside analogues were designed based on the acyclic sugar scaffold of acyclovir and the flex-base moiety found in the fleximers. 
The target compounds were evaluated for their antiviral potential and found to inhibit several coronaviruses. Significantly, compound 2 displayed selective antiviral activity (CC50 >3\u00d7 EC50) towards human coronavirus (HCoV)-NL63 and Middle East respiratory syndrome-coronavirus, but not severe acute respiratory syndrome-coronavirus. In the case of HCoV-NL63 the activity was highly promising with an EC50 <10\u03bcM and a CC50 >100\u03bcM. As such, these doubly flexible nucleoside analogues are viewed as a novel new class of drug candidates with potential for potent inhibition of coronaviruses." }, { "instance_id": "R70287xR69974", "comparison_id": "R70287", "paper_id": "R69974", "text": "Design and synthesis of new tripeptide-type SARS-CoV 3CL protease inhibitors containing an electrophilic arylketone moiety Abstract We describe here the design, synthesis and biological evaluation of a series of molecules toward the development of novel peptidomimetic inhibitors of SARS-CoV 3CLpro. A docking study involving binding between the initial lead compound 1 and the SARS-CoV 3CLpro motivated the replacement of a thiazole with a benzothiazole unit as a warhead moiety at the P1\u2032 site. This modification led to the identification of more potent derivatives, including 2i, 2k, 2m, 2o, and 2p, with IC50 or Ki values in the submicromolar to nanomolar range. In particular, compounds 2i and 2p exhibited the most potent inhibitory activities, with Ki values of 4.1 and 3.1 nM, respectively. The peptidomimetic compounds identified through this process are attractive leads for the development of potential therapeutic agents against SARS. 
The structural requirements of the peptidomimetics with potent inhibitory activities against SARS-CoV 3CLpro may be summarized as follows: (i) the presence of a benzothiazole warhead at the S1\u2032-position; (ii) hydrogen bonding capabilities at the cyclic lactam of the S1-site; (iii) appropriate stereochemistry and hydrophobic moiety size at the S2-site and (iv) a unique folding conformation assumed by the phenoxyacetyl moiety at the S4-site." }, { "instance_id": "R70287xR51373", "comparison_id": "R70287", "paper_id": "R51373", "text": "Identification of antiviral drug candidates against SARS-CoV-2 from FDA-approved drugs Abstract COVID-19 is an emerging infectious disease and was recently declared as a pandemic by WHO. Currently, there is no vaccine or therapeutic available for this disease. Drug repositioning represents the only feasible option to address this global challenge and a panel of 48 FDA-approved drugs that have been pre-selected by an assay of SARS-CoV was screened to identify potential antiviral drug candidates against SARS-CoV-2 infection. We found a total of 24 drugs which exhibited antiviral efficacy (0.1 \u03bcM < IC 50 < 10 \u03bcM) against SARS-CoV-2. In particular, two FDA-approved drugs - niclosamide and ciclesonide \u2013 were notable in some respects. These drugs will be tested in an appropriate animal model for their antiviral activities. In near future, these already FDA-approved drugs could be further developed following clinical trials in order to provide additional therapeutic options for patients with COVID-19." }, { "instance_id": "R70287xR70008", "comparison_id": "R70287", "paper_id": "R70008", "text": "Development of potent dipeptide-type SARS-CoV 3CL protease inhibitors with novel P3 scaffolds: Design, synthesis, biological evaluation, and docking studies Abstract We report the design and synthesis of a series of dipeptide-type inhibitors with novel P3 scaffolds that display potent inhibitory activity against SARS-CoV 3CLpro. 
A docking study involving binding between the dipeptidic lead compound 4 and 3CLpro suggested the modification of a structurally flexible P3 N-(3-methoxyphenyl)glycine with various rigid P3 moieties in 4. The modifications led to the identification of several potent derivatives, including 5c\u2013k and 5n with the inhibitory activities (K i or IC50) in the submicromolar to nanomolar range. Compound 5h, in particular, displayed the most potent inhibitory activity, with a K i value of 0.006 \u03bcM. This potency was 65-fold higher than the potency of the lead compound 4 (K i = 0.39 \u03bcM). In addition, the K i value of 5h was in very good agreement with the binding affinity (16 nM) observed in isothermal titration calorimetry (ITC). A SAR study around the P3 group in the lead 4 led to the identification of a rigid indole-2-carbonyl unit as one of the best P3 moieties (5c). Further optimization showed that a methoxy substitution at the 4-position on the indole unit was highly favorable for enhancing the inhibitory potency." }, { "instance_id": "R70287xR70068", "comparison_id": "R70287", "paper_id": "R70068", "text": "Identification of potential treatments for COVID-19 through artificial intelligence-enabled phenomic analysis of human cells infected with SARS-CoV-2 Abstract To identify potential therapeutic stop-gaps for SARS-CoV-2, we evaluated a library of 1,670 approved and reference compounds in an unbiased, cellular image-based screen for their ability to suppress the broad impacts of the SARS-CoV-2 virus on phenomic profiles of human renal cortical epithelial cells using deep learning. In our assay, remdesivir is the only antiviral tested with strong efficacy, neither chloroquine nor hydroxychloroquine have any beneficial effect in this human cell model, and a small number of compounds not currently being pursued clinically for SARS-CoV-2 have efficacy. 
We observed weak but beneficial class effects of \u03b2-blockers, mTOR/PI3K inhibitors and Vitamin D analogues and a mild amplification of the viral phenotype with \u03b2-agonists." }, { "instance_id": "R70287xR69954", "comparison_id": "R70287", "paper_id": "R69954", "text": "Virtual screening identification of novel severe acute respiratory syndrome 3C-like protease inhibitors and in vitro confirmation Abstract The 3C-like protease (3CLpro) of severe acute respiratory syndrome associated coronavirus (SARS-CoV) is vital for SARS-CoV replication and is a promising drug target. Structure based virtual screening of 308307 chemical compounds was performed using the computation tool Autodock 3.0.5 on a WISDOM Production Environment. The top 1468 ranked compounds with free binding energy ranging from \u221214.0 to \u221217.09kcalmol\u22121 were selected to check the hydrogen bond interaction with amino acid residues in the active site of 3CLpro. Fifty-three compounds from 35 main groups were tested in an in vitro assay for inhibition of 3CLpro expressed by Escherichia coli. Seven of the 53 compounds were selected; their IC50 ranged from 38.57\u00b12.41 to 101.38\u00b13.27\u03bcM. Two strong 3CLpro inhibitors were further identified as competitive inhibitors of 3CLpro with K i values of 9.11\u00b11.6 and 9.93\u00b10.44\u03bcM. Hydrophobic and hydrogen bond interactions of compound with amino acid residues in the active site of 3CLpro were also identified." }, { "instance_id": "R70287xR70125", "comparison_id": "R70287", "paper_id": "R70125", "text": "Structure-Based Design, Synthesis, and Biological Evaluation of a Series of Novel and Reversible Inhibitors for the Severe Acute Respiratory Syndrome\u2212Coronavirus Papain-Like Protease We describe here the design, synthesis, molecular modeling, and biological evaluation of a series of small molecule, nonpeptide inhibitors of SARS-CoV PLpro. 
Our initial lead compound was identified via high-throughput screening of a diverse chemical library. We subsequently carried out structure-activity relationship studies and optimized the lead structure to potent inhibitors that have shown antiviral activity against SARS-CoV infected Vero E6 cells. Upon the basis of the X-ray crystal structure of inhibitor 24-bound to SARS-CoV PLpro, a drug design template was created. Our structure-based modification led to the design of a more potent inhibitor, 2 (enzyme IC(50) = 0.46 microM; antiviral EC(50) = 6 microM). Interestingly, its methylamine derivative, 49, displayed good enzyme inhibitory potency (IC(50) = 1.3 microM) and the most potent SARS antiviral activity (EC(50) = 5.2 microM) in the series. We have carried out computational docking studies and generated a predictive 3D-QSAR model for SARS-CoV PLpro inhibitors." }, { "instance_id": "R70287xR69966", "comparison_id": "R70287", "paper_id": "R69966", "text": "Synthesis, docking studies, and evaluation of pyrimidines as inhibitors of SARS-CoV 3CL protease Abstract A series of 2-(benzylthio)-6-oxo-4-phenyl-1,6-dihydropyrimidine as SARS-CoV 3CL protease inhibitors were developed and their potency was evaluated by in vitro protease inhibitory assays. Two candidates had encouraging results for the development of new anti-SARS compounds." }, { "instance_id": "R70287xR51231", "comparison_id": "R70287", "paper_id": "R51231", "text": "Broad anti-coronaviral activity of FDA approved drugs against SARS-CoV-2 in vitro and SARS-CoV in vivo Abstract SARS-CoV-2 emerged in China at the end of 2019 and has rapidly become a pandemic with roughly 2.7 million recorded COVID-19 cases and greater than 189,000 recorded deaths by April 23rd, 2020 (www.WHO.org). There are no FDA approved antivirals or vaccines for any coronavirus, including SARS-CoV-2. Current treatments for COVID-19 are limited to supportive therapies and off-label use of FDA approved drugs. 
Rapid development and human testing of potential antivirals is greatly needed. A quick way to test compounds with potential antiviral activity is through drug repurposing. Numerous drugs are already approved for human use and subsequently there is a good understanding of their safety profiles and potential side effects, making them easier to fast-track to clinical studies in COVID-19 patients. Here, we present data on the antiviral activity of 20 FDA approved drugs against SARS-CoV-2 that also inhibit SARS-CoV and MERS-CoV. We found that 17 of these inhibit SARS-CoV-2 at a range of IC50 values at non-cytotoxic concentrations. We directly follow up with seven of these to demonstrate all are capable of inhibiting infectious SARS-CoV-2 production. Moreover, we have evaluated two of these, chloroquine and chlorpromazine, in vivo using a mouse-adapted SARS-CoV model and found both drugs protect mice from clinical disease." }, { "instance_id": "R70584xR70550", "comparison_id": "R70584", "paper_id": "R70550", "text": "Physiologically-based, predictive analytics using the heart-rate-to-Systolic-Ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS BACKGROUND Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically-based, predictive analytical strategies has not been fully explored. We hypothesize assessment of heart-rate-to-systolic-ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. METHODS We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. 
RESULTS Variations in three presenting variables, heart rate, systolic BP and temperature were determined to be primary early predictors of sepsis with a 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. CONCLUSION Physiologically-based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions." }, { "instance_id": "R70584xR70552", "comparison_id": "R70584", "paper_id": "R70552", "text": "Development and Evaluation of a Machine Learning Model for the Early Identification of Patients at Risk for Sepsis Study objective: The Third International Consensus Definitions (Sepsis\u20103) Task Force recommended the use of the quick Sequential [Sepsis\u2010related] Organ Failure Assessment (qSOFA) score to screen patients for sepsis outside of the ICU. However, subsequent studies raise concerns about the sensitivity of qSOFA as a screening tool. We aim to use machine learning to develop a new sepsis screening tool, the Risk of Sepsis (RoS) score, and compare it with a slate of benchmark sepsis\u2010screening tools, including the Systemic Inflammatory Response Syndrome, Sequential Organ Failure Assessment (SOFA), qSOFA, Modified Early Warning Score, and National Early Warning Score. Methods: We used retrospective electronic health record data from adult patients who presented to 49 urban community hospital emergency departments during a 22\u2010month period (N=2,759,529). We used the Rhee clinical surveillance criteria as our standard definition of sepsis and as the primary target for developing our model. The data were randomly split into training and test cohorts to derive and then evaluate the model. 
A feature selection process was carried out in 3 stages: first, we reviewed existing models for sepsis screening; second, we consulted with local subject matter experts; and third, we used a supervised machine learning method called gradient boosting. Key metrics of performance included alert rate, area under the receiver operating characteristic curve, sensitivity, specificity, and precision. Performance was assessed at 1, 3, 6, 12, and 24 hours after an index time. Results: The RoS score was the most discriminant screening tool at all time thresholds (area under the receiver operating characteristic curve 0.93 to 0.97). Compared with the next most discriminant benchmark (Sequential Organ Failure Assessment), RoS was significantly more sensitive (67.7% versus 49.2% at 1 hour and 84.6% versus 80.4% at 24 hours) and precise (27.6% versus 12.2% at 1 hour and 28.8% versus 11.4% at 24 hours). The sensitivity of qSOFA was relatively low (3.7% at 1 hour and 23.5% at 24 hours). Conclusion: In this retrospective study, RoS was more timely and discriminant than benchmark screening tools, including those recommended by the Sepsis\u20103 Task Force. Further study is needed to validate the RoS score at independent sites." }, { "instance_id": "R70584xR70558", "comparison_id": "R70584", "paper_id": "R70558", "text": "Septic shock prediction for ICU patients via coupled HMM walking on sequential contrast patterns BACKGROUND AND OBJECTIVE Critical care patient events like sepsis or septic shock in intensive care units (ICUs) are dangerous complications which can cause multiple organ failures and eventual death. Preventive prediction of such events will allow clinicians to stage effective interventions for averting these critical complications. METHODS It is widely understood that physiological conditions of patients on variables such as blood pressure and heart rate are suggestive to gradual changes over a certain period of time, prior to the occurrence of a septic shock. 
This work investigates the performance of a novel machine learning approach for the early prediction of septic shock. The approach combines highly informative sequential patterns extracted from multiple physiological variables and captures the interactions among these patterns via coupled hidden Markov models (CHMM). In particular, the patterns are extracted from three non-invasive waveform measurements: the mean arterial pressure levels, the heart rates and respiratory rates of septic shock patients from a large clinical ICU dataset called MIMIC-II. EVALUATION AND RESULTS For baseline estimations, SVM and HMM models on the continuous time series data for the given patients, using MAP (mean arterial pressure), HR (heart rate), and RR (respiratory rate) are employed. Single channel patterns based HMM (SCP-HMM) and multi-channel patterns based coupled HMM (MCP-HMM) are compared against baseline models using 5-fold cross validation accuracies over multiple rounds. Particularly, the results of MCP-HMM are statistically significant having a p-value of 0.0014, in comparison to baseline models. Our experiments demonstrate a strong competitive accuracy in the prediction of septic shock, especially when the interactions between the multiple variables are coupled by the learning model. CONCLUSIONS It can be concluded that the novelty of the approach, stems from the integration of sequence-based physiological pattern markers with the sequential CHMM model to learn dynamic physiological behavior, as well as from the coupling of such patterns to build powerful risk stratification models for septic shock patients." }, { "instance_id": "R70584xR70548", "comparison_id": "R70584", "paper_id": "R70548", "text": "Machine-Learning-Based Laboratory Developed Test for the Diagnosis of Sepsis in High-Risk Patients Sepsis, a dysregulated host response to infection, is a major health burden in terms of both mortality and cost. 
The difficulties clinicians face in diagnosing sepsis, alongside the insufficiencies of diagnostic biomarkers, motivate the present study. This work develops a machine-learning-based sepsis diagnostic for a high-risk patient group, using a geographically and institutionally diverse collection of nearly 500,000 patient health records. Using only a minimal set of clinical variables, our diagnostics outperform common severity scoring systems and sepsis biomarkers and benefit from being available immediately upon ordering." }, { "instance_id": "R70584xR70560", "comparison_id": "R70584", "paper_id": "R70560", "text": "Physiological monitoring for critically ill patients: testing a predictive model for the early detection of sepsis \u2022 Objective To assess the predictive value for the early detection of sepsis of the physiological monitoring parameters currently recommended by the Surviving Sepsis Campaign. \u2022 Methods The Project IMPACT data set was used to assess whether the physiological parameters of heart rate, mean arterial pressure, body temperature, and respiratory rate can be used to distinguish between critically ill adult patients with and without sepsis in the first 24 hours of admission to an intensive care unit. \u2022 Results All predictor variables used in the analyses differed significantly between patients with sepsis and patients without sepsis. However, only 2 of the predictor variables, mean arterial pressure and high temperature, were independently associated with sepsis. In addition, the temperature mean for hypothermia was significantly lower in patients without sepsis. The odds ratio for having sepsis was 2.126 for patients with a temperature of 38\u00b0C or higher, 3.874 for patients with a mean arterial blood pressure of less than 70 mm Hg, and 4.63 times greater for patients who had both of these conditions. \u2022 Conclusions The results support the use of some of the guidelines of the Surviving Sepsis Campaign. 
However, the lowest mean temperature was significantly less for patients without sepsis than for patients with sepsis, a finding that calls into question the clinical usefulness of using hypothermia as an early predictor of sepsis. Alone the group of variables used is not sufficient for discriminating between critically ill patients with and without sepsis." }, { "instance_id": "R70584xR70546", "comparison_id": "R70584", "paper_id": "R70546", "text": "Machine Learning Models for Analysis of Vital Signs Dynamics: A Case for Sepsis Onset Prediction Objective . Achieving accurate prediction of sepsis detection moment based on bedside monitor data in the intensive care unit (ICU). A good clinical outcome is more probable when onset is suspected and treated on time, thus early insight of sepsis onset may save lives and reduce costs. Methodology . We present a novel approach for feature extraction, which focuses on the hypothesis that unstable patients are more prone to develop sepsis during ICU stay. These features are used in machine learning algorithms to provide a prediction of a patient\u2019s likelihood to develop sepsis during ICU stay, hours before it is diagnosed. Results . Five machine learning algorithms were implemented using R software packages. The algorithms were trained and tested with a set of 4 features which represent the variability in vital signs. These algorithms aimed to calculate a patient\u2019s probability to become septic within the next 4 hours, based on recordings from the last 8 hours. The best area under the curve (AUC) was achieved with Support Vector Machine (SVM) with radial basis function, which was 88.38%. Conclusions . The high level of predictive accuracy along with the simplicity and availability of input variables present great potential if applied in ICUs. Variability of a patient\u2019s vital signs proves to be a good indicator of one\u2019s chance to become septic during ICU stay." 
}, { "instance_id": "R70584xR70554", "comparison_id": "R70584", "paper_id": "R70554", "text": "Prediction of Sepsis in the Intensive Care Unit With Minimal Electronic Health Record Data: A Machine Learning Approach Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. 
Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data." }, { "instance_id": "R70605xR70591", "comparison_id": "R70605", "paper_id": "R70591", "text": "Improving Risk Prediction of Clostridium Difficile Infection Using Temporal Event-Pairs Clostridium Difficile Infection (CDI) is a contagious healthcare-associated infection that imposes a significant burden on the healthcare system. In 2011 alone, half a million patients suffered from CDI in the United States, 29,000 dying within 30 days of diagnosis. 
Determining which hospital patients are at risk for developing CDI is critical to helping healthcare workers take timely measures to prevent or detect and treat this infection. We improve the state of the art of CDI risk prediction by designing an ensemble logistic regression classifier that given partial patient visit histories, outputs the risk of patients acquiring CDI during their current hospital visit. The novelty of our approach lies in the representation of each patient visit as a collection of co-occurring and chronologically ordered pairs of events. This choice is motivated by our hypothesis that CDI risk is influenced not just by individual events (e.g., Being prescribed a first generation cephalosporin antibiotic), but by the temporal ordering of individual events (e.g., Antibiotic prescription followed by transfer to a certain hospital unit). While this choice explodes the number of features, we use a randomized greedy feature selection algorithm followed by BIC minimization to reduce the dimensionality of the feature space, while retaining the most relevant features. We apply our approach to a rich dataset from the University of Iowa Hospitals and Clinics (UIHC), curated from diverse sources, consisting of 200,000 visits (30,000 per year, 2006-2011) involving 125,000 unique patients, 2 million diagnoses, 8 million prescriptions, 400,000 room transfers spanning a hospital with 700 patient rooms and 200 units. Our approach to classification produces better risk predictions (AUC) than existing risk estimators for CDI, even when trained just on data available at patient admission. It also identifies novel risk factors for CDI that are combinations of co-occurring and chronologically ordered events." 
}, { "instance_id": "R70605xR70597", "comparison_id": "R70605", "paper_id": "R70597", "text": "CREST - Risk Prediction for Clostridium Difficile Infection Using Multimodal Data Mining Clostridium difficile infection (CDI) is a common hospital acquired infection with a $1B annual price tag that resulted in ~30,000 deaths in 2011. Studies have shown that early detection of CDI significantly improves the prognosis for the individual patient and reduces the overall mortality rates and associated medical costs. In this paper, we present CREST: CDI Risk Estimation, a data-driven framework for early and continuous detection of CDI in hospitalized patients. CREST uses a three-pronged approach for high accuracy risk prediction. First, CREST builds a rich set of highly predictive features from Electronic Health Records. These features include clinical and non-clinical phenotypes, key biomarkers from the patient\u2019s laboratory tests, synopsis features processed from time series vital signs, and medical history mined from clinical notes. Given the inherent multimodality of clinical data, CREST bins these features into three sets: time-invariant, time-variant, and temporal synopsis features. CREST then learns classifiers for each set of features, evaluating their relative effectiveness. Lastly, CREST employs a second-order meta learning process to ensemble these classifiers for optimized estimation of the risk scores. We evaluate the CREST framework using publicly available critical care data collected for over 12 years from Beth Israel Deaconess Medical Center, Boston. Our results demonstrate that CREST predicts the probability of a patient acquiring CDI with an AUC of 0.76 five days prior to diagnosis. This value increases to 0.80 and even 0.82 for prediction two days and one day prior to diagnosis, respectively." 
}, { "instance_id": "R70605xR70593", "comparison_id": "R70605", "paper_id": "R70593", "text": "A Multi-Center Prospective Derivation and Validation of a Clinical Prediction Tool for Severe Clostridium difficile Infection Background and Aims Prediction of severe clinical outcomes in Clostridium difficile infection (CDI) is important to inform management decisions for optimum patient care. Currently, treatment recommendations for CDI vary based on disease severity but validated methods to predict severe disease are lacking. The aim of the study was to derive and validate a clinical prediction tool for severe outcomes in CDI. Methods A cohort totaling 638 patients with CDI was prospectively studied at three tertiary care clinical sites (Boston, Dublin and Houston). The clinical prediction rule (CPR) was developed by multivariate logistic regression analysis using the Boston cohort and the performance of this model was then evaluated in the combined Houston and Dublin cohorts. Results The CPR included the following three binary variables: age \u2265 65 years, peak serum creatinine \u22652 mg/dL and peak peripheral blood leukocyte count of \u226520,000 cells/\u03bcL. The Clostridium difficile severity score (CDSS) correctly classified 76.5% (95% CI: 70.87-81.31) and 72.5% (95% CI: 67.52-76.91) of patients in the derivation and validation cohorts, respectively. In the validation cohort, CDSS scores of 0, 1, 2 or 3 were associated with severe clinical outcomes of CDI in 4.7%, 13.8%, 33.3% and 40.0% of cases respectively. Conclusions We prospectively derived and validated a clinical prediction rule for severe CDI that is simple, reliable and accurate and can be used to identify high-risk patients most likely to benefit from measures to prevent complications of CDI." }, { "instance_id": "R70605xR70603", "comparison_id": "R70605", "paper_id": "R70603", "text": "Learning Data-Driven Patient Risk Stratification Models for Clostridium difficile Abstract Background. 
Although many risk factors are well known, Clostridium difficile infection (CDI) continues to be a significant problem throughout the world. The purpose of this study was to develop and validate a data-driven, hospital-specific risk stratification procedure for estimating the probability that an inpatient will test positive for C difficile. Methods. We consider electronic medical record (EMR) data from patients admitted for \u226524 hours to a large urban hospital in the U.S. between April 2011 and April 2013. Predictive models were constructed using L2-regularized logistic regression and data from the first year. The number of observational variables considered varied from a small set of well known risk factors readily available to a physician to over 10 000 variables automatically extracted from the EMR. Each model was evaluated on holdout admission data from the following year. A total of 34 846 admissions with 372 cases of CDI was used to train the model. Results. Applied to the separate validation set of 34 722 admissions with 355 cases of CDI, the model that made use of the additional EMR data yielded an area under the receiver operating characteristic curve (AUROC) of 0.81 (95% confidence interval [CI], .79\u2013.83), and it significantly outperformed the model that considered only the small set of known clinical risk factors, AUROC of 0.71 (95% CI, .69\u2013.75). Conclusions. Automated risk stratification of patients based on the contents of their EMRs can be used to accurately identify a high-risk population of patients. The proposed method holds promise for enabling the selective allocation of interventions aimed at reducing the rate of CDI." }, { "instance_id": "R70605xR70585", "comparison_id": "R70605", "paper_id": "R70585", "text": "Development and validation of a Clostridium difficile infection risk prediction model Objective. 
To develop and validate a risk prediction model that could identify patients at high risk for Clostridium difficile infection (CDI) before they develop disease. Design and Setting. Retrospective cohort study in a tertiary care medical center. Patients. Patients admitted to the hospital for at least 48 hours during the calendar year 2003. Methods. Data were collected electronically from the hospital's Medical Informatics database and analyzed with logistic regression to determine variables that best predicted patients' risk for development of CDI. Model discrimination and calibration were calculated. The model was bootstrapped 500 times to validate the predictive accuracy. A receiver operating characteristic curve was calculated to evaluate potential risk cutoffs. Results. A total of 35,350 admitted patients, including 329 with CDI, were studied. Variables in the risk prediction model were age, CDI pressure, times admitted to hospital in the previous 60 days, modified Acute Physiology Score, days of treatment with high-risk antibiotics, whether albumin level was low, admission to an intensive care unit, and receipt of laxatives, gastric acid suppressors, or antimotility drugs. The calibration and discrimination of the model were very good to excellent (C index, 0.88; Brier score, 0.009). Conclusions. The CDI risk prediction model performed well. Further study is needed to determine whether it could be used in a clinical setting to prevent CDI-associated outcomes and reduce costs." }, { "instance_id": "R70605xR70599", "comparison_id": "R70605", "paper_id": "R70599", "text": "Waterlow score to predict patients at risk of developing Clostridium difficile-associated disease This study describes the development and testing of an assessment tool to predict the risk of patients developing Clostridium difficile-associated disease (CDAD). 
The three phases of the study include the development of the tool, prospective testing of the validity of the tool using 1468 patients in a medical assessment unit and external retrospective testing using data from 29 425 patients. In the first phase of the study, receiver operating characteristic (ROC) analysis identified the Waterlow assessment score as having the ability to predict CDAD (area under the curve: 0.827). The Waterlow tool was then tested prospectively with 1468 patients admitted to a medical assessment unit. A total of 1385 patients (94%) had a Waterlow score <20 and 83 patients (6%) had a Waterlow score of \u226520. After a three-month follow-up, six patients in the low Waterlow score group developed CDAD (0.4%) and 14 patients in the high score group developed CDAD (17%). The sensitivity and specificity of the Waterlow score to predict the risk of developing CDAD were 70% and 95%, respectively. Similar results were obtained when the tool was tested retrospectively on a large external patient data set. The Waterlow score appears to predict patients' risk of developing CDAD and although it did not identify all cases, it highlighted a small group of patients who had a disproportionately large number of CDAD cases. The Waterlow score can be used to target patients most at risk of developing CDAD." }, { "instance_id": "R70605xR70589", "comparison_id": "R70605", "paper_id": "R70589", "text": "Electronic health record-based detection of risk factors for Clostridium difficile infection relapse Objective. A major challenge in treating Clostridium difficile infection (CDI) is relapse. Many new therapies are being developed to help prevent this outcome. We sought to establish risk factors for relapse and determine whether fields available in an electronic health record (EHR) could be used to identify high-risk patients for targeted relapse prevention strategies. Design. Retrospective cohort study. Setting. 
Large clinical data warehouse at a 4-hospital healthcare organization. Participants. Data were gathered from January 2006 through October 2010. Subjects were all inpatient episodes of a positive C. difficile test where patients were available for 56 days of follow-up. Methods. Relapse was defined as another positive test between 15 and 56 days after the initial test. Multivariable regression was performed to identify factors independently associated with CDI relapse. Results. Eight hundred twenty-nine episodes met eligibility criteria, and 198 resulted in relapse (23.9%). In the final multivariable analysis, risk of relapse was associated with age (odds ratio [OR], 1.02 per year [95% confidence interval (CI), 1.01\u20131.03]), fluoroquinolone exposure in the 90 days before diagnosis (OR, 1.58 [95% CI, 1.11\u20132.26]), intensive care unit stay in the 30 days before diagnosis (OR, 0.47 [95% CI, 0.30\u20130.75]), cephalosporin (OR, 1.80 [95% CI, 1.19\u20132.71]), proton pump inhibitor (PPI; OR, 1.55 [95% CI, 1.05\u20132.29]), and metronidazole exposure after diagnosis (OR, 2.74 [95% CI, 1.64\u20134.60]). A prediction model tuned to ensure a 50% probability of relapse would flag 14.6% of CDI episodes. Conclusions. Data from a comprehensive EHR can be used to identify patients at high risk for CDI relapse. Major risk factors include antibiotic and PPI exposure." }, { "instance_id": "R70630xR70616", "comparison_id": "R70630", "paper_id": "R70616", "text": "An Unsupervised Multivariate Time Series Kernel Approach for Identifying Patients with Surgical Site Infection from Blood Samples A large fraction of the electronic health records consists of clinical measurements collected over time, such as blood tests, which provide important information about the health status of a patient. These sequences of clinical measurements are naturally represented as time series, characterized by multiple variables and the presence of missing data, which complicate analysis. 
In this work, we propose a surgical site infection detection framework for patients undergoing colorectal cancer surgery that is completely unsupervised, hence alleviating the problem of getting access to labelled training data. The framework is based on powerful kernels for multivariate time series that account for missing data when computing similarities. Our approach shows superior performance compared to baselines that have to resort to imputation techniques and performs comparably to a supervised classification baseline." }, { "instance_id": "R70630xR70628", "comparison_id": "R70630", "paper_id": "R70628", "text": "Classification of postoperative surgical site infections from blood measurements with missing data using recurrent neural networks Clinical measurements that can be represented as time series constitute an important fraction of the electronic health records and are often both uncertain and incomplete. Recurrent neural networks are a special class of neural networks that are particularly suitable to process time series data but, in their original formulation, cannot explicitly deal with missing data. In this paper, we explore imputation strategies for handling missing values in classifiers based on recurrent neural networks (RNNs) and apply a recently proposed recurrent architecture, the Gated Recurrent Unit with Decay, specifically designed to handle missing data. We focus on the problem of detecting surgical site infection in patients by analyzing time series of their blood sample measurements and we compare the results obtained with different RNN-based classifiers." }, { "instance_id": "R70630xR70624", "comparison_id": "R70630", "paper_id": "R70624", "text": "Predictive Modeling of Surgical Site Infections Using Sparse Laboratory Data As part of a data mining competition, a training and test set of laboratory test data about patients with and without surgical site infection (SSI) were provided. 
The task was to develop predictive models with the training set and identify patients with SSI in the unlabeled test set. Lab test results are vital resources that guide healthcare providers in making decisions about all aspects of surgical patient management. Many machine learning models were developed after pre-processing and imputing the lab test data and only the top performing methods are discussed. Overall, random forest (RF) algorithms performed better than support vector machines and logistic regression. Using a set of 74 lab tests, the RF model produced only 4 false positives in the training set and predicted 35 out of 50 SSI patients in the test set (Accuracy 0.86, Sensitivity 0.68, and Specificity 0.91). Optimal ways to address healthcare data quality concerns and imputation methods as well as newer generalizable algorithms need to be explored further to decipher new associations and knowledge among laboratory biomarkers and SSI." }, { "instance_id": "R70630xR70606", "comparison_id": "R70630", "paper_id": "R70606", "text": "Identification of surgical site infections using electronic health record data Highlights: The model correctly classified 89% of patients within 30 days after surgery. The comprehensive model was better than available methods. The best model classified 80% of SSIs and 90% of no\u2010SSIs correctly. The best model used 35 variables from the electronic health record. Background: The objective of this study was to develop an algorithm for identifying surgical site infections (SSIs) using independent variables from electronic health record data and outcomes from the American College of Surgeons National Surgical Quality Improvement Program to supplement manual chart review. Methods: We fit 3 models to data from patients undergoing operations at the University of Colorado Hospital between 2013 and 2015: a similar model reported previously in the literature, a comprehensive model with 136 possible predictors, and a combination of those. 
All models used a generalized linear model with a lasso penalty. Several techniques for handling imbalance in the outcome were also used: Youden's J statistic to optimize the probability cutoff and sampling techniques combined with Youden's J. The models were then tested on data from patients undergoing operations during 2016. Results: Two hundred thirty of 6,840 patients (3.4%) had an SSI. The comprehensive model fit to the full set of training data performed the best, achieving 90% specificity, 80% sensitivity, and an area under the receiver operating characteristic curve of 0.89. Conclusions: We identified a model that accurately identified SSIs. The framework presented can be easily implemented by other American College of Surgeons National Surgical Quality Improvement Program\u2010participating hospitals to develop models for enhancing surveillance of SSIs." }, { "instance_id": "R70630xR70612", "comparison_id": "R70630", "paper_id": "R70612", "text": "Prognostics of surgical site infections using dynamic health data Surgical Site Infection (SSI) is a national priority in healthcare research. Much research attention has been attracted to develop better SSI risk prediction models. However, most of the existing SSI risk prediction models are built on static risk factors such as comorbidities and operative factors. In this paper, we investigate the use of the dynamic wound data for SSI risk prediction. There have been emerging mobile health (mHealth) tools that can closely monitor the patients and generate continuous measurements of many wound-related variables and other evolving clinical variables. Since existing prediction models of SSI have quite limited capacity to utilize the evolving clinical data, we develop the corresponding solution to equip these mHealth tools with decision-making capabilities for SSI prediction with a seamless assembly of several machine learning models to tackle the analytic challenges arising from the spatial-temporal data. 
The basic idea is to exploit the low-rank property of the spatial-temporal data via the bilinear formulation, and further enhance it with automatic missing data imputation by the matrix completion technique. We derive efficient optimization algorithms to implement these models and demonstrate the superior performances of our new predictive model on a real-world dataset of SSI, compared to a range of state-of-the-art methods." }, { "instance_id": "R70630xR70620", "comparison_id": "R70630", "paper_id": "R70620", "text": "A Prognostic Model of Surgical Site Infection Using Daily Clinical Wound Assessment BACKGROUND Surgical site infection (SSI) remains a common, costly, and morbid health care-associated infection. Early detection can improve outcomes, yet previous risk models consider only baseline risk factors (BF) not incorporating a proximate and timely data source-the wound itself. We hypothesize that incorporation of daily wound assessment improves the accuracy of SSI identification compared with traditional BF alone. STUDY DESIGN A prospective cohort of 1,000 post open abdominal surgery patients at an academic teaching hospital were examined daily for serial features (SF), for example, wound characteristics and vital signs, in addition to standard BF, for example, wound class. Using supervised machine learning, we trained 3 Na\u00efve Bayes classifiers (BF, SF, and BF+SF) using patient data from 1 to 5 days before diagnosis to classify SSI on the following day. For comparison, we also created a simplified SF model that used logistic regression. Control patients without SSI were matched on 5 similar consecutive postoperative days to avoid confounding by length of stay. Accuracy, sensitivity/specificity, and area under the receiver operating characteristic curve were calculated on a training and hold-out testing set. RESULTS Of 851 patients, 19.4% had inpatient SSIs. 
Univariate analysis showed differences in C-reactive protein, surgery duration, and contamination, but no differences in American Society of Anesthesiologists scores, diabetes, or emergency surgery. The BF, SF, and BF+SF classifiers had area under the receiver operating characteristic curves of 0.67, 0.76, and 0.76, respectively. The best-performing classifier (SF) had optimal sensitivity of 0.80, specificity of 0.64, positive predictive value of 0.35, and negative predictive value of 0.93. Features most associated with subsequent SSI diagnosis were granulation degree, exudate amount, nasogastric tube presence, and heart rate. CONCLUSIONS Serial features provided moderate positive predictive value and high negative predictive value for early identification of SSI. Addition of baseline risk factors did not improve identification. Features of evolving wound infection are discernable before the day of diagnosis, based primarily on visual inspection." }, { "instance_id": "R70630xR70614", "comparison_id": "R70630", "paper_id": "R70614", "text": "Maximizing Interpretability and Cost-Effectiveness of Surgical Site Infection (SSI) Predictive Models Using Feature-Specific Regularized Logistic Regression on Preoperative Temporal Data This study describes a novel approach to solve the surgical site infection (SSI) classification problem. Feature engineering has traditionally been one of the most important steps in solving complex classification problems, especially in cases with temporal data. The described novel approach is based on abstraction of temporal data recorded in three temporal windows. Maximum likelihood L1-norm (lasso) regularization was used in penalized logistic regression to predict the onset of surgical site infection occurrence based on available patient blood testing results up to the day of surgery. 
Prior knowledge of predictors (blood tests) was integrated in the modelling by introduction of penalty factors depending on blood test prices and an early stopping parameter limiting the maximum number of selected features used in predictive modelling. Finally, solutions resulting in higher interpretability and cost-effectiveness were demonstrated. Using repeated holdout cross-validation, the baseline C-reactive protein (CRP) classifier achieved a mean AUC of 0.801, whereas our best full lasso model achieved a mean AUC of 0.956. Best model testing results were achieved for full lasso model with maximum number of features limited at 20 features with an AUC of 0.967. Presented models showed the potential to not only support domain experts in their decision making but could also prove invaluable for improvement in prediction of SSI occurrence, which may even help setting new guidelines in the field of preoperative SSI prevention and surveillance." }, { "instance_id": "R70630xR70608", "comparison_id": "R70630", "paper_id": "R70608", "text": "Automated Detection of Postoperative Surgical Site Infections Using Supervised Methods with Electronic Health Record Data The National Surgical Quality Improvement Project (NSQIP) is widely recognized as \u201cthe best in the nation\u201d surgical quality improvement resource in the United States. In particular, it rigorously defines postoperative morbidity outcomes, including surgical adverse events occurring within 30 days of surgery. Due to its manual yet expensive construction process, the NSQIP registry is of exceptionally high quality, but its high cost remains a significant bottleneck to NSQIP\u2019s wider dissemination. In this work, we propose an automated surgical adverse events detection tool, aimed at accelerating the process of extracting postoperative outcomes from medical charts. 
As a prototype system, we combined local EHR data with the NSQIP gold standard outcomes and developed machine learned models to retrospectively detect Surgical Site Infections (SSI), a particular family of adverse events that NSQIP extracts. The built models have high specificity (from 0.788 to 0.988) as well as very high negative predictive values (>0.98), reliably eliminating the vast majority of patients without SSI, thereby significantly reducing the NSQIP extractors\u2019 burden." }, { "instance_id": "R70632xR70595", "comparison_id": "R70632", "paper_id": "R70595", "text": "A Generalizable, Data-Driven Approach to Predict Daily Risk of Clostridium difficile Infection at Two Large Academic Health Centers OBJECTIVE An estimated 293,300 healthcare-associated cases of Clostridium difficile infection (CDI) occur annually in the United States. To date, research has focused on developing risk prediction models for CDI that work well across institutions. However, this one-size-fits-all approach ignores important hospital-specific factors. We focus on a generalizable method for building facility-specific models. We demonstrate the applicability of the approach using electronic health records (EHR) from the University of Michigan Hospitals (UM) and the Massachusetts General Hospital (MGH). METHODS We utilized EHR data from 191,014 adult admissions to UM and 65,718 adult admissions to MGH. We extracted patient demographics, admission details, patient history, and daily hospitalization details, resulting in 4,836 features from patients at UM and 1,837 from patients at MGH. We used L2 regularized logistic regression to learn the models, and we measured the discriminative performance of the models on held-out data from each hospital. RESULTS Using the UM and MGH test data, the models achieved area under the receiver operating characteristic curve (AUROC) values of 0.82 (95% confidence interval [CI], 0.80\u20130.84) and 0.75 ( 95% CI, 0.73\u20130.78), respectively. 
Some predictive factors were shared between the 2 models, but many of the top predictive factors differed between facilities. CONCLUSION A data-driven approach to building models for estimating daily patient risk for CDI was used to build institution-specific models at 2 large hospitals with different patient populations and EHR systems. In contrast to traditional approaches that focus on developing models that apply across hospitals, our generalizable approach yields risk-stratification models tailored to an institution. These hospital-specific models allow for earlier and more accurate identification of high-risk patients and better targeting of infection prevention strategies. Infect Control Hosp Epidemiol 2018;39:425\u2013433" }, { "instance_id": "R70632xR70591", "comparison_id": "R70632", "paper_id": "R70591", "text": "Improving Risk Prediction of Clostridium Difficile Infection Using Temporal Event-Pairs Clostridium Difficile Infection (CDI) is a contagious healthcare-associated infection that imposes a significant burden on the healthcare system. In 2011 alone, half a million patients suffered from CDI in the United States, 29,000 dying within 30 days of diagnosis. Determining which hospital patients are at risk for developing CDI is critical to helping healthcare workers take timely measures to prevent or detect and treat this infection. We improve the state of the art of CDI risk prediction by designing an ensemble logistic regression classifier that, given partial patient visit histories, outputs the risk of patients acquiring CDI during their current hospital visit. The novelty of our approach lies in the representation of each patient visit as a collection of co-occurring and chronologically ordered pairs of events. 
This choice is motivated by our hypothesis that CDI risk is influenced not just by individual events (e.g., being prescribed a first-generation cephalosporin antibiotic), but by the temporal ordering of individual events (e.g., antibiotic prescription followed by transfer to a certain hospital unit). While this choice explodes the number of features, we use a randomized greedy feature selection algorithm followed by BIC minimization to reduce the dimensionality of the feature space, while retaining the most relevant features. We apply our approach to a rich dataset from the University of Iowa Hospitals and Clinics (UIHC), curated from diverse sources, consisting of 200,000 visits (30,000 per year, 2006-2011) involving 125,000 unique patients, 2 million diagnoses, 8 million prescriptions, 400,000 room transfers spanning a hospital with 700 patient rooms and 200 units. Our approach to classification produces better risk predictions (AUC) than existing risk estimators for CDI, even when trained just on data available at patient admission. It also identifies novel risk factors for CDI that are combinations of co-occurring and chronologically ordered events." }, { "instance_id": "R70632xR70585", "comparison_id": "R70632", "paper_id": "R70585", "text": "Development and validation of a Clostridium difficile infection risk prediction model Objective. To develop and validate a risk prediction model that could identify patients at high risk for Clostridium difficile infection (CDI) before they develop disease. Design and Setting. Retrospective cohort study in a tertiary care medical center. Patients. Patients admitted to the hospital for at least 48 hours during the calendar year 2003. Methods. Data were collected electronically from the hospital's Medical Informatics database and analyzed with logistic regression to determine variables that best predicted patients' risk for development of CDI. Model discrimination and calibration were calculated. 
The model was bootstrapped 500 times to validate the predictive accuracy. A receiver operating characteristic curve was calculated to evaluate potential risk cutoffs. Results. A total of 35,350 admitted patients, including 329 with CDI, were studied. Variables in the risk prediction model were age, CDI pressure, times admitted to hospital in the previous 60 days, modified Acute Physiology Score, days of treatment with high-risk antibiotics, whether albumin level was low, admission to an intensive care unit, and receipt of laxatives, gastric acid suppressors, or antimotility drugs. The calibration and discrimination of the model were very good to excellent (C index, 0.88; Brier score, 0.009). Conclusions. The CDI risk prediction model performed well. Further study is needed to determine whether it could be used in a clinical setting to prevent CDI-associated outcomes and reduce costs." }, { "instance_id": "R70632xR70589", "comparison_id": "R70632", "paper_id": "R70589", "text": "Electronic health record-based detection of risk factors for Clostridium difficile infection relapse Objective. A major challenge in treating Clostridium difficile infection (CDI) is relapse. Many new therapies are being developed to help prevent this outcome. We sought to establish risk factors for relapse and determine whether fields available in an electronic health record (EHR) could be used to identify high-risk patients for targeted relapse prevention strategies. Design. Retrospective cohort study. Setting. Large clinical data warehouse at a 4-hospital healthcare organization. Participants. Data were gathered from January 2006 through October 2010. Subjects were all inpatient episodes of a positive C. difficile test where patients were available for 56 days of follow-up. Methods. Relapse was defined as another positive test between 15 and 56 days after the initial test. Multivariable regression was performed to identify factors independently associated with CDI relapse. Results. 
Eight hundred twenty-nine episodes met eligibility criteria, and 198 resulted in relapse (23.9%). In the final multivariable analysis, risk of relapse was associated with age (odds ratio [OR], 1.02 per year [95% confidence interval (CI), 1.01\u20131.03]), fluoroquinolone exposure in the 90 days before diagnosis (OR, 1.58 [95% CI, 1.11\u20132.26]), intensive care unit stay in the 30 days before diagnosis (OR, 0.47 [95% CI, 0.30\u20130.75]), cephalosporin (OR, 1.80 [95% CI, 1.19\u20132.71]), proton pump inhibitor (PPI; OR, 1.55 [95% CI, 1.05\u20132.29]), and metronidazole exposure after diagnosis (OR, 2.74 [95% CI, 1.64\u20134.60]). A prediction model tuned to ensure a 50% probability of relapse would flag 14.6% of CDI episodes. Conclusions. Data from a comprehensive EHR can be used to identify patients at high risk for CDI relapse. Major risk factors include antibiotic and PPI exposure." }, { "instance_id": "R70632xR70587", "comparison_id": "R70632", "paper_id": "R70587", "text": "Prediction of Recurrent Clostridium Difficile Infection Using Comprehensive Electronic Medical Records in an Integrated Healthcare Delivery System BACKGROUND Predicting recurrent Clostridium difficile infection (rCDI) remains difficult. METHODS. We employed a retrospective cohort design. Granular electronic medical record (EMR) data had been collected from patients hospitalized at 21 Kaiser Permanente Northern California hospitals. The derivation dataset (2007\u20132013) included data from 9,386 patients who experienced incident CDI (iCDI) and 1,311 who experienced their first CDI recurrences (rCDI). The validation dataset (2014) included data from 1,865 patients who experienced incident CDI and 144 who experienced rCDI. Using multiple techniques, including machine learning, we evaluated more than 150 potential predictors. Our final analyses evaluated 3 models with varying degrees of complexity and 1 previously published model. 
RESULTS Despite having a large multicenter cohort and access to granular EMR data (eg, vital signs, and laboratory test results), none of the models discriminated well (c statistics, 0.591\u20130.605), had good calibration, or had good explanatory power. CONCLUSIONS Our ability to predict rCDI remains limited. Given currently available EMR technology, improvements in prediction will require incorporating new variables because currently available data elements lack adequate explanatory power. Infect Control Hosp Epidemiol 2017;38:1196\u20131203" }, { "instance_id": "R70632xR70593", "comparison_id": "R70632", "paper_id": "R70593", "text": "A Multi-Center Prospective Derivation and Validation of a Clinical Prediction Tool for Severe Clostridium difficile Infection Background and Aims Prediction of severe clinical outcomes in Clostridium difficile infection (CDI) is important to inform management decisions for optimum patient care. Currently, treatment recommendations for CDI vary based on disease severity but validated methods to predict severe disease are lacking. The aim of the study was to derive and validate a clinical prediction tool for severe outcomes in CDI. Methods A cohort totaling 638 patients with CDI was prospectively studied at three tertiary care clinical sites (Boston, Dublin and Houston). The clinical prediction rule (CPR) was developed by multivariate logistic regression analysis using the Boston cohort and the performance of this model was then evaluated in the combined Houston and Dublin cohorts. Results The CPR included the following three binary variables: age \u2265 65 years, peak serum creatinine \u22652 mg/dL and peak peripheral blood leukocyte count of \u226520,000 cells/\u03bcL. The Clostridium difficile severity score (CDSS) correctly classified 76.5% (95% CI: 70.87-81.31) and 72.5% (95% CI: 67.52-76.91) of patients in the derivation and validation cohorts, respectively. 
In the validation cohort, CDSS scores of 0, 1, 2 or 3 were associated with severe clinical outcomes of CDI in 4.7%, 13.8%, 33.3% and 40.0% of cases respectively. Conclusions We prospectively derived and validated a clinical prediction rule for severe CDI that is simple, reliable and accurate and can be used to identify high-risk patients most likely to benefit from measures to prevent complications of CDI." }, { "instance_id": "R70632xR70603", "comparison_id": "R70632", "paper_id": "R70603", "text": "Learning Data-Driven Patient Risk Stratification Models for Clostridium difficile Abstract Background. Although many risk factors are well known, Clostridium difficile infection (CDI) continues to be a significant problem throughout the world. The purpose of this study was to develop and validate a data-driven, hospital-specific risk stratification procedure for estimating the probability that an inpatient will test positive for C difficile. Methods. We consider electronic medical record (EMR) data from patients admitted for \u226524 hours to a large urban hospital in the U.S. between April 2011 and April 2013. Predictive models were constructed using L2-regularized logistic regression and data from the first year. The number of observational variables considered varied from a small set of well known risk factors readily available to a physician to over 10 000 variables automatically extracted from the EMR. Each model was evaluated on holdout admission data from the following year. A total of 34 846 admissions with 372 cases of CDI was used to train the model. Results. 
Applied to the separate validation set of 34 722 admissions with 355 cases of CDI, the model that made use of the additional EMR data yielded an area under the receiver operating characteristic curve (AUROC) of 0.81 (95% confidence interval [CI], .79\u2013.83), and it significantly outperformed the model that considered only the small set of known clinical risk factors, AUROC of 0.71 (95% CI, .69\u2013.75). Conclusions. Automated risk stratification of patients based on the contents of their EMRs can be used to accurately identify a high-risk population of patients. The proposed method holds promise for enabling the selective allocation of interventions aimed at reducing the rate of CDI." }, { "instance_id": "R70633xR70591", "comparison_id": "R70633", "paper_id": "R70591", "text": "Improving Risk Prediction of Clostridium Difficile Infection Using Temporal Event-Pairs Clostridium Difficile Infection (CDI) is a contagious healthcare-associated infection that imposes a significant burden on the healthcare system. In 2011 alone, half a million patients suffered from CDI in the United States, 29,000 dying within 30 days of diagnosis. Determining which hospital patients are at risk for developing CDI is critical to helping healthcare workers take timely measures to prevent or detect and treat this infection. We improve the state of the art of CDI risk prediction by designing an ensemble logistic regression classifier that, given partial patient visit histories, outputs the risk of patients acquiring CDI during their current hospital visit. The novelty of our approach lies in the representation of each patient visit as a collection of co-occurring and chronologically ordered pairs of events. 
This choice is motivated by our hypothesis that CDI risk is influenced not just by individual events (e.g., being prescribed a first-generation cephalosporin antibiotic), but by the temporal ordering of individual events (e.g., antibiotic prescription followed by transfer to a certain hospital unit). While this choice explodes the number of features, we use a randomized greedy feature selection algorithm followed by BIC minimization to reduce the dimensionality of the feature space, while retaining the most relevant features. We apply our approach to a rich dataset from the University of Iowa Hospitals and Clinics (UIHC), curated from diverse sources, consisting of 200,000 visits (30,000 per year, 2006-2011) involving 125,000 unique patients, 2 million diagnoses, 8 million prescriptions, 400,000 room transfers spanning a hospital with 700 patient rooms and 200 units. Our approach to classification produces better risk predictions (AUC) than existing risk estimators for CDI, even when trained just on data available at patient admission. It also identifies novel risk factors for CDI that are combinations of co-occurring and chronologically ordered events." }, { "instance_id": "R70633xR70587", "comparison_id": "R70633", "paper_id": "R70587", "text": "Prediction of Recurrent Clostridium Difficile Infection Using Comprehensive Electronic Medical Records in an Integrated Healthcare Delivery System BACKGROUND Predicting recurrent Clostridium difficile infection (rCDI) remains difficult. METHODS. We employed a retrospective cohort design. Granular electronic medical record (EMR) data had been collected from patients hospitalized at 21 Kaiser Permanente Northern California hospitals. The derivation dataset (2007\u20132013) included data from 9,386 patients who experienced incident CDI (iCDI) and 1,311 who experienced their first CDI recurrences (rCDI). The validation dataset (2014) included data from 1,865 patients who experienced incident CDI and 144 who experienced rCDI. 
Using multiple techniques, including machine learning, we evaluated more than 150 potential predictors. Our final analyses evaluated 3 models with varying degrees of complexity and 1 previously published model. RESULTS Despite having a large multicenter cohort and access to granular EMR data (eg, vital signs, and laboratory test results), none of the models discriminated well (c statistics, 0.591\u20130.605), had good calibration, or had good explanatory power. CONCLUSIONS Our ability to predict rCDI remains limited. Given currently available EMR technology, improvements in prediction will require incorporating new variables because currently available data elements lack adequate explanatory power. Infect Control Hosp Epidemiol 2017;38:1196\u20131203" }, { "instance_id": "R70633xR70599", "comparison_id": "R70633", "paper_id": "R70599", "text": "Waterlow score to predict patients at risk of developing Clostridium difficile-associated disease This study describes the development and testing of an assessment tool to predict the risk of patients developing Clostridium difficile-associated disease (CDAD). The three phases of the study include the development of the tool, prospective testing of the validity of the tool using 1468 patients in a medical assessment unit and external retrospective testing using data from 29 425 patients. In the first phase of the study, receiver operating characteristic (ROC) analysis identified the Waterlow assessment score as having the ability to predict CDAD (area under the curve: 0.827). The Waterlow tool was then tested prospectively with 1468 patients admitted to a medical assessment unit. A total of 1385 patients (94%) had a Waterlow score <20 and 83 patients (6%) had a Waterlow score of > or = 20. After a three-month follow-up, six patients in the low Waterlow score group developed CDAD (0.4%) and 14 patients in the high score group developed CDAD (17%). 
The sensitivity and specificity of the Waterlow score to predict the risk of developing CDAD were 70% and 95%, respectively. Similar results were obtained when the tool was tested retrospectively on a large external patient data set. The Waterlow score appears to predict patients' risk of developing CDAD and although it did not identify all cases, it highlighted a small group of patients who had a disproportionately large number of CDAD cases. The Waterlow score can be used to target patients most at risk of developing CDAD." }, { "instance_id": "R70633xR70597", "comparison_id": "R70633", "paper_id": "R70597", "text": "CREST - Risk Prediction for Clostridium Difficile Infection Using Multimodal Data Mining Clostridium difficile infection (CDI) is a common hospital acquired infection with a $1B annual price tag that resulted in ~30,000 deaths in 2011. Studies have shown that early detection of CDI significantly improves the prognosis for the individual patient and reduces the overall mortality rates and associated medical costs. In this paper, we present CREST: CDI Risk Estimation, a data-driven framework for early and continuous detection of CDI in hospitalized patients. CREST uses a three-pronged approach for high accuracy risk prediction. First, CREST builds a rich set of highly predictive features from Electronic Health Records. These features include clinical and non-clinical phenotypes, key biomarkers from the patient\u2019s laboratory tests, synopsis features processed from time series vital signs, and medical history mined from clinical notes. Given the inherent multimodality of clinical data, CREST bins these features into three sets: time-invariant, time-variant, and temporal synopsis features. CREST then learns classifiers for each set of features, evaluating their relative effectiveness. Lastly, CREST employs a second-order meta learning process to ensemble these classifiers for optimized estimation of the risk scores. 
We evaluate the CREST framework using publicly available critical care data collected for over 12 years from Beth Israel Deaconess Medical Center, Boston. Our results demonstrate that CREST predicts the probability of a patient acquiring CDI with an AUC of 0.76 five days prior to diagnosis. This value increases to 0.80 and even 0.82 for prediction two days and one day prior to diagnosis, respectively." }, { "instance_id": "R70633xR70593", "comparison_id": "R70633", "paper_id": "R70593", "text": "A Multi-Center Prospective Derivation and Validation of a Clinical Prediction Tool for Severe Clostridium difficile Infection Background and Aims Prediction of severe clinical outcomes in Clostridium difficile infection (CDI) is important to inform management decisions for optimum patient care. Currently, treatment recommendations for CDI vary based on disease severity but validated methods to predict severe disease are lacking. The aim of the study was to derive and validate a clinical prediction tool for severe outcomes in CDI. Methods A cohort totaling 638 patients with CDI was prospectively studied at three tertiary care clinical sites (Boston, Dublin and Houston). The clinical prediction rule (CPR) was developed by multivariate logistic regression analysis using the Boston cohort and the performance of this model was then evaluated in the combined Houston and Dublin cohorts. Results The CPR included the following three binary variables: age \u2265 65 years, peak serum creatinine \u22652 mg/dL and peak peripheral blood leukocyte count of \u226520,000 cells/\u03bcL. The Clostridium difficile severity score (CDSS) correctly classified 76.5% (95% CI: 70.87-81.31) and 72.5% (95% CI: 67.52-76.91) of patients in the derivation and validation cohorts, respectively. In the validation cohort, CDSS scores of 0, 1, 2 or 3 were associated with severe clinical outcomes of CDI in 4.7%, 13.8%, 33.3% and 40.0% of cases respectively. 
Conclusions We prospectively derived and validated a clinical prediction rule for severe CDI that is simple, reliable and accurate and can be used to identify high-risk patients most likely to benefit from measures to prevent complications of CDI." }, { "instance_id": "R70633xR70601", "comparison_id": "R70633", "paper_id": "R70601", "text": "Patient Risk Stratification for Hospital-Associated C. diff as a Time-Series Classification Task A patient's risk for adverse events is affected by temporal processes including the nature and timing of diagnostic and therapeutic activities, and the overall evolution of the patient's pathophysiology over time. Yet many investigators ignore this temporal aspect when modeling patient outcomes, considering only the patient's current or aggregate state. In this paper, we represent patient risk as a time series. In doing so, patient risk stratification becomes a time-series classification task. The task differs from most applications of time-series analysis, like speech processing, since the time series itself must first be extracted. Thus, we begin by defining and extracting approximate risk processes, the evolving approximate daily risk of a patient. Once obtained, we use these signals to explore different approaches to time-series classification with the goal of identifying high-risk patterns. We apply the classification to the specific task of identifying patients at risk of testing positive for hospital acquired Clostridium difficile. We achieve an area under the receiver operating characteristic curve of 0.79 on a held-out set of several hundred patients. Our two-stage approach to risk stratification outperforms classifiers that consider only a patient's current state (p<0.05)." }, { "instance_id": "R70633xR70589", "comparison_id": "R70633", "paper_id": "R70589", "text": "Electronic health record-based detection of risk factors for Clostridium difficile infection relapse Objective. 
A major challenge in treating Clostridium difficile infection (CDI) is relapse. Many new therapies are being developed to help prevent this outcome. We sought to establish risk factors for relapse and determine whether fields available in an electronic health record (EHR) could be used to identify high-risk patients for targeted relapse prevention strategies. Design. Retrospective cohort study. Setting. Large clinical data warehouse at a 4-hospital healthcare organization. Participants. Data were gathered from January 2006 through October 2010. Subjects were all inpatient episodes of a positive C. difficile test where patients were available for 56 days of follow-up. Methods. Relapse was defined as another positive test between 15 and 56 days after the initial test. Multivariable regression was performed to identify factors independently associated with CDI relapse. Results. Eight hundred twenty-nine episodes met eligibility criteria, and 198 resulted in relapse (23.9%). In the final multivariable analysis, risk of relapse was associated with age (odds ratio [OR], 1.02 per year [95% confidence interval (CI), 1.01\u20131.03]), fluoroquinolone exposure in the 90 days before diagnosis (OR, 1.58 [95% CI, 1.11\u20132.26]), intensive care unit stay in the 30 days before diagnosis (OR, 0.47 [95% CI, 0.30\u20130.75]), cephalosporin (OR, 1.80 [95% CI, 1.19\u20132.71]), proton pump inhibitor (PPI; OR, 1.55 [95% CI, 1.05\u20132.29]), and metronidazole exposure after diagnosis (OR, 2.74 [95% CI, 1.64\u20134.60]). A prediction model tuned to ensure a 50% probability of relapse would flag 14.6% of CDI episodes. Conclusions. Data from a comprehensive EHR can be used to identify patients at high risk for CDI relapse. Major risk factors include antibiotic and PPI exposure." 
}, { "instance_id": "R70640xR70620", "comparison_id": "R70640", "paper_id": "R70620", "text": "A Prognostic Model of Surgical Site Infection Using Daily Clinical Wound Assessment BACKGROUND Surgical site infection (SSI) remains a common, costly, and morbid health care-associated infection. Early detection can improve outcomes, yet previous risk models consider only baseline risk factors (BF) not incorporating a proximate and timely data source-the wound itself. We hypothesize that incorporation of daily wound assessment improves the accuracy of SSI identification compared with traditional BF alone. STUDY DESIGN A prospective cohort of 1,000 post open abdominal surgery patients at an academic teaching hospital were examined daily for serial features (SF), for example, wound characteristics and vital signs, in addition to standard BF, for example, wound class. Using supervised machine learning, we trained 3 Na\u00efve Bayes classifiers (BF, SF, and BF+SF) using patient data from 1 to 5 days before diagnosis to classify SSI on the following day. For comparison, we also created a simplified SF model that used logistic regression. Control patients without SSI were matched on 5 similar consecutive postoperative days to avoid confounding by length of stay. Accuracy, sensitivity/specificity, and area under the receiver operating characteristic curve were calculated on a training and hold-out testing set. RESULTS Of 851 patients, 19.4% had inpatient SSIs. Univariate analysis showed differences in C-reactive protein, surgery duration, and contamination, but no differences in American Society of Anesthesiologists scores, diabetes, or emergency surgery. The BF, SF, and BF+SF classifiers had area under the receiver operating characteristic curves of 0.67, 0.76, and 0.76, respectively. The best-performing classifier (SF) had optimal sensitivity of 0.80, specificity of 0.64, positive predictive value of 0.35, and negative predictive value of 0.93. 
Features most associated with subsequent SSI diagnosis were granulation degree, exudate amount, nasogastric tube presence, and heart rate. CONCLUSIONS Serial features provided moderate positive predictive value and high negative predictive value for early identification of SSI. Addition of baseline risk factors did not improve identification. Features of evolving wound infection are discernable before the day of diagnosis, based primarily on visual inspection." }, { "instance_id": "R70640xR70624", "comparison_id": "R70640", "paper_id": "R70624", "text": "Predictive Modeling of Surgical Site Infections Using Sparse Laboratory Data As part of a data mining competition, a training and test set of laboratory test data about patients with and without surgical site infection (SSI) were provided. The task was to develop predictive models with training set and identify patients with SSI in the no label test set. Lab test results are vital resources that guide healthcare providers make decisions about all aspects of surgical patient management. Many machine learning models were developed after pre-processing and imputing the lab tests data and only the top performing methods are discussed. Overall, RANDOM FOREST algorithms performed better than Support Vector Machine and Logistic Regression. Using a set of 74 lab tests, with RF, there were only 4 false positives in the training set and predicted 35 out of 50 SSI patients in the test set (Accuracy 0.86, Sensitivity 0.68, and Specificity 0.91). Optimal ways to address healthcare data quality concerns and imputation methods as well as newer generalizable algorithms need to be explored further to decipher new associations and knowledge among laboratory biomarkers and SSI." 
}, { "instance_id": "R70640xR70614", "comparison_id": "R70640", "paper_id": "R70614", "text": "Maximizing Interpretability and Cost-Effectiveness of Surgical Site Infection (SSI) Predictive Models Using Feature-Specific Regularized Logistic Regression on Preoperative Temporal Data This study describes a novel approach to solve the surgical site infection (SSI) classification problem. Feature engineering has traditionally been one of the most important steps in solving complex classification problems, especially in cases with temporal data. The described novel approach is based on abstraction of temporal data recorded in three temporal windows. Maximum likelihood L1-norm (lasso) regularization was used in penalized logistic regression to predict the onset of surgical site infection occurrence based on available patient blood testing results up to the day of surgery. Prior knowledge of predictors (blood tests) was integrated in the modelling by introduction of penalty factors depending on blood test prices and an early stopping parameter limiting the maximum number of selected features used in predictive modelling. Finally, solutions resulting in higher interpretability and cost-effectiveness were demonstrated. Using repeated holdout cross-validation, the baseline C-reactive protein (CRP) classifier achieved a mean AUC of 0.801, whereas our best full lasso model achieved a mean AUC of 0.956. Best model testing results were achieved for full lasso model with maximum number of features limited at 20 features with an AUC of 0.967. Presented models showed the potential to not only support domain experts in their decision making but could also prove invaluable for improvement in prediction of SSI occurrence, which may even help setting new guidelines in the field of preoperative SSI prevention and surveillance." 
}, { "instance_id": "R70640xR70616", "comparison_id": "R70640", "paper_id": "R70616", "text": "An Unsupervised Multivariate Time Series Kernel Approach for Identifying Patients with Surgical Site Infection from Blood Samples A large fraction of the electronic health records consists of clinical measurements collected over time, such as blood tests, which provide important information about the health status of a patient. These sequences of clinical measurements are naturally represented as time series, characterized by multiple variables and the presence of missing data, which complicate analysis. In this work, we propose a surgical site infection detection framework for patients undergoing colorectal cancer surgery that is completely unsupervised, hence alleviating the problem of getting access to labelled training data. The framework is based on powerful kernels for multivariate time series that account for missing data when computing similarities. Our approach shows superior performance compared to baselines that have to resort to imputation techniques and performs comparably to a supervised classification baseline." }, { "instance_id": "R70640xR70618", "comparison_id": "R70640", "paper_id": "R70618", "text": "A diagnostic algorithm for the surveillance of deep surgical site infections after colorectal surgery Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012\u20132015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. 
All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operator characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932\u20130.969). Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance." }, { "instance_id": "R70640xR70608", "comparison_id": "R70640", "paper_id": "R70608", "text": "Automated Detection of Postoperative Surgical Site Infections Using Supervised Methods with Electronic Health Record Data The National Surgical Quality Improvement Project (NSQIP) is widely recognized as \u201cthe best in the nation\u201d surgical quality improvement resource in the United States. In particular, it rigorously defines postoperative morbidity outcomes, including surgical adverse events occurring within 30 days of surgery. Due to its manual yet expensive construction process, the NSQIP registry is of exceptionally high quality, but its high cost remains a significant bottleneck to NSQIP\u2019s wider dissemination. 
In this work, we propose an automated surgical adverse events detection tool, aimed at accelerating the process of extracting postoperative outcomes from medical charts. As a prototype system, we combined local EHR data with the NSQIP gold standard outcomes and developed machine learned models to retrospectively detect Surgical Site Infections (SSI), a particular family of adverse events that NSQIP extracts. The built models have high specificity (from 0.788 to 0.988) as well as very high negative predictive values (>0.98), reliably eliminating the vast majority of patients without SSI, thereby significantly reducing the NSQIP extractors\u2019 burden." }, { "instance_id": "R70640xR70628", "comparison_id": "R70640", "paper_id": "R70628", "text": "Classification of postoperative surgical site infections from blood measurements with missing data using recurrent neural networks Clinical measurements that can be represented as time series constitute an important fraction of the electronic health records and are often both uncertain and incomplete. Recurrent neural networks are a special class of neural networks that are particularly suitable to process time series data but, in their original formulation, cannot explicitly deal with missing data. In this paper, we explore imputation strategies for handling missing values in classifiers based on recurrent neural network (RNN) and apply a recently proposed recurrent architecture, the Gated Recurrent Unit with Decay, specifically designed to handle missing data. We focus on the problem of detecting surgical site infection in patients by analyzing time series of their blood sample measurements and we compare the results obtained with different RNN-based classifiers." 
}, { "instance_id": "R70640xR70622", "comparison_id": "R70640", "paper_id": "R70622", "text": "Improving Prediction of Surgical Site Infection Risk with Multilevel Modeling Background Surgical site infection (SSI) surveillance is a key factor in the elaboration of strategies to reduce SSI occurrence and in providing surgeons with appropriate data feedback (risk indicators, clinical prediction rule). Aim To improve the predictive performance of an individual-based SSI risk model by considering a multilevel hierarchical structure. Patients and Methods Data were collected anonymously by the French SSI active surveillance system in 2011. An SSI diagnosis was made by the surgical teams and infection control practitioners following standardized criteria. A random 20% sample comprising 151 hospitals, 502 wards and 62280 patients was used. Three-level (patient, ward, hospital) hierarchical logistic regression models were initially performed. Parameters were estimated using the simulation-based Markov Chain Monte Carlo procedure. Results A total of 623 SSI were diagnosed (1%). The hospital level was discarded from the analysis as it did not contribute to variability of SSI occurrence (p = 0.32). Established individual risk factors (patient history, surgical procedure and hospitalization characteristics) were identified. A significant heterogeneity in SSI occurrence between wards was found (median odds ratio [MOR] 3.59, 95% credibility interval [CI] 3.03 to 4.33) after adjusting for patient-level variables. The effects of the follow-up duration varied between wards (p<10\u22129), with an increased heterogeneity when follow-up was <15 days (MOR 6.92, 95% CI 5.31 to 9.07]). The final two-level model significantly improved the discriminative accuracy compared to the single level reference model (p<10\u22129), with an area under the ROC curve of 0.84. 
Conclusion This study sheds new light on the respective contribution of patient-, ward- and hospital-levels to SSI occurrence and demonstrates the significant impact of the ward level over and above risk factors present at patient level (i.e., independently from patient case-mix)." }, { "instance_id": "R70642xR70556", "comparison_id": "R70642", "paper_id": "R70556", "text": "Detecting pathogen exposure during the non-symptomatic incubation period using physiological data Abstract Early pathogen exposure detection allows better patient care and faster implementation of public health measures (patient isolation, contact tracing). Existing exposure detection most frequently relies on overt clinical symptoms, namely fever, during the infectious prodromal period. We have developed a robust machine learning based method to better detect asymptomatic states during the incubation period using subtle, sub-clinical physiological markers. Starting with high-resolution physiological waveform data from non-human primate studies of viral (Ebola, Marburg, Lassa, and Nipah viruses) and bacterial (Y. pestis) exposure, we processed the data to reduce short-term variability and normalize diurnal variations, then provided these to a supervised random forest classification algorithm and post-classifier declaration logic step to reduce false alarms. In most subjects detection is achieved well before the onset of fever; subject cross-validation across exposure studies (varying viruses, exposure routes, animal species, and target dose) leads to 51h mean early detection (at 0.93 area under the receiver-operating characteristic curve [AUCROC]). Evaluating the algorithm against entirely independent datasets for Lassa, Nipah, and Y. pestis exposures unused in algorithm training and development yields a mean 51h early warning time (at AUCROC=0.95). 
We discuss which physiological indicators are most informative for early detection and options for extending this capability to limited datasets such as those available from wearable, non-invasive, ECG-based sensors." }, { "instance_id": "R70642xR70554", "comparison_id": "R70642", "paper_id": "R70554", "text": "Prediction of Sepsis in the Intensive Care Unit With Minimal Electronic Health Record Data: A Machine Learning Approach Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. 
Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data." 
}, { "instance_id": "R70642xR70566", "comparison_id": "R70642", "paper_id": "R70566", "text": "From vital signs to clinical outcomes for patients with sepsis: a machine learning basis for a clinical decision support system OBJECTIVE To develop a decision support system to identify patients at high risk for hyperlactatemia based upon routinely measured vital signs and laboratory studies. MATERIALS AND METHODS Electronic health records of 741 adult patients at the University of California Davis Health System who met at least two systemic inflammatory response syndrome criteria were used to associate patients' vital signs, white blood cell count (WBC), with sepsis occurrence and mortality. Generative and discriminative classification (na\u00efve Bayes, support vector machines, Gaussian mixture models, hidden Markov models) were used to integrate heterogeneous patient data and form a predictive tool for the inference of lactate level and mortality risk. RESULTS An accuracy of 0.99 and discriminability of 1.00 area under the receiver operating characteristic curve (AUC) for lactate level prediction was obtained when the vital signs and WBC measurements were analysed in a 24 h time bin. An accuracy of 0.73 and discriminability of 0.73 AUC for mortality prediction in patients with sepsis was achieved with only three features: median of lactate levels, mean arterial pressure, and median absolute deviation of the respiratory rate. DISCUSSION This study introduces a new scheme for the prediction of lactate levels and mortality risk from patient vital signs and WBC. Accurate prediction of both these variables can drive the appropriate response by clinical staff and thus may have important implications for patient health and treatment outcome. CONCLUSIONS Effective predictions of lactate levels and mortality risk can be provided with a few clinical variables when the temporal aspect and variability of patient data are considered." 
}, { "instance_id": "R70642xR70560", "comparison_id": "R70642", "paper_id": "R70560", "text": "Physiological monitoring for critically ill patients: testing a predictive model for the early detection of sepsis \u2022 Objective To assess the predictive value for the early detection of sepsis of the physiological monitoring parameters currently recommended by the Surviving Sepsis Campaign. \u2022 Methods The Project IMPACT data set was used to assess whether the physiological parameters of heart rate, mean arterial pressure, body temperature, and respiratory rate can be used to distinguish between critically ill adult patients with and without sepsis in the first 24 hours of admission to an intensive care unit. \u2022 Results All predictor variables used in the analyses differed significantly between patients with sepsis and patients without sepsis. However, only 2 of the predictor variables, mean arterial pressure and high temperature, were independently associated with sepsis. In addition, the temperature mean for hypothermia was significantly lower in patients without sepsis. The odds ratio for having sepsis was 2.126 for patients with a temperature of 38\u00b0C or higher, 3.874 for patients with a mean arterial blood pressure of less than 70 mm Hg, and 4.63 times greater for patients who had both of these conditions. \u2022 Conclusions The results support the use of some of the guidelines of the Surviving Sepsis Campaign. However, the lowest mean temperature was significantly less for patients without sepsis than for patients with sepsis, a finding that calls into question the clinical usefulness of using hypothermia as an early predictor of sepsis. Alone the group of variables used is not sufficient for discriminating between critically ill patients with and without sepsis." 
}, { "instance_id": "R70642xR70548", "comparison_id": "R70642", "paper_id": "R70548", "text": "Machine-Learning-Based Laboratory Developed Test for the Diagnosis of Sepsis in High-Risk Patients Sepsis, a dysregulated host response to infection, is a major health burden in terms of both mortality and cost. The difficulties clinicians face in diagnosing sepsis, alongside the insufficiencies of diagnostic biomarkers, motivate the present study. This work develops a machine-learning-based sepsis diagnostic for a high-risk patient group, using a geographically and institutionally diverse collection of nearly 500,000 patient health records. Using only a minimal set of clinical variables, our diagnostics outperform common severity scoring systems and sepsis biomarkers and benefit from being available immediately upon ordering." }, { "instance_id": "R70642xR70562", "comparison_id": "R70642", "paper_id": "R70562", "text": "Predictive models for severe sepsis in adult ICU patients Intensive Care Unit (ICU) patients have significant morbidity and mortality, often from complications that arise during the hospital stay. Severe sepsis is one of the leading causes of death among these patients. Predictive models have the potential to allow for earlier detection of severe sepsis and ultimately earlier intervention. However, current methods for identifying and predicting severe sepsis are biased and inadequate. The goal of this work is to identify a new framework for the prediction of severe sepsis and identify early predictors utilizing clinical laboratory values and vital signs collected in adult ICU patients. We explore models with logistic regression (LR), support vector machines (SVM), and logistic model trees (LMT) utilizing vital signs, laboratory values, or a combination of vital and laboratory values. 
When applied to a retrospective cohort of ICU patients, the SVM model using laboratory and vital signs as predictors correctly identified 339 (65%) of the 3,446 patients as developing severe sepsis. Based on this new framework and the developed models, we provide a recommendation for use in clinical decision support in ICU and non-ICU environments." }, { "instance_id": "R70642xR70546", "comparison_id": "R70642", "paper_id": "R70546", "text": "Machine Learning Models for Analysis of Vital Signs Dynamics: A Case for Sepsis Onset Prediction Objective. Achieving accurate prediction of the sepsis detection moment based on bedside monitor data in the intensive care unit (ICU). A good clinical outcome is more probable when onset is suspected and treated on time, thus early insight into sepsis onset may save lives and reduce costs. Methodology. We present a novel approach for feature extraction, which focuses on the hypothesis that unstable patients are more prone to develop sepsis during ICU stay. These features are used in machine learning algorithms to provide a prediction of a patient\u2019s likelihood to develop sepsis during ICU stay, hours before it is diagnosed. Results. Five machine learning algorithms were implemented using R software packages. The algorithms were trained and tested with a set of 4 features which represent the variability in vital signs. These algorithms aimed to calculate a patient\u2019s probability of becoming septic within the next 4 hours, based on recordings from the last 8 hours. The best area under the curve (AUC) was achieved with a Support Vector Machine (SVM) with a radial basis function, which was 88.38%. Conclusions. The high level of predictive accuracy along with the simplicity and availability of input variables present great potential if applied in ICUs. Variability of a patient\u2019s vital signs proves to be a good indicator of one\u2019s chance to become septic during ICU stay." 
}, { "instance_id": "R76783xR76762", "comparison_id": "R76783", "paper_id": "R76762", "text": "Virtual Knowledge Graphs: An Overview of Systems and Use Cases. In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as Ontology-based Data Access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem and significant use cases in a wide range of applications. Finally, we discuss future research directions." }, { "instance_id": "R76783xR75675", "comparison_id": "R76783", "paper_id": "R75675", "text": "Knowledge Graph Refinement: A Survey of Approaches and Evaluation Methods In recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term \"Knowledge Graph\" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The result is large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used." 
}, { "instance_id": "R76783xR76774", "comparison_id": "R76783", "paper_id": "R76774", "text": "Knowledge Graphs on the Web \u2013 an Overview Knowledge Graphs are an emerging form of knowledge representation. While Google coined the term Knowledge Graph first and promoted it as a means to improve their search results, they are used in many applications today. In a knowledge graph, entities in the real world and/or a business domain (e.g., people, places, or events) are represented as nodes, which are connected by edges representing the relations between those entities. While companies such as Google, Microsoft, and Facebook have their own, non-public knowledge graphs, there is also a larger body of publicly available knowledge graphs, such as DBpedia or Wikidata. In this chapter, we provide an overview and comparison of those publicly available knowledge graphs, and give insights into their contents, size, coverage, and overlap." }, { "instance_id": "R76783xR76754", "comparison_id": "R76783", "paper_id": "R76754", "text": "A Comprehensive Survey of Knowledge Graph Embeddings with Literals: Techniques and Applications Knowledge Graphs are organized to describe entities from any discipline and the interrelations between them. Apart from facilitating the inter-connectivity of datasets in the LOD cloud, KGs have been used in a variety of applications such as Web search or entity linking, and recently are part of popular search systems and Q&A applications etc. However, the KG applications suffer from high computational and storage cost. Hence, there arises the necessity of having a representation learning of the high dimensional KGs into low dimensional spaces preserving structural as well as relational information. In this study, we conduct a comprehensive survey based on techniques of KG embedding models which consider the structured information of the graph as well as the unstructured information in form of literals such as text, numerical values etc. 
Furthermore, we address the challenges in their embedding models followed by a discussion on different application scenarios." }, { "instance_id": "R76783xR75081", "comparison_id": "R76783", "paper_id": "R75081", "text": "A Survey on Knowledge Graphs: Representation, Acquisition and Applications Human knowledge provides a formal understanding of the world. Knowledge graphs that represent structural relations between entities have become an increasingly popular research direction toward cognition and human-level intelligence. In this survey, we provide a comprehensive review of the knowledge graph covering overall research topics about: 1) knowledge graph representation learning; 2) knowledge acquisition and completion; 3) temporal knowledge graph; and 4) knowledge-aware applications and summarize recent breakthroughs and perspective directions to facilitate future research. We propose a full-view categorization and new taxonomies on these topics. Knowledge graph embedding is organized from four aspects of representation space, scoring function, encoding models, and auxiliary information. For knowledge acquisition, especially knowledge graph completion, embedding methods, path inference, and logical rule reasoning are reviewed. We further explore several emerging topics, including metarelational learning, commonsense reasoning, and temporal knowledge graphs. To facilitate future research on knowledge graphs, we also provide a curated collection of data sets and open-source libraries on different tasks. In the end, we have a thorough outlook on several promising research directions." }, { "instance_id": "R76783xR76758", "comparison_id": "R76783", "paper_id": "R76758", "text": "Relational Representation Learning for Dynamic (Knowledge) Graphs: A Survey Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. 
Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets, and highlight directions for future research." }, { "instance_id": "R76783xR76779", "comparison_id": "R76783", "paper_id": "R76779", "text": "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of \"Knowledge Graphs\" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises \u2013 while often inspired by \u2013 limited to the core Semantic Web stack. 
This report documents the program and the outcomes of Dagstuhl Seminar 18371 \"Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web\", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: what are knowledge graphs? Which applications do we see to emerge? Which open research questions still need be addressed and which technology gaps still need to be closed?" }, { "instance_id": "R76785xR76758", "comparison_id": "R76785", "paper_id": "R76758", "text": "Relational Representation Learning for Dynamic (Knowledge) Graphs: A Survey Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets, and highlight directions for future research." }, { "instance_id": "R76785xR76770", "comparison_id": "R76785", "paper_id": "R76770", "text": "Knowledge Graphs in Manufacturing and Production: A Systematic Literature Review Knowledge graphs in manufacturing and production aim to make production lines more efficient and flexible with higher quality output. This makes knowledge graphs attractive for companies to reach Industry 4.0 goals. 
However, existing research in the field is quite preliminary, and more research effort on analyzing how knowledge graphs can be applied in the field of manufacturing and production is needed. Therefore, we have conducted a systematic literature review as an attempt to characterize the state-of-the-art in this field, i.e., by identifying existing research and by identifying gaps and opportunities for further research. We have focused on finding the primary studies in the existing literature, which were classified and analyzed according to four criteria: bibliometric key facts, research type facets, knowledge graph characteristics, and application scenarios. Besides, an evaluation of the primary studies has also been carried out to gain deeper insights in terms of methodology, empirical evidence, and relevance. As a result, we can offer a complete picture of the domain, which includes such interesting aspects as the fact that knowledge fusion is currently the main use case for knowledge graphs, that empirical research and industrial application are still missing to a large extent, that graph embeddings are not fully exploited, and that technical literature is fast-growing but still seems to be far from its peak." }, { "instance_id": "R76785xR75081", "comparison_id": "R76785", "paper_id": "R75081", "text": "A Survey on Knowledge Graphs: Representation, Acquisition and Applications Human knowledge provides a formal understanding of the world. Knowledge graphs that represent structural relations between entities have become an increasingly popular research direction toward cognition and human-level intelligence. In this survey, we provide a comprehensive review of the knowledge graph covering overall research topics about: 1) knowledge graph representation learning; 2) knowledge acquisition and completion; 3) temporal knowledge graph; and 4) knowledge-aware applications and summarize recent breakthroughs and perspective directions to facilitate future research. 
We propose a full-view categorization and new taxonomies on these topics. Knowledge graph embedding is organized from four aspects of representation space, scoring function, encoding models, and auxiliary information. For knowledge acquisition, especially knowledge graph completion, embedding methods, path inference, and logical rule reasoning are reviewed. We further explore several emerging topics, including metarelational learning, commonsense reasoning, and temporal knowledge graphs. To facilitate future research on knowledge graphs, we also provide a curated collection of data sets and open-source libraries on different tasks. In the end, we have a thorough outlook on several promising research directions." }, { "instance_id": "R76785xR76779", "comparison_id": "R76785", "paper_id": "R76779", "text": "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of \"Knowledge Graphs\" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises \u2013 while often inspired by \u2013 limited to the core Semantic Web stack. 
This report documents the program and the outcomes of Dagstuhl Seminar 18371 \"Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web\", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: what are knowledge graphs? Which applications do we see to emerge? Which open research questions still need be addressed and which technology gaps still need to be closed?" }, { "instance_id": "R76785xR76754", "comparison_id": "R76785", "paper_id": "R76754", "text": "A Comprehensive Survey of Knowledge Graph Embeddings with Literals: Techniques and Applications Knowledge Graphs are organized to describe entities from any discipline and the interrelations between them. Apart from facilitating the inter-connectivity of datasets in the LOD cloud, KGs have been used in a variety of applications such as Web search or entity linking, and recently are part of popular search systems and Q&A applications etc. However, the KG applications suffer from high computational and storage cost. Hence, there arises the necessity of having a representation learning of the high dimensional KGs into low dimensional spaces preserving structural as well as relational information. In this study, we conduct a comprehensive survey based on techniques of KG embedding models which consider the structured information of the graph as well as the unstructured information in form of literals such as text, numerical values etc. Furthermore, we address the challenges in their embedding models followed by a discussion on different application scenarios." }, { "instance_id": "R76785xR76750", "comparison_id": "R76785", "paper_id": "R76750", "text": "A retrospective of knowledge graphs Information on the Internet is fragmented and presented in different data sources, which makes automatic knowledge harvesting and understanding formidable for machines, and even for humans. 
Knowledge graphs have become prevalent in both industry and academia in recent years, as one of the most efficient and effective knowledge integration approaches. Techniques for knowledge graph construction can mine information from structured, semi-structured, or even unstructured data sources, and finally integrate the information into knowledge, represented in a graph. Furthermore, a knowledge graph is able to organize information in an easy-to-maintain, easy-to-understand and easy-to-use manner. In this paper, we give a summary of techniques for constructing knowledge graphs. We review the existing knowledge graph systems developed by both academia and industry. We discuss in detail the process of building knowledge graphs, and survey state-of-the-art techniques for automatic knowledge graph checking and expansion via logical inference and reasoning. We also review the issues of graph data management by introducing the knowledge data models and graph databases, especially from a NoSQL point of view. Finally, we overview current knowledge graph systems and discuss future research directions." }, { "instance_id": "R76785xR76766", "comparison_id": "R76785", "paper_id": "R76766", "text": "A Tutorial and Survey on Fault Knowledge Graph A Knowledge Graph (KG) is a graph-based data structure that can represent the relationships among large amounts of semi-structured and unstructured data, and can efficiently and intelligently search for the information that users need. KGs have been widely used in many fields, including finance, medical care, biology, education, journalism, and smart search. Applications of KGs in fault domains, such as mechanical engineering, trains, power grids, and equipment failures, are increasing. However, systematic summaries of fault KGs are relatively scarce. 
Therefore, this article provides a comprehensive tutorial and survey of recent advances toward the construction of fault KGs. Specifically, it provides an overview of fault KGs and summarizes the key techniques for building a KG, to guide KG construction in the fault domain. What\u2019s more, it introduces some of the open-source tools that can be used in the KG-building process, enabling researchers and practitioners to quickly get started in this field. In addition, the article discusses the applications of fault KGs and the difficulties and challenges in constructing them. Finally, the article looks forward to the future development of KGs." }, { "instance_id": "R77162xR75942", "comparison_id": "R77162", "paper_id": "R75942", "text": "Parental well-being in times of Covid-19 in Germany Abstract We examine the effects of Covid-19 and related restrictions on individuals with dependent children in Germany. We specifically focus on the role of day care center and school closures, which may be regarded as a \u201cdisruptive exogenous shock\u201d to family life. We make use of a novel representative survey of parental well-being collected in May and June 2020 in Germany, when schools and day care centers were closed but while other measures had been relaxed and new infections were low. In our descriptive analysis, we compare well-being during this period with a pre-crisis period for different groups. In a difference-in-differences design, we compare the change for individuals with children to the change for individuals without children, accounting for unrelated trends as well as potential survey mode and context effects. We find that the crisis lowered the relative well-being of individuals with children, especially for individuals with young children, for women, and for persons with lower secondary schooling qualifications. 
Our results suggest that public policy measures taken to contain Covid-19 can have large effects on family well-being, with implications for child development and parental labor market outcomes." }, { "instance_id": "R77162xR76567", "comparison_id": "R77162", "paper_id": "R76567", "text": "Individual differences and changes in subjective wellbeing during the early stages of the COVID-19 pandemic. The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. 
Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved)." }, { "instance_id": "R77162xR76542", "comparison_id": "R77162", "paper_id": "R76542", "text": "Up and About: Older Adults\u2019 Well-being During the COVID-19 Pandemic in a Swedish Longitudinal Study Abstract Objectives To investigate early effects of the COVID-19 pandemic related to (a) levels of worry, risk perception, and social distancing; (b) longitudinal effects on well-being; and (c) effects of worry, risk perception, and social distancing on well-being. Methods We analyzed annual changes in four aspects of well-being over 5 years (2015\u20132020): life satisfaction, financial satisfaction, self-rated health, and loneliness in a subsample (n = 1,071, aged 65\u201371) from a larger survey of Swedish older adults. The 2020 wave, collected March 26\u2013April 2, included measures of worry, risk perception, and social distancing in response to COVID-19. Results (a) In relation to COVID-19: 44.9% worried about health, 69.5% about societal consequences, 25.1% about financial consequences; 86.4% perceived a high societal risk, 42.3% a high risk of infection, and 71.2% reported high levels of social distancing. (b) Well-being remained stable (life satisfaction and loneliness) or even increased (self-rated health and financial satisfaction) in 2020 compared to previous years. (c) More worry about health and financial consequences was related to lower scores in all four well-being measures. Higher societal worry and more social distancing were related to higher well-being. Discussion In the early stage of the pandemic, Swedish older adults on average rated their well-being as high as, or even higher than, previous years. However, those who worried more reported lower well-being. 
Our findings speak to the resilience, but also heterogeneity, among older adults during the pandemic. Further research, on a broad range of health factors and long-term psychological consequences, is needed." }, { "instance_id": "R77162xR77070", "comparison_id": "R77162", "paper_id": "R77070", "text": "The Impact of the Coronavirus Lockdown on Mental Health: Evidence from the US The coronavirus outbreak has caused significant disruptions to people\u2019s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women\u2019s mental health cannot be explained by an increase in financial worries or childcare responsibilities." }, { "instance_id": "R77162xR76559", "comparison_id": "R77162", "paper_id": "R76559", "text": "Socioeconomic status and well-being during COVID-19: A resource-based examination. The authors assess levels and within-person changes in psychological well-being (i.e., depressive symptoms and life satisfaction) from before to during the COVID-19 pandemic for individuals in the United States, in general and by socioeconomic status (SES). The data is from 2 surveys of 1,143 adults from RAND Corporation's nationally representative American Life Panel, the first administered between April-June, 2019 and the second during the initial peak of the pandemic in the United States in April, 2020. Depressive symptoms during the pandemic were higher than population norms before the pandemic. Depressive symptoms increased from before to during COVID-19 and life satisfaction decreased. 
Individuals with higher education experienced a greater increase in depressive symptoms and a greater decrease in life satisfaction from before to during COVID-19 in comparison to those with lower education. Supplemental analysis illustrates that income had a curvilinear relationship with changes in well-being, such that individuals at the highest levels of income experienced a greater decrease in life satisfaction from before to during COVID-19 than individuals with lower levels of income. We draw on conservation of resources theory and the theory of fundamental social causes to examine four key mechanisms (perceived financial resources, perceived control, interpersonal resources, and COVID-19-related knowledge/news consumption) underlying the relationship between SES and well-being during COVID-19. These resources explained changes in well-being for the sample as a whole but did not provide insight into why individuals of higher education experienced a greater decline in well-being from before to during COVID-19. (PsycInfo Database Record (c) 2020 APA, all rights reserved)." }, { "instance_id": "R77162xR76575", "comparison_id": "R77162", "paper_id": "R76575", "text": "The gender gap in mental well-being during the Covid-19 outbreak: evidence from the UK We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation, and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. 
Finally, we document larger declines in well-being for the young, of both genders, than the old." }, { "instance_id": "R77220xR75942", "comparison_id": "R77220", "paper_id": "R75942", "text": "Parental well-being in times of Covid-19 in Germany Abstract We examine the effects of Covid-19 and related restrictions on individuals with dependent children in Germany. We specifically focus on the role of day care center and school closures, which may be regarded as a \u201cdisruptive exogenous shock\u201d to family life. We make use of a novel representative survey of parental well-being collected in May and June 2020 in Germany, when schools and day care centers were closed but while other measures had been relaxed and new infections were low. In our descriptive analysis, we compare well-being during this period with a pre-crisis period for different groups. In a difference-in-differences design, we compare the change for individuals with children to the change for individuals without children, accounting for unrelated trends as well as potential survey mode and context effects. We find that the crisis lowered the relative well-being of individuals with children, especially for individuals with young children, for women, and for persons with lower secondary schooling qualifications. Our results suggest that public policy measures taken to contain Covid-19 can have large effects on family well-being, with implications for child development and parental labor market outcomes." }, { "instance_id": "R77220xR76575", "comparison_id": "R77220", "paper_id": "R76575", "text": "The gender gap in mental well-being during the Covid-19 outbreak: evidence from the UK We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. 
We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation, and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. Finally, we document larger declines in well-being for the young, of both genders, than the old." }, { "instance_id": "R77220xR76567", "comparison_id": "R77220", "paper_id": "R76567", "text": "Individual differences and changes in subjective wellbeing during the early stages of the COVID-19 pandemic. The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. 
Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved)." }, { "instance_id": "R77220xR75946", "comparison_id": "R77220", "paper_id": "R75946", "text": "Who is most affected by the Corona crisis? An analysis of changes in stress and well-being in Switzerland ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. 
Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated." }, { "instance_id": "R77220xR77070", "comparison_id": "R77220", "paper_id": "R77070", "text": "The Impact of the Coronavirus Lockdown on Mental Health: Evidence from the US The coronavirus outbreak has caused significant disruptions to people\u2019s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women\u2019s mental health cannot be explained by an increase in financial worries or childcare responsibilities." }, { "instance_id": "R77220xR76554", "comparison_id": "R77220", "paper_id": "R76554", "text": "The COVID-19 pandemic and subjective well-being: longitudinal evidence on satisfaction with work and family ABSTRACT This paper provides a timely evaluation of whether the main COVID-19 lockdown policies \u2013 remote work, short-time work and closure of schools and childcare \u2013 have an immediate effect on the German population in terms of changes in satisfaction with work and family life. Relying on individual level panel data collected before and during the lockdown, we examine (1) how family satisfaction and work satisfaction of individuals have changed over the lockdown period, and (2) how lockdown-driven changes in the labour market situation (i.e. 
working remotely and being sent on short-time work) have affected satisfactions. We apply first-difference regressions for mothers, fathers, and persons without children. Our results show a general decrease in family satisfaction. We also find an overall decline in work satisfaction which is most pronounced for mothers and those without children who have to switch to short-time work. In contrast, fathers' well-being is less affected negatively and their family satisfaction even increased after changing to short-time work. We conclude that while the lockdown circumstances generally have a negative effect on the satisfaction with work and family of individuals in Germany, effects differ between childless persons, mothers, and fathers with the latter being least negatively affected." }, { "instance_id": "R78163xR78133", "comparison_id": "R78163", "paper_id": "R78133", "text": "Delivery of infection from asymptomatic carriers of COVID-19 in a familial cluster Abstract Objectives With the ongoing outbreak of COVID-19 around the world, it has become a worldwide health concern. One previous study reported a family cluster with asymptomatic transmission of COVID-19. Here, we report another series of cases and further demonstrate the repeatability of the transmission of COVID-19 by pre-symptomatic carriers. Methods A familial cluster of five patients associated with COVID-19 was enrolled in the hospital. We collected epidemiological and clinical characteristics, laboratory outcomes from electronic medical records, and also affirmed them with the patients and their families. Results Among them, three family members (Case 3/4/5) had returned from Wuhan. Additionally, two family members, those who had not travelled to Wuhan, also contracted COVID-19 after contacting with the other three family members. Case 1 developed severe pneumonia and was admitted to the ICU. 
Case 3 and Case 5 presented fever and cough on days 2 through 3 of hospitalization and had ground-glass opacity changes in their lungs. Case 4 presented with diarrhoea and pharyngalgia after admission without radiographic abnormalities. Case 2 presented no clinical or radiographic abnormalities. All the cases had an increasing level of C-reactive protein. Conclusions Our findings indicate that COVID-19 can be transmitted by asymptomatic carriers during the incubation period." }, { "instance_id": "R78163xR78058", "comparison_id": "R78163", "paper_id": "R78058", "text": "Predictive modelling of COVID-19 confirmed cases in Nigeria Abstract The coronavirus outbreak is the most notable world crisis since the Second World War. The pandemic that originated from Wuhan, China in late 2019 has affected all the nations of the world and triggered a global economic crisis whose impact will be felt for years to come. This necessitates monitoring and predicting COVID-19 prevalence for adequate control. Linear regression models are prominent tools for predicting the impact of certain factors on the COVID-19 outbreak and for taking the necessary measures to respond to this crisis. The data were extracted from the NCDC website and spanned March 31, 2020 to May 29, 2020. In this study, we adopted the ordinary least squares estimator to measure the impact of travel history and contacts on the spread of COVID-19 in Nigeria and made a prediction. The model was estimated before and after the travel restriction was enforced by the Federal Government of Nigeria. The fitted model described the dataset well and was free of any violations based on the diagnostic checks conducted. The results show that the government made the right decision in enforcing the travel restriction, because we observed that travel history and contacts made increase the chances of people being infected with COVID-19 by 85% and 88%, respectively.
This prediction of COVID-19 shows that the government should ensure that travelling agencies have better precautions and preparations in place before re-opening." }, { "instance_id": "R78163xR78136", "comparison_id": "R78163", "paper_id": "R78136", "text": "Hypertension prevalence in human coronavirus disease: the role of ACE system in infection spread and severity Summary The prevalence of hypertension is high in patients affected by COVID infection, and it appears related to an increased risk of mortality in many epidemiological studies. The ACE system is not uniformly expressed in all human races, and current differences could explain some geographical discrepancies of infection around the world. However, animal studies showed that the ACE2 receptor is a potential pathway for host infection. Because two thirds of hypertensive patients take ACE-i/ARB, several concerns have been raised about the detrimental role of current drugs. In this report we summarize the current evidence for or against the administration of ACE blockade in the COVID era." }, { "instance_id": "R78163xR78061", "comparison_id": "R78163", "paper_id": "R78061", "text": "Estimative of real number of infections by COVID-19 in Brazil and possible scenarios Abstract This paper attempts to provide methods to estimate the real scenario of the novel coronavirus pandemic crisis in Brazil and the states of Sao Paulo, Pernambuco, Espirito Santo, Amazonas and Distrito Federal. Using a SEIRD mathematical model with age division, we predict the infection and death curves, stating the peak date for Brazil and these states. We also carry out a prediction of the ICU demand in these states to visualize the size of a possible collapse of the local health system. Finally, we establish some future scenarios, including the end of social isolation and the introduction of vaccines and efficient medicine against the virus."
}, { "instance_id": "R78163xR78148", "comparison_id": "R78163", "paper_id": "R78148", "text": "Epidemiology and transmission of COVID-19 in 391 cases and 1286 of their close contacts in Shenzhen, China: a retrospective cohort study Summary Background Rapid spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in Wuhan, China, prompted heightened surveillance in Shenzhen, China. The resulting data provide a rare opportunity to measure key metrics of disease course, transmission, and the impact of control measures. Methods From Jan 14 to Feb 12, 2020, the Shenzhen Center for Disease Control and Prevention identified 391 SARS-CoV-2 cases and 1286 close contacts. We compared cases identified through symptomatic surveillance and contact tracing, and estimated the time from symptom onset to confirmation, isolation, and admission to hospital. We estimated metrics of disease transmission and analysed factors influencing transmission risk. Findings Cases were older than the general population (mean age 45 years) and balanced between males (n=187) and females (n=204). 356 (91%) of 391 cases had mild or moderate clinical severity at initial assessment. As of Feb 22, 2020, three cases had died and 225 had recovered (median time to recovery 21 days; 95% CI 20\u201322). Cases were isolated on average 4\u00b76 days (95% CI 4\u00b71\u20135\u00b70) after developing symptoms; contact tracing reduced this by 1\u00b79 days (95% CI 1\u00b71\u20132\u00b77). Household contacts and those travelling with a case were at higher risk of infection (odds ratio 6\u00b727 [95% CI 1\u00b749\u201326\u00b733] for household contacts and 7\u00b706 [1\u00b743\u201334\u00b791] for those travelling with a case) than other close contacts. The household secondary attack rate was 11\u00b72% (95% CI 9\u00b71\u201313\u00b78), and children were as likely to be infected as adults (infection rate 7\u00b74% in children <10 years vs population average of 6\u00b76%). 
The observed reproductive number (R) was 0\u00b74 (95% CI 0\u00b73\u20130\u00b75), with a mean serial interval of 6\u00b73 days (95% CI 5\u00b72\u20137\u00b76). Interpretation Our data on cases as well as their infected and uninfected close contacts provide key insights into the epidemiology of SARS-CoV-2. This analysis shows that isolation and contact tracing reduce the time during which cases are infectious in the community, thereby reducing the R. The overall impact of isolation and contact tracing, however, is uncertain and highly dependent on the number of asymptomatic cases. Moreover, children are at a similar risk of infection to the general population, although less likely to have severe symptoms; hence they should be considered in analyses of transmission and control. Funding Emergency Response Program of Harbin Institute of Technology, Emergency Response Program of Peng Cheng Laboratory, US Centers for Disease Control and Prevention." }, { "instance_id": "R78163xR78145", "comparison_id": "R78163", "paper_id": "R78145", "text": "Modeling Palestinian COVID-19 Cumulative Confirmed Cases: A Comparative Study COVID-19 is still a major pandemic threatening all the world. In Palestine, there were 26,764 COVID-19 cumulative confirmed cases as of 27th August 2020. In this paper, two statistical approaches, autoregressive integrated moving average (ARIMA) and k-th moving averages - ARIMA models are used for modeling the COVID-19 cumulative confirmed cases in Palestine. The data was taken from World Health Organization (WHO) website for one hundred seventy-six (176) days, from March 5, 2020 through August 27, 2020. We identified the best models for the above mentioned approaches that are ARIMA (1,2,4) and 5-th Exponential Weighted Moving Average \u2013 ARIMA (2,2,3). Consequently, we recommended to use the 5-th Exponential Weighted Moving Average \u2013 ARIMA (2,2,3) model in order to forecast new values of the daily cumulative confirmed cases in Palestine. 
The forecast values are alarming, giving the Palestinian government a clear picture of the expected number of COVID-19 cumulative confirmed cases, so that it can review its activities and interventions and put robust structures and measures in place to meet these challenges." }, { "instance_id": "R78163xR78160", "comparison_id": "R78163", "paper_id": "R78160", "text": "Preliminary estimation of the novel coronavirus disease (COVID-19) cases in Iran: A modelling analysis based on overseas cases and air travel data Abstract As of March 1, 2020, Iran had reported 987 novel coronavirus disease (COVID-19) cases, including 54 associated deaths. At least six neighboring countries (Bahrain, Iraq, Kuwait, Oman, Afghanistan, and Pakistan) had reported imported COVID-19 cases from Iran. In this study, air travel data and the numbers of cases from Iran imported into other Middle Eastern countries were used to estimate the number of COVID-19 cases in Iran. It was estimated that the total number of cases in Iran was 16 533 (95% confidence interval: 5925\u201335 538) by February 25, 2020, before the UAE and other Gulf Cooperation Council countries suspended inbound and outbound flights from Iran." }, { "instance_id": "R78163xR78151", "comparison_id": "R78163", "paper_id": "R78151", "text": "A conceptual model for the coronavirus disease 2019 (COVID-19) outbreak in Wuhan, China with individual reaction and governmental action Abstract The ongoing coronavirus disease 2019 (COVID-19) outbreak, which emerged in Wuhan, China at the end of 2019, has claimed more than 2600 lives as of 24 February 2020 and posed a huge threat to global public health. The Chinese government has implemented control measures including setting up special hospitals and travel restrictions to mitigate the spread.
We propose conceptual models for the COVID-19 outbreak in Wuhan that take into consideration individual behavioural reactions and governmental actions, e.g., holiday extension, travel restriction, hospitalisation and quarantine. We employ the estimates of these two key components from the 1918 influenza pandemic in London, United Kingdom, incorporate zoonotic introductions and emigration, and then compute future trends and the reporting ratio. The model is concise in structure, and it successfully captures the course of the COVID-19 outbreak, thus shedding light on understanding the trends of the outbreak." }, { "instance_id": "R78163xR78139", "comparison_id": "R78163", "paper_id": "R78139", "text": "Effect of temperature on the infectivity of COVID-19 Abstract Objectives To evaluate the influence of temperature on the infectivity of COVID-19 in Japan. Methods We evaluated the relationship between the accumulated number of patients per 1,000,000 population and the average temperature in February 2020 in each prefecture by Poisson regression analysis. We introduced the monthly number of inbound visitors from China in January 2020 in each prefecture as an additional explanatory variable in the model. Results Both monthly inbound visitors from China in January 2020 and mean temperature in February 2020 are associated with the cumulative number of COVID-19 cases on March 16, 2020. Conclusions Our analysis showed a possible association between low temperature and increased risk of COVID-19 infection. Further evaluation would be desirable at a global level." }, { "instance_id": "R78492xR76567", "comparison_id": "R78492", "paper_id": "R76567", "text": "Individual differences and changes in subjective wellbeing during the early stages of the COVID-19 pandemic. The COVID-19 pandemic has considerably impacted many people's lives.
This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved)." }, { "instance_id": "R78492xR76559", "comparison_id": "R78492", "paper_id": "R76559", "text": "Socioeconomic status and well-being during COVID-19: A resource-based examination. 
The authors assess levels and within-person changes in psychological well-being (i.e., depressive symptoms and life satisfaction) from before to during the COVID-19 pandemic for individuals in the United States, in general and by socioeconomic status (SES). The data is from 2 surveys of 1,143 adults from RAND Corporation's nationally representative American Life Panel, the first administered between April-June, 2019 and the second during the initial peak of the pandemic in the United States in April, 2020. Depressive symptoms during the pandemic were higher than population norms before the pandemic. Depressive symptoms increased from before to during COVID-19 and life satisfaction decreased. Individuals with higher education experienced a greater increase in depressive symptoms and a greater decrease in life satisfaction from before to during COVID-19 in comparison to those with lower education. Supplemental analysis illustrates that income had a curvilinear relationship with changes in well-being, such that individuals at the highest levels of income experienced a greater decrease in life satisfaction from before to during COVID-19 than individuals with lower levels of income. We draw on conservation of resources theory and the theory of fundamental social causes to examine four key mechanisms (perceived financial resources, perceived control, interpersonal resources, and COVID-19-related knowledge/news consumption) underlying the relationship between SES and well-being during COVID-19. These resources explained changes in well-being for the sample as a whole but did not provide insight into why individuals of higher education experienced a greater decline in well-being from before to during COVID-19. (PsycInfo Database Record (c) 2020 APA, all rights reserved)." 
}, { "instance_id": "R78492xR76575", "comparison_id": "R78492", "paper_id": "R76575", "text": "The gender gap in mental well-being during the Covid-19 outbreak: evidence from the UK We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation; and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. Finally, we document larger declines in well-being for the young, of both genders, than the old." }, { "instance_id": "R78492xR75949", "comparison_id": "R78492", "paper_id": "R75949", "text": "Employee psychological well-being during the COVID-19 pandemic in Germany: A longitudinal study of demands, resources, and exhaustion Many governments react to the current coronavirus/COVID-19 pandemic by restricting daily (work) life. On the basis of theories from occupational health, we propose that the duration of the pandemic, its demands (e.g., having to work from home, closing of childcare facilities, job insecurity, work-privacy conflicts, privacy-work conflicts) and personal- and job-related resources (co-worker social support, job autonomy, partner support and corona self-efficacy) interact in their effect on employee exhaustion. We test the hypotheses with a three-wave sample of German employees during the pandemic from April to June 2020 (Nw1 = 2900, Nw12 = 1237, Nw123 = 789). Our findings show a curvilinear effect of pandemic duration on working women\u2019s exhaustion.
The data also show that the introduction and the easing of lockdown measures affect exhaustion, and that women with children who work from home while childcare is unavailable are especially exhausted. Job autonomy and partner support mitigated some of these effects. In sum, women\u2019s psychological health was more strongly affected by the pandemic than men\u2019s. We discuss implications for occupational health theories and that interventions targeted at mitigating the psychological consequences of the COVID-19 pandemic should target women specifically." }, { "instance_id": "R78492xR75942", "comparison_id": "R78492", "paper_id": "R75942", "text": "Parental well-being in times of Covid-19 in Germany Abstract We examine the effects of Covid-19 and related restrictions on individuals with dependent children in Germany. We specifically focus on the role of day care center and school closures, which may be regarded as a \u201cdisruptive exogenous shock\u201d to family life. We make use of a novel representative survey of parental well-being collected in May and June 2020 in Germany, when schools and day care centers were closed but while other measures had been relaxed and new infections were low. In our descriptive analysis, we compare well-being during this period with a pre-crisis period for different groups. In a difference-in-differences design, we compare the change for individuals with children to the change for individuals without children, accounting for unrelated trends as well as potential survey mode and context effects. We find that the crisis lowered the relative well-being of individuals with children, especially for individuals with young children, for women, and for persons with lower secondary schooling qualifications. Our results suggest that public policy measures taken to contain Covid-19 can have large effects on family well-being, with implications for child development and parental labor market outcomes." 
}, { "instance_id": "R78492xR75946", "comparison_id": "R78492", "paper_id": "R75946", "text": "Who is most affected by the Corona crisis? An analysis of changes in stress and well-being in Switzerland ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated." }, { "instance_id": "R8342xR8286", "comparison_id": "R8342", "paper_id": "R8286", "text": "The SPAR Ontologies Over the past eight years, we have been involved in the development of a set of complementary and orthogonal ontologies that can be used for the description of the main areas of the scholarly publishing domain, known as the SPAR (Semantic Publishing and Referencing) Ontologies. In this paper, we introduce this suite of ontologies, discuss the basic principles we have followed for their development, and describe their uptake and usage within the academic, institutional and publishing communities." 
}, { "instance_id": "R8342xR8301", "comparison_id": "R8342", "paper_id": "R8301", "text": "The\u00a0Document\u00a0Components\u00a0Ontology\u00a0(DoCO) The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles." }, { "instance_id": "R8342xR8262", "comparison_id": "R8342", "paper_id": "R8262", "text": "FaBiO and CiTO: Ontologies for describing bibliographic resources and citations Semantic publishing is the use of Web and Semantic Web technologies to enhance the meaning of a published journal article, to facilitate its automated discovery, to enable its linking to semantically related articles, to provide access to data within the article in actionable form, and to facilitate integration of data between articles. Recently, semantic publishing has opened the possibility of a major step forward in the digital publishing world. For this to succeed, new semantic models and visualization tools are required to fully meet the specific needs of authors and publishers. 
In this article, we introduce the principles and architectures of two new ontologies central to the task of semantic publishing: FaBiO, the FRBR-aligned Bibliographic Ontology, an ontology for recording and publishing bibliographic records of scholarly endeavours on the Semantic Web, and CiTO, the Citation Typing Ontology, an ontology for the characterization of bibliographic citations both factually and rhetorically. We present these two models step by step, in order to emphasise their features and to stress their advantages relative to other pre-existing information models. Finally, we review the uptake of FaBiO and CiTO within the academic and publishing communities." }, { "instance_id": "R8342xR8312", "comparison_id": "R8342", "paper_id": "R8312", "text": "The Publishing Workflow Ontology (PWO) In this paper we introduce the Publishing Workflow Ontology (PWO), i.e., an OWL 2 DL ontology for the description of workflows that is particularly suitable for formalising typical publishing processes such as the publication of articles in journals. We support the presentation with a discussion of all the ontology design patterns that have been reused for modelling the main characteristics of publishing workflows. In addition, we present two possible applications of PWO in the publishing and legislative domains." } ] }