{ "instances": [ { "instance_id": "R108331xR108307", "comparison_id": "R108331", "paper_id": "R108307", "text": "Knowledge modelling in weakly\u2010structured business processes In this paper we present a new approach for integrating knowledge management and business process management. We focus on the modelling of weakly\u2010structured knowledge\u2010intensive business processes. We develop a framework for modelling this type of processes that explicitly considers knowledge\u2010related tasks and knowledge objects and present a workflow tool that is an implementation of our theoretical meta\u2010model. As an example, we sketch one case study, the process for granting full old age pension as it is performed in the Greek Social Security Institution. Finally we briefly describe some related approaches and compare them to our work and draw the main conclusions and further research directions." }, { "instance_id": "R108331xR108321", "comparison_id": "R108331", "paper_id": "R108321", "text": "Modelling knowledge transfer: A knowledge dynamics perspective The increasing complexity in design activities leads designers to collaborate and share knowledge within distributed teams. This makes designers use systems such as knowledge management systems to reach their goal. In this article, our aim is to investigate on improving the use of knowledge management systems by defining a framework for modelling knowledge transfer in such context. The proposed framework is partly based on reuse of existing models found in the literature and on a participant observation methodology. Then, we tested this framework through several case studies presented in this article. These investigations enable us to observe, define and model more finely the knowledge dynamics that occur between knowledge workers and knowledge management systems." }, { "instance_id": "R108358xR108156", "comparison_id": "R108358", "paper_id": "R108156", "text": "Potential Use of Airborne Hyperspectral AVIRIS-NG Data for Mapping Proterozoic Metasediments in Banswara, India Airborne Visible InfraRed Imaging Spectrometer \u2014 Next Generation (AVIRIS-NG) data with high spectral and spatial resolutions are used for mapping metasediments in parts of Banswara district, Rajasthan, India. The AVIRIS\u2014NG image spectra of major metasedimentary rocks were compared with their respective laboratory spectra to identify few diagnostic spectral features or absorption features of the rocks. These spectral features were translated from laboratory to image and consistently present in the image spectra of these rocks across the area. After ensuring the persistency of absorption features from sample to image pixels, three AVIRIS\u2014NG based spectral indices is proposed to delineate calcareous (dolomite), siliceous (quartzite) and argillaceous (phyllite) metasedimentary rocks. The index image composite was compared with the reference lithological map of Geological Survey of India and also was validated in the field. The study demonstrates the efficiency of AVIRIS \u2014 NG data for mapping metasedimentary units from the Aravalli Supergroup that are known to host strata bound mineral deposits." }, { "instance_id": "R108358xR108144", "comparison_id": "R108358", "paper_id": "R108144", "text": "Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques Remote sensing techniques have emerged as an asset for various geological studies. 
Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) dataset have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members and classified mineral maps have been produced using Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area which shows positive agreement with the image processing outputs. Thus, this study concludes that the band ratios and image processing in combination play significant role in demarcation of alteration zones which may provide pathfinders for mineral prospecting studies. Keywords\u2014Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper." }, { "instance_id": "R108358xR108153", "comparison_id": "R108358", "paper_id": "R108153", "text": "Comparative analysis of mineral mapping for hyperspectral and multispectral imagery The traditional approaches of mineral-mapping are time consuming and expensive process. Remote sensing is a tool to map the minerals precisely using their physical, chemical and optical properties. In the present study, Tirunelveli district in Tamil Nadu is selected to extract the abundant mineral such as Limestone using Hyperion and Landsat-8 OLI imageries. The chemical composition of the mineral is identified using scanning electron microscope (SEM) and energy dispersive X-ray spectroscopy (EDS) analysis. The spectral reflectance of minerals is characterized using analytical spectral device (ASD) field spectroradiometer. The minerals showed deep absorption in short wave infrared region from 1800 to 2500 nm. The mineral mapping in hyperspectral data is performed using various preliminary processing such as bad band removal, vertical strip removal, radiance and reflectance generation and postprocessing steps such as data dimensional reduction, endmember extraction and classification. To improve the classification accuracy, the vertical strip removal process is performed using a local destriping algorithm. Absolute reflectance of Hyperion and Landsat-8 OLI (Operational Land Imager) imageries is carried out using the FLAASH (fast line-of-sight atmospheric analysis of hypercubes) module. Spectral data reduction techniques in reflectance bands performed using minimum noise fraction method. The noiseless reflectance bands spatial data reduced by the Pixel Purity Index method in the threshold limit of 2.5 under 10,000 repetitions. The obtained reflectance imagery spectra compared with the spectral libraries such as USGS (United States Geological Survey), JPL (Jet Propulsion Laboratory) and field spectra. Endmembers of minerals are carried out using high probability score obtained from the various methods such as SAM (spectral angle mapper), SFF (spectral feature fitting) and BE (binary encoding). The mineral mapping of both imageries is carried out using a supervised classification approach. 
The results showed that hyperspectral remote sensing performed good results as compared to multispectral data." }, { "instance_id": "R108358xR108132", "comparison_id": "R108358", "paper_id": "R108132", "text": "Analysis of spectral absorption features in hyperspectral imagery Abstract Spectral reflectance in the visible and near-infrared wavelengths provides a rapid and inexpensive means for determining the mineralogy of samples and obtaining information on chemical composition. Absorption-band parameters such as the position, depth, width, and asymmetry of the feature have been used to quantitatively estimate composition of samples from hyperspectral field and laboratory reflectance data. The parameters have also been used to develop mapping methods for the analysis of hyperspectral image data. This has resulted in techniques providing surface mineralogical information (e.g., classification) using absorption-band depth and position. However, no attempt has been made to prepare images of the absorption-band parameters. In this paper, a simple linear interpolation technique is proposed in order to derive absorption-band position, depth and asymmetry from hyperspectral image data. AVIRIS data acquired in 1995 over the Cuprite mining area (Nevada, USA) are used to demonstrate the technique and to interpret the data in terms of the known alteration phases characterizing the area. A sensitivity analysis of the methods proposed shows that good results can be obtained for estimating the absorption wavelength position, however the estimated absorption-band-depth is sensitive to the input parameters chosen. The resulting parameter images (depth, position, asymmetry of the absorption) when carefully examined and interpreted by an experienced remote sensing geologist provide key information on surface mineralogy. The estimates of depth and position can be related to the chemistry of the samples and thus allow to bridge the gap between field geochemistry and remote sensing." }, { "instance_id": "R109612xR108803", "comparison_id": "R109612", "paper_id": "R108803", "text": "Dinitrogen fixation rates in the Bay of Bengal during summer monsoon Abstract Biological dinitrogen (N 2 ) fixation exerts an important control on oceanic primary production by providing bioavailable form of nitrogen (such as ammonium) to photosynthetic microorganisms. N 2 fixation is dominant in nutrient poor and warm surface waters. The Bay of Bengal is one such region where no measurements of phototrophic N 2 fixation rates exist. The surface water of the Bay of Bengal is generally nitrate-poor and warm due to prevailing stratification and thus, could favour N 2 fixation. We commenced the first N 2 fixation study in the photic zone of the Bay of Bengal using 15 N 2 gas tracer incubation experiment during summer monsoon 2018. We collected seawater samples from four depths (covering the mixed layer depth of up to 75 m) at eight stations. N 2 fixation rates varied from 4 to 75 \u03bc mol N m \u22122 d \u22121 . The contribution of N 2 fixation to primary production was negligible (<1%). However, the upper bound of observed N 2 fixation rates is higher than the rates measured in other oceanic regimes, such as the Eastern Tropical South Pacific, the Tropical Northwest Atlantic, and the Equatorial and Southern Indian Ocean." }, { "instance_id": "R109612xR109396", "comparison_id": "R109612", "paper_id": "R109396", "text": "No nitrogen fixation in the Bay of Bengal? Abstract. 
The Bay of Bengal (BoB) has long stood as a biogeochemical enigma, with subsurface waters containing extremely low, but persistent, concentrations of oxygen in the nanomolar range which \u2013 for some, yet unconstrained, reason \u2013 are prevented from becoming anoxic. One reason for this may be the low productivity of the BoB waters due to nutrient limitation and the resulting lack of respiration of organic material at intermediate waters. Thus, the parameters determining primary production are key in understanding what prevents the BoB from developing anoxia. Primary productivity in the sunlit surface layers of tropical oceans is mostly limited by the supply of reactive nitrogen through upwelling, riverine flux, atmospheric deposition, and biological dinitrogen (N2) fixation. In the BoB, a stable stratification limits nutrient supply via upwelling in the open waters, and riverine or atmospheric fluxes have been shown to support only less than one-quarter of the nitrogen for primary production. This leaves a large uncertainty for most of the BoB's nitrogen input, suggesting a potential role of N2 fixation in those waters. Here, we present a survey of N2 fixation and carbon fixation in the BoB during the winter monsoon season. We detected a community of N2 fixers comparable to other oxygen minimum zone (OMZ) regions, with only a few cyanobacterial clades and a broad diversity of non-phototrophic N2 fixers present throughout the water column (samples collected between 10 and 560 m water depth). While similar communities of N2 fixers were shown to actively fix N2 in other OMZs, N2 fixation rates were below the detection limit in our samples covering the water column between the deep chlorophyll maximum and the OMZ. Consistent with this, no N2 fixation signal was visible in \u03b415N signatures. We suggest that the absence of N2 fixation may be a consequence of a micronutrient limitation or of an O2 sensitivity of the OMZ diazotrophs in the BoB. Exploring how the onset of N2 fixation by cyanobacteria compared to non-phototrophic N2 fixers would impact on OMZ O2 concentrations, a simple model exercise was carried out. We observed that both photic-zone-based and OMZ-based N2 fixation are very sensitive to even minimal changes in water column stratification, with stronger mixing increasing organic matter production and export, which can exhaust remaining O2 traces in the BoB." }, { "instance_id": "R109612xR109579", "comparison_id": "R109612", "paper_id": "R109579", "text": "Nitrogen fixation rates in the eastern Arabian Sea Abstract The Arabian Sea experiences bloom of the diazotroph Trichodesmium during certain times of the year when optimal sea surface temperature and oligotrophic condition favour their growth. We measured nitrogen fixation rates in the euphotic zone during one such event in the Eastern Arabian Sea using 15 N 2 tracer gas dissolution method. The measured rates varied between 0.8 and 225 \u03bcmol N m \u22123 d \u22121 and were higher than those reported from most other oceanic regions. The highest rates (1739 \u03bcmol N m \u22122 d \u22121 ; 0\u201310 m) coincided with the growth phase of Trichodesmium and led to low \u03b4 15 N ( Trichodesmium bloom nitrogen fixation rates were low (0.9\u20131.5 \u03bcmol N m \u22123 d \u22121 ). Due to episodic events of diazotroph bloom, contribution of N 2 fixation to the total nitrogen pool may vary in space and time." 
}, { "instance_id": "R109904xR109894", "comparison_id": "R109904", "paper_id": "R109894", "text": "A Hybrid Approach Toward Research Paper Recommendation Using Centrality Measures and Author Ranking The volume of research articles in digital repositories is increasing. This spectacular growth of repositories makes it rather difficult for researchers to obtain related research papers in response to their queries. The problem becomes worse when a researcher with insufficient knowledge of searching research articles uses these repositories. In the traditional recommendation approaches, the results of the query miss many high-quality papers, in the related work section, which are either published recently or have low citation count. To overcome this problem, there needs to be a solution which considers not only structural relationships between the papers but also inspects the quality of authors publishing those articles. Many research paper recommendation approaches have been implemented which includes collaborative filtering-based, content-based, and citation analysis-based techniques. The collaborative filtering-based approaches primarily use paper-citation matrix for recommendations, whereas the content-based approaches only consider the content of the paper. The citation analysis considers the structure of the network and focuses on papers citing or cited by the paper of interest. It is therefore very difficult for a recommender system to recommend high-quality papers without a hybrid approach that incorporates multiple features, such as citation information and author information. The proposed method creates a multilevel citation and relationship network of authors in which the citation network uses the structural relationship between the papers to extract significant papers, and authors\u2019 collaboration network finds key authors from those papers. The papers selected by this hybrid approach are then recommended to the user. The results have shown that our proposed method performs exceedingly well as compared with the state-of-the-art existing systems, such as Google scholar and multilevel simultaneous citation network." }, { "instance_id": "R109904xR109860", "comparison_id": "R109904", "paper_id": "R109860", "text": "Applying weighted PageRank to author citation networks This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956\u20132008 were collected from Web of Science. Weighted PageRank with citation and publication as weighted vectors were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures. \u00a9 2011 Wiley Periodicals, Inc." }, { "instance_id": "R109904xR109878", "comparison_id": "R109904", "paper_id": "R109878", "text": "Betweenness and diversity in journal citation networks as measures of interdisciplinarity\u2014A tribute to Eugene Garfield Journals were central to Eugene Garfield\u2019s research interests. Among other things, journals are considered as units of analysis for bibliographic databases such as the Web of Science and Scopus. 
In addition to providing a basis for disciplinary classifications of journals, journal citation patterns span networks across boundaries to variable extents. Using betweenness centrality (BC) and diversity, we elaborate on the question of how to distinguish and rank journals in terms of interdisciplinarity. Interdisciplinarity, however, is difficult to operationalize in the absence of an operational definition of disciplines; the diversity of a unit of analysis is sample-dependent. BC can be considered as a measure of multi-disciplinarity. Diversity of co-citation in a citing document has been considered as an indicator of knowledge integration, but an author can also generate trans-disciplinary\u2014that is, non-disciplined\u2014variation by citing sources from other disciplines. Diversity in the bibliographic coupling among citing documents can analogously be considered as diffusion or differentiation of knowledge across disciplines. Because the citation networks in the cited direction reflect both structure and variation, diversity in this direction is perhaps the best available measure of interdisciplinarity at the journal level. Furthermore, diversity is based on a summation and can therefore be decomposed; differences among (sub)sets can be tested for statistical significance. In the appendix, a general-purpose routine for measuring diversity in networks is provided." }, { "instance_id": "R109904xR109866", "comparison_id": "R109904", "paper_id": "R109866", "text": "Influence of co-authorship networks in the research impact: Ego network analyses from Microsoft Academic Search The main objective of this study is to analyze the relationship between research impact and the structural properties of co-author networks. A new bibliographic source, Microsoft Academic Search, is introduced to test its suitability for bibliometric analyses. Citation counts and 500 one-step ego networks were extracted from this engine. Results show that tiny and sparse networks \u2013 characterized by a high Betweenness centrality and a high Average path length \u2013 achieved more citations per document than dense and compact networks \u2013 described by a high Clustering coefficient and a high Average degree. According to disciplinary differences, Mathematics, Social Sciences and Economics & Business are the disciplines with more sparse and tiny networks; while Physics, Engineering and Geosciences are characterized by dense and crowded networks. This suggests that in sparse ego networks, the central author have more control on their collaborators being more selective in their recruitment and concluding that this behaviour has positive implications in the research impact." }, { "instance_id": "R111045xR111023", "comparison_id": "R111045", "paper_id": "R111023", "text": "Access to divalent lanthanide NHC complexes by redox-transmetallation from silver and CO2 insertion reactions
Divalent NHC\u2013lanthanide complexes were obtained by redox-transmetallation. Treatment with CO2 led to insertion reactions without oxidation of the metal centre.
" }, { "instance_id": "R111045xR110993", "comparison_id": "R111045", "paper_id": "R110993", "text": "Anilido-oxazoline-ligated rare-earth metal complexes: synthesis, characterization and highly cis-1,4-selective polymerization of isoprene
Anilido-oxazoline-ligated rare-earth metal complexes show strong fluorescence emissions and good catalytic performance on isoprene polymerization with high cis-1,4-selectivity.
" }, { "instance_id": "R111045xR111011", "comparison_id": "R111045", "paper_id": "R111011", "text": "A structural and spectroscopic overview of molecular lanthanide complexes with fluorinated O-donor ligands Abstract The lanthanide elements are prevalent in modern electronics, contrast agents, and phosphors. Precursors for luminescent materials frequently use fluorinated ligands to promote volatility for chemical vapor deposition methods. In molecular complexes, fluorination is also a commonly used technique to reduce energy loss during luminescence, and the oxophilicity of the lanthanide ions makes fluorinated alkoxides an attractive tool for the design of luminescent lanthanide complexes. Herein, the structural and photophysical properties of lanthanide complexes ligated by fluorinated alkoxides have been reviewed. Selected examples of several categories are presented in detail, including carboxylates, chelating and non-chelating ligands, and systems with up to three metal centers. Potential areas for further investigation are highlighted." }, { "instance_id": "R112387xR76792", "comparison_id": "R112387", "paper_id": "R76792", "text": "Mining Twitter Feeds for Software User Requirements Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets." }, { "instance_id": "R112387xR112033", "comparison_id": "R112387", "paper_id": "R112033", "text": "Listening to the Crowd for the Release Planning of Mobile Apps The market for mobile apps is getting bigger and bigger, and it is expected to be worth over 100 Billion dollars in 2020. To have a chance to succeed in such a competitive environment, developers need to build and maintain high-quality apps, continuously astonishing their users with the coolest new features. Mobile app marketplaces allow users to release reviews. Despite reviews are aimed at recommending apps among users, they also contain precious information for developers, reporting bugs and suggesting new features. To exploit such a source of information, developers are supposed to manually read user reviews, something not doable when hundreds of them are collected per day. 
To help developers dealing with such a task, we developed CLAP (Crowd Listener for releAse Planning), a web application able to (i) categorize user reviews based on the information they carry out, (ii) cluster together related reviews, and (iii) prioritize the clusters of reviews to be implemented when planning the subsequent app release. We evaluated all the steps behind CLAP, showing its high accuracy in categorizing and clustering reviews and the meaningfulness of the recommended prioritizations. Also, given the availability of CLAP as a working tool, we assessed its applicability in industrial environments." }, { "instance_id": "R112387xR78392", "comparison_id": "R112387", "paper_id": "R78392", "text": "Bug report, feature request, or simply praise? On automatically classifying app reviews App stores like Google Play and Apple AppStore have over 3 Million apps covering nearly every kind of software and service. Billions of users regularly download, use, and review these apps. Recent studies have shown that reviews written by the users represent a rich source of information for the app vendors and the developers, as they include information about bugs, ideas for new features, or documentation of released features. This paper introduces several probabilistic techniques to classify app reviews into four types: bug reports, feature requests, user experiences, and ratings. For this we use review metadata such as the star rating and the tense, as well as, text classification, natural language processing, and sentiment analysis techniques. We conducted a series of experiments to compare the accuracy of the techniques and compared them with simple string matching. We found that metadata alone results in a poor classification accuracy. When combined with natural language processing, the classification precision got between 70-95% while the recall between 80-90%. Multiple binary classifiers outperformed single multiclass classifiers. Our results impact the design of review analytics tools which help app vendors, developers, and users to deal with the large amount of reviews, filter critical reviews, and assign them to the appropriate stakeholders." }, { "instance_id": "R112387xR112000", "comparison_id": "R112387", "paper_id": "R112000", "text": "Ensemble Methods for App Review Classification: An Approach for Software Evolution (N) App marketplaces are distribution platforms for mobile applications that serve as a communication channel between users and developers. These platforms allow users to write reviews about downloaded apps. Recent studies found that such reviews include information that is useful for software evolution. However, the manual analysis of a large amount of user reviews is a tedious and time consuming task. In this work we propose a taxonomy for classifying app reviews into categories relevant for software evolution. Additionally, we describe an experiment that investigates the performance of individual machine learning algorithms and its ensembles for automatically classifying the app reviews. We evaluated the performance of the machine learning techniques on 4550 reviews that were systematically labeled using content analysis methods. Overall, the ensembles had a better performance than the individual classifiers, with an average precision of 0.74 and 0.59 recall." }, { "instance_id": "R112387xR111988", "comparison_id": "R112387", "paper_id": "R111988", "text": "A Needle in a Haystack: What Do Twitter Users Say about Software? 
Users of the Twitter microblogging platform share a vast amount of information about various topics through short messages on a daily basis. Some of these so called tweets include information that is relevant for software companies and could, for example, help requirements engineers to identify user needs. Therefore, tweets have the potential to aid in the continuous evolution of software applications. Despite the existence of such relevant tweets, little is known about their number and content. In this paper we report on the results of an exploratory study in which we analyzed the usage characteristics, content and automatic classification potential of tweets about software applications by using descriptive statistics, content analysis and machine learning techniques. Although the manual search of relevant information within the vast stream of tweets can be compared to looking for a needle in a haystack, our analysis shows that tweets provide a valuable input for software companies. Furthermore, our results demonstrate that machine learning techniques have the capacity to identify and harvest relevant information automatically." }, { "instance_id": "R112387xR108341", "comparison_id": "R112387", "paper_id": "R108341", "text": "Release planning of mobile apps based on user reviews Developers have to to constantly improve their apps by fixing critical bugs and implementing the most desired features in order to gain shares in the continuously increasing and competitive market of mobile apps. A precious source of information to plan such activities is represented by reviews left by users on the app store. However, in order to exploit such information developers need to manually analyze such reviews. This is something not doable if, as frequently happens, the app receives hundreds of reviews per day. In this paper we introduce CLAP (Crowd Listener for releAse Planning), a thorough solution to (i) categorize user reviews based on the information they carry out (e.g., bug reporting), (ii) cluster together related reviews (e.g., all reviews reporting the same bug), and (iii) automatically prioritize the clusters of reviews to be implemented when planning the subsequent app release. We evaluated all the steps behind CLAP, showing its high accuracy in categorizing and clustering reviews and the meaningfulness of the recommended prioritizations. Also, given the availability of CLAP as a working tool, we assessed its practical applicability in industrial environments." }, { "instance_id": "R114155xR112472", "comparison_id": "R114155", "paper_id": "R112472", "text": "CRAFT: A Crowd-Annotated Feedback Technique The ever increasing accessibility of the web for the crowd offered by various electronic devices such as smartphones has facilitated the communication of the needs, ideas, and wishes of millions of stakeholders. To cater for the scale of this input and reduce the overhead of manual elicitation methods, data mining and text mining techniques have been utilised to automatically capture and categorise this stream of feedback, which is also used, amongst other things, by stakeholders to communicate their requirements to software developers. Such techniques, however, fall short of identifying some of the peculiarities and idiosyncrasies of the natural language that people use colloquially. This paper proposes CRAFT, a technique that utilises the power of the crowd to support richer, more powerful text mining by enabling the crowd to categorise and annotate feedback through a context menu. 
This, in turn, helps requirements engineers to better identify user requirements within such feedback. This paper presents the theoretical foundations as well as the initial evaluation of this crowd-based feedback annotation technique for requirements identification." }, { "instance_id": "R114155xR113122", "comparison_id": "R114155", "paper_id": "R113122", "text": "UCFrame: A Use Case Framework for Crowd-Centric Requirement Acquisition To build needed mobile applications in specific domains, requirements should be collected and analyzed in holistic approach. However, resource is limited for small vendor groups to perform holistic requirement acquisition and elicitation. The rise of crowdsourcing and crowdfunding gives small vendor groups new opportunities to build needed mobile applications for the crowd. By finding prior stakeholders and gathering requirements effectively from the crowd, mobile application projects can establish sound foundation in early phase of software process. Therefore, integration of crowd-based requirement engineering into software process is important for small vendor groups. Conventional requirement acquisition and elicitation methods are analyst-centric. Very little discussion is in adapting requirement acquisition tools for crowdcentric context. In this study, several tool features of use case documentation are revised in crowd-centric context. These features constitute a use case-based framework, called UCFrame, for crowd-centric requirement acquisition. An instantiation of UCFrame is also presented to demonstrate the effectiveness of UCFrame in collecting crowd requirements for building two mobile applications." }, { "instance_id": "R114155xR112425", "comparison_id": "R114155", "paper_id": "R112425", "text": "Refinement and Resolution of Just-in-Time Requirements in Open Source Software: A Case Study Just-in-time (JIT) requirements are characterized as not following the traditional requirement engineering approach, instead focusing on elaboration when the implementation begins. In this experience report, we analyze both functional and nonfunctional JIT requirements from three successful open source software (OSS) projects, including Firefox, Lucene, and Mylyn, to explore the common activities that shaped those requirements. We identify a novel refinement and resolution process that all studied requirements followed from requirement inception to their complete realization and subsequent release. This research provides new insights into how OSS project teams create quality features from simple initial descriptions of JIT requirements. Our study also initiates three captivating questions regarding JIT requirements and opens new avenues for further research in this emerging field." }, { "instance_id": "R114155xR113151", "comparison_id": "R114155", "paper_id": "R113151", "text": "Linguistic Analysis of Crowd Requirements: An Experimental Study Users of today's online software services are often diversified and distributed, whose needs are hard to elicit using conventional RE approaches. As a consequence, crowd-based, data intensive requirements engineering approaches are considered important. In this paper, we have conducted an experimental study on a dataset of 2,966 requirements statements to evaluate the performance of three text clustering algorithms. 
The purpose of the study is to aggregate similar requirement statements suggested by the crowd users, and also to identify domain objects and operations, as well as required features from the given requirements statements dataset. The experimental results are then cross-checked with original tags provided by data providers for validation." }, { "instance_id": "R114155xR112407", "comparison_id": "R114155", "paper_id": "R112407", "text": "Which Feature is Unusable? Detecting Usability and User Experience Issues from User Reviews Usability and user experience (UUX) strongly affect software quality and success. User reviews allow software users to report UUX issues. However, this information can be difficult to access due to the varying quality of the reviews, its large numbers and unstructured nature. In this work we propose an approach to automatically detect the UUX strengths and issues of software features according to user reviews. We use a collocation algorithm for extracting the features, lexical sentiment analysis for uncovering users' satisfaction about a particular feature and machine learning for detecting the specific UUX issues affecting the software application. Additionally, we present two visualizations of the results. An initial evaluation of the approach against human judgement obtained mixed results." }, { "instance_id": "R114155xR108199", "comparison_id": "R114155", "paper_id": "R108199", "text": "A Little Bird Told Me: Mining Tweets for Requirements and Software Evolution Twitter is one of the most popular social networks. Previous research found that users employ Twitter to communicate about software applications via short messages, commonly referred to as tweets, and that these tweets can be useful for requirements engineering and software evolution. However, due to their large number---in the range of thousands per day for popular applications---a manual analysis is unfeasible.In this work we present ALERTme, an approach to automatically classify, group and rank tweets about software applications. We apply machine learning techniques for automatically classifying tweets requesting improvements, topic modeling for grouping semantically related tweets and a weighted function for ranking tweets according to specific attributes, such as content category, sentiment and number of retweets. We ran our approach on 68,108 collected tweets from three software applications and compared its results against software practitioners' judgement. Our results show that ALERTme is an effective approach for filtering, summarizing and ranking tweets about software applications. ALERTme enables the exploitation of Twitter as a feedback channel for information relevant to software evolution, including end-user requirements." }, { "instance_id": "R114155xR113030", "comparison_id": "R114155", "paper_id": "R113030", "text": "Conceptualising, extracting and analysing requirements arguments in users' forums: The CrowdRE\u2010Arg framework Due to the pervasive use of online forums and social media, users' feedback are more accessible today and can be used within a requirements engineering context. However, such information is often fragmented, with multiple perspectives from multiple parties involved during on\u2010going interactions. In this paper, the authors propose a Crowd\u2010based Requirements Engineering approach by Argumentation (CrowdRE\u2010Arg). 
The framework is based on the analysis of the textual conversations found in user forums, identification of features, issues and the arguments that are in favour or opposing a given requirements statement. The analysis is to generate an argumentation model of the involved user statements, retrieve the conflicting\u2010viewpoints, reason about the winning\u2010arguments and present that to systems analysts to make informed\u2010requirements decisions. For this purpose, the authors adopted a bipolar argumentation framework and a coalition\u2010based meta\u2010argumentation framework as well as user voting techniques. The CrowdRE\u2010Arg approach and its algorithms are illustrated through two sample conversations threads taken from the Reddit forum. Additionally, the authors devised algorithms that can identify conflict\u2010free features or issues based on their supporting and attacking arguments. The authors tested these machine learning algorithms on a set of 3,051 user comments, preprocessed using the content analysis technique. The results show that the proposed algorithms correctly and efficiently identify conflict\u2010free features and issues along with their winning arguments." }, { "instance_id": "R114155xR76123", "comparison_id": "R114155", "paper_id": "R76123", "text": "Crowdsourcing to elicit requirements for MyERP application Crowdsourcing is an emerging method to collect requirements for software systems. Applications seeking global acceptance need to meet the expectations of a wide range of users. Collecting requirements and arriving at consensus with a wide range of users is difficult using traditional method of requirements elicitation. This paper presents crowdsourcing based approach for German medium-size software company MyERP that might help the company to get access to requirements from non-German customers. We present the tasks involved in the proposed solution that would help the company meet the goal of eliciting requirements at a fast pace with non-German customers." }, { "instance_id": "R12250xR12220", "comparison_id": "R12250", "paper_id": "R12220", "text": "Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study Summary Background Since Dec 31, 2019, the Chinese city of Wuhan has reported an outbreak of atypical pneumonia caused by the 2019 novel coronavirus (2019-nCoV). Cases have been exported to other Chinese cities, as well as internationally, threatening to trigger a global outbreak. Here, we provide an estimate of the size of the epidemic in Wuhan on the basis of the number of cases exported from Wuhan to cities outside mainland China and forecast the extent of the domestic and global public health risks of epidemics, accounting for social and non-pharmaceutical prevention interventions. Methods We used data from Dec 31, 2019, to Jan 28, 2020, on the number of cases exported from Wuhan internationally (known days of symptom onset from Dec 25, 2019, to Jan 19, 2020) to infer the number of infections in Wuhan from Dec 1, 2019, to Jan 25, 2020. Cases exported domestically were then estimated. We forecasted the national and global spread of 2019-nCoV, accounting for the effect of the metropolitan-wide quarantine of Wuhan and surrounding cities, which began Jan 23\u201324, 2020. We used data on monthly flight bookings from the Official Aviation Guide and data on human mobility across more than 300 prefecture-level cities in mainland China from the Tencent database. 
Data on confirmed cases were obtained from the reports published by the Chinese Center for Disease Control and Prevention. Serial interval estimates were based on previous studies of severe acute respiratory syndrome coronavirus (SARS-CoV). A susceptible-exposed-infectious-recovered metapopulation model was used to simulate the epidemics across all major cities in China. The basic reproductive number was estimated using Markov Chain Monte Carlo methods and presented using the resulting posterior mean and 95% credibile interval (CrI). Findings In our baseline scenario, we estimated that the basic reproductive number for 2019-nCoV was 2\u00b768 (95% CrI 2\u00b747\u20132\u00b786) and that 75 815 individuals (95% CrI 37 304\u2013130 330) have been infected in Wuhan as of Jan 25, 2020. The epidemic doubling time was 6\u00b74 days (95% CrI 5\u00b78\u20137\u00b71). We estimated that in the baseline scenario, Chongqing, Beijing, Shanghai, Guangzhou, and Shenzhen had imported 461 (95% CrI 227\u2013805), 113 (57\u2013193), 98 (49\u2013168), 111 (56\u2013191), and 80 (40\u2013139) infections from Wuhan, respectively. If the transmissibility of 2019-nCoV were similar everywhere domestically and over time, we inferred that epidemics are already growing exponentially in multiple major cities of China with a lag time behind the Wuhan outbreak of about 1\u20132 weeks. Interpretation Given that 2019-nCoV is no longer contained within Wuhan, other major Chinese cities are probably sustaining localised outbreaks. Large cities overseas with close transport links to China could also become outbreak epicentres, unless substantial public health interventions at both the population and personal levels are implemented immediately. Independent self-sustaining outbreaks in major cities globally could become inevitable because of substantial exportation of presymptomatic cases and in the absence of large-scale public health interventions. Preparedness plans and mitigation interventions should be readied for quick deployment globally. Funding Health and Medical Research Fund (Hong Kong, China)." }, { "instance_id": "R12250xR12233", "comparison_id": "R12250", "paper_id": "R12233", "text": "Early transmissibility assessment of a novel coronavirus in Wuhan Between December 1, 2019 and January 26, 2020, nearly 3000 cases of respiratory illness caused by a novel coronavirus originating in Wuhan, China have been reported. In this short analysis, we combine publicly available cumulative case data from the ongoing outbreak with phenomenological modeling methods to conduct an early transmissibility assessment. Our model suggests that the basic reproduction number associated with the outbreak (at time of writing) may range from 2.0 to 3.1. Though these estimates are preliminary and subject to change, they are consistent with previous findings regarding the transmissibility of the related SARS-Coronavirus and indicate the possibility of epidemic potential." }, { "instance_id": "R12250xR12247", "comparison_id": "R12250", "paper_id": "R12247", "text": "Early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia Abstract Background The initial cases of novel coronavirus (2019-nCoV)\u2013infected pneumonia (NCIP) occurred in Wuhan, Hubei Province, China, in December 2019 and January 2020. We analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP. 
Methods We collected information on demographic characteristics, exposure history, and illness timelines of laboratory-confirmed cases of NCIP that had been reported by January 22, 2020. We described characteristics of the cases and estimated the key epidemiologic time-delay distributions. In the early period of exponential growth, we estimated the epidemic doubling time and the basic reproductive number. Results Among the first 425 patients with confirmed NCIP, the median age was 59 years and 56% were male. The majority of cases (55%) with onset before January 1, 2020, were linked to the Huanan Seafood Wholesale Market, as compared with 8.6% of the subsequent cases. The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days. In its early stages, the epidemic doubled in size every 7.4 days. With a mean serial interval of 7.5 days (95% CI, 5.3 to 19), the basic reproductive number was estimated to be 2.2 (95% CI, 1.4 to 3.9). Conclusions On the basis of this information, there is evidence that human-to-human transmission has occurred among close contacts since the middle of December 2019. Considerable efforts to reduce transmission will be required to control outbreaks if similar dynamics apply elsewhere. Measures to prevent or reduce transmission should be implemented in populations at risk. (Funded by the Ministry of Science and Technology of China and others.)" }, { "instance_id": "R137469xR137398", "comparison_id": "R137469", "paper_id": "R137398", "text": "Transitions Between and Control of Guided and Branching Streamers in DC Nanosecond Pulsed Excited Plasma Jets Plasma bullets are ionization fronts created in atmospheric-pressure plasma jets. The propagation behavior of those bullets is, in the literature, explained by the formation of an interface between the inert gas and the ambient air created by the gas flow of the plasma jet, which guides these discharges in the formed gas channel. In this paper, we examine this ionization phenomenon in uniform gases at atmospheric pressure where this interface between two gases is not present. By changing electrical parameters and adding admixtures such as oxygen, nitrogen, and air to the gas flow, the conditions for which plasma bullets are present are investigated. Nanosecond time-resolved images have been taken with an ICCD camera to observe the propagation behavior of these discharges. It is argued that the inhomogeneous spatial concentration of metastable atoms and ions, due to the laminar gas flow and the operation frequency of the discharge in the range of a few kilohertz, is responsible for the guidance of the ionization fronts. Furthermore, conditions have been observed at where the branching of the discharge is stable and reproducible over time in the case of a helium plasma by adding admixtures of oxygen. Possible mechanisms for this phenomenon are discussed." }, { "instance_id": "R137469xR137435", "comparison_id": "R137469", "paper_id": "R137435", "text": "On the Use of Atmospheric Pressure Plasma for the Bio-Decontamination of Polymers and Its Impact on Their Chemical and Morphological Surface Properties Low temperature atmospheric pressure plasma processes can be applied to inactivate micro-organisms on products and devices made from synthetic and natural polymers. 
This study shows that even a short-time exposure to Ar or Ar/O2 plasma of an atmospheric pressure plasma jet leads to an inactivation of Bacillus atrophaeus spores with a maximum reduction of 4 orders of magnitude. However, changes in the surface properties of the plasma exposed material have to be considered, too. Therefore, polyethylene and polystyrene are used as exemplary substrate materials to investigate the effect of plasma treatment in more detail. The influence of process parameters, such as type of operating gas or jet-nozzle to substrate distance, is examined. The results show that short-time plasma treatment with Ar and Ar/O2 affects the surface wettability due to the introduction of polar groups as proofed by X-ray photoelectron spectroscopy. Furthermore, atomic force microscopy images reveal changes in the surface topography. Thus, nanostructures of different heights are observed on the polymeric surface depending on the treatment time and type of process gas." }, { "instance_id": "R137469xR137380", "comparison_id": "R137469", "paper_id": "R137380", "text": "Deposition of a TMDSO-Based Film by a Non-Equilibrium Atmospheric Pressure DC Plasma Jet: Deposition of a TMDSO-Based Film\u2026 This work deals with the deposition of thin films using an atmospheric pressure direct current nitrogen plasma jet with tetramethyldisiloxane as precursor. The effect of O-2 flow and plasma discharge power on film deposition rate and film chemical characteristics is investigated in detail by surface profilometry, Fourier transform infrared spectroscopy, and X-ray photoelectron spectroscopy. It is found that a higher deposition rate is obtained at higher oxygen flow rates and higher discharge powers. Increasing discharge power shows a certain amount of capability to transfer low oxygen content bonds to high oxygen content bonds. Organic films can be deposited in a pure nitrogen atmosphere. The film chemical composition can be tuned to a more inorganic structure by admixture of O-2 leading to an increase in SiO4 units at high oxygen flow rates." }, { "instance_id": "R137469xR137425", "comparison_id": "R137469", "paper_id": "R137425", "text": "Using SiOx nano-films to enhance GZO Thin films properties as front electrodes of a-Si solar cells Abstract One of the essential applications of transparent conductive oxides is as front electrodes for superstrate silicon thin-film solar cells. Textured TCO thin films can improve absorption of sunlight for an a-Si:H absorber during a single optical path. In this study, high-haze and low-resistivity bilayer GZO/SiO x thin films prepared using an atmospheric pressure plasma jet (APPJ) deposition technique and dc magnetron sputtering. The silicon subdioxide nano-film plays an important role in controlling the haze value of subsequent deposited GZO thin films. The bilayer GZO/SiO x (90 sccm) sample has the highest haze value (22.30%), the lowest resistivity (8.98 \u00d7 10 \u22124 \u03a9 cm), and reaches a maximum cell efficiency of 6.85% (enhanced by approximately 19% compared to a sample of non-textured GZO)." }, { "instance_id": "R137469xR137383", "comparison_id": "R137469", "paper_id": "R137383", "text": "Study on Plasma Agent Effect of a Direct-Current Atmospheric Pressure Oxygen-Plasma Jet on Inactivation of E. 
coli Using Bacterial Mutants Biosensors of single-gene knockout mutants and physical methods using mesh and quartz glass are employed to discriminate plasma agents and assess their lethal effects generated in a Direct-Current atmospheric-pressure oxygen plasma jet. Radicals generated in plasma are determined by optical emission spectroscopy, along with the O3 density measurement by UV absorption spectroscopy. Besides, thermal effect is investigated by an infrared camera. The biosensors include three kinds of Escherichia coli (E. coli) K-12 substrains with their mutants, totalling 8 kinds of bacteria. Results show that oxidative stress plays a main role in the inactivation process. Rather than superoxide O2-, neutral reactive oxygen species such as O3 and O2(a1\u0394g) are identified as dominant sources for oxidative stress. In addition, DNA damage caused by oxidation is found to be an important destruction mechanism." }, { "instance_id": "R137469xR137456", "comparison_id": "R137469", "paper_id": "R137456", "text": "The antibacterial activity of a microwave argon plasma jet at atmospheric pressure relies mainly on UV-C radiations The main bactericidal sources produced by a microwave induced cold argon plasma jet in open air are identified and their relative proportion in the biocide efficiency of the jet is assessed on planktonic Gram-negative bacteria (wild-type strains and deletion mutants of Escherichia coli) diluted in water. In these conditions ultraviolet light (UV) most probably in the UV-C region of the electromagnetic spectrum, is responsible for 86.7 \u00b1 3.2% of the observed bactericidal efficiency of the jet whereas hydrogen peroxide represents 9.9 \u00b1 5.5% of it. The exposition level of the bacteria to UV-C radiations is estimated at 20 mJ cm\u22122 using a specific photodiode and the influence of the initial bacteria concentration on the apparent antibacterial efficiency of the jet is highlighted." }, { "instance_id": "R137469xR137441", "comparison_id": "R137469", "paper_id": "R137441", "text": "Photons and particles emitted from cold atmospheric-pressure plasma inactivate bacteria and biomolecules independently and synergistically Cold atmospheric-pressure plasmas are currently in use in medicine as surgical tools and are being evaluated for new applications, including wound treatment and cosmetic care. The disinfecting properties of plasmas are of particular interest, given the threat of antibiotic resistance to modern medicine. Plasma effluents comprise (V)UV photons and various reactive particles, such as accelerated ions and radicals, that modify biomolecules; however, a full understanding of the molecular mechanisms that underlie plasma-based disinfection has been lacking. Here, we investigate the antibacterial mechanisms of plasma, including the separate, additive and synergistic effects of plasma-generated (V)UV photons and particles at the cellular and molecular levels. Using scanning electron microscopy, we show that plasma-emitted particles cause physical damage to the cell envelope, whereas UV radiation does not. The lethal effects of the plasma effluent exceed the zone of physical damage. We demonstrate that both plasma-generated particles and (V)UV photons modify DNA nucleobases. The particles also induce breaks in the DNA backbone. The plasma effluent, and particularly the plasma-generated particles, also rapidly inactivate proteins in the cellular milieu. 
Thus, in addition to physical damage to the cellular envelope, modifications to DNA and proteins contribute to the bactericidal properties of cold atmospheric-pressure plasma." }, { "instance_id": "R137469xR137392", "comparison_id": "R137469", "paper_id": "R137392", "text": "Steam plasma jet treatment of phenol in aqueous solution at atmospheric pressure Summary form only given. Steam plasma jet (SPJ) was generated by phenol aqueous solution introduced into an original water plasma torch as plasma forming gas, which outflowed into phenol aqueous solution to conduct oxidation degradation of organic pollutants in aqueous solutions. The experimental results indicated that the phenol was not only rapidly decomposed in thermal plasma jet, but also degraded in phenol aqueous solution due to high concentration hydroxyl radicals. In addition, the outflow of high-velocity jet with dissociated aqueous phenol solution into the treated aqueous solution results in high rates of mass transfer processes, which were beneficial to the active species to liquid and their subsequent participation in chemical reactions with the liquid-phase organic pollutants. The main intermediates of phenol decomposition were pyrocatechol, hydroquinone, maleic acid, butanedioic acid and muconic acid in liquid, which were eventually degraded into CO 2 and H 2 O. The major gaseous effluence products were H 2 , CO and CO 2 . As a result, phenol was not only decomposed by the active hybrid modes, but also converted into resource (syngas), and the energy efficiencies significantly increased from (1.6\u20131.8)\u00d710\u221210 to (4.8\u20138.0)\u00d710\u22128 mol J\u22121 with the initial concentration of phenol increased from 0.5 to 50.0 g L\u22121. This paper highlighted the application of SPJ technology in high concentration of organic polluted wastewater treatment in environmental pollution management." }, { "instance_id": "R137469xR137422", "comparison_id": "R137469", "paper_id": "R137422", "text": "Phase-resolved measurement of electric charge deposited by an atmospheric pressure plasma jet on a dielectric surface The surface charge distribution deposited by the effluent of a dielectric barrier discharge driven atmospheric pressure plasma jet on a dielectric surface has been studied. For the first time, the deposition of charge was observed phase resolved. It takes place in either one or two events in each half cycle of the driving voltage. The charge transfer could also be detected in the electrode current of the jet. The periodic change of surface charge polarity has been found to correspond well with the appearance of ionized channels left behind by guided streamers (bullets) that have been identified in similar experimental situations. The distribution of negative surface charge turned out to be significantly broader than for positive charge. With increasing distance of the jet nozzle from the target surface, the charge transfer decreases until finally the effluent loses contact and the charge transfer stops." }, { "instance_id": "R138127xR137522", "comparison_id": "R138127", "paper_id": "R137522", "text": "Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. 
Poly(D,L-lactide-co-glycolide) (PLGA) nanoparticles containing paclitaxel were prepared by the o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of two human tumor cell lines, a cervical carcinoma (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electron microscopy. Cell viability was determined by the ability of single cells to form colonies in vitro. The prepared nanoparticles were spherical in shape with sizes between 200 nm and 800 nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. Free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under these experimental conditions, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel; Drug delivery; Nanoparticle; Radiotherapy; Hypoxia; Human tumor cells; Cellular uptake" }, { "instance_id": "R138127xR138024", "comparison_id": "R138127", "paper_id": "R138024", "text": "Effects of emulsifiers on the controlled release of paclitaxel (Taxol\u00ae) from nanospheres of biodegradable polymers Paclitaxel (Taxol) is an antineoplastic drug effective for various cancers, especially ovarian and breast cancer. Due to its high hydrophobicity, however, an adjuvant such as Cremophor EL has to be used in its clinical administration, which causes serious side-effects. Nanospheres of biodegradable polymers could be an ideal solution. This study investigates the effects of various emulsifiers on the physical/chemical properties and release kinetics of paclitaxel-loaded nanospheres fabricated by the solvent extraction/evaporation technique. It is shown that phospholipids could be a novel type of emulsifier. The nanospheres manufactured with various emulsifiers were characterized by laser light scattering for their size and size distribution; scanning electron microscopy (SEM) and atomic force microscopy (AFM) for their surface morphology; a zeta potential analyser for their surface charge; and, most importantly, X-ray photoelectron spectroscopy (XPS) for their surface chemistry. The encapsulation efficiency and in vitro release profile were measured by high performance liquid chromatography (HPLC). It is found that dipalmitoyl-phosphatidylcholine (DPPC) can provide a more complete coating on the surface of the products, which results in a higher emulsifying efficiency compared with polyvinyl alcohol (PVA). Our results show that the chain length and unsaturation of the lipids have a significant influence on the emulsifying efficiency. Phospholipids with short and saturated chains have excellent emulsifying effects."
}, { "instance_id": "R138127xR138001", "comparison_id": "R138127", "paper_id": "R138001", "text": "Paclitaxel-loaded PLGA nanoparticles: preparation, physicochemical characterization and in vitro anti-tumoral activity The main objective of this study was to develop a polymeric drug delivery system for paclitaxel, intended to be intravenously administered, capable of improving the therapeutic index of the drug and devoid of the adverse effects of Cremophor EL. To achieve this goal paclitaxel (Ptx)-loaded poly(lactic-co-glycolic acid) (PLGA) nanoparticles (Ptx-PLGA-Nps) were prepared by the interfacial deposition method. The influence of different experimental parameters on the incorporation efficiency of paclitaxel in the nanoparticles was evaluated. Our results demonstrate that the incorporation efficiency of paclitaxel in nanoparticles was mostly affected by the method of preparation of the organic phase and also by the organic phase/aqueous phase ratio. Our data indicate that the methodology of preparation allowed the formation of spherical nanometric (<200 nm), homogeneous and negatively charged particles which are suitable for intravenous administration. The release behaviour of paclitaxel from the developed Nps exhibited a biphasic pattern characterised by an initial fast release during the first 24 h, followed by a slower and continuous release. The in vitro anti-tumoral activity of Ptx-PLGA-Nps developed in this work was assessed using a human small cell lung cancer cell line (NCI-H69 SCLC) and compared to the in vitro anti-tumoral activity of the commercial formulation Taxol. The influence of Cremophor EL on cell viability was also investigated. Exposure of NCI-H69 cells to 25 microg/ml Taxol resulted in a steep decrease in cell viability. Our results demonstrate that incorporation of Ptx in nanoparticles strongly enhances the cytotoxic effect of the drug as compared to Taxol, this effect being more relevant for prolonged incubation times." }, { "instance_id": "R138127xR138058", "comparison_id": "R138127", "paper_id": "R138058", "text": "Formulation, optimization, hemocompatibility and pharmacokinetic evaluation of PLGA nanoparticles containing paclitaxel Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box\u2013Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. 
Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX overcomes the shortcomings of the existing PTX formulation and can be considered a superior alternative carrier system." }, { "instance_id": "R139050xR138710", "comparison_id": "R139050", "paper_id": "R138710", "text": "A general prediction model for the detection of ADHD and Autism using structural and functional MRI This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well." }, { "instance_id": "R139050xR139009", "comparison_id": "R139050", "paper_id": "R139009", "text": "Correlating Stressor Events for Social Network Based Adolescent Stress Prediction In today's highly competitive society, increasingly severe psychological stress damages mental health, especially for immature teenagers who cannot cope with stress well. It is of great significance to predict teenagers\u2019 psychological stress in advance and prepare targeted help in time. Because stressor events are the source of stress and influence its progression, in this paper we give a novel insight into the correlation between stressor events and stress series (stressor-stress correlation, denoted as SSC) and propose an SSC-based stress prediction model built on a microblog platform. Considering both linguistic and temporal correlations between stressor series and stress series, we first quantify the stressor-stress correlation with a KNN method. Afterward, a dynamic NARX recurrent neural network is constructed to integrate the impact of stressor events for predicting teens\u2019 stress in a future episode.
Experimental results on a real data set of 124 high school students verify that our prediction framework achieves promising performance and outperforms baseline methods. Integrating the correlation of stressor events proves effective in stress prediction, significantly improving the average prediction accuracy." }, { "instance_id": "R139050xR138791", "comparison_id": "R139050", "paper_id": "R138791", "text": "Ten-year prediction of suicide death using Cox regression and machine learning in a nationwide retrospective cohort study in South Korea BACKGROUND Death by suicide is a preventable public health concern worldwide. The aim of this study is to investigate the probability of suicide death using baseline characteristics and simple medical facility visit history data using Cox regression, support vector machines (SVMs), and deep neural networks (DNNs). METHOD This study included 819,951 subjects in the National Health Insurance Service (NHIS)-Cohort Sample Database from 2004 to 2013. The dataset was divided randomly into two independent training and validation groups. To improve the performance of predicting suicide death, we applied SVM and DNN to the same training set as the Cox regression model. RESULTS Among the study population, 2546 people died by intentional self-harm during the follow-up time. Sex, age, type of insurance, household income, disability, and medical records of eight ICD-10 codes (including mental and behavioural disorders) were selected by a Cox regression model with backward stepwise elimination. The area under the curve (AUC) values for Cox regression (0.688), SVM (0.687), and DNN (0.683) were approximately the same. The group with the top 0.5% of predicted probability had a hazard ratio of 26.21 compared with the group with the lowest 10% of predicted probability. LIMITATIONS This study is limited by the lack of information on suicidal ideation and attempts, as well as other potential covariates such as medication information and subcategory ICD-10 codes. Moreover, predictors from the 12-24 months prior to the date of death could be expected to show better performance than predictors from up to 10 years earlier. CONCLUSIONS We suggest a 10-year probability prediction model for suicide death using general characteristics and simple insurance data, which are collected annually by the Korean government. Suicide death prevention might be enhanced by our prediction model." }, { "instance_id": "R139050xR138725", "comparison_id": "R139050", "paper_id": "R138725", "text": "Using deep autoencoders to identify abnormal brain structural patterns in neuropsychiatric disorders: A large\u2010scale multi\u2010sample study Machine learning is becoming an increasingly popular approach for investigating spatially distributed and subtle neuroanatomical alterations in brain\u2010based disorders. However, some machine learning models have been criticized for requiring a large number of cases in each experimental group, and for resembling a \u201cblack box\u201d that provides little or no insight into the nature of the data. In this article, we propose an alternative conceptual and practical approach for investigating brain\u2010based disorders which aims to overcome these limitations. We used an artificial neural network known as \u201cdeep autoencoder\u201d to create a normative model using structural magnetic resonance imaging data from 1,113 healthy people.
We then used this model to estimate total and regional neuroanatomical deviation in individual patients with schizophrenia and autism spectrum disorder using two independent data sets (n = 263). We report that the model was able to generate different values of total neuroanatomical deviation for each disease under investigation relative to their control group (p < .005). Furthermore, the model revealed distinct patterns of neuroanatomical deviations for the two diseases, consistent with the existing neuroimaging literature. We conclude that the deep autoencoder provides a flexible and promising framework for assessing total and regional neuroanatomical deviations in neuropsychiatric populations." }, { "instance_id": "R139050xR138782", "comparison_id": "R139050", "paper_id": "R138782", "text": "Ordinal convolutional neural networks for predicting RDoC positive valence psychiatric symptom severity scores BACKGROUND The CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing (NLP) provided a set of 1000 neuropsychiatric notes to participants as part of a competition to predict psychiatric symptom severity scores. This paper summarizes our methods, results, and experiences based on our participation in the second track of the shared task. OBJECTIVE Classical methods of text classification usually fall into one of three problem types: binary, multi-class, and multi-label classification. In this effort, we study ordinal regression problems with text data where misclassifications are penalized differently based on how far apart the ground truth and model predictions are on the ordinal scale. Specifically, we present our entries (methods and results) in the N-GRID shared task in predicting research domain criteria (RDoC) positive valence ordinal symptom severity scores (absent, mild, moderate, and severe) from psychiatric notes. METHODS We propose a novel convolutional neural network (CNN) model designed to handle ordinal regression tasks on psychiatric notes. Broadly speaking, our model combines an ordinal loss function, a CNN, and conventional feature engineering (wide features) into a single model which is learned end-to-end. Given interpretability is an important concern with nonlinear models, we apply a recent approach called locally interpretable model-agnostic explanation (LIME) to identify important words that lead to instance specific predictions. RESULTS Our best model entered into the shared task placed third among 24 teams and scored a macro mean absolute error (MMAE) based normalized score (100\u00b7(1-MMAE)) of 83.86. Since the competition, we improved our score (using basic ensembling) to 85.55, comparable with the winning shared task entry. Applying LIME to model predictions, we demonstrate the feasibility of instance specific prediction interpretation by identifying words that led to a particular decision. CONCLUSION In this paper, we present a method that successfully uses wide features and an ordinal loss function applied to convolutional neural networks for ordinal text classification specifically in predicting psychiatric symptom severity scores. Our approach leads to excellent performance on the N-GRID shared task and is also amenable to interpretability using existing model-agnostic approaches." 
}, { "instance_id": "R139050xR138865", "comparison_id": "R139050", "paper_id": "R138865", "text": "Detection of mood disorder using speech emotion profiles and LSTM In mood disorder diagnosis, bipolar disorder (BD) patients are often misdiagnosed as unipolar depression (UD) on initial presentation. It is crucial to establish an accurate distinction between BD and UD to make a correct and early diagnosis, leading to improvements in treatment and course of illness. To deal with this misdiagnosis problem, in this study, we experimented on eliciting subjects' emotions by watching six eliciting emotional video clips. After watching each video clips, their speech responses were collected when they were interviewing with a clinician. In mood disorder detection, speech emotions play an import role to detect manic or depressive symptoms. Therefore, speech emotion profiles (EP) are obtained by using the support vector machine (SVM) which are built via speech features adapted from selected databases using a denoising autoencoder-based method. Finally, a Long Short-Term Memory (LSTM) recurrent neural network is employed to characterize the temporal information of the EPs with respect to six emotional videos. Comparative experiments clearly show the promising advantage and efficacy of the LSTM-based approach for mood disorder detection." }, { "instance_id": "R139050xR138679", "comparison_id": "R139050", "paper_id": "R138679", "text": "Synthetic structural magnetic resonance image generator improves deep learning prediction of schizophrenia Despite the rapidly growing interest, progress in the study of relations between physiological abnormalities and mental disorders is hampered by complexity of the human brain and high costs of data collection. The complexity can be captured by deep learning approaches, but they still may require significant amounts of data. In this paper, we seek to mitigate the latter challenge by developing a generator for synthetic realistic training data. Our method greatly improves generalization in classification of schizophrenia patients and healthy controls from their structural magnetic resonance images. A feed forward neural network trained exclusively on continuously generated synthetic data produces the best area under the curve compared to classifiers trained on real data alone." }, { "instance_id": "R139050xR138814", "comparison_id": "R139050", "paper_id": "R138814", "text": "A deep learning based scoring system for prioritizing susceptibility variants for mental disorders Many rare and common genetic variants, including SNPs and CNVs, are reported to be associated with mental disorders, yet more remain to be discovered. However, despite the large amount of high-throughput genomics data, there is a lack of integrative methods to systematically prioritize variants that confer susceptibility to mental disorders in personal genomes. Here, we developed a computational tool: a deep learning based scoring system (ncDeepBrain) to analyze whole genome/exome sequencing data on personal genomes by integrating contributions from coding, non-coding, structural variants, known brain expression quantitative trait locus (eQTLs), and enhancer/promoter peaks from PsychENCODE. The input is whole-genome variants and the output is prioritized list of variants that may be of relevance to the phenotypes. 
For population studies, our method can help prioritize novel variants that are associated with disease susceptibility; for individual patients, our method can help identify variants with major effect sizes for mental disorders." }, { "instance_id": "R139050xR138984", "comparison_id": "R139050", "paper_id": "R138984", "text": "Cell-Coupled Long Short-Term Memory With $L$ -Skip Fusion Mechanism for Mood Disorder Detection Through Elicited Audiovisual Features In the early stages of mood disorder diagnosis, patients with bipolar disorder are often diagnosed as having unipolar depression. Because long-term monitoring is limited by the delayed detection of mood disorders, an accurate one-time diagnosis is desirable to avoid delays in appropriate treatment due to misdiagnosis. In this paper, an elicitation-based approach is proposed for realizing a one-time diagnosis by using responses elicited from patients by having them watch six emotion-eliciting videos. After watching each video clip, the conversations, including patient facial expressions and speech responses, between the participant and the clinician conducting the interview were recorded. Next, a hierarchical spectral clustering algorithm was employed to adapt the facial expression and speech response features by using the extended Cohn\u2013Kanade and eNTERFACE databases. A denoising autoencoder was further applied to extract the bottleneck features of the adapted data. Then, the facial and speech bottleneck features were input into support vector machines to obtain speech emotion profiles (EPs) and the modulation spectrum (MS) of the facial action unit sequence for each elicited response. Finally, a cell-coupled long short-term memory (LSTM) network with an $L$ -skip fusion mechanism was proposed to model the temporal information of all elicited responses and to loosely fuse the EPs and the MS for conducting mood disorder detection. The experimental results revealed that the cell-coupled LSTM with the $L$ -skip fusion mechanism has promising advantages and efficacy for mood disorder detection." }, { "instance_id": "R139050xR139033", "comparison_id": "R139050", "paper_id": "R139033", "text": "Extracting psychiatric stressors for suicide from social media using deep learning Background: Suicide has been one of the leading causes of death in the United States. One major cause of suicide is psychiatric stressors. The detection of psychiatric stressors in an at-risk population will facilitate the early prevention of suicidal behaviors and suicide. In recent years, the widespread popularity and real-time information sharing of social media allow potential early intervention in a large-scale population. However, few automated approaches have been proposed to extract psychiatric stressors from Twitter. The goal of this study was to investigate techniques for recognizing suicide-related psychiatric stressors from Twitter using deep learning based methods and a transfer learning strategy that leverages an existing annotated dataset from clinical text. Methods: First, a dataset of suicide-related tweets was collected from Twitter streaming data with a multi-step pipeline including keyword-based retrieval, filtering and further refinement using an automated binary classifier. Specifically, a convolutional neural network (CNN) based algorithm was used to build the binary classifier. Next, psychiatric stressors were annotated in the suicide-related tweets.
The stressor recognition problem is conceptualized as a typical named entity recognition (NER) task and tackled using recurrent neural network (RNN) based methods. Moreover, to reduce the annotation cost and improve performance, a transfer learning strategy was adopted by leveraging existing annotations from clinical text. Results & conclusions: To the best of our knowledge, this is the first effort to extract psychiatric stressors from Twitter data using deep learning based approaches. Comparison to traditional machine learning algorithms shows the superiority of deep learning based approaches. The CNN leads the performance in identifying suicide-related tweets, with a precision of 78% and an F-1 measure of 83%, outperforming Support Vector Machine (SVM), Extra Trees (ET), etc. RNN-based psychiatric stressor recognition obtains the best F-1 measure of 53.25% by exact match and 67.94% by inexact match, outperforming Conditional Random Fields (CRF). Moreover, transfer learning from clinical notes to the Twitter corpus outperforms training on the Twitter corpus alone, with an F-1 measure of 54.9% by exact match. The results indicate the advantages of deep learning based methods for automated stressor recognition from social media." }, { "instance_id": "R139050xR138944", "comparison_id": "R139050", "paper_id": "R138944", "text": "Affective Computational Model to Extract Natural Affective States of Students With Asperger Syndrome (AS) in Computer-Based Learning Environment This paper was inspired by the central role of emotion in the learning process, its impact on students\u2019 performance, and the lack of affective computing models to detect and infer affective-cognitive states in real time for students with and without Asperger Syndrome (AS). This model overcomes gaps in other models that were designed for people with autism, which required sensors or physiological instrumentation to collect data. The model uses a webcam to capture students\u2019 affective-cognitive states of confidence, uncertainty, engagement, anxiety, and boredom. These states have a dominant effect on the learning process. The model was trained and tested on a natural-spontaneous affective dataset for students with and without AS, which was collected for this purpose. The dataset was collected in an uncontrolled environment and included variations in culture, ethnicity, gender, facial and hairstyle, head movement, talking, glasses, illumination changes, and background variation. The model structure used deep learning (DL) techniques such as convolutional neural networks and long short-term memory. DL is a state-of-the-art tool used to reduce data dimensionality and capture complex non-linear features from simpler representations. The affective model provides reliable results with an accuracy of 90.06%. This is the first model to detect affective states of adult students with AS without physiological or wearable instruments. For the first time, occlusions in this model, such as a hand over the face or head, were considered an important indicator of affective states such as boredom, anxiety, and uncertainty. These occlusions have been ignored in most other affective models. The essential information channels in this model are facial expressions, head movement, and eye gaze. The model can serve as an assistive technology for tutors to monitor and detect the behaviors of all students at the same time and help predict negative affective states during the learning process."
}, { "instance_id": "R139050xR138884", "comparison_id": "R139050", "paper_id": "R138884", "text": "Automatic Detection of ADHD and ASD from Expressive Behaviour in RGBD Data Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) are neurodevelopmental conditions which impact on a significant number of children and adults. Currently, the diagnosis of such disorders is done by experts who employ standard questionnaires and look for certain behavioural markers through manual observation. Such methods for their diagnosis are not only subjective, difficult to repeat, and costly but also extremely time consuming. In this work, we present a novel methodology to aid diagnostic predictions about the presence/absence of ADHD and ASD by automatic visual analysis of a persons behaviour. To do so, we conduct the questionnaires in a computer-mediated way while recording participants with modern RGBD (Colour+Depth) sensors. In contrast to previous automatic approaches which have focussed only on detecting certain behavioural markers, our approach provides a fully automatic end-to-end system to directly predict ADHD and ASD in adults. Using state of the art facial expression analysis based on Dynamic Deep Learning and 3D analysis of behaviour, we attain classification rates of 96% for Controls vs Condition (ADHD/ASD) groups and 94% for Comorbid (ADHD+ASD) vs ASD only group. We show that our system is a potentially useful time saving contribution to the clinical diagnosis of ADHD and ASD." }, { "instance_id": "R139050xR138802", "comparison_id": "R139050", "paper_id": "R138802", "text": "Assessing the severity of positive valence symptoms in initial psychiatric evaluation records: Should we use convolutional neural networks? Background and objective Efficiently capturing the severity of positive valence symptoms could aid in risk stratification for adverse outcomes among patients with psychiatric disorders and identify optimal treatment strategies for patient subgroups. Motivated by the success of convolutional neural networks (CNNs) in classification tasks, we studied the application of various CNN architectures and their performance in predicting the severity of positive valence symptoms in patients with psychiatric disorders based on initial psychiatric evaluation records. Methods Psychiatric evaluation records contain unstructured text and semi-structured data such as question\u2013answer pairs. For a given record, we tokenise and normalise the semi-structured content. Pre-processed tokenised words are represented as one-hot encoded word vectors. We then apply different configurations of convolutional and max pooling layers to automatically learn important features from various word representations. We conducted a series of experiments to explore the effect of different CNN architectures on the classification of psychiatric records. Results Our best CNN model achieved a mean absolute error (MAE) of 0.539 and a normalized MAE of 0.785 on the test dataset, which is comparable to the other well-known text classification algorithms studied in this work. Our results also suggest that the normalisation step has a great impact on the performance of the developed models. Conclusions We demonstrate that normalisation of the semi-structured contents can improve the MAE among all CNN configurations. Without advanced feature engineering, CNN-based approaches can provide a comparable solution for classifying positive valence symptom severity in initial psychiatric evaluation records. 
Although word embedding is well known for its ability to capture relatively low-dimensional similarity between words, our experimental results show that pre-trained embeddings do not improve the classification performance. This phenomenon may be due to the inability of word embeddings to capture problem specific contextual semantic information implying the quality of the employing embedding is critical for obtaining an accurate CNN model." }, { "instance_id": "R139050xR138684", "comparison_id": "R139050", "paper_id": "R138684", "text": "Using deep belief network modelling to characterize differences in brain morphometry in schizophrenia Neuroimaging-based models contribute to increasing our understanding of schizophrenia pathophysiology and can reveal the underlying characteristics of this and other clinical conditions. However, the considerable variability in reported neuroimaging results mirrors the heterogeneity of the disorder. Machine learning methods capable of representing invariant features could circumvent this problem. In this structural MRI study, we trained a deep learning model known as deep belief network (DBN) to extract features from brain morphometry data and investigated its performance in discriminating between healthy controls (N = 83) and patients with schizophrenia (N = 143). We further analysed performance in classifying patients with a first-episode psychosis (N = 32). The DBN highlighted differences between classes, especially in the frontal, temporal, parietal, and insular cortices, and in some subcortical regions, including the corpus callosum, putamen, and cerebellum. The DBN was slightly more accurate as a classifier (accuracy = 73.6%) than the support vector machine (accuracy = 68.1%). Finally, the error rate of the DBN in classifying first-episode patients was 56.3%, indicating that the representations learned from patients with schizophrenia and healthy controls were not suitable to define these patients. Our data suggest that deep learning could improve our understanding of psychiatric disorders such as schizophrenia by improving neuromorphometric analyses." }, { "instance_id": "R139050xR139047", "comparison_id": "R139050", "paper_id": "R139047", "text": "Question Answering for Suicide Risk Assessment Using Reddit Mental Health America designed ten questionnaires that are used to determine the risk of mental disorders. They are also commonly used by Mental Health Professionals (MHPs) to assess suicidality. Specifically, the Columbia Suicide Severity Rating Scale (C-SSRS), a widely used suicide assessment questionnaire, helps MHPs determine the severity of suicide risk and offer an appropriate treatment. A major challenge in suicide treatment is the social stigma wherein the patient feels reluctance in discussing his/her conditions with an MHP, which leads to inaccurate assessment and treatment of patients. On the other hand, the same patient is comfortable freely discussing his/her mental health condition on social media due to the anonymity of platforms such as Reddit, and the ability to control what, when and how to share. The popular \u201cSuicideWatch\u201d subreddit has been widely used among individuals who experience suicidal thoughts, and provides significant cues for suicidality. The timeliness in sharing thoughts, the flexibility in describing feelings, and the interoperability in using medical terminologies make Reddit an important platform to be utilized as a complementary tool to the conventional healthcare system. 
As MHPs develop an implicit weighting scheme over the questionnaire (i.e., C-SSRS) to assess suicide risk severity, creating a relative weighting scheme for automatically generated answers to the questions in the questionnaire poses a key challenge. In this interdisciplinary study, we position our approach towards a solution for an automated suicide risk-elicitation framework through a novel question answering mechanism. Our two-fold approach benefits from using: 1) semantic clustering, and 2) sequence-to-sequence (Seq2Seq) models. We also generate a gold standard dataset of suicide posts with their risk levels. This work forms a basis for the next step of building conversational agents that elicit suicide-related natural conversation based on questions." }, { "instance_id": "R139050xR138998", "comparison_id": "R139050", "paper_id": "R138998", "text": "Psychological stress detection from cross-media microblog data using Deep Sparse Neural Network Long-term stress may lead to many severe physical and mental problems. Traditional psychological stress detection usually relies on active individual participation, which makes the detection labor-intensive, time-consuming and delayed. With the rapid development of social networks, people are becoming more and more willing to share their moods via microblog platforms. In this paper, we propose an automatic stress detection method using cross-media microblog data. We construct a three-level framework to formulate the problem. We first obtain a set of low-level features from the tweets. Then we define and extract middle-level representations based on psychological and art theories: linguistic attributes from tweets' texts, visual attributes from tweets' images, and social attributes from tweets' comments, retweets and favorites. Finally, a Deep Sparse Neural Network is designed to learn the stress categories incorporating the cross-media attributes. Experimental results show that the proposed method is effective and efficient at detecting psychological stress from microblog data." }, { "instance_id": "R139050xR138786", "comparison_id": "R139050", "paper_id": "R138786", "text": "Predicting mental conditions based on \u201chistory of present illness\u201d in psychiatric notes with deep neural networks BACKGROUND Applications of natural language processing to mental health notes are not common given the sensitive nature of the associated narratives. The CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing (NLP) changed this scenario by providing the first set of neuropsychiatric notes to participants. This study summarizes our efforts and results in proposing a novel data use case for this dataset as part of the third track in this shared task. OBJECTIVE We explore the feasibility and effectiveness of predicting a set of common mental conditions a patient has based on the short textual description of the patient's history of present illness, typically found at the beginning of an initial psychiatric evaluation note. MATERIALS AND METHODS We clean and process the 1000 records made available through the N-GRID clinical NLP task into a key-value dictionary and build a dataset of 986 examples for which there is a narrative for history of present illness as well as Yes/No responses regarding the presence of specific mental conditions.
We propose two independent deep neural network models: one based on convolutional neural networks (CNN) and another based on recurrent neural networks with hierarchical attention (ReHAN), the latter of which allows for interpretation of model decisions. We conduct experiments to compare these methods to each other and to baselines based on linear models and named entity recognition (NER). RESULTS Our CNN model with optimized thresholding of output probability estimates achieves best overall mean micro-F score of 63.144% for 11 common mental conditions with statistically significant gains (p<0.05) over all other models. The ReHAN model with interpretable attention mechanism scored 61.904% mean micro-F1 score. Both models' improvements over baseline models (support vector machines and NER) are statistically significant. The ReHAN model additionally aids in interpretation of the results by surfacing important words and sentences that lead to a particular prediction for each instance. CONCLUSIONS Although the history of present illness is a short text segment averaging 300 words, it is a good predictor for a few conditions such as anxiety, depression, panic disorder, and attention deficit hyperactivity disorder. Proposed CNN and RNN models outperform baseline approaches and complement each other when evaluating on a per-label basis." }, { "instance_id": "R139190xR139086", "comparison_id": "R139190", "paper_id": "R139086", "text": "Generation of atomic oxygen in the effluent of an atmospheric pressure plasma jet The planar 13.56 MHz RF-excited low temperature atmospheric pressure plasma jet (APPJ) investigated in this study is operated with helium feed gas and a small molecular oxygen admixture. The effluent leaving the discharge through the jet's nozzle contains very few charged particles and a high reactive oxygen species' density. As its main reactive radical, essential for numerous applications, the ground state atomic oxygen density in the APPJ's effluent is measured spatially resolved with two-photon absorption laser induced fluorescence spectroscopy. The atomic oxygen density at the nozzle reaches a value of ~1016 cm\u22123. Even at several centimetres distance still 1% of this initial atomic oxygen density can be detected. Optical emission spectroscopy (OES) reveals the presence of short living excited oxygen atoms up to 10 cm distance from the jet's nozzle. The measured high ground state atomic oxygen density and the unaccounted for presence of excited atomic oxygen require further investigations on a possible energy transfer from the APPJ's discharge region into the effluent: energetic vacuum ultraviolet radiation, measured by OES down to 110 nm, reaches far into the effluent where it is presumed to be responsible for the generation of atomic oxygen." }, { "instance_id": "R139190xR139135", "comparison_id": "R139190", "paper_id": "R139135", "text": "2D spatially resolved O atom density profiles in an atmospheric pressure plasma jet: from the active plasma volume to the effluent Two-dimensional spatially resolved absolute atomic oxygen densities are measured within an atmospheric pressure micro plasma jet and in its effluent. The plasma is operated in helium with an admixture of 0.5% of oxygen at 13.56 MHz and with a power of 1 W. Absolute atomic oxygen densities are obtained using two photon absorption laser induced fluorescence spectroscopy. 
The results are interpreted based on measurements of the electron dynamics by phase resolved optical emission spectroscopy in combination with a simple model that balances the production of atomic oxygen with its losses due to chemical reactions and diffusion. Within the discharge, the atomic oxygen density builds up with a rise time of 600 \u00b5s along the gas flow and reaches a plateau of 8 \u00d7 1015 cm\u22123. In the effluent, the density decays exponentially with a decay time of 180 \u00b5s (corresponding to a decay length of 3 mm at a gas flow of 1.0 slm). It is found that both, the species formation behavior and the maximum distance between the jet nozzle and substrates for possible oxygen treatments of surfaces can be controlled by adjusting the gas flow." }, { "instance_id": "R139190xR139103", "comparison_id": "R139190", "paper_id": "R139103", "text": "Summarizing results on the performance of a selective set of atmospheric plasma jets for separation of photons and reactive particles A microscale atmospheric-pressure plasma jet is a remote plasma jet, where plasma-generated reactive particles and photons are involved in substrate treatment. Here, we summarize our efforts to develop and characterize a particle- or photon-selective set of otherwise identical jets. In that way, the reactive species or photons can be used separately or in combination to study their isolated or combined effects to test whether the effects are additive or synergistic. The final version of the set of three jets\u2014particle-jet, photon-jet and combined jet\u2014is introduced. This final set realizes the highest reproducibility of the photon and particle fluxes, avoids turbulent gas flow, and the fluxes of the selected plasma-emitted components are almost identical in the case of all jets, while the other component is effectively blocked, which was verified by optical emission spectroscopy and mass spectrometry. Schlieren-imaging and a fluid dynamics simulation show the stability of the gas flow. The performance of these selective jets is demonstrated with the example of the treatment of E. coli bacteria with the different components emitted by a He-only, a He/N2 and a He/O2 plasma. Additionally, measurements of the vacuum UV photon spectra down to the wavelength of 50 nm can be made with the photon-jet and the relative comparison of spectral intensities among different gas mixtures is reported here. The results will show that the vacuum UV photons can lead to the inactivation of the E.coli bacteria." }, { "instance_id": "R139190xR139074", "comparison_id": "R139190", "paper_id": "R139074", "text": "RF Capillary Jet - a Tool for Localized Surface Treatment The UV/VUV spectrum of a non-thermal capillary plasma jet operating with Ar at ambient atmosphere and the temperature load of a substrate exposed to the jet have been measured. The VUV radiation is assigned to N, H, and O atomic lines along with an Ar*2 excimer continuum. The absolute radiance (115-200 nm) of the source has been determined. Maximum values of 880 \u03bcW/mm2sr are obtained. Substrate temperatures range between 35 \u00b0C for low powers and high gas flow conditions and 95 \u00b0C for high powers and reduced gas flow. The plasma source (13.56, 27.12 or 40.78 MHz) can be operated in Ar and in N2. The further addition of a low percentage of silicon containing reactive admixtures has been demonstrated for thin film deposition. Several further applications related to surface modification have been successfully applied. 
(\u00a9 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)" }, { "instance_id": "R139190xR139106", "comparison_id": "R139190", "paper_id": "R139106", "text": "Concepts and characteristics of the \u2018COST Reference Microplasma Jet\u2019 Biomedical applications of non-equilibrium atmospheric pressure plasmas have attracted intense interest in the past few years. Many plasma sources of diverse design have been proposed for these applications, but the relationship between source characteristics and application performance is not well-understood, and indeed many sources are poorly characterized. This circumstance is an impediment to progress in application development. A reference source with well-understood and highly reproducible characteristics may be an important tool in this context. Researchers around the world should be able to compare the characteristics of their own sources and also their results with this device. In this paper, we describe such a reference source, developed from the simple and robust micro-scaled atmospheric pressure plasma jet (\u03bc-APPJ) concept. This development occurred under the auspices of COST Action MP1101 'Biomedical Applications of Atmospheric Pressure Plasmas'. Gas contamination and power measurement are shown to be major causes of irreproducible results in earlier source designs. These problems are resolved in the reference source by refinement of the mechanical and electrical design and by specifying an operating protocol. These measures are shown to be absolutely necessary for reproducible operation. They include the integration of current and voltage probes into the jet. The usual combination of matching unit and power supply is replaced by an integrated LC power coupling circuit and a 5 W single frequency generator. The design specification and operating protocol for the reference source are being made freely available." }, { "instance_id": "R139190xR139124", "comparison_id": "R139190", "paper_id": "R139124", "text": "Comparison of electron heating and energy loss mechanisms in an RF plasma jet operated in argon and helium The \u00b5-APPJ is a well-investigated atmospheric pressure RF plasma jet. Up to now, it has mainly been operated using helium as feed gas due to stability restrictions. However, the COST-Jet design including precise electrical probes now offers the stability and reproducibility to create equi-operational plasmas in helium as well as in argon. In this publication, we compare fundamental plasma parameters and physical processes inside the COST reference microplasma jet, a capacitively coupled RF atmospheric pressure plasma jet, under operation in argon and in helium. Differences already observable by the naked eye are reflected in differences in the power-voltage characteristic for both gases. Using an electrical model and a power balance, we calculated the electron density and temperature at 0.6 W to be 9e17 m-3, 1.2 eV and 7.8e16 m-3, 1.7 eV for argon and helium, respectively. In case of helium, a considerable part of the discharge power is dissipated in elastic electron-atom collisions, while for argon most of the input power is used for ionization. Phase-resolved emission spectroscopy reveals differently pronounced heating mechanisms. Whereas bulk heating is more prominent in argon compared to helium, the opposite trend is observed for sheath heating. This also explains the different behavior observed in the power-voltage characteristics." 
}, { "instance_id": "R139190xR139080", "comparison_id": "R139190", "paper_id": "R139080", "text": "Diagnostics on an atmospheric pressure plasma jet The atmospheric pressure plasma jet (APPJ) is a homogeneous non-equilibrium discharge at ambient pressure. It operates with a noble base gas and a percentage-volume admixture of a molecular gas. Applications of the discharge are mainly based on reactive species in the effluent. The effluent region of a discharge operated in helium with an oxygen admixture has been investigated. The optical emission from atomic oxygen decreases with distance from the discharge but can still be observed several centimetres in the effluent. Ground state atomic oxygen, measured using absolutely calibrated two-photon laser induced fluorescence spectroscopy, shows a similar behaviour. Detailed understanding of energy transport mechanisms requires investigations of the discharge volume and the effluent region. An atmospheric pressure plasma jet has been designed providing excellent diagnostics access and a simple geometry ideally suited for modelling and simulation. Laser spectroscopy and optical emission spectroscopy can be applied in the discharge volume and the effluent region." }, { "instance_id": "R139425xR76786", "comparison_id": "R139425", "paper_id": "R76786", "text": "SPARQL basic graph pattern optimization using selectivity estimation In this paper, we formalize the problem of Basic Graph Pattern (BGP) optimization for SPARQL queries and main memory graph implementations of RDF data. We define and analyze the characteristics of heuristics for selectivity-based static BGP optimization. The heuristics range from simple triple pattern variable counting to more sophisticated selectivity estimation techniques. Customized summary statistics for RDF data enable the selectivity estimation of joined triple patterns and the development of efficient heuristics. Using the Lehigh University Benchmark (LUBM), we evaluate the performance of the heuristics for the queries provided by the LUBM and discuss some of them in more details." }, { "instance_id": "R139425xR75504", "comparison_id": "R139425", "paper_id": "R75504", "text": "An ant colony optimisation approach for optimising SPARQL queries by reordering triple patterns Processing the excessive volumes of information on the Web is an important issue. The Semantic Web paradigm has been proposed as the solution. However, this approach generates several challenges, such as query processing and optimisation. This paper proposes a novel approach for optimising SPARQL queries with different graph shapes. This new method reorders the triple patterns using Ant Colony Optimisation (ACO) algorithms. Reordering the triple patterns is a way of decreasing the execution times of the SPARQL queries. The proposed approach is focused on in-memory models of RDF data, and it optimises the SPARQL queries by means of Ant System, Elitist Ant System and MAX-MIN Ant System algorithms. The approach is implemented in the Apache Jena ARQ query engine, which is used for the experimentation, and the new method is compared with Normal Execution, Jena Reorder Algorithms, and the Stocker et al. Algorithms. All of the experiments are performed using the LUBM dataset for various shapes of queries, such as chain, star, cyclic, and chain-star. 
The first contribution is the real-time optimisation of SPARQL query triple pattern orders using ACO algorithms, and the second contribution is the concrete implementation for the ARQ query engine, which is a component of the widely used Semantic Web framework Apache Jena. The experiments demonstrate that the proposed method reduces the execution time of the queries significantly. Highlights: An approach for optimising SPARQL SELECT queries with different graph shapes and different numbers of triple patterns. Ant Colony Optimisation algorithms are used to optimise the queries. The approach is implemented in the Apache Jena ARQ query engine. Experiments are performed using the LUBM dataset for various shapes of queries. The experiments demonstrate that the proposed method reduces the execution time of the queries significantly." }, { "instance_id": "R139526xR139463", "comparison_id": "R139526", "paper_id": "R139463", "text": "Energy management and optimization: case study of a textile plant in Istanbul, Turkey Purpose: This paper aims to present the results of energy management and optimization studies in one Turkish textile factory. In a case study of a print and dye factory in Istanbul, the authors identified energy-sensitive processes and proposed energy management applications. Design/methodology/approach: Appropriate energy management methods have been implemented in the factory, and the results were examined in terms of energy efficiency and cost reduction. Findings: By applying the methods for fuel distribution optimization, the authors demonstrated that energy costs could be decreased by approximately. Originality/value: Energy management is a vital issue for industries, particularly in developing countries such as Turkey. Turkey is an energy-poor country and imports more than half of its energy to satisfy its increasing domestic demands. An important share of these demands stems from the presence of a strong textile industry that operates throughout the country." }, { "instance_id": "R139526xR139478", "comparison_id": "R139526", "paper_id": "R139478", "text": "Heuristic lot size scheduling on unrelated parallel machines with applications in the textile industry In this paper, we present an industrial problem found in a company that produces acrylic fibres to be used by the textile industry. The problem is a particular case of the discrete lot sizing and scheduling problem (DLSP). In this problem, lots of similar products must be generated and sequenced on ten unrelated parallel machines, in order to minimize tool changeovers and the quantity of fibre delivered after the required due date. The company problem is original because a changeover can occur between two lots of the same product due to tool wear. We analyse the problem in detail and present an adaptation of a heuristic found in the literature to solve it. Results obtained with the proposed heuristic are compared with the results previously obtained by the production planner using historical data." }, { "instance_id": "R139526xR139487", "comparison_id": "R139526", "paper_id": "R139487", "text": "Scheduling with multi-attribute set-up times on unrelated parallel machines This paper studies a problem in the knitting process of the textile industry. In such a production system, each job has a number of attributes and each attribute has one or more levels. Because there is at least one different attribute level between two adjacent jobs, it is necessary to make a set-up adjustment whenever there is a switch to a different job.
The problem can be formulated as a scheduling problem with multi-attribute set-up times on unrelated parallel machines. The objective of the problem is to assign jobs to different machines to minimise the makespan. A constructive heuristic is developed to obtain a qualified solution. To improve the solution further, a meta-heuristic that uses a genetic algorithm with a new crossover operator and three local searches is proposed. The computational experiments show that the proposed constructive heuristic outperforms two existing heuristics and the current scheduling method used by the case textile plant." }, { "instance_id": "R139567xR139313", "comparison_id": "R139567", "paper_id": "R139313", "text": "A hybrid Semantic driven recommender for services in the eGovernment domain On the way towards the maturity of eGovernment solutions, a number of paths are being explored. Among them, one of the not fully explored mechanisms is the use of social features for better provisioning of domain services. This paper explores how to provide support for the discovery of services from Public Administrations using folksonomies. Taking advantage of these, the authors develop a social site and provide a complete mechanism to recommend new services to users using techniques from CF and CBF recommenders. Also, some conclusions are presented to enlighten future practitioners and researchers." }, { "instance_id": "R139567xR139297", "comparison_id": "R139567", "paper_id": "R139297", "text": "What's going on in my city?: recommender systems and electronic participatory budgeting In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. Using public data from the ePB platforms of three major US cities (Cambridge, Miami and New York City), we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that, depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, calls for further research in the area." }, { "instance_id": "R139567xR139316", "comparison_id": "R139567", "paper_id": "R139316", "text": "Proactive and reactive e-government services recommendation Governmental portals designed to provide electronic services are generally overloaded with information that may hinder the effectiveness of e-government services. This paper proposes a new framework to supply citizens with adapted content and personalized services that satisfy their requirements and fit their profiles in order to guarantee universal access to governmental services. The proposed reactive and proactive solutions combine several recommendation techniques that use different data sources, i.e., citizen profiles, social media databases, citizen feedback databases and service databases. It is shown that recommender systems provide citizens with accessible personalized e-government services."
}, { "instance_id": "R139567xR139303", "comparison_id": "R139567", "paper_id": "R139303", "text": "Comparing Three Online Civic Engagement Platforms using the \u201cSpectrum of Public Participation\u201d Framework Author(s): Nelimarkka, Matti; Nonnecke, Brandie; Krishnan, Sanjay; Aitumurto, Tanja; Catterson, Daniel; Crittenden, Camille; Garland, Chris; Gregory, Conrad; Huang, Ching-Chang (Allen); Newsom, Gavin; Patel, Jay; Scott, John; Goldberg, Ken | Abstract: Online civic engagement platforms accessed via desktops or mobile devices can provide new opportunities for the public to express views and insights, consider the views of others, assist in identifying innovative ideas and new approaches to public policy issues, and directly engage with elected leaders. Existing platforms vary widely in their approaches to: assessment, engagement, ideation, evaluation, and deliberation. We consider three online platforms: the Living Voters Guide, including its earlier iterations Consider.it and Reflect; the Open Town Hall; and the California Report Card. We compare them using the International Association of Public Participation\u2019s \u201cSpectrum of Public Participation\u201d framework. Using a 10-point scale, we evaluate the user interface of each platform in terms of how well it supports the Spectrum\u2019s levels of civic engagement (inform, consult, involve, collaborate, and empower). Results suggest how user interface design affects civic engagement and suggest opportunities for future work" }, { "instance_id": "R139642xR139634", "comparison_id": "R139642", "paper_id": "R139634", "text": "Highly Reproducible Sn-Based Hybrid Perovskite Solar Cells with 9% Efficiency The low power conversion efficiency (PCE) of tin-based hybrid perovskite solar cells (HPSCs) is mainly attributed to the high background carrier density due to a high density of intrinsic defects such as Sn vacancies and oxidized species (Sn4+) that characterize Sn-based HPSCs. Herein, this study reports on the successful reduction of the background carrier density by more than one order of magnitude by depositing near-single-crystalline formamidinium tin iodide (FASnI3) films with the orthorhombic a-axis in the out-of-plane direction. Using these highly crystalline films, obtained by mixing a very small amount (0.08 m) of layered (2D) Sn perovskite with 0.92 m (3D) FASnI3, for the first time a PCE as high as 9.0% in a planar p\u2013i\u2013n device structure is achieved. These devices display negligible hysteresis and light soaking, as they benefit from very low trap-assisted recombination, low shunt losses, and more efficient charge collection. This represents a 50% improvement in PCE compared to the best reference cell based on a pure FASnI3 film using SnF2 as a reducing agent. Moreover, the 2D/3D-based HPSCs show considerable improved stability due to the enhanced robustness of the perovskite film compared to the reference cell." }, { "instance_id": "R139642xR139623", "comparison_id": "R139642", "paper_id": "R139623", "text": "Hybrid Perovskite Films by a New Variant of Pulsed Excimer Laser Deposition: A Room-Temperature Dry Process A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. 
Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3\u2013xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%." }, { "instance_id": "R139642xR139605", "comparison_id": "R139642", "paper_id": "R139605", "text": "Device modeling of perovskite solar cells based on structural similarity with thin film inorganic semiconductor solar cells Device modeling of CH3NH3PbI3\u2212xClx perovskite-based solar cells was performed. The perovskite solar cells employ a similar structure to inorganic semiconductor solar cells, such as Cu(In,Ga)Se2, and the exciton in the perovskite is Wannier-type. We, therefore, applied a one-dimensional device simulator widely used for Cu(In,Ga)Se2 solar cells. A high open-circuit voltage of 1.0 V reported experimentally was successfully reproduced in the simulation, and also other solar cell parameters well consistent with real devices were obtained. In addition, the effect of the carrier diffusion length of the absorber, the interface defect densities at the front and back sides, and the optimum thickness of the absorber were analyzed. The results revealed that the diffusion length experimentally reported is long enough for high efficiency, and the defect density at the front interface is critical for high efficiency. Also, the optimum absorber thickness well consistent with the thickness range of real devices was derived." }, { "instance_id": "R139642xR139626", "comparison_id": "R139642", "paper_id": "R139626", "text": "Efficient, stable and scalable perovskite solar cells using poly(3-hexylthiophene) Perovskite solar cells typically comprise electron- and hole-transport materials deposited on each side of a perovskite active layer. So far, only two organic hole-transport materials have led to state-of-the-art performance in these solar cells: poly(triarylamine) (PTAA) and 2,2\u02b9,7,7\u02b9-tetrakis(N,N-di-p-methoxyphenylamine)-9,9\u02b9-spirobifluorene (spiro-OMeTAD). However, these materials have several drawbacks in terms of commercialization, including high cost, the need for hygroscopic dopants that trigger degradation of the perovskite layer and limitations in their deposition processes. Poly(3-hexylthiophene) (P3HT) is an alternative hole-transport material with excellent optoelectronic properties, low cost and ease of fabrication, but so far the efficiencies of perovskite solar cells using P3HT have reached only around 16 per cent. Here we propose a device architecture for highly efficient perovskite solar cells that use P3HT as a hole-transport material without any dopants. A thin layer of wide-bandgap halide perovskite is formed on top of the narrow-bandgap light-absorbing layer by an in situ reaction of n-hexyl trimethyl ammonium bromide on the perovskite surface. Our device has a certified power conversion efficiency of 22.7 per cent with hysteresis of \u00b10.51 per cent; exhibits good stability at 85 per cent relative humidity without encapsulation; and upon encapsulation demonstrates long-term operational stability for 1,370 hours under 1-Sun illumination at room temperature, maintaining 95 per cent of the initial efficiency.
We extend our platform to large-area modules (24.97 square centimetres)\u2014which are fabricated using a scalable bar-coating method for the deposition of P3HT\u2014and achieve a power conversion efficiency of 16.0 per cent. Realizing the potential of P3HT as a hole-transport material by using a wide-bandgap halide could be a valuable direction for perovskite solar-cell research. A double-layered halide architecture for perovskite solar cells enables the use of dopant-free poly(3-hexylthiophene) as a hole-transport material, forming stable and scalable devices with a certified power conversion efficiency of 22.7 per cent." }, { "instance_id": "R139972xR139942", "comparison_id": "R139972", "paper_id": "R139942", "text": "Effect of gas pressure on the sensitivity of a micromachined thermal accelerometer Abstract This paper describes the effect of gas pressure on the sensitivity of a micromachined thermal accelerometer. The sensor principle is as follows: a heating resistor creates a symmetrical temperature profile; two temperature detectors, placed symmetrically on both sides of the heater, measure a differential temperature. When an acceleration is applied on the sensitive axis x of the sensor, the convection heat transfer and the temperature profile become asymmetric and the differential temperature was shown to be proportional to the acceleration. Since this differential temperature is due to free convection, a simple model has been developed suggesting that the response of thermal accelerometers is linearly proportional to the Grashof number. In this case, the sensor sensitivity should be proportional to the square of gas pressure. Therefore, a thermal accelerometer with three pairs of detectors placed at 100, 300 and 500 \u03bcm from the heater was manufactured using the techniques of silicon micromachining and was used for the study of the temperature profile: it was shown that the thickness of the thermal boundary layer decreases and the power consumption increases when the pressure increases. The study of the sensitivity according to the gas pressure has shown a square dependence of the sensor response in the low-pressure range. For higher pressures, different optimum sensitivities were obtained according to the distance heater\u2013detector and to the gas pressure. Finally, a displacement of this optimum from 500 to 300 \u03bcm is observed when the pressure increases from 1 to 30 bars." }, { "instance_id": "R139972xR139957", "comparison_id": "R139972", "paper_id": "R139957", "text": "A Microinjected 3-Axis Thermal Accelerometer Abstract A completely new approach for the fabrication of 3-axis thermal accelerometers is presented in this paper. Micromolded polystyrene micro-parts are assembled with polyimide membranes enabling the construction of thermal accelerometers. The use of polymers (polystyrene and polyimide) with low thermal conductivities improves the overall power consumption of the thermal accelerometer and enables a simple and low-cost fabrication process (no clean room infrastructure required). The accelerometer is composed of 4 polystyrene microinjected structural micro-parts (two identical top parts and two identical central parts) and three polyimide membranes (two identical z-axis membranes and a central membrane). The microinjected parts provide the mechanical support for the active elements that are placed on the membranes (the heater and the temperature sensors).
Coupled 3D thermo-electric-fluidic FEM simulations show that current design has a sensitivity of 1.6 \u00b0C/g in the X-Y directions and 0.2 \u00b0C/g in the Z direction for a central heater temperature of 300 \u00b0C." }, { "instance_id": "R139972xR139954", "comparison_id": "R139972", "paper_id": "R139954", "text": "Development of a dual-axis micromachined convective accelerometer with an effective heater geometry This paper describes the design, fabrication and testing of a dual-axis micromachined convective accelerometer with a diamond-shaped heater. Modification of heater geometry is advantageous because it is simple and ensures enhanced sensitivity without constraining device size or operating power. The diamond-shaped heater induces active heat flow and a sharp temperature gradient around the heater; together these effects provide high sensitivity. When the fabricated convective accelerometer used SF6 as an enclosed gas medium, its measured sensitivity was 3.5mV/g when operating power was 7.4mW and its bandwidth at -3dB was 25Hz." }, { "instance_id": "R140131xR74705", "comparison_id": "R140131", "paper_id": "R74705", "text": "Smart Cities and Cultural Heritage \u2013 A Review of Developments and Future Opportunities Soja & Kanai (2006) use the terms \u201cglobal city region\u201d to refer to \u201ca new metropolitan form characterised by sprawling polycentric networks of urban centres \u2026\u201d Such networks are becoming identified with both the potential and the reality of \u2018smart\u2019 city infrastructures of connected transportation, financial, energy, health, information and cultural systems." }, { "instance_id": "R140131xR139927", "comparison_id": "R140131", "paper_id": "R139927", "text": "Smart Cities and Historical Heritage The theme of smart grids will connote in the immediate future the production and distribution of electricity, integrating effectively and in a sustainable way energy deriving from large power stations with that distributed and supplied by renewable sources. In programmes of urban redevelopment, however, the historical city has not yet been subject to significant experimentation, also due to the specific safeguard on this kind of Heritage. This reflection opens up interesting new perspectives of research and operations, which could significantly contribute to the pursuit of the aims of the Smart City. This is the main goal of the research here presented and focused on the binomial renovation of a historical complex/enhancement and upgrading of its energy efficiency." }, { "instance_id": "R140131xR140106", "comparison_id": "R140131", "paper_id": "R140106", "text": "Smart Cities in Europe Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the \u201csmart city\u201d has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. 
The present paper aims to shed light on the often elusive definition of the concept of the \u201csmart city.\u201d We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape." }, { "instance_id": "R140348xR140138", "comparison_id": "R140348", "paper_id": "R140138", "text": "Rdf2vec: Rdf graph embeddings for data mining Linked Open Data has been recognized as a valuable source for background information in data mining. However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph sub-structures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks." }, { "instance_id": "R140348xR140174", "comparison_id": "R140348", "paper_id": "R140174", "text": "Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts Many large-scale knowledge bases simultaneously represent two views of knowledge graphs (KGs): an ontology view for abstract and commonsense concepts, and an instance view for specific entities that are instantiated from ontological concepts. Existing KG embedding models, however, merely focus on representing one of the two views alone. In this paper, we propose a novel two-view KG embedding model, JOIE, with the goal to produce better knowledge embedding and enable new applications that rely on multi-view knowledge. JOIE employs both cross-view and intra-view modeling that learn on multiple facets of the knowledge base. The cross-view association model is learned to bridge the embeddings of ontological concepts and their corresponding instance-view entities. The intra-view models are trained to capture the structured knowledge of instance and ontology views in separate embedding spaces, with a hierarchy-aware encoding technique enabled for ontologies with hierarchies. We explore multiple representation techniques for the two model components and investigate with nine variants of JOIE. 
Our model is trained on large-scale knowledge bases that consist of massive instances and their corresponding ontological concepts connected via a (small) set of cross-view links. Experimental results on public datasets show that the best variant of JOIE significantly outperforms previous models on the instance-view triple prediction task as well as ontology population on the ontology-view KG. In addition, our model successfully extends the use of KG embeddings to entity typing with promising performance." }, { "instance_id": "R140348xR140171", "comparison_id": "R140348", "paper_id": "R140171", "text": "On2vec: Embedding-based relation prediction for ontology population Populating ontology graphs represents a long-standing problem for the Semantic Web community. Recent advances in translation-based graph embedding methods for populating instance-level knowledge graphs lead to promising new approaches for the ontology population problem. However, unlike instance-level graphs, the majority of relation facts in ontology graphs come with comprehensive semantic relations, which often include the properties of transitivity and symmetry, as well as hierarchical relations. These comprehensive relations are often too complex for existing graph embedding methods, and direct application of such methods is not feasible. Hence, we propose On2Vec, a novel translation-based graph embedding method for ontology population. On2Vec integrates two model components that effectively characterize comprehensive relation facts in ontology graphs. The first is the Component-specific Model that encodes concepts and relations into low-dimensional embedding spaces without a loss of relational properties; the second is the Hierarchy Model that performs focused learning of hierarchical relation facts. Experiments on several well-known ontology graphs demonstrate the promising capabilities of On2Vec in predicting and verifying new relation facts. These promising results also make possible significant improvements in related methods." }, { "instance_id": "R140348xR140212", "comparison_id": "R140348", "paper_id": "R140212", "text": "Knowledge graph embedding with iterative guidance from soft rules (RUGE) Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interactive nature between embedding learning and logical inference. And they focused only on hard rules, which always hold with no exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are going to be predicted iteratively, and 3) soft rules with various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and integrates such newly labeled triples to update the embedding model. Through this iterative procedure, knowledge embodied in logic rules may be better transferred into the learned embeddings. We evaluate RUGE in link prediction on Freebase and YAGO.
Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainties, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from this https URL" }, { "instance_id": "R140348xR140180", "comparison_id": "R140348", "paper_id": "R140180", "text": "Beta embeddings for multi-hop logical reasoning in knowledge graphs One of the fundamental problems in Artificial Intelligence is to perform complex multi-hop logical reasoning over the facts captured by a knowledge graph (KG). This problem is challenging, because KGs can be massive and incomplete. Recent approaches embed KG entities in a low dimensional space and then use these embeddings to find the answer entities. However, it has been an outstanding challenge of how to handle arbitrary first-order logic (FOL) queries as present methods are limited to only a subset of FOL operators. In particular, the negation operator is not supported. An additional limitation of present methods is also that they cannot naturally model uncertainty. Here, we present BetaE, a probabilistic embedding framework for answering arbitrary FOL queries over KGs. BetaE is the first method that can handle a complete set of first-order logical operations: conjunction ($\\wedge$), disjunction ($\\vee$), and negation ($\\neg$). A key insight of BetaE is to use probabilistic distributions with bounded support, specifically the Beta distribution, and embed queries/entities as distributions, which as a consequence allows us to also faithfully model uncertainty. Logical operations are performed in the embedding space by neural operators over the probabilistic embeddings. We demonstrate the performance of BetaE on answering arbitrary FOL queries on three large, incomplete KGs. While being more general, BetaE also increases relative performance by up to 25.4% over the current state-of-the-art KG reasoning methods that can only handle conjunctive queries without negation." }, { "instance_id": "R140543xR140535", "comparison_id": "R140543", "paper_id": "R140535", "text": "Physisorption-Based Charge Transfer in Two-Dimensional SnS2 for Selective and Reversible NO2 Gas Sensing Nitrogen dioxide (NO2) is a gas species that plays an important role in certain industrial, farming, and healthcare sectors. However, there are still significant challenges for NO2 sensing at low detection limits, especially in the presence of other interfering gases. The NO2 selectivity of current gas-sensing technologies is significantly traded-off with their sensitivity and reversibility as well as fabrication and operating costs. In this work, we present an important progress for selective and reversible NO2 sensing by demonstrating an economical sensing platform based on the charge transfer between physisorbed NO2 gas molecules and two-dimensional (2D) tin disulfide (SnS2) flakes at low operating temperatures. The device shows high sensitivity and superior selectivity to NO2 at operating temperatures of less than 160 \u00b0C, which are well below those of chemisorptive and ion conductive NO2 sensors with much poorer selectivity. At the same time, excellent reversibility of the sensor is demonstrated, which has rarely been observed in other 2D material counterparts. 
Such impressive features originate from the planar morphology of 2D SnS2 as well as unique physical affinity and favorable electronic band positions of this material that facilitate the NO2 physisorption and charge transfer at parts per billion levels. The 2D SnS2-based sensor provides a real solution for low-cost and selective NO2 gas sensing." }, { "instance_id": "R140543xR140522", "comparison_id": "R140543", "paper_id": "R140522", "text": "Highly sensitive MoTe\n 2\n chemical sensor with fast recovery rate through gate biasing The unique properties of two dimensional (2D) materials make them promising candidates for chemical and biological sensing applications. However, most 2D nanomaterial sensors suffer very long recovery time due to slow molecular desorption at room temperature. Here, we report a highly sensitive molybdenum ditelluride (MoTe2) gas sensor for NO2 and NH3 detection with greatly enhanced recovery rate. The effects of gate bias on sensing performance have been systematically studied. It is found that the recovery kinetics can be effectively adjusted by biasing the sensor to different gate voltages. Under the optimum biasing potential, the MoTe2 sensor can achieve more than 90% recovery after each sensing cycle well within 10 min at room temperature. The results demonstrate the potential of MoTe2 as a promising candidate for high-performance chemical sensors. The idea of exploiting gate bias to adjust molecular desorption kinetics can be readily applied to much wider sensing platforms based on 2D nanomaterials." }, { "instance_id": "R140543xR140530", "comparison_id": "R140543", "paper_id": "R140530", "text": "Flexible NO2 sensors fabricated by layer-by-layer covalent anchoring and in situ reduction of graphene oxide Novel flexible NO2 gas sensors were fabricated by covalently bonding graphene oxide (GO) to a gold electrode on a plastic substrate using a peptide chemical protocol and then reducing in situ GO film to a reduced GO (RGO) film. A pair of comb-like Au electrodes on a polyethylene terephthalate (PET) substrate were pretreated with cysteamine hydrochloride (CH) and then reacted with GO using N-(3-dimethylaminopropyl)-N\u2032-ethylcarbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS) as the peptide coupling reagent, before undergoing a final reduction by sodium borohydride (NaBH4). The anchored RGO film was characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), electrochemical impedance spectroscopy (EIS) and Fourier transform infrared spectroscopy (FTIR). The gas sensing properties, including sensitivity, sensing linearity, reproducibility, response time, recovery time, cross-sensitivity effects and long-term stability, were investigated. Interfering gas NH3 affected the limit of detection (LOD) of a target NO2 gas in a real-world binary gas mixture. The flexible NO2 gas sensor exhibited a strong response and good flexibility that exceeded that of sensors that were made from graphene film grown by chemical vapor deposition method (CVD-graphene) at room temperature. Its use is practical because it is so easy to fabricate." }, { "instance_id": "R140543xR140539", "comparison_id": "R140543", "paper_id": "R140539", "text": "Porous ZnO Polygonal Nanoflakes: Synthesis, Use in High-Sensitivity NO2 Gas Sensor, and Proposed Mechanism of Gas Sensing Unique porous ZnO polygonal nanoflakes were synthesized by the microwave hydrothermal method. 
The structural properties of the products were investigated by using X-ray diffraction, scanning electron microscopy, transmission electron microscopy (TEM), and high-resolution TEM techniques. An in situ diffuse reflectance infrared Fourier transform spectroscopy technique was employed to investigate the mechanism of NO2 sensing. Free nitrate ions, nitrate ions, and nitrite anions were the main adsorbed species. N2O was formed via NO\u2013 and N2O2\u2013 that stemmed from NO. Comparative tests for gas sensing between gas sensors based on the as-prepared porous ZnO nanoflakes and purchased ZnO nanoparticles clearly showed that the former exhibited superior NO2 sensing performance. Photoluminescence and X-ray photoelectron spectroscopy spectra further proved that the intensities of donors (oxygen vacancy (VO) and/or zinc interstitial (Zni)) and surface oxygen species (O2\u2013 and O2), which were involved in the mechani..." }, { "instance_id": "R141156xR141127", "comparison_id": "R141156", "paper_id": "R141127", "text": "RF MEMS Switches With Enhanced Power-Handling Capabilities This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches." }, { "instance_id": "R141156xR141139", "comparison_id": "R141156", "paper_id": "R141139", "text": "Fabrication of low pull-in voltage RF MEMS switches on glass substrate in recessed CPW configuration for V-band application A new technique for the fabrication of radio frequency (RF) microelectromechanical systems (MEMS) shunt switches in recessed coplanar waveguide (CPW) configuration on glass substrates is presented. Membranes with a low spring constant are used for reducing the pull-in voltage. A layer of silicon dioxide is deposited on a glass wafer and is used to form the recess, which partially defines the gap between the membrane and signal line. Positive photoresist S1813 is used as a sacrificial layer and gold as the membrane material. The membranes are released with the help of Piranha solution and finally rinsed in a low surface tension liquid to avoid stiction during release. Switches with 500 \u00b5m long two-meander membranes show very high isolation of greater than 40 dB at their resonant frequency of 61 GHz and pull-in voltage less than 15 V, while switches with 700 \u00b5m long six-strip membranes show isolation greater than 30 dB at the frequency of 65 GHz and pull-in voltage less than 10 V. Both types of switches show insertion loss less than 0.65 dB up to 65 GHz."
}, { "instance_id": "R141156xR141153", "comparison_id": "R141156", "paper_id": "R141153", "text": "Effect of Environmental Humidity on Dielectric Charging Effect in RF MEMS Capacitive Switches Based on C\u2013V Properties A capacitance-voltage (C- V) model is developed for RF microelectromechanical systems (MEMS) switches at upstate and downstate. The transient capacitance response of the RF MEMS switches at different switch states was measured for different humidity levels. By using the C -V model as well as the voltage shift dependent of trapped charges, the transient trapped charges at different switch states and humidity levels are obtained. Charging models at different switch states are explored in detail. It is shown that the injected charges increase linearly with humidity levels and the internal polarization increases with increasing humidity at downstate. The speed of charge injection at 80% relative humidity (RH) is about ten times faster than that at 20% RH. A measurement of pull-in voltage shifts by C- V sweep cycles at 20% and 80 % RH gives a reasonable evidence. The present model is useful to understand the pull-in voltage shift of the RF MEMS switch." }, { "instance_id": "R141425xR141419", "comparison_id": "R141425", "paper_id": "R141419", "text": "Identification of sialic acid-binding function for the Middle East respiratory syndrome coronavirus spike glycoprotein Middle East respiratory syndrome coronavirus (MERS-CoV) targets the epithelial cells of the respiratory tract both in humans and in its natural host, the dromedary camel. Virion attachment to host cells is mediated by 20-nm-long homotrimers of spike envelope protein S. The N-terminal subunit of each S protomer, called S1, folds into four distinct domains designated S1 A through S1 D . Binding of MERS-CoV to the cell surface entry receptor dipeptidyl peptidase 4 (DPP4) occurs via S1 B . We now demonstrate that in addition to DPP4, MERS-CoV binds to sialic acid (Sia). Initially demonstrated by hemagglutination assay with human erythrocytes and intact virus, MERS-CoV Sia-binding activity was assigned to S subdomain S1 A . When multivalently displayed on nanoparticles, S1 or S1 A bound to human erythrocytes and to human mucin in a strictly Sia-dependent fashion. Glycan array analysis revealed a preference for \u03b12,3-linked Sias over \u03b12,6-linked Sias, which correlates with the differential distribution of \u03b12,3-linked Sias and the predominant sites of MERS-CoV replication in the upper and lower respiratory tracts of camels and humans, respectively. Binding is hampered by Sia modifications such as 5- N -glycolylation and (7,)9- O -acetylation. Depletion of cell surface Sia by neuraminidase treatment inhibited MERS-CoV entry of Calu-3 human airway cells, thus providing direct evidence that virus\u2013Sia interactions may aid in virion attachment. The combined observations lead us to propose that high-specificity, low-affinity attachment of MERS-CoV to sialoglycans during the preattachment or early attachment phase may form another determinant governing the host range and tissue tropism of this zoonotic pathogen." }, { "instance_id": "R141425xR141421", "comparison_id": "R141425", "paper_id": "R141421", "text": "Species-Specific Colocalization of Middle East Respiratory Syndrome Coronavirus Attachment and Entry Receptors MERS-CoV uses the S1 B domain of its spike protein to attach to its host receptor, dipeptidyl peptidase 4 (DPP4). 
The tissue localization of DPP4 has been mapped in different susceptible species. On the other hand, the S1 A domain, the N-terminal domain of this spike protein, preferentially binds to several glycotopes of \u03b12,3-sialic acids, the attachment factor of MERS-CoV. Here we show, using a novel method, that the S1 A domain specifically binds to the nasal epithelium of dromedary camels, alveolar epithelium of humans, and intestinal epithelium of common pipistrelle bats. In contrast, it does not bind to the nasal epithelium of pigs or rabbits, nor does it bind to the intestinal epithelium of serotine bats and frugivorous bat species. This finding supports the importance of the S1 A domain in MERS-CoV infection and tropism, suggests its role in transmission, and highlights its potential use as a component of novel vaccine candidates." }, { "instance_id": "R141425xR141415", "comparison_id": "R141425", "paper_id": "R141415", "text": "Development of Label-Free Colorimetric Assay for MERS-CoV Using Gold Nanoparticles Worldwide outbreaks of infectious diseases necessitate the development of rapid and accurate diagnostic methods. Colorimetric assays are a representative tool to simply identify the target molecules in specimens through color changes of an indicator (e.g., nanosized metallic particle, and dye molecules). The detection method is used to confirm the presence of biomarkers visually and measure absorbance of the colored compounds at a specific wavelength. In this study, we propose a colorimetric assay based on an extended form of double-stranded DNA (dsDNA) self-assembly shielded gold nanoparticles (AuNPs) under positive electrolyte (e.g., 0.1 M MgCl2) for detection of Middle East respiratory syndrome coronavirus (MERS-CoV). This platform is able to verify the existence of viral molecules through a localized surface plasmon resonance (LSPR) shift and color changes of AuNPs in the UV\u2013vis wavelength range. We designed a pair of thiol-modified probes at either the 5\u2032 end or 3\u2032 end to organize complementary base pairs with upstream of the E protein gene (upE) and open reading frames (ORF) 1a on MERS-CoV. The dsDNA of the target and probes forms a disulfide-induced long self-assembled complex, which protects AuNPs from salt-induced aggregation and transition of optical properties. This colorimetric assay could discriminate down to 1 pmol/\u03bcL of 30 bp MERS-CoV and further be adapted for convenient on-site detection of other infectious diseases, especially in resource-limited settings." }, { "instance_id": "R141425xR141411", "comparison_id": "R141425", "paper_id": "R141411", "text": "MERS-CoV spike nanoparticles protect mice from MERS-CoV infection Abstract The Middle East respiratory syndrome coronavirus (MERS-CoV) was first discovered in late 2012 and has gone on to cause over 1800 infections and 650 deaths. There are currently no approved therapeutics or vaccinations for MERS-CoV. The MERS-CoV spike (S) protein is responsible for receptor binding and virion entry to cells, is immunodominant and induces neutralizing antibodies in vivo, all of which, make the S protein an ideal target for anti-MERS-CoV vaccines. In this study, we demonstrate protection induced by vaccination with a recombinant MERS-CoV S nanoparticle vaccine and Matrix-M1 adjuvant combination in mice. The MERS-CoV S nanoparticle vaccine produced high titer anti-S neutralizing antibody and protected mice from MERS-CoV infection in vivo." 
}, { "instance_id": "R141425xR141407", "comparison_id": "R141425", "paper_id": "R141407", "text": "A self-adjuvanted nanoparticle based vaccine against infectious bronchitis virus Infectious bronchitis virus (IBV) affects poultry respiratory, renal and reproductive systems. Currently the efficacy of available live attenuated or killed vaccines against IBV has been challenged. We designed a novel IBV vaccine alternative using a highly innovative platform called Self-Assembling Protein Nanoparticle (SAPN). In this vaccine, B cell epitopes derived from the second heptad repeat (HR2) region of IBV spike proteins were repetitively presented in its native trimeric conformation. In addition, flagellin was co-displayed in the SAPN to achieve a self-adjuvanted effect. Three groups of chickens were immunized at four weeks of age with the vaccine prototype, IBV-Flagellin-SAPN, a negative-control construct Flagellin-SAPN or a buffer control. The immunized chickens were challenged with 5x104.7 EID50 IBV M41 strain. High antibody responses were detected in chickens immunized with IBV-Flagellin-SAPN. In ex vivo proliferation tests, peripheral mononuclear cells (PBMCs) derived from IBV-Flagellin-SAPN immunized chickens had a significantly higher stimulation index than that of PBMCs from chickens receiving Flagellin-SAPN. Chickens immunized with IBV-Flagellin-SAPN had a significant reduction of tracheal virus shedding and lesser tracheal lesion scores than did negative control chickens. The data demonstrated that the IBV-Flagellin-SAPN holds promise as a vaccine for IBV." }, { "instance_id": "R141425xR141405", "comparison_id": "R141425", "paper_id": "R141405", "text": "Immunomodulatory nanodiamond aggregate-based platform for the treatment of rheumatoid arthritis Abstract We previously demonstrated that octadecylamine-functionalized nanodiamond (ND-ODA) and dexamethasone (Dex)-adsorbed ND-ODA (ND-ODA\u2013Dex) promoted anti-inflammatory and pro-regenerative behavior in human macrophages in vitro. In this study, we performed a pilot study to investigate if these immunomodulatory effects translate when used as a treatment for rheumatoid arthritis in mice. Following local injection in limbs of mice with collagen type II-induced arthritis, microcomputed tomography showed that mice treated with a low dose of ND-ODA and ND-ODA\u2013Dex did not experience bone loss to the levels observed in non-treated arthritic controls. A low dose of ND-ODA and ND-ODA\u2013Dex also reduced macrophage infiltration and expression of pro-inflammatory mediators iNOS and tumor necrosis factor-\u03b1 compared to the arthritic control, while a high dose of ND-ODA increased expression of these markers. Overall, these results suggest that ND-ODA may be useful as an inherently immunomodulatory platform, and support the need for an in-depth study, especially with respect to the effects of dose." }, { "instance_id": "R141593xR141460", "comparison_id": "R141593", "paper_id": "R141460", "text": "An In Situ Comparison between VUV Photon and Ion Energy Fluxes to Polymer Surfaces Immersed in an RF Plasma Absolutely calibrated vacuum ultraviolet (VUV) spectroscopy has been used to determine the energy fluxes of VUV photons at an electrically floating substrate in a low-pressure 13.56-MHz radiofrequency plasma reactor used for polymer surface treatments. These fluxes have been compared with the positive ion flux that was reported in an earlier study. 
At the typical operating parameters of 10-mTorr pressure and 10-W power, the total VUV energy flux is 2.2 mW cm-2, compared with a value of 3.3 mW cm-2 from the ions. With increasing power (from 0.5 to 12 W), both the ion and VUV energy fluxes increase monotonically. However, as the pressure increases (1\u2212100 mTorr), the ion energy flux declines, while the VUV component increases. At discharge powers of 10 W, and pressures greater than 25 mTorr, the greater part of the energy flux to the surface is from the VUV photons. These measurements are used to determine which of the plasma components, VUV or ions, will be most effective in the treatment of polystyrene su..." }, { "instance_id": "R141699xR141624", "comparison_id": "R141699", "paper_id": "R141624", "text": "Point defects assisted NH3 gas sensing properties in ZnO nanostructures Abstract In this report, the NH3 gas sensing properties of ZnO nanostructures fabricated by radio frequency magnetron sputtering under various argon sputtering pressures have been investigated at various temperatures. The morphological transitions occur from vertical standing nanorods to inclined and tapered nanostructures with increasing argon sputtering pressure. The dominant green emission at around 2.28 eV in the photoluminescence spectra signifies the presence of oxygen vacancies in the ZnO nanostructures which increases as a function of argon sputtering pressure. Despite their low surface area, the nanostructures grown under the higher argon sputtering pressure of 10 Pa exhibit an excellent NH3 gas response magnitude since they exhibit more oxygen vacancies compared to the other counterparts. For 25 ppm NH3 gas at room temperature, a response time of 49 s and a fast recovery time of 19 s are attributed to the modification in the intermediate defect states induced by the oxygen vacancies through the adsorption and desorption of gas molecules on the surface of ZnO nanostructures." }, { "instance_id": "R141699xR141614", "comparison_id": "R141699", "paper_id": "R141614", "text": "Room temperature hydrogen gas sensor based on ZnO nanorod arrays grown on a SiO2/Si substrate via a microwave-assisted chemical solution method Abstract High-quality zinc oxide (ZnO) nanorod arrays were grown on a silicon dioxide (SiO2/Si) substrate via a microwave irradiation-assisted chemical solution method. The SiO2/Si substrate was seeded with polyvinyl alcohol\u2013Zn(OH)2 nanocomposites prior to the complete growth of ZnO nanorods through a chemical solution method. X-ray diffraction, field-emission scanning electron microscope, and photoluminescence results indicated the high quality of the produced ZnO nanorods. The hydrogen (H2)-sensing capabilities of the ZnO nanorod arrays were investigated at room temperature (RT), and the sensitivity was 294% in the presence of 1000 ppm of H2. The sensing measurements for H2 gas at various temperatures (25\u2013250 \u00b0C) were repeatable for over 100 min. The sensor exhibited a sensitivity of 1100% at 250 \u00b0C upon exposure to 1000 ppm of H2. Hysteresis was observed in the sensor at different H2 concentrations at different temperatures. Moreover, the response times ranged from 60 to 25 s over the range of operating temperatures from RT to 250 \u00b0C."
}, { "instance_id": "R141699xR141637", "comparison_id": "R141699", "paper_id": "R141637", "text": "Both oxygen vacancies defects and porosity facilitated NO2 gas sensing response in 2D ZnO nanowalls at room temperature Abstract In this report, NO2 gas sensing properties of ZnO nanowalls fabricated by facial solution method under subsequent various annealing temperatures ranging from 350 \u00b0C to 750 \u00b0C in air have been investigated. Upon annealing in air, significant porosity and oxygen vacancies modification of the ZnO nanowalls has been observed through SEM and photoluminescence spectroscopy. The gas sensing behaviors of the fabricated sensors are systematically investigated. The ZnO nanowalls annealed at 450 \u00b0C exhibit excellent NO2 gas response magnitude and fast response and recovery time (23 s, 11 s) at room temperature. The results reveal that, for their good sensing properties, there is a delicate balance between oxygen vacancies defects and porosity dependent on the annealing temperature." }, { "instance_id": "R141699xR141631", "comparison_id": "R141699", "paper_id": "R141631", "text": "Rice Husk Templated Mesoporous ZnO Nanostructures for Ethanol Sensing at Room Temperature Mesoporous zinc oxide nanostructures are successfully synthesized via the sol-gel route by using a rice husk as the template for ethanol sensing at room temperature. The structure and morphology of the nanostructures are characterized by x-ray diffraction, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and nitrogen adsorption\u2013desorption analyses. The mechanism for the growth of zinc oxide nanostructures over the biotemplate is proposed. SEM and TEM observations also reveal the formation of spherical zinc oxide nanoparticles over the interwoven fibrous network. Multiple sized pores having pore diameter ranging from 10\u201340 nm is also evidenced from the pore size distribution plot. The larger surface area and porous nature of the material lead to high sensitivity (40.93% for 300 ppm of ethanol), quick response (42 s) and recovery (40 s) towards ethanol at 300 K. The porous nature of the interwoven fibre-like network affords mass transportation of ethanol vapor, which results in faster surface accessibility, and hence it acts as a potential candidate for ethanol sensing at room temperature." }, { "instance_id": "R141723xR141007", "comparison_id": "R141723", "paper_id": "R141007", "text": "Index of Economic Well-Being for Canada The objective of this paper is to develop an index of economic weel-being for Canada for the period 1971 to 1997 using a framework originally laid out by Osberg (1985). Although the economic well-being of a society depends on the level of average consumption flows, aggregate accumulation of productive stocks, inequality in the distribution of individual incomes and insecurity in the anticipation of future incomes, the weights attached to each component will vary, depending on the values of different observers. It is argued that public debate would improved if there is explicit consideration of the aspects of economic well-being obscured by average income trends and if the weights attached to theses aspects were explicitly open for discussion." 
}, { "instance_id": "R141752xR141172", "comparison_id": "R141752", "paper_id": "R141172", "text": "Information and communication technology and local governance: understanding the difference between cities in developed and emerging economies The current literature on Information and Communication Technology (ICT) and planning suggests that the use of Information Technology (IT) in local government can enhance the management and functioning of cities. Of particular interest is the phenomenon of e-government, where debates and information surrounding local government matters are conducted in cyberspace. Of relevance also, are the networking opportunities that the Internet can facilitate between city governments and the institutional learning that can emanate from that. The increasing use of web-based Geographic Information Systems (GIS) applications raises awareness of spatial issues that impact on defined municipal areas, whilst interactive mapping provides opportunities for addressing spatial concerns virtually. Most of the literature does, however, focus on the experience of developed countries where capacity and resources permit a sophisticated understanding of ICT. Yet, evidence suggests that these tools are also used in some developing countries, with India often cited as one of the leading countries in achieving ICT prominence, but little seems to be published about this experience in Southern Africa. There are a number of innovative initiatives underway in South African local governments but most of these interventions are in their infancy. In contrast, there are a number of examples in developed countries that may provide some guidance for developing cities. This paper examines the Smart City initiative in Brisbane in Australia, and compares it with moves currently underway in Durban, South Africa to incorporate ICT in local governance. The intention is to expose the differences in approach, understand the capacity and resource issues that may impact, and draw some conclusions with regards to future interventions in Durban. Overall, the paper provides an initial conceptual landscape that begins to determine the extent to which ICT in local government can provide opportunities for Durban by learning from the experience of Brisbane, Australia." }, { "instance_id": "R141780xR141720", "comparison_id": "R141780", "paper_id": "R141720", "text": "N,S co-doped carbon dots as a dual-functional fluorescent sensor for sensitive detection of baicalein and temperature In this work, nitrogen and sulfur dual-doped CDs (N,S-CDs) were prepared via a facile one-pot hydrothermal method from citric acid and N-acetyl-L-cysteine with a high quantum yield (QY) of 49%. As-fabricated N,S-CDs had a size around 2.5 nm and exhibited excitation-independent emission and excellent luminescent properties. The fluorescent sensor based on the N,S-CDs showed a highly sensitive detection of baicalein with a detection limit (LOD) of 0.21 \u03bcmol L-1 in the linear range from 0.69 to 70.0 \u03bcmol L-1. The fluorescence of the N,S-CDs could be effectively quenched by baicalein based on static quenching. In addition, the temperature sensor based on the synthesized N,S-CDs showed a good linear relationship between temperature and fluorescence (FL) intensity with a temperature range from 5 \u00b0C to 75 \u00b0C. Furthermore, the synthesized N,S-CDs were successfully applied to the measurement of baicalein in real samples. 
In a word, the N,S-CDs have great potential to serve as fluorescence sensors to monitor the concentration of baicalein and temperature." }, { "instance_id": "R141780xR141701", "comparison_id": "R141780", "paper_id": "R141701", "text": "Carbon Dot Nanothermometry: Intracellular Photoluminescence Lifetime Thermal Sensing Nanoscale biocompatible photoluminescence (PL) thermometers that can be used to accurately and reliably monitor intracellular temperatures have many potential applications in biology and medicine. Ideally, such nanothermometers should be functional at physiological pH across a wide range of ionic strengths, probe concentrations, and local environments. Here, we show that water-soluble N,S-co-doped carbon dots (CDs) exhibit temperature-dependent photoluminescence lifetimes and can serve as highly sensitive and reliable intracellular nanothermometers. PL intensity measurements indicate that these CDs have many advantages over alternative semiconductor- and CD-based nanoscale temperature sensors. Importantly, their PL lifetimes remain constant over wide ranges of pH values (5-12), CD concentrations (1.5 \u00d7 10^-5 to 0.5 mg/mL), and environmental ionic strengths (up to 0.7 mol\u00b7L^-1 NaCl). Moreover, they are biocompatible and nontoxic, as demonstrated by cell viability and flow cytometry analyses using NIH/3T3 and HeLa cell lines. N,S-CD thermal sensors also exhibit good water dispersibility, superior photo- and thermostability, extraordinary environment and concentration independence, high storage stability, and reusability: their PL decay curves at temperatures between 15 and 45 \u00b0C remained unchanged over seven sequential experiments. In vitro PL lifetime-based temperature sensing performed with human cervical cancer HeLa cells demonstrated the great potential of these nanosensors in biomedicine. Overall, N,S-doped CDs exhibit excitation-independent emission with strongly temperature-dependent monoexponential decay, making them suitable for both in vitro and in vivo luminescence lifetime thermometry." }, { "instance_id": "R141780xR141693", "comparison_id": "R141780", "paper_id": "R141693", "text": "Dual functional N- and S-co-doped carbon dots as the sensor for temperature and Fe3+ ions Abstract In this paper, we synthesized a new kind of dual functional nitrogen- and sulfur-co-doped carbon dots (C-dots) which could be applied as a fluorescent temperature sensor and a probe of trace amounts of Fe3+ ions via a one-pot facile hydrothermal approach from acrylic acid and methionine. The obtained C-dots, with an average diameter of 2.3 nm, manifested colorful fluorescence, good solubility and attractive optical stability. The fluorescence lifetime is 7.92 ns. Although the quantum yield is 10.55%, the fluorescence can be rapidly and selectively quenched by Fe3+ ions. The detection limit was as low as 1.72 nM. In addition, high ionic strength, mild acids and alkalis have only a small impact on the fluorescence intensity of the C-dots. Furthermore, the C-dots can serve as a temperature sensor with significant reversibility, sensitivity and linearity. Meanwhile, the C-dots also possessed highly recoverable and thermally stable fluorescence, as expected." }, { "instance_id": "R142850xR142566", "comparison_id": "R142850", "paper_id": "R142566", "text": "Formulation and antitumor activity evaluation of nanocrystalline suspensions of poorly soluble anticancer drugs Abstract Purpose.
Determine if wet milling technology could be used to formulate water insoluble antitumor agents as stabilized nanocrystalline drug suspensions that retain biological effectiveness following intravenous injection. Methods. The versatility of the approach is demonstrated by evaluation of four poorly water soluble chemotherapeutic agents that exhibit diverse chemistries and mechanisms of action. The compounds selected were: piposulfan (alkylating agent), etoposide (topoisomerase II inhibitor), camptothecin (topoisomerase I inhibitor) and paclitaxel (antimitotic agent). The agents were wet milled as a 2% w/v solids suspension containing 1% w/v surfactant stabilizer using a low energy ball mill. The size, physical stability and efficacy of the nanocrystalline suspensions were evaluated. Results. The data show the feasibility of formulating poorly water soluble anticancer agents as physically stable aqueous nanocrystalline suspensions. The suspensions are physically stable and efficacious following intravenous injection. Conclusions. Wet milling technology is a feasible approach for formulating poorly water soluble chemotherapeutic agents that may offer a number of advantages over a more classical approach." }, { "instance_id": "R142850xR142774", "comparison_id": "R142850", "paper_id": "R142774", "text": "Biodistribution and bioimaging studies of hybrid paclitaxel nanocrystals: Lessons learned of the EPR effect and image-guided drug delivery Paclitaxel (PTX) nanocrystals (200 nm) were produced by crystallization from a solution. Antitumor efficacy and toxicity were examined through a survival study in a human HT-29 colon cancer xenograft murine model. The antitumor activity of the nanocrystal treatments was comparable with that of the conventional solubilization formulation (Taxol\u00ae), but yielded less toxicity as indicated by the result of a survival study. Tritium-labeled PTX nanocrystals were further produced with a near infrared (NIR) fluorescent dye physically integrated in the crystal lattice. Biodistribution and tumor accumulation of the tritium-labeled PTX nanocrystals were determined immediately after intravenous administration and up to 48 h by scintillation counting. Whole-body optical imaging of animals was concurrently carried out; fluorescent intensities were also measured from excised tumors and major organs of euthanized animals. It was found that drug accumulation in the tumor was less than 1% of the 20 mg/kg intravenous dose. Qualitative correlation was identified between the biodistribution determined by using tritium-labeled particles and that using optical imaging, but quantitative divergence existed. The divergent results suggest possible ways to improve the design of hybrid nanocrystals for cancer therapy and diagnosis. The study also raises questions of the general role of the enhanced permeability and retention (EPR) effect in tumor targeting and the effectiveness of bioimaging, specifically for theranostics, in tracking drug distribution and pharmacokinetics." }, { "instance_id": "R142850xR142831", "comparison_id": "R142850", "paper_id": "R142831", "text": "Oridonin nanosuspension enhances anti-tumor efficacy in SMMC-7721 cells and H22 tumor bearing mice PURPOSE The aim of the present study was to evaluate both the in vitro and in vivo antitumor activity of an oridonin nanosuspension (ORI-N) relative to the efficacy of bulk oridonin delivery.
METHODS ORI-N with a particle size of 897.2\u00b114.2 nm and a zeta potential of -21.8\u00b10.8 mV was prepared by the high-pressure homogenization (HPH) technique. The in vitro cytotoxicity of ORI-N against SMMC-7721 cells was evaluated by MTT[3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay, the effects of ORI-N on cell cycle and cell apoptosis was analyzed by flow cytometry; the in vivo anti-tumor activity was observed in H22 tumor bearing mice. RESULTS ORI-N effectively inhibited the proliferation of SMMC-7721 cells. Flow cytometric analysis demonstrated that ORI-N arrested SMMC-7721 cells in the G2/M cycle, and furthermore, that ORI-N induced a higher apoptotic rate than the bulk ORI solution. In vivo studies ORI-N also showed higher antitumor efficacy as measured by reduced tumor volume and tumor weight, as well as lower toxicity in H22 solid tumor bearing mice compared to free ORI at the same concentration. CONCLUSIONS These results suggest that the delivery of ORI-N as a nanosuspension is a promising approach for treating tumors." }, { "instance_id": "R142850xR142795", "comparison_id": "R142850", "paper_id": "R142795", "text": "Paclitaxel nanosuspensions coated with P-gp inhibitory surfactants: I. Acute toxicity and pharmacokinetics studies PURPOSE The aim of the present study was to evaluate the acute toxicity and pharmacokinetics of paclitaxel nanosuspensions stabilized with TPGS in mice. METHOD The paclitaxel nanosuspensions were prepared by evaporative precipitation into aqueous solution (EPAS) method, and freeze-dried powders of the nanosuspensions were obtained through lyophilization process. The morphology and particle size of nanosuspensions were determined by transmission electron microscope and Zetasizer, respectively. The acute toxicity and pharmacokinetics of paclitaxel nanosuspensions after intravenous administration to Kunming mice were studied. A marketed paclitaxel injectable solution was studied parallelly. RESULTS The paclitaxel nanoparticles were in rod shape under transmission electron microscope, and their mean particle size was 135.4 \u00b1 5.7 nm. Results of acute toxicity showed the LD50 of paclitaxel nanosuspensions was 98.63 mg/kg, twice more than that of the marketed injection (41.46 mg/kg). After intravenous injection paclitaxel nanosuspensions displayed different pharmacokinetic properties in comparison with the marketed injectable solution, including a decreased initial drug concentration, increased plasma half-life, AUC and MRT. CONCLUSIONS The paclitaxel nanosuspensions prepared in this study could markedly enhance the tolerance dosage in mice, and manifest different pharmacokinetic properties compared with the solution." }, { "instance_id": "R144121xR144100", "comparison_id": "R144121", "paper_id": "R144100", "text": "Towards Ontology Learning from Folksonomies A folksonomy refers to a collection of user-defined tags with which users describe contents published on the Web. With the flourish of Web 2.0, folksonomies have become an important mean to develop the Semantic Web. Because tags in folksonomies are authored freely, there is a need to understand the structure and semantics of these tags in various applications. In this paper, we propose a learning approach to create an ontology that captures the hierarchical semantic structure of folksonomies. Our experimental results on two different genres of real world data sets show that our method can effectively learn the ontology structure from the folksonomies." 
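The two folksonomy abstracts in this group describe deriving hierarchical (ontological) structure from freely assigned tags. As a purely illustrative aid, and not necessarily the method of either paper, the sketch below applies one common co-occurrence-based subsumption heuristic: a tag is proposed as the parent of another when most resources carrying the child tag also carry the parent tag. All resources and tags here are hypothetical; measures such as the NCDU similarity mentioned in the next abstract, or cosine similarity over co-occurrence vectors, are typically used to pre-cluster tags before such a step.

```python
from collections import defaultdict

# Hypothetical folksonomy: resource -> set of user-assigned tags.
annotations = {
    "doc1": {"python", "programming", "tutorial"},
    "doc2": {"python", "programming"},
    "doc3": {"programming", "java"},
    "doc4": {"python", "programming", "web"},
}

# Count tag occurrences and ordered pairwise co-occurrences.
tag_count = defaultdict(int)
co_count = defaultdict(int)
for tags in annotations.values():
    for t in tags:
        tag_count[t] += 1
    for a in tags:
        for b in tags:
            if a != b:
                co_count[(a, b)] += 1

# Subsumption heuristic: "parent" subsumes "child" if most resources
# tagged with the child are also tagged with the (more frequent) parent.
THRESHOLD = 0.8
edges = []
for (child, parent), n in co_count.items():
    if n / tag_count[child] >= THRESHOLD and tag_count[parent] > tag_count[child]:
        edges.append((parent, child))

print(edges)  # e.g. [('programming', 'python'), ('python', 'tutorial'), ...]
```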
}, { "instance_id": "R144121xR143899", "comparison_id": "R144121", "paper_id": "R143899", "text": "An Integrated Approach to Drive Ontological Structure from Folksonomie Web 2.0 is an evolution toward a more social, interactive and collaborative web, where user is at the center of service in terms of publications and reactions. This transforms the user from his old status as a consumer to a new one as a producer. Folksonomies are one of the technologies of Web 2.0 that permit users to annotate resources on the Web. This is done by allowing users to use any keyword or tag that they find relevant. Although folksonomies require a context-independent and inter-subjective definition of meaning, many researchers have proven the existence of an implicit semantics in these unstructured data. In this paper, we propose an improvement of our previous approach to extract ontological structures from folksonomies. The major contributions of this paper are a Normalized Co-occurrences in Distinct Users (NCDU) similarity measure, and a new algorithm to define context of tags and detect ambiguous ones. We compared our similarity measure to a widely used method for identifying similar tags based on the cosine measure. We also compared the new algorithm with the Fuzzy Clustering Algorithm (FCM) used in our original approach. The evaluation shows promising results and emphasizes the advantage of our approach." }, { "instance_id": "R144512xR144498", "comparison_id": "R144512", "paper_id": "R144498", "text": "Brain targeting with surface-modified poly(d,l-lactic-co-glycolic acid) nanoparticles delivered via carotid artery administration In this study, we investigated surface-modified nanoparticles (NP) formulated using a biodegradable polymer, poly(D,L-lactide-co-glycolide) (PLGA), for targeting central nervous system (CNS) diseases. Polysorbate 80 (P80), poloxamer 188 (P188), and chitosan (CS) were used to modify the surfaces of PLGA NP to improve the brain delivery of NP. Surface-modified PLGA NP were formulated using an emulsion solvent diffusion method. 6-Coumarin was used as a fluorescent label for NP. The different formulations of 6-coumarin-loaded PLGA NP were injected into rats via carotid arteries. NP remaining in the brain were evaluated quantitatively, and brain slices were observed using confocal laser scanning microscopy (CLSM). Carotid artery administration was more effective for delivering NP into the brain compared to intravenous administration. After administration, NP concentrations in the brain were increased by NP surface modification, especially CS- and P80-PLGA NP. CLSM observations indicated that P80-PLGA NP could cross the blood-brain barrier and thus serve as a drug delivery system for the CNS. These results indicate that surface-modified PLGA NP have a high potential for use in CNS delivery systems." }, { "instance_id": "R144512xR144470", "comparison_id": "R144512", "paper_id": "R144470", "text": "Cell-penetrating peptide-modified PLGA nanoparticles for enhanced nose-to-brain macromolecular delivery AbstractMacromolecular drugs become an essential part in neuroprotective treatment. However, the nature of ineffective delivery crossing the blood brain barrier (BBB) renders those macromolecules undruggable for clinical practice. Recently, brain target via intranasal delivery have provided a promising solution to circumventing the BBB. Despite the direct route from nose to brain (i.e. 
olfactory pathway), there still are big challenges for large compounds like proteins to overcome the multiple delivery barriers such as nasal mucosa penetration, intracellular transport along the olfactory neuron, and diffusion across the heterogeneous brain compartments. Herein presented is an intranasal strategy mediated by cell-penetrating peptide modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles for the delivery of insulin to the brain, a potent therapeutic against Alzheimer\u2019s disease. The results revealed that the cell-penetrating peptide can potentially deliver insulin into brain via the nasal route, showing a total brain delivery efficiency of 6%. It could serve as a potential treatment for neurodegenerative diseases." }, { "instance_id": "R144512xR144485", "comparison_id": "R144512", "paper_id": "R144485", "text": "PLGA nanoparticles modified with a BBB-penetrating peptide co-delivering A\u03b2 generation inhibitor and curcumin attenuate memory deficits and neuropathology in Alzheimer's disease mice Alzheimer's disease (AD) is the most common form of dementia, characterized by the formation of extracellular senile plaques and neuronal loss caused by amyloid \u03b2 (A\u03b2) aggregates in the brains of AD patients. Conventional strategies failed to treat AD in clinical trials, partly due to the poor solubility, low bioavailability and ineffectiveness of the tested drugs to cross the blood-brain barrier (BBB). Moreover, AD is a complex, multifactorial neurodegenerative disease; one-target strategies may be insufficient to prevent the processes of AD. Here, we designed novel kind of poly(lactide-co-glycolic acid) (PLGA) nanoparticles by loading with A\u03b2 generation inhibitor S1 (PQVGHL peptide) and curcumin to target the detrimental factors in AD development and by conjugating with brain targeting peptide CRT (cyclic CRTIGPSVC peptide), an iron-mimic peptide that targets transferrin receptor (TfR), to improve BBB penetration. The average particle size of drug-loaded PLGA nanoparticles and CRT-conjugated PLGA nanoparticles were 128.6 nm and 139.8 nm, respectively. The results of Y-maze and new object recognition test demonstrated that our PLGA nanoparticles significantly improved the spatial memory and recognition in transgenic AD mice. Moreover, PLGA nanoparticles remarkably decreased the level of A\u03b2, reactive oxygen species (ROS), TNF-\u03b1 and IL-6, and enhanced the activities of super oxide dismutase (SOD) and synapse numbers in the AD mouse brains. Compared with other PLGA nanoparticles, CRT peptide modified-PLGA nanoparticles co-delivering S1 and curcumin exhibited most beneficial effect on the treatment of AD mice, suggesting that conjugated CRT peptide, and encapsulated S1 and curcumin exerted their corresponding functions for the treatment." }, { "instance_id": "R145685xR145191", "comparison_id": "R145685", "paper_id": "R145191", "text": "A theoretical study of the plasma broadening of helium-like transitions for high-Z emitters Calculations of the spectral line broadening for 1s2 1S-1snp 1p transitions in helium-like ions have been performed for emitters with relatively high Z (silicon and argon). These calculations illustrate the plasma dependent shifts and asymmetries which are due to the high emitter charge. A discussion of the effects is presented, with particular reference to the deviations from the corresponding hydrogenic line profiles." 
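The line-broadening abstracts that follow report electron-impact widths and shifts for specific transitions. As a simple illustration of what those two parameters mean for the resulting spectral line, and not the quantum-mechanical calculation described in these papers, the sketch below evaluates the Lorentzian (dispersion) profile defined by a hypothetical width and shift.

```python
import numpy as np

def lorentzian_profile(wavelength, center, width, shift=0.0):
    """Area-normalised Lorentzian line shape.

    center : unperturbed line position
    width  : full width at half maximum (FWHM), e.g. from electron-impact broadening
    shift  : plasma-induced displacement of the line centre
    """
    half = width / 2.0
    return (half / np.pi) / ((wavelength - center - shift) ** 2 + half ** 2)

# Hypothetical parameters, loosely in the style of a 2s-2p resonance line.
wl = np.linspace(2060.0, 2074.0, 1401)                    # wavelength grid in Angstrom
profile = lorentzian_profile(wl, center=2067.0, width=0.5, shift=0.05)

# The peak sits at center + shift; the area is ~1 by construction.
area = float(np.sum(profile) * (wl[1] - wl[0]))           # simple Riemann-sum check
print(f"peak at {wl[np.argmax(profile)]:.2f} Angstrom, area ~ {area:.3f}")
```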
}, { "instance_id": "R145685xR145216", "comparison_id": "R145685", "paper_id": "R145216", "text": "Relativistic quantum mechanical calculations of electron-impact broadening for spectral lines in Be-like ions Aims. We present relativistic quantum mechanical calculations of electron-impact broadening of the singlet and triplet transition 2s3s \u2190 2s3p in four Be-like ions from NIV to NeVII. Methods. In our theoretical calculations, the K-matrix and related symmetry information determined by the colliding systems are generated by the DARC codes. Results. A careful comparison between our calculations and experimental results shows good agreement. Our calculated widths of spectral lines also agree with earlier theoretical results. Our investigations provide new methods of calculating electron-impact broadening parameters for plasma diagnostics." }, { "instance_id": "R145685xR145174", "comparison_id": "R145685", "paper_id": "R145174", "text": "Theory of Line Broadening in Multiplet Spectra A theory of line broadening in the impect approximation is developed which includes the case of overlapping lines. It is assumed that the collisions which give rise to the broadening do not cause transitions between states with different principal quantam numbers. The theory was worked out in detail in two cases: (1) the broadening arises only from perturbations of the upper state with arbitrary splitting of the substates. This approximation may be used if the perturbations of the lower state are relatively unimportant (e.g.. the higher series members of the Balmer lines), and is exact if the perturbations do not affect the lower state as in the case of the ground stute of hydrogen perturbed by electron collisions; (2) complete degeneracy of the initial and final states. This approximation is also valid on the far wing of the line if there is splitting, i.e.. for frequencies large compared to thc splitting, and is a generalization of Anderson's theory. The formal theory is worked out by two different methods. The method of calculation for nearly degenerate initial and final states with splitting is indicated. Method I is particularly suited for calculating the wing distribution while Method II is more suitable formore \u00bb finding tbe intensity distribution at the line center for overlapping lines. The line profile is made up of a sum of dispersion profiles and asymmetric terms whicb arise from interferences when the transition operator is not diagonal. The shift and half-width parameters are found from the roots of a secular equation and depend on the splitting as well as the density. temperature. and the character of the perturbation. (auth)\u00ab less" }, { "instance_id": "R145685xR145210", "comparison_id": "R145685", "paper_id": "R145210", "text": "Stark broadening of the B III2s\u22122plines We present a quantum-mechanical calculation of Stark linewidths from electron-ion collisions for the 2s{sub 1/2}-2p{sub 1/2,3/2}, {lambda}=2066 and 2067 {Angstrom}, resonance transitions in BIII. The results confirm previous quantum-mechanical R-matrix calculations, but contradict recent measurements and semiclassical and some semiempirical calculations. The differences between the calculations can be attributed to the dominance of small L partial waves in the electron-atom scattering, while the large Stark widths inferred from the measurements would be substantially reduced if allowance is made for hydrodynamic turbulence from high-Reynolds-number flows and the associated Doppler broadening. 
\u00a9 1997 The American Physical Society." }, { "instance_id": "R145685xR145205", "comparison_id": "R145685", "paper_id": "R145205", "text": "Atomic data for opacity calculations. VIII. Line-profile parameters for 42 transitions in Li-like and Be-like ions Widths and shifts are calculated in the electron impact approximation, using close-coupling theory, for the transitions 2s-2p, 2s-3p, 2p-3s, 2p-3d, 3s-3p and 3p-3d in Be II, B III, C IV, O VI and Ne VIII, and the transitions 2s2 1S-2s2p 1Po, 2s2p 3Po-2p2 3P, 2s2p 1Po-2p2 1D and 1S in C III, O V and Ne VII. Results are compared with those from previous calculations and from experiments. Approximate formulae must be used to estimate linewidths for some 10\u2076 transitions which are of importance for the calculation of stellar envelope opacities. Results for the quantum mechanical calculations for 42 transitions are used to obtain provisional best estimates for the parameters in these formulae." }, { "instance_id": "R145950xR142724", "comparison_id": "R145950", "paper_id": "R142724", "text": "The SSN ontology of the W3C semantic sensor network incubator group The W3C Semantic Sensor Network Incubator group (the SSN-XG) produced an OWL 2 ontology to describe sensors and observations - the SSN ontology, available at http://purl.oclc.org/NET/ssnx/ssn. The SSN ontology can describe sensors in terms of capabilities, measurement processes, observations and deployments. This article describes the SSN ontology. It further gives an example and describes the use of the ontology in recent research projects." }, { "instance_id": "R145950xR142752", "comparison_id": "R145950", "paper_id": "R142752", "text": "Smart traffic analytics in the semantic web with STAR-CITY: Scenarios, system and lessons learned in Dublin City This paper gives a high-level presentation of STAR-CITY, a system supporting semantic traffic analytics and reasoning for city. STAR-CITY, which integrates (human and machine-based) sensor data using a variety of formats, velocities and volumes, has been designed to provide insight on historical and real-time traffic conditions, all supporting efficient urban planning. Our system demonstrates how the severity of road traffic congestion can be smoothly analyzed, diagnosed, explored and predicted using semantic web technologies. Our prototype of semantics-aware traffic analytics and reasoning, illustrated and experimented with in Dublin (Ireland), but also tested in Bologna (Italy), Miami (USA) and Rio (Brazil), works and scales efficiently with real, historical together with live and heterogeneous stream data. This paper highlights the lessons learned from deploying and using a system in Dublin City based on Semantic Web technologies." }, { "instance_id": "R146458xR146060", "comparison_id": "R146458", "paper_id": "R146060", "text": "Tools of quality economics: sustainable development of a \u2018smart city\u2019 under conditions of digital transformation of the economy The article covers the issues of ensuring sustainable city development based on the achievements of digitalization. Attention is also paid to the use of quality economy tools in managing 'smart' cities under conditions of the digital transformation of the national economy. The current state of 'smart' cities and the main factors contributing to their sustainable development, including the digitalization requirements, are analyzed. 
Based on the analysis of statistical material, the main prospects for forming the 'smart city' concept are identified, together with the possibility of assessing such parameters as 'life quality', 'comfort', 'rational organization', 'opportunities', 'sustainable development', 'city environment accessibility' and 'use of communication technologies'. The role of quality-economics tools in supporting big-city life under conditions of the digital economy is revealed. The concept of 'life quality' is considered, which is currently becoming one of the fundamental vectors of human civilization's development, a criterion that is increasingly used to compare countries and territories. Special attention is paid to such tools and methods of quality economics as standardization, metrology and quality management. It is proposed to consider these tools as a mechanism for solving the most important problems in the development of the national economy under conditions of digital transformation." }, { "instance_id": "R146458xR146105", "comparison_id": "R146458", "paper_id": "R146105", "text": "Enterprise Architectures for the Digital Transformation in Small and Medium-sized Enterprises Abstract The transformation towards smart connected factories causes enormous changes in the mechanical engineering industry, starting from the development of cyber-physical production systems up to their application in production. Enterprise architectures already offer suitable methods to support the alignment of the internal IT landscape. New demands like customer involvement, iterative development and increased business orientation arising with these digitized products require new approaches and methods. This paper presents the foundation and the first steps aiming at the development of a method for the holistic planning of the digital transformation in small and medium-sized mechanical engineering enterprises." }, { "instance_id": "R146458xR146090", "comparison_id": "R146458", "paper_id": "R146090", "text": "Internet of Things, legal and regulatory framework in digital transformation from smart to intelligent cities Digital transformation from \u201cSmart\u201d to \u201cIntelligent city\u201d is based on new information technologies and knowledge, as well as on organizational and security processes. The authors of this paper present the legal and regulatory framework and challenges of the Internet of Things in the development of smart cities on the way to becoming intelligent cities. The special contribution of the paper is an overview of the new legal and regulatory framework, the General Data Protection Regulation (GDPR), which is of great importance for the European Union legal and regulatory framework and introduces novelties in citizens' privacy and the protection of personal data." }, { "instance_id": "R146458xR146039", "comparison_id": "R146458", "paper_id": "R146039", "text": "The Evolving Enterprise Architecture: A Digital Transformation Perspective The advancement of technology has influenced all enterprises. Enterprises should come up with evolving approaches to face these challenges. With an evolving approach, the enterprise will be able to adapt to successive changes. Enterprise architecture is introduced as an approach to confront these challenges. The main issue is the generalization of this evolving approach to enterprise architecture. In an evolving approach, all aspects of the enterprise, as well as the ecosystem of the enterprise, are considered. 
In this study, the notion of Internet of Things is considered as a transition factor in enterprise and enterprise architecture. Industry 4.0 and digital transformation have also been explored in the enterprise. Common challenges are extracted and defined." }, { "instance_id": "R146458xR146173", "comparison_id": "R146458", "paper_id": "R146173", "text": "Digital enterprise architecture with micro-granular systems and services The digitization of our society changes the way we live, work, learn, communicate, and collaborate. This disruptive change interacts with all information processes and systems that are important business enablers for the context of digitization since years. Our aim is to support flexibility and agile transformations for both business domains and related information technology with more flexible enterprise information systems through adaptation and evolution of digital enterprise architectures. The present research paper investigates the continuous bottom-up integration of micro-granular architectures for a huge amount of dynamically growing systems and services, like Microservices and the Internet of Things, as part of a new digital enterprise architecture. To integrate micro-granular architecture models to living architectural model versions we are extending more traditional enterprise architecture reference models with state of art elements for agile architectural engineering to support the digitization of products, services, and processes." }, { "instance_id": "R146851xR145901", "comparison_id": "R146851", "paper_id": "R145901", "text": "Evaluating the electronic tuberculosis register surveillance system in Eden District, Western Cape, South Africa, 2015 ABSTRACT Background: Tuberculosis (TB) surveillance data are crucial to the effectiveness of National TB Control Programs. In South Africa, few surveillance system evaluations have been undertaken to provide a rigorous assessment of the platform from which the national and district health systems draws data to inform programs and policies. Objective: Evaluate the attributes of Eden District\u2019s TB surveillance system, Western Cape Province, South Africa. Methods: Data quality, sensitivity and positive predictive value were assessed using secondary data from 40,033 TB cases entered in Eden District\u2019s ETR.Net from 2007 to 2013, and 79 purposively selected TB Blue Cards (TBCs), a medical patient file and source document for data entered into ETR.Net. Simplicity, flexibility, acceptability, stability and usefulness of the ETR.Net were assessed qualitatively through interviews with TB nurses, information health officers, sub-district and district coordinators involved in the TB surveillance. Results: TB surveillance system stakeholders report that Eden District\u2019s ETR.Net system was simple, acceptable, flexible and stable, and achieves its objective of informing TB control program, policies and activities. Data were less complete in the ETR.Net (66\u2013100%) than in the TBCs (76\u2013100%), and concordant for most variables except pre-treatment smear results, antiretroviral therapy (ART) and treatment outcome. The sensitivity of recorded variables in ETR.Net was 98% for gender, 97% for patient category, 93% for ART, 92% for treatment outcome and 90% for pre-treatment smear grading. Conclusions: Our results reveal that the system provides useful information to guide TB control program activities in Eden District. 
However, urgent attention is needed to address gaps in clinical recording on the TBC and data capturing into the ETR.Net system. We recommend continuous training and support of TB personnel involved with TB care, management and surveillance on TB data recording into the TBCs and ETR.Net as well as the implementation of a well-structured quality control and assurance system." }, { "instance_id": "R146851xR144726", "comparison_id": "R146851", "paper_id": "R144726", "text": "A systems overview of the Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE II) The Electronic Surveillance System for the Early Notification of Community-Based Epidemics, or ESSENCE II, uses syndromic and nontraditional health information to provide very early warning of abnormal health conditions in the National Capital Area (NCA). ESSENCE II is being developed for the Department of Defense Global Emerging Infections System and is the only known system to combine both military and civilian health care information for daily outbreak surveillance. The National Capital Area has a complicated, multijurisdictional structure that makes data sharing and integrated regional surveillance challenging. However, the strong military presence in all jurisdictions facilitates the collection of health care information across the region. ESSENCE II integrates clinical and nonclinical human behavior indicators as a means of identifying the abnormality as close to the time of onset of symptoms as possible. Clinical data sets include emergency room syndromes, private practice billing codes grouped into syndromes, and veterinary syndromes. Nonclinical data include absenteeism, nurse hotline calls, prescription medications, and over-the-counter self-medications. Correctly using information marked by varying degrees of uncertainty is one of the more challenging aspects of this program. The data (without personal identifiers) are captured in an electronic format, encrypted, archived, and processed at a secure facility. Aggregated information is then provided to users on secure Web sites. When completed, the system will provide automated capture, archiving, processing, and notification of abnormalities to epidemiologists and analysts. Outbreak detection methods currently include temporal and spatial variations of odds ratios, autoregressive modeling, cumulative summation, matched filter, and scan statistics. Integration of nonuniform data is needed to increase sensitivity and thus enable the earliest notification possible. The performance of various detection techniques was compared using results obtained from the ESSENCE II system." }, { "instance_id": "R146851xR146461", "comparison_id": "R146851", "paper_id": "R146461", "text": "Performance characteristics and associated outcomes for an automated surveillance tool for bloodstream infection BACKGROUND The objective of this study was to evaluate performance metrics and associated patient outcomes of an automated surveillance system, the blood Nosocomial Infection Marker (NIM). METHODS We reviewed records of 237 patients with and 36,927 patients without blood NIM using the National Healthcare Safety Network (NHSN) definition for laboratory-confirmed bloodstream infection (BSI) as the gold standard. We matched cases with noncases by propensity score and estimated attributable mortality and cost of NHSN-reportable central line-associated bloodstream infections (CLABSIs) and non-NHSN-reportable BSIs. 
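The ESSENCE II abstract above lists cumulative summation (CUSUM) among its outbreak-detection methods. Purely as an illustration of that class of detector, and not the system's actual implementation, the following is a minimal one-sided CUSUM over a daily syndromic count series; the counts, reference value k and decision threshold h are all hypothetical.

```python
# One-sided CUSUM for detecting an upward shift in daily syndromic counts.
# Counts, baseline, reference value k and decision threshold h are hypothetical.

def cusum_alarms(counts, baseline_mean, k=0.5, h=4.0):
    """Return (cusum_values, alarm_days) for an upward-shift CUSUM."""
    s = 0.0
    values, alarms = [], []
    for day, x in enumerate(counts):
        # Accumulate the excess of each count over (baseline + k), floored at zero.
        s = max(0.0, s + (x - baseline_mean - k))
        values.append(s)
        if s > h:
            alarms.append(day)
            s = 0.0  # reset after signalling
    return values, alarms

daily_counts = [12, 11, 13, 12, 14, 13, 18, 21, 24, 22, 15, 12]
values, alarms = cusum_alarms(daily_counts, baseline_mean=12.5)
print(alarms)  # days on which the statistic exceeded the threshold
```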
RESULTS For patients with central lines (CL), the blood NIM had 73.2% positive predictive value (PPV), 99.9% negative predictive value (NPV), 89.2% sensitivity, and 99.7% specificity. For all patients regardless of CL status, the blood NIM had 53.6% PPV, 99.9% NPV, 84.0% sensitivity, and 99.9% specificity. For CLABSI cases compared with noncases, mortality was 17.5% versus 9.4% (P = .098), and median charge was $143,935 (interquartile range [IQR], $89,794-$257,447) versus $115,267 (IQR, $74,937-$173,053) (P < .01). For non-NHSN-reportable BSI cases compared with noncases, mortality was 23.6% versus 6.7% (P < .0001), and median charge was $86,927 (IQR, $54,728-$156,669) versus $62,929 (IQR, $36,743-$115,693) (P < .0001). CONCLUSIONS The NIM is an effective screening tool for BSI. Both NHSN-reportable and nonreportable BSI cases were associated with increased mortality and cost." }, { "instance_id": "R146851xR145039", "comparison_id": "R146851", "paper_id": "R145039", "text": "Statewide System of Electronic Notifiable Disease Reporting From Clinical Laboratories: Comparing Automated Reporting With Conventional Methods CONTEXT Notifiable disease surveillance is essential to rapidly identify and respond to outbreaks so that further illness can be prevented. Automating reports from clinical laboratories has been proposed to reduce underreporting and delays. OBJECTIVE To compare the timeliness and completeness of a prototypal electronic reporting system with that of conventional laboratory reporting. DESIGN Laboratory-based reports for 5 conditions received at a state health department between July 1 and December 31, 1998, were reviewed. Completeness of coverage for each reporting system was estimated using capture-recapture methods. SETTING Three statewide private clinical laboratories in Hawaii. MAIN OUTCOME MEASURES The number and date of reports received, by reporting system, laboratory, and pathogen; completeness of data fields. RESULTS A total of 357 unique reports of illness were identified; 201 (56%) were received solely through the automated electronic system, 32 (9%) through the conventional system only, and 124 (35%) through both. Thus, electronic reporting resulted in a 2.3-fold (95% confidence interval [CI], 2.0-2.6) increase in reports. Electronic reports arrived an average of 3.8 (95% CI, 2.6-5.0) days earlier than conventional reports. Of 21 data fields common to paper and electronic formats, electronic reports were significantly more likely to be complete for 12 and for 1 field with the conventional system. The estimated completeness of coverage for electronic reporting was 80% (95% CI, 75%-85%) [corrected] compared with 38% (95% CI, 36%-41%) [corrected] for the conventional system. CONCLUSIONS In this evaluation, electronic reporting more than doubled the total number of laboratory-based reports received. On average, the electronic reports were more timely and more complete, suggesting that electronic reporting may ultimately facilitate more rapid and comprehensive institution of disease control measures." }, { "instance_id": "R146851xR146543", "comparison_id": "R146851", "paper_id": "R146543", "text": "The electronic medical record as a tool for infection surveillance: Successful automation of device-days BACKGROUND Manual collection of central venous catheter, ventilator, and indwelling urinary catheter device-days is time-consuming, often restricted to intensive care units (ICU) and prone to error. 
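Several surveillance abstracts in this group (the ETR.Net evaluation and the blood NIM study above) report sensitivity, specificity, PPV and NPV against a gold standard. A minimal sketch of how these four quantities follow from a 2x2 confusion matrix is shown below; the counts are hypothetical and are not taken from either study.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # flagged among true cases
        "specificity": tn / (tn + fp),   # correctly negative among non-cases
        "ppv": tp / (tp + fp),           # true cases among those flagged
        "npv": tn / (tn + fn),           # non-cases among those not flagged
    }

# Hypothetical counts for an automated marker versus a gold-standard case review.
metrics = screening_metrics(tp=166, fp=61, fn=20, tn=36700)
print({k: round(v, 3) for k, v in metrics.items()})
```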
METHODS We describe the use of an electronic medical record to extract existing clinical documentation of invasive devices. This allowed automated device-days calculations for device-associated infection surveillance in an acute care setting. RESULTS The automated system had high sensitivity, specificity, and positive and negative predictive values (>0.90) compared with chart review. The system is not restricted to ICUs and reduces surveillance efforts by a conservative estimate of over 3.5 work-weeks per year in our setting. Eighty percent of urinary catheter days and 50% of central venous catheter-days occurred outside the ICU. CONCLUSION Device-days may be automatically extracted from an existing electronic medical record with a higher degree of accuracy than manual collection while saving valuable personnel resources." }, { "instance_id": "R146851xR145065", "comparison_id": "R146851", "paper_id": "R145065", "text": "Description and validation of a new automated surveillance system for Clostridium difficile in Denmark SUMMARY The surveillance of Clostridium difficile (CD) in Denmark consists of laboratory based data from Departments of Clinical Microbiology (DCMs) sent to the National Registry of Enteric Pathogens (NREP). We validated a new surveillance system for CD based on the Danish Microbiology Database (MiBa). MiBa automatically collects microbiological test results from all Danish DCMs. We built an algorithm to identify positive test results for CD recorded in MiBa. A CD case was defined as a person with a positive culture for CD or PCR detection of toxin A and/or B and/or binary toxin. We compared CD cases identified through the MiBa-based surveillance with those reported to NREP and locally in five DCMs representing different Danish regions. During 2010\u20132014, NREP reported 13 896 CD cases, and the MiBa-based surveillance 21 252 CD cases. There was a 99\u00b79% concordance between the local datasets and the MiBa-based surveillance. Surveillance based on MiBa was superior to the current surveillance system, and the findings show that the number of CD cases in Denmark hitherto has been under-reported. There were only minor differences between local data and the MiBa-based surveillance, showing the completeness and validity of CD data in MiBa. This nationwide electronic system can greatly strengthen surveillance and research in various applications." }, { "instance_id": "R147040xR145554", "comparison_id": "R147040", "paper_id": "R145554", "text": "Identifying the Main Mosquito Species in China Based on DNA Barcoding Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighborhood-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species. 
Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected." }, { "instance_id": "R147040xR145437", "comparison_id": "R147040", "paper_id": "R145437", "text": "DNA Barcoding to Improve the Taxonomy of the Afrotropical Hoverflies (Insecta: Diptera: Syrphidae) The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publicly available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species, and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007\u20130.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known." }, { "instance_id": "R147040xR146938", "comparison_id": "R147040", "paper_id": "R146938", "text": "Evaluation of DNA barcoding and identification of new haplomorphs in Canadian deerflies and horseflies This paper reports the first tests of the suitability of the standardized mitochondrial cytochrome c oxidase subunit I (COI) barcoding system for the identification of Canadian deerflies and horseflies. Two additional mitochondrial molecular markers were used to determine whether unambiguous species recognition in tabanids can be achieved. Our 332 Canadian tabanid samples yielded 650 sequences from five genera and 42 species. Standard COI barcodes demonstrated a strong A + T bias (mean 68.1%), especially at third codon positions (mean 93.0%). Our preliminary test of this system showed that the standard COI barcode worked well for Canadian Tabanidae: the target DNA can be easily recovered from small amounts of insect tissue and aligned for all tabanid taxa. Each tabanid species possessed distinctive sets of COI haplotypes which discriminated well among species. Average conspecific Kimura two\u2010parameter (K2P) divergence (0.49%) was 12 times lower than the average divergence within genera. Both the neighbour\u2010joining and the Bayesian methods produced trees with identical monophyletic species groups. 
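The barcoding abstracts in this group summarize sequence divergence with the Kimura two-parameter (K2P) distance. As an illustration of that statistic only (not of any of these studies' pipelines), the sketch below computes the K2P distance for a pair of aligned toy sequences from the standard formula d = -0.5 ln(1 - 2P - Q) - 0.25 ln(1 - 2Q), where P and Q are the proportions of transition and transversion sites.

```python
import math

PURINES = {"A", "G"}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences."""
    assert len(seq1) == len(seq2)
    transitions = transversions = compared = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a in "ACGT" and b in "ACGT":          # skip gaps and ambiguity codes
            compared += 1
            if a != b:
                if (a in PURINES) == (b in PURINES):
                    transitions += 1             # purine<->purine or pyrimidine<->pyrimidine
                else:
                    transversions += 1
    p = transitions / compared
    q = transversions / compared
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

# Hypothetical aligned COI fragments (toy length, for illustration only).
s1 = "ATGGCTACTTTATACTTTATTTTTGG"
s2 = "ATGGCCACTCTATACTTCATTTTTGG"
print(round(k2p_distance(s1, s2), 4))
```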
Two species, Chrysops dawsoni Philip and Chrysops montanus Osten Sacken (Diptera: Tabanidae), showed relatively deep intraspecific sequence divergences (\u223c10 times the average) for all three mitochondrial gene regions analysed. We suggest provisional differentiation of Ch. montanus into two haplotypes, namely, Ch. montanus haplomorph 1 and Ch. montanus haplomorph 2, both defined by their molecular sequences and by newly discovered differences in structural features near their ocelli." }, { "instance_id": "R147040xR146932", "comparison_id": "R147040", "paper_id": "R146932", "text": "DNA barcodes reveal cryptic genetic diversity within the blackfly subgenus Trichodagmia Enderlein (Diptera: Simuliidae: Simulium) and related taxa in the New World In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8\u201319.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0\u20131.2%). Higher values of genetic divergence (3.2\u20133.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in \u2018weak\u2019 Carnoy\u2019s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia." }, { "instance_id": "R147040xR145434", "comparison_id": "R147040", "paper_id": "R145434", "text": "DNA Barcoding of Neotropical Sand Flies (Diptera, Psychodidae, Phlebotominae): Species Identification and Discovery within Brazil DNA barcoding has been an effective tool for species identification in several animal groups. Here, we used DNA barcoding to discriminate between 47 morphologically distinct species of Brazilian sand flies. 
DNA barcodes correctly identified approximately 90% of the sampled taxa (42 morphologically distinct species) using clustering based on neighbor-joining distance, of which four species showed comparatively higher maximum values of divergence (range 4.23\u201319.04%), indicating cryptic diversity. The DNA barcodes also corroborated the resurrection of two species within the shannoni complex and provided an efficient tool to differentiate between morphologically indistinguishable females of closely related species. Taken together, our results validate the effectiveness of DNA barcoding for species identification and the discovery of cryptic diversity in sand flies from Brazil." }, { "instance_id": "R148381xR148201", "comparison_id": "R148381", "paper_id": "R148201", "text": "Anti-glioma activity and the mechanism of cellular uptake of asiatic acid-loaded solid lipid nanoparticles Asiatic acid (AA), a pentacyclic triterpene found in Centella Asiatica, has shown neuroprotective and anti-cancer activity against glioma. However, owing to its poor aqueous solubility, effective delivery and absorption across biological barriers, in particular the blood brain barrier (BBB), are challenging. Solid lipid nanoparticles (SLNs) have shown a promising potential as a drug delivery system to carry lipophilic drugs across the BBB, a major obstacle in brain cancer therapy. Nevertheless, limited information is available about the cytotoxic mechanisms of nano-lipidic carriers with AA on normal and glioma cells. This study assessed the anti-cancer efficacy of AA-loaded SLNs against glioblastoma and their cellular uptake mechanism in comparison with SVG P12 (human foetal glial) cells. SLNs were systematically investigated for three different solid lipids; glyceryl monostearate (MS), glyceryl distearate (DS) and glyceryl tristearate (TS). The non-drug containing MS-SLNs (E-MS-SLNs) did not show any apparent toxicity towards normal SVG P12 cells, whilst the AA-loaded MS-SLNs (AA-MS-SLNs) displayed a more favourable drug release profile and higher cytotoxicity towards U87 MG cells. Therefore, MS-SLNs were chosen for further in vitro studies. Cytotoxicity studies of SLNs (\u00b1 AA) were performed using MTT assay where AA-SLNs showed significantly higher cytotoxicity towards U87 MG cells than SVG P12 normal cells, as confirmed by flow cell cytometry. Cellular uptake of SLNs also appeared to be preferentially facilitated by energy-dependent endocytosis as evidenced by fluorescence imaging and flow cell cytometry. Using the Annexin V-PI double staining technique, it was found that these AA-MS-SLNs displayed concentration-dependent apoptotic activity on glioma cells, which further confirms the potential of exploiting these AA-loaded MS-SLNs for brain cancer therapy." }, { "instance_id": "R148381xR148304", "comparison_id": "R148381", "paper_id": "R148304", "text": "Parenteral nanoemulsions as promising carriers for brain delivery of risperidone: Design, characterization and in vivo pharmacokinetic evaluation This paper describes design and evaluation of parenteral lecithin-based nanoemulsions intended for brain delivery of risperidone, a poorly water-soluble psychopharmacological drug. The nanoemulsions were prepared through cold/hot high pressure homogenization and characterized regarding droplet size, polydispersity, surface charge, morphology, drug-vehicle interactions, and physical stability. 
To estimate the simultaneous influence of nanoemulsion formulation and preparation parameters (co-emulsifier type, aqueous phase type, homogenization temperature) on the critical quality attributes of the developed nanoemulsions, a general factorial experimental design was applied. From the established design space and stability data, promising risperidone-loaded nanoemulsions (mean size about 160 nm, size distribution <0.15, zeta potential around -50 mV), containing sodium oleate in the aqueous phase and polysorbate 80, poloxamer 188 or Solutol\u00ae HS15 as co-emulsifier, were produced by hot homogenization and their ability to improve risperidone delivery to the brain was assessed in rats. The pharmacokinetic study demonstrated erratic brain profiles of risperidone following intraperitoneal administration in selected nanoemulsions, most probably due to their different droplet surface properties (different composition of the stabilizing layer). Namely, the polysorbate 80-costabilized nanoemulsion showed increased (1.4-7.4-fold higher) risperidone brain availability compared to other nanoemulsions and drug solution, suggesting this nanoemulsion as a promising carrier worth exploring further for brain targeting." }, { "instance_id": "R148381xR147032", "comparison_id": "R148381", "paper_id": "R147032", "text": "Glycosylated Sertraline-Loaded Liposomes for Brain Targeting: QbD Study of Formulation Variabilities and Brain Transport Effectiveness of CNS-acting drugs depends on the localization, targeting, and capacity to be transported through the blood\u2013brain barrier (BBB) which can be achieved by designing brain-targeting delivery vectors. Hence, the objective of this study was to screen the formulation and process variables affecting the performance of sertraline (Ser-HCl)-loaded pegylated and glycosylated liposomes. The prepared vectors were characterized for Ser-HCl entrapment, size, surface charge, release behavior, and in vitro transport through the BBB. Furthermore, the compatibility among liposomal components was assessed using SEM, FTIR, and DSC analysis. Through a thorough screening study, enhancement of Ser-HCl entrapment, nanosized liposomes with low skewness, maximized stability, and controlled drug leakage were attained. The solid-state characterization revealed remarkable interaction between Ser-HCl and the charging agent to determine drug entrapment and leakage. Moreover, results of liposomal transport through mouse brain endothelial polyoma cells demonstrated greater capacity of the proposed glycosylated liposomes to target the cerebellum due to its higher density of GLUT1 and higher glucose utilization. This transport capacity was confirmed by the inhibiting action of both cytochalasin B and phenobarbital. Using a C6 glioma cell model, flow cytometry, time-lapse live cell imaging, and in vivo NIR fluorescence imaging demonstrated that optimized glycosylated liposomes can be transported through the BBB by classical endocytosis, as well as by specific transcytosis. In conclusion, the current study proposed a thorough screening of important formulation and process variabilities affecting brain-targeting liposomes for further scale-up processes." 
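The parenteral nanoemulsion abstract above applies a general factorial experimental design over co-emulsifier type, aqueous phase type and homogenization temperature. As a small illustration of what such a full-factorial design space looks like, the sketch below enumerates every factor-level combination; the specific levels listed are hypothetical placeholders rather than the study's actual design.

```python
from itertools import product

# Hypothetical factor levels for a general full-factorial formulation design.
factors = {
    "co_emulsifier": ["polysorbate 80", "poloxamer 188", "Solutol HS15"],
    "aqueous_phase": ["water", "sodium oleate solution"],
    "homogenization_temp_C": [20, 50],
}

names = list(factors)
runs = [dict(zip(names, levels)) for levels in product(*factors.values())]

print(len(runs))      # 3 x 2 x 2 = 12 experimental runs
for run in runs[:3]:  # show the first few rows of the design matrix
    print(run)
```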
}, { "instance_id": "R148574xR148421", "comparison_id": "R148574", "paper_id": "R148421", "text": "Nano-lipoidal carriers of tretinoin with enhanced percutaneous absorption, photostability, biocompatibility and anti-psoriatic activity Tretinoin (TRE) is a widely used retinoid for the topical treatment of acne, psoriasis, skin cancer and photoaging. Despite unmatchable efficacy, it is associated with several vexatious side effects like marked skin erythema, peeling and irritation, eventually leading to poor patient compliance. Its photo-instability and high lipophilicity also pose challenges in the development of a suitable topical product. The present study, therefore, aims to develop biocompatible lipid-based nanocarriers of TRE to improve its skin delivery, photostability, biocompatibility and pharmacodynamic efficacy. The TRE-loaded liposomes, ethosomes, solid lipid nanoparticles (SLNs) and nanostructured lipidic carriers (NLCs) were prepared and characterized for micromeritics, surface charge, percent drug efficiency and morphology. Bioadhesive hydrogels of the developed systems were also evaluated for rheological characterization, photostability, ex vivo skin permeation and retention employing porcine skin, and anti-psoriatic activity in mouse tail model. Nanoparticulate carriers (SLNs, NLCs) offered enhanced photostability, skin transport and anti-psoriatic activity vis-\u00e0-vis the vesicular carriers (liposomes, ethosomes) and the marketed product. However, all the developed nanocarriers were found to be more biocompatible and effective than the marketed product. These encouraging findings can guide in proper selection of topical carriers among diversity of such available carriers systems." }, { "instance_id": "R148574xR148393", "comparison_id": "R148574", "paper_id": "R148393", "text": "Dermal and transdermal delivery of an anti-psoriatic agent via ethanolic liposomes The aim of the current investigation is to evaluate the transdermal potential of novel vesicular carrier, ethosomes, bearing methotrexate (MTX), an anti-psoriatic, anti-neoplastic, highly hydrosoluble agent having limited transdermal permeation. MTX loaded ethosomes were prepared, optimized and characterized for vesicular shape and surface morphology, vesicular size, entrapment efficiency, stability, in vitro human skin permeation and vesicle-skin interaction. The formulation (EE(9)) having 3% phospholipid content and 45% ethanol showing the greatest entrapment (68.71+/-1.4%) and optimal nanometric size range (143+/-16 nm) was selected for further transdermal permeation studies. Stability profile of prepared system assessed for 120 days revealed very low aggregation and growth in vesicular size (8.8+/-1.2%). MTX loaded ethosomal carriers also provided an enhanced transdermal flux of 57.2+/-4.34 microg/cm(2)/h and decreased lag time of 0.9 h across human cadaver skin. Skin permeation profile of the developed formulation further assessed by confocal laser scanning microscopy (CLSM) revealed an enhanced permeation of Rhodamine Red (RR) loaded formulations to the deeper layers of the skin (170 microm). Also, the formulation retained its penetration power after storage. Vesicle skin interaction study also highlighted the penetration enhancing effect of ethosomes with some visual penetration pathways and corneocytes swelling, a measure of retentive nature of formulation. Our results suggests that ethosomes are an efficient carrier for dermal and transdermal delivery of MTX." 
}, { "instance_id": "R148574xR148414", "comparison_id": "R148574", "paper_id": "R148414", "text": "Evaluation of psoralen ethosomes for topical delivery in rats by using in vivo microdialysis This study aimed to improve skin permeation and deposition of psoralen by using ethosomes and to investigate real-time drug release in the deep skin in rats. We used a uniform design method to evaluate the effects of different ethosome formulations on entrapment efficiency and drug skin deposition. Using in vitro and in vivo methods, we investigated skin penetration and release from psoralen-loaded ethosomes in comparison with an ethanol tincture. In in vitro studies, the use of ethosomes was associated with a 6.56-fold greater skin deposition of psoralen than that achieved with the use of the tincture. In vivo skin microdialysis showed that the peak concentration and area under the curve of psoralen from ethosomes were approximately 3.37 and 2.34 times higher, respectively, than those of psoralen from the tincture. Moreover, it revealed that the percutaneous permeability of ethosomes was greater when applied to the abdomen than when applied to the chest or scapulas. Enhanced permeation and skin deposition of psoralen delivered by ethosomes may help reduce toxicity and improve the efficacy of long-term psoralen treatment." }, { "instance_id": "R150058xR74026", "comparison_id": "R150058", "paper_id": "R74026", "text": "Task 11 at SemEval-2021: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. \u2018the NCG task\u2019) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article\u2019s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted." 
}, { "instance_id": "R150058xR147722", "comparison_id": "R150058", "paper_id": "R147722", "text": "TSE-NER: An Iterative Approach for Long-Tail Entity Extraction in Scientific Publications Named Entity Recognition and Typing (NER/NET) is a challenging task, especially with long-tail entities such as the ones found in scientific publications. These entities (e.g. \u201cWebKB\u201d,\u201cStatSnowball\u201d) are rare, often relevant only in specific knowledge domains, yet important for retrieval and exploration purposes. State-of-the-art NER approaches employ supervised machine learning models, trained on expensive type-labeled data laboriously produced by human annotators. A common workaround is the generation of labeled training data from knowledge bases; this approach is not suitable for long-tail entity types that are, by definition, scarcely represented in KBs. This paper presents an iterative approach for training NER and NET classifiers in scientific publications that relies on minimal human input, namely a small seed set of instances for the targeted entity type. We introduce different strategies for training data extraction, semantic expansion, and result entity filtering. We evaluate our approach on scientific publications, focusing on the long-tail entities types Datasets, Methods in computer science publications, and Proteins in biomedical publications." }, { "instance_id": "R150058xR147638", "comparison_id": "R150058", "paper_id": "R147638", "text": "Identifying used methods and datasets in scientific publications Although it has become common to assess publications and researchers by means of their citation count (e.g., using the h-index), measuring the impact of scientific methods and datasets (e.g., using an h-index for datasets) has been performed only to a limited extent. This is not surprising because the usage information of methods and datasets is typically not explicitly provided by the authors, but hidden in a publication\u2019s text. In this paper, we propose an approach to identifying methods and datasets in texts that have actually been used by the authors. Our approach first recognizes datasets and methods in the text by means of a domain-specific named entity recognition method with minimal human interaction. It then classifies these mentions into used vs. non-used based on the textual contexts. The obtained labels are aggregated on the document level and integrated into the Microsoft Academic Knowledge Graph modeling publications\u2019 metadata. In experiments based on the Microsoft Academic Graph, we show that both method and dataset mentions can be identified and correctly classified with respect to their usage to a high degree. Overall, our approach facilitates method and dataset recommendation, enhanced paper recommendation, and scientific impact quantification. It can be extended in such a way that it can identify mentions of any entity type (e.g., task)." }, { "instance_id": "R150058xR147657", "comparison_id": "R150058", "paper_id": "R147657", "text": "Concept-based analysis of scientific literature This paper studies the importance of identifying and categorizing scientific concepts as a way to achieve a deeper understanding of the research literature of a scientific community. To reach this goal, we propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts. 
Our evaluation of the algorithms against gold standards shows significant improvement over state-of-the-art results. More importantly, we analyze the computational linguistic literature using the proposed algorithms and show four different ways to summarize and understand the research community which are difficult to obtain using existing techniques." }, { "instance_id": "R150058xR146357", "comparison_id": "R150058", "paper_id": "R146357", "text": "The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable." }, { "instance_id": "R150570xR148083", "comparison_id": "R150570", "paper_id": "R148083", "text": "Building a semantically annotated corpus of clinical texts In this paper, we describe the construction of a semantically annotated corpus of clinical texts for use in the development and evaluation of systems for automatically extracting clinically significant information from the textual component of patient records. The paper details the sampling of textual material from a collection of 20,000 cancer patient records, the development of a semantic annotation scheme, the annotation methodology, the distribution of annotations in the final corpus, and the use of the corpus for development of an adaptive information extraction system. The resulting corpus is the most richly semantically annotated resource for clinical text processing built to date, whose value has been demonstrated through its use in developing an effective information extraction system. The detailed presentation of our corpus construction and annotation methodology will be of value to others seeking to build high-quality semantically annotated corpora in biomedical domains." 
}, { "instance_id": "R150570xR148112", "comparison_id": "R150570", "paper_id": "R148112", "text": "2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text Abstract The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate." }, { "instance_id": "R150570xR148576", "comparison_id": "R150570", "paper_id": "R148576", "text": "Exploiting syntax when detecting protein names in text This paper presents work on a method to detect names of proteins in running text. Our system - Yapex - uses a combination of lexical and syntactic knowledge, heuristic filters and a local dynamic dictionary. The syntactic information given by a general-purpose off-the-shelf parser supports the correct identification of the boundaries of protein names, and the local dynamic dictionary finds protein names in positions incompletely analysed by the parser. We present the different steps involved in our approach to protein tagging, and show how combinations of them influence recall and precision. We evaluate the system on a corpus of MEDLINE abstracts and compare it with the KeX system (Fukuda et al., 1998) along four different notions of correctness." }, { "instance_id": "R150570xR148450", "comparison_id": "R150570", "paper_id": "R148450", "text": "The ITI TXM corpora: Tissue expressions and protein-protein interactions We report on two large corpora of semantically annotated full-text biomedical research papers created in order to devel op information extraction ( IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fr agment- or mutant-protein relations). While one corpus targets protein-protein interactions ( PPIs), the focus of other is on tissue expressions ( TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA)." }, { "instance_id": "R150570xR148501", "comparison_id": "R150570", "paper_id": "R148501", "text": "Integrated Annotation for Biomedical Information Extraction We describe an approach to two areas of biomedical information extraction, drug development and cancer genomics. 
We have developed a framework which includes corpus annotation integrated at multiple levels: a Treebank containing syntactic structure, a Propbank containing predicate-argument structure, and annotation of entities and relations among the entities. Crucial to this approach is the proper characterization of entities as relation components, which allows the integration of the entity annotation with the syntactic structure while retaining the capacity to annotate and extract more complex events. We are training statistical taggers using this annotation for such extraction as well as using them for improving the annotation process." }, { "instance_id": "R25093xR25085", "comparison_id": "R25093", "paper_id": "R25085", "text": "Recognising personality traits using facebook status updates Gaining insight in a web user's personality is very valuable for applications that rely on personalisation, such as recommender systems and personalised advertising. In this paper we explore the use of machine learning techniques for inferring a user's personality traits from their Facebook status updates. Even with a small set of training examples we can outperform the majority class baseline algorithm. Furthermore, the results are improved by adding training examples from another source. This is an interesting result because it indicates that personality trait recognition generalises across social media platforms." }, { "instance_id": "R25093xR25091", "comparison_id": "R25093", "paper_id": "R25091", "text": "Towards automated personality identification using speech acts The way people communicate \u2014 be it verbally, visually, or via text\u2013 is indicative of personality traits. In social media the concept of the status update is used for individuals to communicate to their social networks in an always-on fashion. In doing so individuals utilize various kinds of speech acts that, while primarily communicating their content, also leave traces of their personality dimensions behind. We human-coded a set of Facebook status updates from the myPersonality dataset in terms of speech acts label and then experimented with surface level linguistic features including lexical, syntactic, and simple sentiment detection to automatically label status updates as their appropriate speech act. We apply supervised learning to the dataset and using our features are able to classify with high accuracy two dominant kinds of acts that have been found to occur in social media. At the same time we used the coded data to perform a regression analysis to determine which speech acts are significant of certain personality dimensions. The implications of our work allow for automatic large-scale personality identification through social media status updates." }, { "instance_id": "R25093xR25070", "comparison_id": "R25093", "paper_id": "R25070", "text": "Leveraging online social networks and external data sources to predict personality Over the past decade, people have been expressing more and more of their personalities online. Online social networks such as Facebook.com capture much of individuals' personalities through their published interests, attributes and social interactions. Knowledge of an individual's personality can be of wide utility, either for social research, targeted marketing or a variety of other fields A key problem to predicting and utilizing personality information is the myriad of ways it is expressed across various people, locations and cultures. 
Similarly, a model predicting personality based on online data which cannot be extrapolated to \"real world\" situations is of limited utility for researchers. This paper presents initial work done on generating a probabilistic model of personality which uses representations of people's connections to other people, places, cultures, and ideas, as expressed through Facebook. To this end, personality was predicted using a machine learning method known as a Bayesian Network. The model was trained using Facebook data combined with external data sources to allow further inference. The results of this paper present one predictive model of personality that this project has produced. This model demonstrates the potential of this methodology in two ways: First, it is able to explain up to 56% of all variation in a personality trait from a sample of 615 individuals. Second, it is able to clearly present how this variability is explained through findings such as how to determine how agreeable a man is based on his age, number of Facebook wall posts, and his willingness to disclose his preference for music made by Lady Gaga." }, { "instance_id": "R25093xR25068", "comparison_id": "R25093", "paper_id": "R25068", "text": "Our Twitter Profiles, Our Selves: Predicting Personality with Twitter Psychological personality has been shown to affect a variety of aspects: preferences for interaction styles in the digital world and for music genres, for example. Consequently, the design of personalized user interfaces and music recommender systems might benefit from understanding the relationship between personality and use of social media. Since there has not been a study between personality and use of Twitter at large, we set out to analyze the relationship between personality and different types of Twitter users, including popular users and influentials. For 335 users, we gather personality data, analyze it, and find that both popular users and influentials are extroverts and emotionally stable (low in the trait of Neuroticism). Interestingly, we also find that popular users are 'imaginative' (high in Openness), while influentials tend to be 'organized' (high in Conscientiousness). We then show a way of accurately predicting a user's personality simply based on three counts publicly available on profiles: following, followers, and listed counts. Knowing these three quantities about an active user, one can predict the user's five personality traits with a root-mean-squared error below 0.88 on a [1,5] scale. Based on these promising results, we argue that being able to predict user personality goes well beyond our initial goal of informing the design of new personalized applications as it, for example, expands current studies on privacy in social media."
Semi-structured interviews based on the framework showed it to be useful in: a) revealing differences in how stakeholders view participation and design, b) developing a personal frame of participation, c) exploring the future of participatory practices, and d) suggesting actions to resolve specific challenges or contradictions in participation at a broader level. The paper discusses the need to move away from considering PD as a practice claimed by designers towards a more open dialogue between all stakeholders to collectively redefine \"Participation and Design\" for social change." }, { "instance_id": "R25115xR25113", "comparison_id": "R25115", "paper_id": "R25113", "text": "An insider perspective on community gains: A subjective account of a Namibian rural communities' perception of a long-term participatory design project Community-based co-design takes place within a communal value system and opens up a new debate around the principles of participation and its benefits within HCI4D and ICTD projects. This study contributes to a current gap of expression of participants' gains, especially from an indigenous and marginalized rural communities' perspective. We have collected community viewpoints concurrently over the past five years of our longitudinal research project in rural Namibia. A number of themes have emerged out of the data as extracted by our native researcher, such as the special importance of learning technology, appreciation of the common project goal, the intrinsic pleasure of participation, frustrations about exclusions and other concerns, as well as immediate rewards and expectations of gaining resources. We acknowledge our own bias in the curation of viewpoints, and incompleteness of subjectivities while embedding our discussion within a local contextual interpretation. Through our learning from the communities we argue for a shift in perspective that acknowledges local epistemologies in HCI and participatory design and research. We suggest considering harmony and humanness as the primary values guiding community-based interactions. We discuss several challenges in the collaboration and co-creation of new knowledge at the frontier of multiple cultural, linguistic, research and design paradigms. In the absence of generalized guidelines we suggest pursuing local workability while producing trans-contextual credibility. Subjective viewpoints of participants in a participatory design project. Local rural researcher's reflection and data collection presented. Importance of learning, project aim alignment and intrinsic pleasure in participation. Frustrations about exclusions, project continuity and resource scarcity. Self-consciousness and pride as main impact of participation." }, { "instance_id": "R25115xR25105", "comparison_id": "R25115", "paper_id": "R25105", "text": "User gains and PD aims We present a study of user gains from their participation in a participatory design (PD) project at Danish primary schools. We explore user experiences and reported gains from the project in relation to the multiple aims of PD, based on a series of interviews with pupils, teachers, administrators, and consultants, conducted approximately three years after the end of the project. In particular, we reflect on how the PD initiatives were sustained after the project had ended. We propose that not only are ideas and initiatives disseminated directly within the organization, but also through networked relationships among people, stretching across organizations and project groups.
Moreover, we demonstrate how users' gains related to their acting within these networks. These results suggest a heightened focus on the indirect and distributed channels through which the long-term impact of PD emerges." }, { "instance_id": "R25160xR25131", "comparison_id": "R25160", "paper_id": "R25131", "text": "Evaluation of Six Night Vision Enhancement Systems: Qualitative and Quantitative Support for Intelligent Image Processing Objective: An evaluation study was conducted to answer the question of which system properties of night vision enhancement systems (NVESs) provide a benefit for drivers without increasing their workload. Background: Different infrared sensor, image processing, and display technologies can be integrated into an NVES to support nighttime driving. Because each of these components has its specific strengths and weaknesses, careful testing is required to determine their best combination. Method: Six prototypical systems were assessed in two steps. First, a heuristic evaluation with experts from ergonomics, perception, and traffic psychology was conducted. It produced a broad overview of possible effects of system properties on driving. Based on these results, an experimental field study with 15 experienced drivers was performed. Criteria used to evaluate the development potential of the six prototypes were the usability dimensions of effectiveness, efficiency, and user satisfaction (International Organization for Standardization, 1998). Results: Results showed that the intelligibility of information, the easiness with which obstacles could be located in the environment, and the position of the display presenting the output of the system were of crucial importance for the usability of the NVES and its acceptance. Conclusion: All relevant requirements are met best by NVESs that are positioned at an unobtrusive location and are equipped with functions for the automatic identification of objects and for event-based warnings. Application: These design recommendations and the presented approach to evaluate the systems can be directly incorporated into the development process of future NVESs." }, { "instance_id": "R25160xR25126", "comparison_id": "R25160", "paper_id": "R25126", "text": "Ambient light based interaction concept for an integrative driver assistance system -a driving simulator study For today\u2019s vehicles several advanced driver assistance systems are on the market supporting the driver in critical driving situations or automating parts of the driving tasks. In the future there will be even more. Currently, those assistance systems do not use a common and consistent interaction strategy to communicate with the driver. The goal of the present study is to present and to evaluate a concept using ambient light for presenting information of different assistance systems in an integrated way. Research on visual peripheral warnings showed positive effects on driver reaction times in demanding situations. This paper presents results of a driving simulator experiment, in which an ambient light concept using peripheral visual perception were tested. A 360\u00b0 LED stripe was installed around the driver in a fixed-based driving simulator providing interaction signals via peripheral vision. The developed ambient light display should support the driver in different driving situations by a consistent colour-coded interaction design. 
In a between-group design, 41 participants (21 with and 20 without ambient light) drove eight different highway scenarios to test the display. Results of the ambient light interaction design regarding driver reactions, as well as the subjective evaluation of the comprehensibility of the ambient light concept, are reported and discussed." }, { "instance_id": "R25160xR25149", "comparison_id": "R25160", "paper_id": "R25149", "text": "Simple gaze-contingent cues guide eye movements in a realistic driving simulator Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself." }, { "instance_id": "R25160xR25139", "comparison_id": "R25160", "paper_id": "R25139", "text": "LED-A-pillars As the chassis of cars become more robust, the pillars of a car become broader in order to increase driver safety. As A-pillars grow wider, so too does their negative effect on the panoramic view of the driver, and with a smaller field of vision, the risk of overlooking a pedestrian or an object outside the car increases. In order to deal with A-pillar blind spots, this project examined how distances and directions of possible obstacles can be displayed and how different visualization types with LED strips on the A-pillars can affect drivers' perception. The result of this study shows that such a prototype improves the panoramic view for car drivers, resulting in higher safety for road users." }, { "instance_id": "R25160xR25156", "comparison_id": "R25160", "paper_id": "R25156", "text": "heart rate Electric Vehicles (EVs) are an emerging technology and open up an exciting new space for designing in-car interfaces. This technology enhances the driving experience with strong acceleration, regenerative braking and especially a reduced noise level. However, engine vibrations and sound transmit valuable feedback to drivers of conventional cars, e.g. signaling that the engine is running and ready to go. We address this lack of feedback with Heartbeat, a multimodal electric vehicle information system. Heartbeat communicates (1) the state of the electric drive including energy flow and (2) the energy level of the batteries in a natural and experienceable way.
We enhance the underlying Experience Design process by formulating working principles derived from an experience story in order to transport its essence throughout the following design phases. This way, we support the design of a consistent experience and resolve the tension between implementation constraints (e.g., space) and the persistence of the underlying story while building prototypes and integrating them into a technical environment (e.g., a dashboard)." }, { "instance_id": "R25160xR25124", "comparison_id": "R25160", "paper_id": "R25124", "text": "\"Should I stay or should I go?\" Ambient lighting systems have been introduced by several manufacturers to increase the driver's comfort. Also, some works proposed warning systems based on light displays. Expanding on those works, we are searching for designs of Lumicons (i.e. light patterns) that can not only warn drivers in critical situations but also keep them informed in a non-distracting way. We present first ideas for Lumicons for a given scenario coming from a participatory design process." }, { "instance_id": "R25201xR25185", "comparison_id": "R25201", "paper_id": "R25185", "text": "Effect of Euler Number as a Feature in Gender Recognition System from Offline Handwritten Signature Using Neural Networks Recent growth of technology has also increased identification insecurity. A signature is a unique feature which differs from person to person, and each person can be identified using their own handwritten signature. Gender identification is one of the key features in human identification. In this paper, a feature-based gender detection method has been proposed. The proposed framework takes a handwritten signature as input. Afterwards, several features are extracted from those images. The extracted features and their values are stored as data, which is further classified using a Back Propagation Neural Network (BPNN). Gender classification is done using BPNN, which is one of the most popular classifiers. The proposed system is broken into two parts. In the first part, several features such as roundness, skewness, kurtosis, mean, standard deviation, area, Euler number, distribution density of black pixels, entropy, equi-diameter, connected component (cc) and perimeter were taken as features. The obtained features are then divided into two categories. In the first category, the feature set contains the Euler number, whereas in the second category it excludes it. BPNN is used to classify both types of feature sets to recognize the gender. Our study reports an improvement of 4.7% in the gender classification system by including the Euler number as a feature." }, { "instance_id": "R25201xR25172", "comparison_id": "R25201", "paper_id": "R25172", "text": "Off-line Signature Verification Using Flexible Grid Features and Classifier Fusion In this paper we present two novel off-line signature verification systems, constructed by combining an ensemble of eight base classifiers. Both score-based and decision-based fusion strategies are investigated. Each base classifier utilises the novel flexible grid-based feature extraction technique proposed in this paper. We show that the flexible grid-based approach consistently outperforms the existing rigid grid-based approach. We also show that the combined classifiers outperform the most proficient base classifier.
When evaluated on Dolfing\u2019s data set, a signature database containing 1530 genuine signatures and 3000 amateur skilled forgeries, we show that the combined classifiers presented in this paper outperform existing systems that were also evaluated on this data set." }, { "instance_id": "R25201xR25169", "comparison_id": "R25201", "paper_id": "R25169", "text": "SVM-DSmT Combination for Off-Line Signature Verification We propose in this work a signature verification system based on decision combination of off-line signatures for managing conflict provided by the SVM classifiers. The system is basically divided into three modules: i) Radon Transform-SVM, ii) Ridgelet Transform-SVM and iii) PCR5 combination rule based on the generalized belief functions of Dezert-Smarandache theory. The proposed framework allows combining the normalized SVM outputs and uses an estimation technique based on the dissonant model of Appriou to compute the belief assignments. Decision making is performed through likelihood ratio. Experiments are conducted on the well known CEDAR database using false rejection and false acceptance criteria. The obtained results show that the proposed combination framework improves the verification accuracy compared to individual SVM classifiers." }, { "instance_id": "R25201xR25188", "comparison_id": "R25201", "paper_id": "R25188", "text": "Offline Signature Verification Using Shape Dissimilarities Offline signature verification is a challenging and important form of biometric identification. Other biometric measures do not have as much variability as signatures, which poses a difficult problem in signature verification. In this paper, we explore a novel approach for verification of signatures based on curve matching using a shape descriptor and Euclidean distance. In our approach, the measurement of similarities proceeds by 1) finding correspondences between signatures, where we attach a shape descriptor (shape context) with Euclidean distance between the sample points of one signature and the sample points of the other signature for better results, 2) estimating aligning transforms using these correspondences between signatures, and 3) classifying the signatures using linear discriminant analysis and measures of shape dissimilarity between signatures based on shape context distance, bending energy, registration residual, and anisotropic scaling." }, { "instance_id": "R25223xR25219", "comparison_id": "R25223", "paper_id": "R25219", "text": "On Replica Placement for QoS-Aware Content Distribution The rapid growth of time-critical information services and business-oriented applications is making quality of service (QoS) support increasingly important in content distribution. This paper investigates the problem of placing object replicas (e.g., web pages and images) to meet the QoS requirements of clients with the objective of minimizing the replication cost. We consider two classes of service models: replica-aware service and replica-blind service. In the replica-aware model, the servers are aware of the locations of replicas and can therefore direct requests to the nearest replica. We show that the QoS-aware placement problem for replica-aware services is NP-complete. Several heuristic algorithms for efficient computation of suboptimal solutions are proposed and experimentally evaluated. In the replica-blind model, the servers are not aware of the locations of replicas or even their existence. As a result, each replica only serves the requests flowing through it under some given routing strategy.
We show that there exist polynomial optimal solutions to the QoS-aware placement problem for replica-blind services. Efficient algorithms are proposed to compute the optimal locations of replicas under different cost models." }, { "instance_id": "R25223xR25207", "comparison_id": "R25223", "paper_id": "R25207", "text": "Replica Placement Algorithms in Content Distribution Networks The replica placement problems (RPPs) in the Content Distribution Networks have been widely studied. In this paper, we propose an optimization model for the RPPs and design efficient algorithms to minimize the total cost of the network. The algorithms include three parts: replication algorithm preprocess, constraint p-median model and algorithm of solving constraint p-median models. In the simulation, we compare our algorithms to other heuristic methods numerically. The results show that our algorithms perform better with less cost." }, { "instance_id": "R25223xR25217", "comparison_id": "R25223", "paper_id": "R25217", "text": "A Distributed Algorithm for the Replica Placement Problem Caching and replication of popular data objects contribute significantly to the reduction of the network bandwidth usage and the overall access time to data. Our focus is to improve the efficiency of object replication within a given distributed replication group. Such a group consists of servers that dedicate certain amount of memory for replicating objects requested by their clients. The content replication problem we are solving is defined as follows: Given the request rates for the objects and the server capacities, find the replica allocation that minimizes the access time over all servers and objects. We design a distributed approximation algorithm that solves this problem and prove that it provides a 2-approximation solution. We also show that the communication and computational complexity of the algorithm is polynomial with respect to the number of servers, the number of objects, and the sum of the capacities of all servers. Finally, we perform simulation experiments to investigate the performance of our algorithm. The experiments show that our algorithm outperforms the best existing distributed algorithm that solves the replica placement problem." }, { "instance_id": "R25255xR25235", "comparison_id": "R25255", "paper_id": "R25235", "text": "Automatic Lexicon Construction for Arabic Sentiment Analysis Sentiment Analysis (SA) is the process of determining the sentiment of a text written in a natural language to be positive, negative or neutral. It is one of the most interesting subfields of natural language processing (NLP) and Web mining due to its diverse applications and the challenges associated with applying it on the massive amounts of textual data available online (especially, on social networks). Most of the current works on SA focus on the English language and follow one of two main approaches, (corpus-based and lexicon-based) or a hybrid of them. This work focuses on a less studied aspect of SA, which is lexicon-based SA for the Arabic language. In addition to experimenting and comparing three different lexicon construction techniques, an Arabic SA tool is designed and implemented to effectively take advantage of the constructed lexicons. The proposed SA tool possesses many novel features such as the way negation and intensification are handled. The experimental results show encouraging outcomes with 74.6% accuracy in addition to revealing new insights and guidelines that could direct the future research efforts." 
}, { "instance_id": "R25255xR25237", "comparison_id": "R25255", "paper_id": "R25237", "text": "A Large Scale Arabic Sentiment Lexicon for Arabic Opinion Mining Most opinion mining methods in English rely successfully on sentiment lexicons, such as English SentiWordnet (ESWN). While there have been efforts towards building Arabic sentiment lexicons, they suffer from many deficiencies: limited size, unclear usability plan given Arabic\u2019s rich morphology, or nonavailability publicly. In this paper, we address all of these issues and produce the first publicly available large scale Standard Arabic sentiment lexicon (ArSenL) using a combination of existing resources: ESWN, Arabic WordNet, and the Standard Arabic Morphological Analyzer (SAMA). We compare and combine two methods of constructing this lexicon with an eye on insights for Arabic dialects and other low resource languages. We also present an extrinsic evaluation in terms of subjectivity and sentiment analysis." }, { "instance_id": "R25255xR25239", "comparison_id": "R25255", "paper_id": "R25239", "text": "Building an Arabic sentiment lexicon using semi-supervised learning Sentiment analysis is the process of determining a predefined sentiment from text written in a natural language with respect to the entity to which it is referring. A number of lexical resources are available to facilitate this task in English. One such resource is the SentiWordNet, which assigns sentiment scores to words found in the English WordNet. In this paper, we present an Arabic sentiment lexicon that assigns sentiment scores to the words found in the Arabic WordNet. Starting from a small seed list of positive and negative words, we used semi-supervised learning to propagate the scores in the Arabic WordNet by exploiting the synset relations. Our algorithm assigned a positive sentiment score to more than 800, a negative score to more than 600 and a neutral score to more than 6000 words in the Arabic WordNet. The lexicon was evaluated by incorporating it into a machine learning-based classifier. The experiments were conducted on several Arabic sentiment corpora, and we were able to achieve a 96% classification accuracy." }, { "instance_id": "R25255xR25253", "comparison_id": "R25255", "paper_id": "R25253", "text": "AraSenTi: Large-Scale Twitter-Specific Arabic Sentiment Lexicons Sentiment Analysis (SA) is an active research area nowadays due to the tremendous interest in aggregating and evaluating opinions being disseminated by users on the Web. SA of English has been thoroughly researched; however research on SA of Arabic has just flourished. Twitter is considered a powerful tool for disseminating information and a rich resource for opinionated text containing views on many different topics. In this paper we attempt to bridge a gap in Arabic SA of Twitter which is the lack of sentiment lexicons that are tailored for the informal language of Twitter. We generate two lexicons extracted from a large dataset of tweets using two approaches and evaluate their use in a simple lexicon based method. The evaluation is performed on internal and external datasets. The performance of these automatically generated lexicons was very promising, albeit the simple method used for classification. 
The best F-score obtained was 89.58% on the internal dataset and 63.1-64.7% on the external dataset." }, { "instance_id": "R25358xR25304", "comparison_id": "R25358", "paper_id": "R25304", "text": "Self-adaptive semantic web service matching method Web service has become a major software paradigm and computing resource, while how to implement web service matching has also become a key issue. In this paper, we present a self-adaptive semantic web service matching method, which improves the precision and recall of service discovery. In this method, the requirement document and the OWL-S service profile ontology are transformed into ontology trees, respectively. Conception similarity, attribute similarity and structure similarity of corresponding nodes in the trees are calculated through a taxonomic and hierarchical methodology. Then a series of constraints are defined according to the relationship between conception similarity and structure similarity, to get the corresponding restructuring rules. By restructuring the requirement ontology tree in a self-adaptive way, we achieve more accurate destination service collections. In the end, we propose a matching algorithm for semantic web services and implement the prototype system OWLS-CPS. We prove the feasibility and effectiveness through evaluation and comparison to OWLS-M4." }, { "instance_id": "R25358xR25296", "comparison_id": "R25358", "paper_id": "R25296", "text": "Approximate Structure-Preserving Semantic Matching Typical ontology matching applications, such as ontology integration, focus on the computation of correspondences holding between the nodes of two graph-like structures, e.g., between concepts in two ontologies. However, there are applications, such as web service integration, where we may need to establish whether full graph structures correspond to one another globally, preserving certain structural properties of the graphs being considered. The goal of this paper is to provide a new matching operation, called structure preserving matching. This operation takes two graph-like structures and produces a set of correspondences between those nodes of the graphs that correspond semantically to one another, (i) still preserving a set of structural properties of the graphs being matched, (ii) only in the case that the graphs are globally similar to one another. We present a novel approximate structure preserving matching approach that implements this operation. It is based on a formal theory of abstraction and on a tree edit distance measure. We have evaluated our solution with encouraging results." }, { "instance_id": "R25358xR25344", "comparison_id": "R25358", "paper_id": "R25344", "text": "A new service matching definition and algorithm with SAWSDL Semantic Web service (or briefly \u201cSWS\u201d) matching is a potential solution for automatic service discovery in various applications such as dynamic and automatic Web service composition. A number of approaches for SWS matching have been proposed. Most of them solely relied on concept subsumption relationships to improve the precision of Web service matching results from lexical-based methods. However, such approaches suffer from several limitations such as not providing a fine-grained service matching degree, not supporting many-to-many matching between operation parameters, and not deriving data mappings between them. In this paper, we introduce a novel method for SWS matching based on SAWSDL, a semantic annotation specification for WSDL.
We first define a fine-grained service matching result and then develop a corresponding service matching algorithm which can facilitate many-to-many matching of operation parameters as well as deriving concrete data mappings between them. Semantic techniques are used to infer and match data elements of operation parameters between a request and a service. The proposed algorithm was implemented and applied in the SeTEF \u2014 an automatic task-oriented business process execution system." }, { "instance_id": "R25358xR25323", "comparison_id": "R25358", "paper_id": "R25323", "text": "Semantic Web Service Selection Based on Business Offering Semantic Web service discovery finds a match between service requirement and service advertisements based on the semantic description. The discovery mechanism does not consider quality and business offers of advertised Web services. In this paper, we propose ontology based semantic Web service architecture for selection which recommends the best match for the requester. We design semantic broker which allows providers to advertise their services by creating OWL-S service profile consisting of functional, quality and business offers. The broker computes and records information for matchmaking during service publishing to improve the performance. The broker reads requirements from the requester and finds the best (profitable) Web service by matching functionality, capability, quality and business offers." }, { "instance_id": "R25358xR25302", "comparison_id": "R25358", "paper_id": "R25302", "text": "On the functional quality of service (FQoS) to discover and compose interoperable web services Despite its prevalence, the service-oriented architecture (SOA) still has an imperative challenge to achieve interoperability within and across enterprise applications. Minimal conditions for interoperability are: (1) to discover and plug in proper services for integration and (2) to support for seamless data exchanges between component services. A similarity-based approximate matching is a practical approach for both, in that the service discovery relies on functional matches between a query and service descriptions, and the seamless data exchange is granted by mapping information from a service to others. To these ends, this paper comprehensively investigates functional attributes of web services and their manipulation, and particularly highlights an information compatibility and mapping analysis. The functional quality of service (FQoS) allows service discovery and selection to step forward. Simulation results show that the present FQoS metrics are effective in service discovery." }, { "instance_id": "R25358xR25300", "comparison_id": "R25358", "paper_id": "R25300", "text": "Web services discovery and rank: An information retrieval approach With the rapid development of e-commerce over the internet, web services have attracted much attention in recent years. Nowadays, enterprises are able to outsource their internal business processes as services and make them accessible via the Web. They can then dynamically combine individual services to provide new value-added services. A main problem that remains is how to discover desired web services. In this paper, we propose a novel IR-Style mechanism for discovering and ranking web services automatically, given a textual description of desired services. 
In particular, we introduce the notion of preference degree for web services and then we define service relevance and service importance as two desired properties for measuring the preference degree. Furthermore, various algorithms are given for computing the relevance and importance of services, respectively. At the same time, we also develop a new schema tree matching algorithm to measure service connectivity, which is a novel metric to evaluate the importance of services. Experimental results show the proposed IR-style search strategy is efficient and practical." }, { "instance_id": "R25358xR25272", "comparison_id": "R25358", "paper_id": "R25272", "text": "Flexible Semantic-Based Service Matchmaking and Discovery Automated techniques and tools are required to effectively locate services that fulfill a given user request in a mobility context. To this purpose, the use of semantic descriptions of services has been widely motivated and recommended for automated service discovery under highly dynamic and context-dependent requirements. Our aim in this work is to propose an ontology-based hybrid approach where different kinds of matchmaking strategies are combined together to provide an adaptive, flexible and efficient service discovery environment. The approach, in particular, exploits the semantic knowledge about the business domain provided by a domain ontology underlying service descriptions, and the semantic organization of services in a service ontology, at different levels of abstraction." }, { "instance_id": "R25358xR25341", "comparison_id": "R25358", "paper_id": "R25341", "text": "Discovering Services during Service-Based System Design Using UML Recently, there has been a proliferation of service-based systems, i.e., software systems that are composed of autonomous services but can also use software code. In order to support the development of these systems, it is necessary to have new methods, processes, and tools. In this paper, we describe a UML-based framework to assist with the development of service-based systems. The framework adopts an iterative process in which software services that can provide functional and nonfunctional characteristics of a system being developed are discovered, and the identified services are used to reformulate the design models of the system. The framework uses a query language to represent structural, behavioral, and quality characteristics of services to be identified, and a query processor to match the queries against service registries. The matching process is based on distance measurements between the queries and service specifications. A prototype tool has been implemented. The work has been evaluated in terms of recall, precision, and performance measurements." }, { "instance_id": "R25358xR25288", "comparison_id": "R25358", "paper_id": "R25288", "text": "A QoS Broker Based Architecture for Dynamic Web Service Selection The increasing number of Web services over the Web makes the requester to use tools to search for suitable Web services available throughout the globe. UDDI is the first step towards meeting these demands. However the requester's demand may include not only functional aspects of Web services but also nonfunctional aspects like quality of service (QoS). There is a need to select the most suitable (qualitatively optimal) Web service based on the requester's QoS requirements and preferences. In this paper we explore the different types of requester's QoS requirements (demands) with illustrations. 
We propose the QoS broker based architecture for dynamic Web service selection which facilitates the requester to specify his/her QoS requirements along with functional requirements. The paper presents the Web service selection mechanism which selects the best (most suitable) Web service based on the requester's functional and quality requirements." }, { "instance_id": "R25358xR25294", "comparison_id": "R25358", "paper_id": "R25294", "text": "A hybrid approach to semantic web services matchmaking Deploying the semantics embedded in web services is a mandatory step in the automation of discovery, invocation and composition activities. The semantic annotation is the ''add-on'' to cope with the actual interoperability limitations and to assure a valid support to the interpretation of services capabilities. Nevertheless many issues have to be reached to support semantics in the web services and to guarantee accurate functionality descriptions. Early efforts address automatic matchmaking tasks, in order to find eligible advertised services which appropriately meet the consumer's demand. In the most of approaches, this activity is often entrusted to software agents, able to drive reasoning/planning activities, to discover the required service which can be single or composed of more atomic services. This paper presents a hybrid framework which achieves a fuzzy matchmaking of semantic web services. Central role is entrusted to task-oriented agents that, given a service request, interact to discover approximate reply, when no exact match occurs among the available web services. The matchmaking activity exploits a mathematical model, the fuzzy multiset to suitably represent the multi-granular information, enclosed into an OWLS-based description of a semantic web service." }, { "instance_id": "R25358xR25311", "comparison_id": "R25358", "paper_id": "R25311", "text": "Web Service Matching by Ontology Instance Categorization Identifying similar Web services is becoming increasingly important to ensure the success of dynamically integrated Web-service-based applications. We propose a categorization-based scheme to match equivalent Web services that can operate on heterogeneous domain ontologies. Given the upper ontology for services and domain ontologies, our service matching scheme determines whether a given Web service is a possible replacement using a categorization utility called OnExCat. OnExCat categorizes ontology instances extracted from the service descriptions by a probabilistic categorization measurement that incorporates the concept relationships in the upper ontology for services. In addition to tackling the issue of heterogeneity of domain ontology in service descriptions using categorization, our matching scheme also adapts itself by enhancing the known ontologies with newly discovered ontology instances. Experiments on service matching using our matching scheme based on the OnExCat utility have been performed with promising results, a correct matching rate of over 85%." }, { "instance_id": "R25400xR25394", "comparison_id": "R25400", "paper_id": "R25394", "text": "VbTrace: using view-based and model-driven development to support traceability in process-driven SOAs In process-driven, service-oriented architectures, there are a number of important factors that hinder the traceability between design and implementation artifacts. 
First of all, there are no explicit links between process design and implementation languages not only due to the differences of syntax and semantics but also the differences of granularity. The second factor is the complexity caused by tangled process concerns that multiplies the difficulty of analyzing and understanding the trace dependencies. Finally, there is a lack of adequate tool support for establishing and maintaining the trace dependencies between process designs and implementations. We present in this article a view-based, model-driven traceability approach that tackles these challenges. Our approach supports (semi-)automatically eliciting and (semi-)formalizing trace dependencies among process development artifacts at different levels of granularity and abstraction. A proof-of-concept tool support has been realized, and its functionality is illustrated via an industrial case study." }, { "instance_id": "R25400xR25388", "comparison_id": "R25400", "paper_id": "R25388", "text": "Continuous and automated evolution of architecture-to-implementation traceability links Abstract A traditional obstacle in the use of multiple representations is the need to maintain traceability among the representations in the face of evolution. The introduction of software architecture, and architecture-based development, has brought this need to architectural descriptions and corresponding source code. Specifically, the task is to relate versions of architectural elements to versions of source code configuration items, and to update those relations as new versions of the architecture and source code are produced. We present ArchTrace, a new approach that we developed to address this problem. ArchTrace distinguishes itself by continuously updating traceability relations from architectural elements to code elements through a policy-based extensible infrastructure that allows a group of developers to choose a set of traceability management policies that best match their situational needs and/or working styles. We introduce the high-level approach of ArchTrace, discuss its extensible infrastructure, and present our current set of ten pluggable traceability management policies. We conclude with a retrospective analysis of data collected from a twenty month period of development and maintenance of Odyssey, a component-based software development environment comprised of over 50,000 lines of code. This analysis shows that our approach is promising: with respect to the ideal set of traceability links, the policies applied resulted in a precision of 95% and recall of 89%." }, { "instance_id": "R25400xR25391", "comparison_id": "R25400", "paper_id": "R25391", "text": "The molhado hypertext versioning system This paper describes Molhado, a hypertext versioning and software configuration management system that is distinguished from previous systems by its flexible product versioning and structural configuration management model. The model enables a unified versioning framework for atomic and composite software artifacts, and hypermedia structures among them in a fine-grained manner at the logical level. Hypermedia structures are managed separately from documents' contents. Molhado explicitly represents hyperlinks, allowing them to be browsed, visualized, and systematically analyzed. Molhado not only versions complex hypermedia structures (e.g., multi links), but also supports versioning of individual hyperlinks. 
This paper focuses on Molhado's hypertext versioning and its use in the Software Concordance environment to manage the evolution of a software project and hypermedia structures." }, { "instance_id": "R25400xR25374", "comparison_id": "R25400", "paper_id": "R25374", "text": "DSL-based support for semi-automated architectural component model abstraction throughout the software lifecycle In this paper we present an approach for supporting the semi-automated abstraction of architectural models throughout the software lifecycle. It addresses the problem that the design and the implementation of a software system often drift apart as software systems evolve, leading to architectural knowledge evaporation. Our approach provides concepts and tool support for the semi-automatic abstraction of architectural knowledge from implemented systems and keeping the abstracted architectural knowledge up-to-date. In particular, we propose architecture abstraction concepts that are supported through a domain-specific language (DSL). Our main focus is on providing architectural abstraction specifications in the DSL that only need to be changed if the architecture changes, but can tolerate non-architectural changes in the underlying source code. The DSL and its tools support abstracting the source code into UML component models for describing the architecture. Once the software architect has defined an architectural abstraction in the DSL, we can automatically generate UML component models from the source code and check whether the architectural design constraints are fulfilled by the models. Our approach supports full traceability between source code elements and architectural abstractions, and allows software architects to compare different versions of the generated UML component model with each other. We evaluate our research results by studying the evolution of architectural abstractions in different consecutive versions and the execution times for five existing open source systems." }, { "instance_id": "R25447xR25426", "comparison_id": "R25447", "paper_id": "R25426", "text": "An Enhanced Approach for Web Services Discovery with QoS The Quality of Service (QoS) for web services mainly refers to the quality aspect of a web service. QoS for web services is becoming increasingly important to service providers and service requesters due to the increasing use of web services. With web services providing similar functionalities, more emphasis is being placed on how to find the service that best fits the consumer's requirements. In order to find services that best meet their QoS requirements, the service consumers and/or discovery agents need to know both the QoS information for the services and the reliability of this information. In this paper, we first implement the Reputation-Enhanced Web Services Discovery protocol and then enhance it with respect to memory usage, discovery time and response time for a given web service." }, { "instance_id": "R25447xR25420", "comparison_id": "R25447", "paper_id": "R25420", "text": "Hybrid Reliability Model to Enhance the Efficiency of Composite Web Services In this paper, a service-oriented reliability model that calculates the reliability of composite web services is designed. This model is based on the real-time reliabilities of the atomic web services of the composition. This manuscript contradicts the assumption that the reliability of a system is based on an exponential function of the error arrival rate.
The Reliability rate of composite web services is inversely proportional to the workload of the servers. After obtaining the server workload, we dispatch the services into Idle and Active State using doubly stochastic model. This model assumes a Single Composite Service and the atomicity of the services are dispatched in both serial and parallel configurations. A Broker Architecture based on Bounded set technique is designed to deduce the error in the active state by calculating accessibility, availability and error rate. These factors are represented in terms of MTTE, MTBF, MEAR which in turn used to dispatch the services into the server which provides high reliability." }, { "instance_id": "R25447xR25445", "comparison_id": "R25447", "paper_id": "R25445", "text": "Estimating Reliability Of Service-Oriented Systems: A Rule-Based Approach In service-oriented architecture (SOA), the entire software system consists of an interacting group of autonomous services. In order to make such a system reliable, it should inhibit guarantee for basic service, data flow, composition of services, and the complete workflow. This paper discusses the important factor of SOA and their role in the entire SOA system reliability. We focus on the factors that have the strongest effect of SOA system reliability. Based on these factors, we used a fuzzy-based approach to estimate the SOA reliability. The proposed approach is implemented on a database obtained for SOA application, and the results obtained validate and confirm the effectiveness of the proposed fuzzy approach. Furthermore, one can make trade-off analyses between different parameters for reliability." }, { "instance_id": "R25447xR25412", "comparison_id": "R25447", "paper_id": "R25412", "text": "End-to-end reliability of service oriented applications As organizations move towards adopting a service oriented architecture that permits the coexistence of multiple technology environments, an increasing number of applications will be developed through the assembly of existing software components with standard web service interfaces. These components with web service interfaces may be available in-house, or may be supplied or hosted by external vendors. The use of multiple services, possibly utilizing different technologies, providers, locations, and sources, has implications for the end-to-end reliability of these applications to support a business process. Selecting the best service for individual tasks in a business process does not guarantee the most effective overall solution, particularly if criteria other than functional characteristics are employed. This paper examines reliability issues associated with applications developed within service oriented architecture. It develops a measure for deriving end-to-end application reliability, and develops a model to help select appropriate services for tasks in the business process which accommodate the redundant and overlapping functionality of available services and planned redundancy in task support to satisfy the reliability requirement of the resulting application. A genetic algorithm approach is adopted to select promising services to assemble the application using end-to-end reliability as the criterion of interest. An application to a real-world business process illustrates the effectiveness of the approach."
}, { "instance_id": "R25495xR25461", "comparison_id": "R25495", "paper_id": "R25461", "text": "A genetic algorithm approach to customizing a glucose model based on usual therapeutic parameters Type 1 diabetes mellitus is a chronic disease characterized by the increase of glucose in the blood due to a defect in the action or in the production of insulin. For completely autonomous glycemic regulation, a model would be required which permits the future evolution of blood glucose to be estimated. One of the main problems in identifying models is the high variability of glucose profiles both from one patient to another, and in the same patient under not very different conditions. In this paper, we propose a method using an evolutionary algorithm to define the values of the parameters of a minimal model based on standard clinical therapy for a several-day horizon. The algorithm is able to show the trend of blood glucose in a 5-day profile by adjusting the glucose model." }, { "instance_id": "R25495xR25453", "comparison_id": "R25495", "paper_id": "R25453", "text": "Individualized model predictive control for the artificial pancreas: In silico evaluation of closed-loop glucose control Despite the continuous efforts devoted to AP development in the last decades, an artificial pancreas (AP) system is not yet available on the market. One of the major issues involves the inter-subject variability affecting type 1 diabetes (T1D) patients, which makes the definition of a single controller suitable for any patient practically impossible. Moreover, a state-of-the-art, noninvasive, and portable AP system is composed of subcutaneous hardware components, and the control algorithm must be properly designed to reside on a standalone device with limited battery life and computational power. These characteristics make the design of a safe and effective AP system even more challenging, due to the inherent delays affecting the subcutaneous insulin delivery route and the tradeoff between control performance and computational power expenditure. As a result of the model predictive control's (MPC's) ability to address inherent delays of the process under control, it is one of the most promising control approaches in the context of an AP. However, the achievable control performance is strictly related to the prediction capabilities of the model included in the controller, which, in general, can be highly nonlinear. The currently used MPC in clinical experiments relies on a linear average glucose-insulin model designed to represent the average dynamics of a subject with diabetes. This non-individualized MPC is not designed to cope with patient-specific dynamics but is designed to be non-computationally demanding and robust enough to result in a safe and effective control law." }, { "instance_id": "R25495xR25487", "comparison_id": "R25495", "paper_id": "R25487", "text": "Development of a Reinforcement Learning-based Evolutionary Fuzzy Rule-Based System for diabetes diagnosis The early diagnosis of disease is critical to preventing the occurrence of severe complications. Diabetes is a serious health problem. A variety of methods have been developed for diagnosing diabetes. The majority of these methods have been developed in a black-box manner, which cannot be used to explain the inference and diagnosis procedure. Therefore, it is essential to develop methods with high accuracy and interpretability. 
In this study, a Reinforcement Learning-based Evolutionary Fuzzy Rule-Based System (RLEFRBS) is developed for diabetes diagnosis. The proposed model involves the building of a Rule Base (RB) and rule optimization. The initial RB is constructed using numerical data without initial rules; after learning the rules, redundant rules are eliminated based on the confidence measure. Next, redundant conditions in the antecedent parts are pruned to yield simpler rules with higher interpretability. Finally, an appropriate subset of the rules is selected using a Genetic Algorithm (GA), and the RB is constructed. Evolutionary tuning of the membership functions and weight adjusting using Reinforcement Learning (RL) are used to improve the performance of RLEFRBS. Moreover, to deal with uncovered instances, it makes use of an efficient rule stretching method. The performance of RLEFRBS was examined using two common datasets: Pima Indian Diabetes (PID) and BioSat Diabetes Dataset (BDD). The experimental results show that the proposed model provides a more compact, interpretable and accurate RB that can be considered to be a promising alternative for diagnosis of diabetes." }, { "instance_id": "R25495xR25465", "comparison_id": "R25495", "paper_id": "R25465", "text": "Neural network-based model predictive control for type 1 diabetic rats on artificial pancreas system Artificial pancreas system (APS) is a viable option to treat diabetic patients. Researchers, however, have not conclusively determined the best control method for APS. Due to intra-/inter-variability of insulin absorption and action, an individualized algorithm is required to control blood glucose level (BGL) for each patient. To this end, we developed model predictive control (MPC) based on artificial neural networks (ANNs), which combines ANN for BGL prediction based on inputs and MPC for BGL control based on the ANN (NN-MPC). First, we developed a mathematical model for diabetic rats, which was used to identify individual virtual subjects by fitting to empirical data collected through an APS, including BGL data, insulin injection, and food intake. Then, the virtual subjects were used to generate datasets for training ANNs. The NN-MPC determines control actions (insulin injection) based on BGL predicted by the ANN. To evaluate the NN-MPC, we conducted experiments using four virtual subjects under three different scenarios. Overall, the NN-MPC maintained BGL within the normal range about 90% of the time with a mean absolute deviation of 4.7 mg/dl from a desired BGL. Our findings suggest that the NN-MPC can provide subject-specific BGL control in conjunction with a closed-loop APS." }, { "instance_id": "R25495xR25457", "comparison_id": "R25495", "paper_id": "R25457", "text": "Event-Triggered Model Predictive Control for Embedded Artificial Pancreas Systems Objective: The development of artificial pancreas (AP) technology for deployment in low-energy, embedded devices is contingent upon selecting an efficient control algorithm for regulating glucose in people with type 1 diabetes mellitus. In this paper, we aim to lower the energy consumption of the AP by reducing controller updates, that is, the number of times the decision-making algorithm is invoked to compute an appropriate insulin dose. Methods: Physiological insights into glucose management are leveraged to design an event-triggered model predictive controller (MPC) that operates efficiently, without compromising patient safety.
The proposed event-triggered MPC is deployed on a wearable platform. Its robustness to latent hypoglycemia, model mismatch, and meal misinformation is tested, with and without meal announcement, on the full version of the US-FDA accepted UVA/Padova metabolic simulator. Results: The event-based controller remains on for 18 h of 41 h in closed loop with unannounced meals, while maintaining glucose in 70\u2013180 mg/dL for 25 h, compared to 27 h for a standard MPC controller. With meal announcement, the time in 70\u2013180 mg/dL is almost identical, with the controller operating a mere 25.88% of the time in comparison with a standard MPC. Conclusion: A novel control architecture for AP systems enables safe glycemic regulation with reduced processor computations. Significance: Our proposed framework integrated seamlessly with a wide variety of popular MPC variants reported in AP research, customizes tradeoff between glycemic regulation and efficacy according to prior design specifications, and eliminates judicious prior selection of controller sampling times." }, { "instance_id": "R25495xR25449", "comparison_id": "R25495", "paper_id": "R25449", "text": "Robust PBPK/PD-Based Model Predictive Control of Blood Glucose Goal: Automated glucose control (AGC) has not yet reached the point where it can be applied clinically [3]. Challenges are accuracy of subcutaneous (SC) glucose sensors, physiological lag times, and both inter- and intraindividual variability. To address above issues, we developed a novel scheme for MPC that can be applied to AGC. Results: An individualizable generic whole-body physiology-based pharmacokinetic and dynamics (PBPK/PD) model of the glucose, insulin, and glucagon metabolism has been used as the predictive kernel. The high level of mechanistic detail represented by the model takes full advantage of the potential of MPC and may make long-term prediction possible as it captures at least some relevant sources of variability [4]. Robustness against uncertainties was increased by a control cascade relying on proportional-integrative derivative-based offset control. The performance of this AGC scheme was evaluated in silico and retrospectively using data from clinical trials. This analysis revealed that our approach handles sensor noise with a MARD of 10%-14%, and model uncertainties and disturbances. Conclusion: The results suggest that PBPK/PD models are well suited for MPC in a glucose control setting, and that their predictive power in combination with the integrated database-driven (a priori individualizable) model framework will help overcome current challenges in the development of AGC systems. Significance: This study provides a new, generic, and robust mechanistic approach to AGC using a PBPK platform with extensive a priori (database) knowledge for individualization." }, { "instance_id": "R25495xR25451", "comparison_id": "R25495", "paper_id": "R25451", "text": "One-Day Bayesian Cloning of Type 1 Diabetes Subjects: Toward a Single-Day UVA/Padova Type 1 Diabetes Simulator Objective: The UVA/Padova Type 1 Diabetes (T1DM) Simulator has been shown to be representative of a T1DM population observed in a clinical trial, but has not yet been identified on T1DM data. Moreover, the current version of the simulator is \u201csingle meal\u201d while making it \u201csingle-day centric,\u201d i.e., by describing intraday variability, would be a step forward to create more realistic in silico scenarios. 
Here, we propose a Bayesian method for the identification of the model from plasma glucose and insulin concentrations only, by exploiting the prior model parameter distribution. Methods: The database consists of 47 T1DM subjects, who received dinner, breakfast, and lunch (respectively, 80, 50, and 60 CHO grams) in three 23-h occasions (one open- and one closed-loop). The model is identified using the Bayesian Maximum a Posteriori technique, where the prior parameter distribution is that of the simulator. Diurnal variability of glucose absorption and insulin sensitivity is allowed. Results: The model well describes glucose traces (coefficient of determination R2 = 0.962 \u00b1 0.027) and the posterior parameter distribution is similar to that included in the simulator. Absorption parameters at breakfast are significantly different from those at lunch and dinner, reflecting more rapid dynamics of glucose absorption. Insulin sensitivity varies in each individual but without a specific pattern. Conclusion: The incorporation of glucose absorption and insulin sensitivity diurnal variability into the simulator makes it more realistic. Significance: The proposed method, applied to the increasing number of long-term artificial pancreas studies, will allow to describe week/month variability, thus further refining the simulator." }, { "instance_id": "R25529xR25501", "comparison_id": "R25529", "paper_id": "R25501", "text": "Platform Strategy and Market Response Impact on the Success of Crowdfunding: A Chinese Case Nowadays, crowdfunding presents a promising development. This research focuses on the influence of platform strategy and market response on the success of crowdfunding from the perspective of the elaboration likelihood model (ELM) theory. Detailed product specifications, crowdfunding difficulty coefficient, vivid advertising video such as introduction and music, and recommendations from relevant figures are all used to depict platform strategy. Meanwhile, we use the number of lovers, followers, comments and 1 RMB backers to measure the level of market response. And thus, we model the impact of platform strategy and market response on crowdfunding success with empirical studies based on 400 samples of observed value. We found firstly that there exist significant positive relations between the total amount of funds pledged and detailed product specification, vivid advertising video, recommendations from relevant figures and the number of 1 RMB backers. Secondly, the crowdfunding difficulty of projects affects negatively, and significantly, the total amount of funds pledged. Thirdly, the influence of the number of lovers and followers on funds pledged is not significant." }, { "instance_id": "R25529xR25499", "comparison_id": "R25529", "paper_id": "R25499", "text": "A rewarding experience? Exploring how crowdfunding is affecting music industry business models This paper provides an exploratory study of how rewards-based crowdfunding affects business model development for music industry artists, labels and live sector companies. The empirical methodology incorporated a qualitative, semi-structured, three-stage interview design with fifty seven senior executives from industry crowdfunding platforms and three stakeholder groups. The results and analysis cover new research ground and provide conceptual models to develop theoretical foundations for further research in this field.
The findings indicate that the financial model benefits of crowdfunding for independent artists are dependent on fan base demographic variables relating to age group and genre due to sustained apprehension from younger audiences. Furthermore, major labels are now considering a more user-centric financial model as an innovation strategy, and the impact of crowdfunding on their marketing model may already be initiating its development in terms of creativity, strength and artist relations." }, { "instance_id": "R25529xR25525", "comparison_id": "R25529", "paper_id": "R25525", "text": "Effects of Social Interaction Dynamics on Platforms Abstract Despite the increasing relevance of online social interactions on platforms, there is still little research on the temporal interaction dynamics between electronic word-of-mouth (eWOM, a form of opinion-based social interaction), popularity information (a form of action-based social interaction), and consumer decision making. Drawing on a panel data set of more than 23,300 crowdfunding campaigns from Indiegogo, we investigate the dynamic effects of these social interactions on consumers\u2019 funding decisions using the panel vector autoregressive methodology. Our analysis shows that both eWOM and popularity information are critical influencing mechanisms in crowdfunding. However, our overarching finding is that eWOM surrounding crowdfunding campaigns on Indiegogo or Facebook has a significant yet substantially weaker predictive power than popularity information. We also find that whereas popularity information has a more immediate effect on consumers\u2019 funding behavior, its effectiveness decays rather quickly, while the impact of eWOM recedes more slowly. This study contributes to the extant literature by (1) providing a more nuanced understanding of the dynamic effects of opinion-based and action-based social interactions, (2) unraveling both within-platform and cross-platform dynamics, and (3) showing that social interactions are perceived as quality indicators on crowdfunding platforms that help consumers reduce risks associated with their investment decisions. These results can help platform providers and complementors to stimulate contribution behavior and increase the prosperity of a platform." }, { "instance_id": "R25529xR25517", "comparison_id": "R25529", "paper_id": "R25517", "text": "Choose wisely: Crowdfunding through the stages of the startup life cycle Crowdfunding is attractive to startups as an alternative funding source and offers nonmonetary resources through organizational learning. It encompasses the outsourcing of an organizational function, through IT, to a strategically defined network of actors (i.e., the crowd) in the form of an open call\u2014specifically, requesting monetary contributions toward a commercial or social business goal. Nonetheless, many startups are hesitant to consider crowdfunding because little guidance exists on how the various types of crowdfunding add value in different life cycle stages and which type is best suited for which stage. In response to this gap, this article introduces a typology of crowdfunding, the benefits it offers, and how specific benefits relate to the identified crowdfunding types. On this basis, we present a framework for choosing the right crowdfunding type for each stage in the startup life cycle, in addition to providing practical advice on crowdfunding best practices. 
The best practices outlined have shown demonstrable contributions toward achieving funding goals and are likely to prove valuable for startups." }, { "instance_id": "R25529xR25519", "comparison_id": "R25529", "paper_id": "R25519", "text": "The role of balanced centricity in the Spanish creative industries adopting a crowdfunding organisational model Purpose \u2013 The purpose of this paper is to analyse the structures of the relationships between actors in the creative industries sector using crowd-funding, and how co-creation is the basis for reaching balanced centricity in the creative industries. Design/methodology/approach \u2013 The Many-to-Many Marketing Theory, Service-Dominant Logic and Service Logic are the theoretical bases for explaining how the changing roles of the actors in the creative industries sector have given the crowd a great capacity for deciding in the value-creation process. A qualitative, case-based approach is used, given the complexity of the phenomenon to be analysed. Findings \u2013 The findings of the empirical approach have important theoretical and practical implications. On the theoretical side, it analyses the importance of balanced centricity instead of customer centricity as the basis for system stability. Findings also have implications for service managers, as this can be considered an alternative for certain business projects,..." }, { "instance_id": "R25583xR25545", "comparison_id": "R25583", "paper_id": "R25545", "text": "MDA game design for video game development by genre Game\u2019s development process remains a difficult task due to game platform\u2019s increasing technological complexity and lack of game\u2019s development methodologies for unified processes. In this work we show a way to develop different types of arcade games genre using Model Driven Architecture (MDA). We present a metamodel for game design that allows the specification for a high level abstraction independently of platform. This proposal shows that it is possible to generate a 2D game from the essential characteristics that make up such type of video game. Also, some model transformation rules to generate executable Java code from a specific model are shown." }, { "instance_id": "R25583xR25535", "comparison_id": "R25583", "paper_id": "R25535", "text": "Automatic prototyping in model-driven game development Model-driven game development (MDGD) is an emerging paradigm where models become first-order elements in game development, maintenance, and evolution. In this article, we present a first approach to 2D platform game prototyping automatization through the use of model-driven engineering (MDE). Platform-independent models (PIM) define the structure and the behavior of the games and a platform-specific model (PSM) describes the game control mapping. Automatic MOFscript transformations from these models generate the software prototype code in C++. As an example, Bubble Bobble has been prototyped in a few hours following the MDGD approach. The resulting code generation represents 93% of the game prototype." }, { "instance_id": "R25583xR25547", "comparison_id": "R25583", "paper_id": "R25547", "text": "Model-driven development of user interfaces for educational games The main topic of this paper is the problem of developing user interfaces for educational games. Focus of educational games is usually on the knowledge while it should be evenly distributed to the user interface as well.
Our proposed solution is based on the model-driven approach, thus we created a framework that incorporates meta-models, models, transformations and software tools. We demonstrated practical application of the mentioned framework by developing user interface for educational adventure game." }, { "instance_id": "R25583xR25579", "comparison_id": "R25583", "paper_id": "R25579", "text": "Improving digital game development with software product lines Introducing reuse and software product line (SPL) concepts into digital game-development processes isn't a straightforward task. This work presents a systematic process for bridging SPLs to game development, culminating with domain-specific languages and generators streamlined for game subdomains. The authors present a game SPL for arcade games as a case study to illustrate and evaluate their proposed guidelines. This article is part of a special issue on games." }, { "instance_id": "R25583xR25531", "comparison_id": "R25583", "paper_id": "R25531", "text": "A DSL for rapid prototyping of cross-platform tower defense games Because of the increasing expansion of the videogame industry, shorten videogame time to market for diverse platforms (e.g, Mac, android, iOS, BlackBerry) is a quest. This paper presents how a Domain Specific Language (DSL) in conjunction with Model-Driven Engineering (MDE) techniques can automate the development of games, in particular, tower defense games such as Plants vs. Zombies. The DSL allows the expression of structural and behavioral aspects of tower defense games. The MDE techniques allow us to generate code from the game expressed in the DSL. The generated code is written in an existing open source language that leverages the portability of the games. We present our approach using an example so-called Space Attack. The example shows the significant benefits offered by our proposal in terms of productivity and portability." }, { "instance_id": "R25583xR25563", "comparison_id": "R25583", "paper_id": "R25563", "text": "PULP scription: A DSL for mobile HTML5 game applications As applications and especially games are moving to the web and mobile environments, different tools are needed to design these applications and their behavior. HTML5 in combination with JavaScript is a promising basis for such applications on a wide range of platforms. Content producers and designers often lack the tools for such developments, or the expertise to operate existing, but too complex tools. This paper presents work in progress about a novel domain-specific language (DSL) PULP that aims at closing this gap. The language allows tying content such as images and media files together by modeling the dynamic behavior, movements, and control flow. The DSL helps abstracting from asynchronous JavaScript, state machines, and access to cross-platform media playback, which is generated in a final model-to-text transformation. The DSL and tooling were created and evaluated in close cooperation with content authors." }, { "instance_id": "R25583xR25537", "comparison_id": "R25583", "paper_id": "R25537", "text": "Building a Game Engine: A Tale of Modern Model-Driven Engineering Game engines enable developers to reuse assets from previously developed games, thus easing the software-engineering challenges around the video-game development experience and making the implementation of games less expensive, less technologically brittle, and more efficient. 
However, the construction of game engines is challenging in itself, it involves the specification of well defined architectures and typical game play behaviors, flexible enough to enable game designers to implement their vision, while, at the same time, simplifying the implementation through asset and code reuse. In this paper we present a set of lessons learned through the design and construction PhyDSL-2, a game engine for 2D physics-based games. Our experience involves the active use of modern model-driven engineering technologies, to overcome the complexity of the engine design and to systematize its maintenance and evolution." }, { "instance_id": "R25583xR25575", "comparison_id": "R25583", "paper_id": "R25575", "text": "Facilitating language-oriented game development by the help of language workbenches In recent years, a strong tendency towards language-oriented engineering became visible within game development projects. This approach is typically based on data-driven game engines and scripting languages resp. editing tools alike and already provided a great deal of overall productivity improvements. However, in its current form, potential benefits are not able to fully unfold yet. This is due to a mostly manual tool development process, which provokes substantial costs and lacks flexibility -- especially during prototyping phases of development. Language workbenches seem to be a viable solution to this problem as they promise the ability of (visual) language (re-)generation by introducing a meta-level of development. This paper picks up that idea and evaluates its application in the area of game development. In this particular case, we discuss first findings of an ongoing case study, covering the development of level editors for several classic games, which have been built by the help of a language workbench." }, { "instance_id": "R25629xR25597", "comparison_id": "R25629", "paper_id": "R25597", "text": "An empirical study on the relationship between the use of agile practices and the success of Scrum projects In this article, factors considered critical for the success of projects managed using Scrum are correlated to the results of software projects in industry. Using a set of 25 factors compiled in by other researchers, a cross section survey was conducted to evaluate the presence or application of these factors in 11 software projects that used Scrum in 9 different software companies located in Recife-PE, Brazil. The questionnaire was applied to 65 developers and Scrum Masters, representing 75% (65/86) of the professionals that have participated in the projects. The result was correlated with the level of success achieved by the projects, measured by the subjective perception of the project participant, using Spearman's rank correlation coefficient. The main finding is that only 32% (8/25) of the factors correlated positively with project success, raising the question of whether the factors hypothesized in the literature as being critical to the success of agile software projects indeed have an effect on project success. Given the limitations regarding the generalization of this result, other forms of empirical results, in particular case-studies, are needed to test this question." }, { "instance_id": "R25629xR25605", "comparison_id": "R25629", "paper_id": "R25605", "text": "Agile Team Perceptions of Productivity Factors In this paper, we investigate agile team perceptions of factors impacting their productivity. 
Within this overall goal, we also investigate which productivity concept was adopted by the agile teams studied. We here conducted two case studies in the industry and analyzed data from two projects that we followed for six months. From the perspective of agile team members, the three most perceived factors impacting on their productivity were appropriate team composition and allocation, external dependencies, and staff turnover. Teams also mentioned pair programming and collocation as agile practices that impact productivity. As a secondary finding, most team members did not share the same understanding of the concept of productivity. While some known factors still impact agile team productivity, new factors emerged from the interviews as potential productivity factors impacting agile teams." }, { "instance_id": "R25629xR25587", "comparison_id": "R25629", "paper_id": "R25587", "text": "Empirical Investigation on Agile Methods Usage: Issues Identified from Early Adopters in Malaysia) Agile Methods are a set of software practices that can help to produce products faster and at the same time deliver what customers want. Despite the benefits that Agile methods can deliver, however, we found few studies from the Southeast Asia region, particularly Malaysia. As a result, less empirical evidence can be obtained in the country making its implementation harder. To use a new method, experience from other practitioners is critical, which describes what is important, what is possible and what is not possible concerning Agile. We conducted a qualitative study to understand the issues faced by early adopters in Malaysia where Agile methods are still relatively new. The initial study involves 13 participants including project managers, CEOs, founders and software developers from seven organisations. Our study has shown that social and human aspects are important when using Agile methods. While technical aspects have always been considered to exist in software development, we found these factors to be less important when using Agile methods. The results obtained can serve as guidelines to practitioners in the country and the neighbouring regions." }, { "instance_id": "R25629xR25621", "comparison_id": "R25629", "paper_id": "R25621", "text": "The influence of organizational culture on the adoption of extreme programming The adoption of extreme programming (XP) method requires a very peculiar cultural context in software development companies. However, stakeholders do not always consider this matter and tend to stand to technical requirements of the method. Hence this paper aims at identifying aspects of organizational culture that may influence favorably or unfavorably the use of XP. In order to identify those aspects, this study analyzes dimensions of organizational culture under the perspective of practices and values of XP. This paper is based on the review of the literature of the area and empirical observations carried out with six software companies. This study does not intend to develop a tool for measurement of XP's compatibility with the organizational culture of each company. It intends to provide parameters (favorable and unfavorable aspects) for previous consideration of the convenience of XP implementation." 
}, { "instance_id": "R25629xR25610", "comparison_id": "R25629", "paper_id": "R25610", "text": "A qualitative study of the determinants of self-managing team effectiveness in a scrum team There are many evidences in the literature that the use self-managing teams has positive impacts on several dimensions of team effectiveness. Agile methods, supported by the Agile Manifesto, defend the use of self-managing teams in software development in substitution of hierarchically managed, traditional teams. The goal of this research was to study how a self-managing software team works in practice and how the behaviors of the software organization support or hinder the effectiveness of such teams. We performed a single case holistic case study, looking in depth into the actual behavior of a mature Scrum team in industry. Using interviews and participant observation, we collected qualitative data from five team members in several interactions. We extract the behavior of the team and of the software company in terms of the determinants of self-managing team effectiveness defined in a theoretical model from the literature. We found evidence that 17 out of 24 determinants of this model exist in the studied context. We concluded that certain determinants can support or facilitate the adoption of methodologies like Scrum, while the use of Scrum may affect other determinants." }, { "instance_id": "R25629xR25623", "comparison_id": "R25629", "paper_id": "R25623", "text": "Drivers of agile software development use: Dialectic interplay between benefits and hindrances. Information and Software Technology Context: Agile software development with its emphasis on producing working code through frequent releases, extensive client interactions and iterative development has emerged as an alternative to traditional plan-based software development methods. While a number of case studies have provided insights into the use and consequences of agile, few empirical studies have examined the factors that drive the adoption and use of agile. Objective: We draw on intention-based theories and a dialectic perspective to identify factors driving the use of agile practices among adopters of this software development methodology. Method: Data for the study was gathered through an anonymous online survey of software development professionals. We requested participation from members of a selected list of online discussion groups, and received 98 responses. Results: Our analyses reveal that subjective norm and training play a significant role in influencing software developers' use of agile processes and methods, while perceived benefits and perceived limitations are not primary drivers of agile use among adopters. Interestingly, perceived benefit emerges as a significant predictor of agile use only if adopters face hindrances to their agile practices. Conclusion: We conclude that research in the adoption of software development innovations should examine the effects of both enabling and detracting factors and the interactions between them. Since training, subjective norm, and the interplay between perceived benefits and perceived hindrances appear to be key factors influencing the adoption of agile methods, researchers can focus on how to (a) perform training on agile methods more effectively, (b) facilitate the dialog between developers and managers about perceived benefits and hindrances, and (c) capitalize on subjective norm to publicize the benefits of agile methods within an organization. 
Further, when managing the transition to new software development methods, we recommend that practitioners adapt their strategies and tactics contingent on the extent of perceived hindrances to the change." }, { "instance_id": "R25663xR25655", "comparison_id": "R25663", "paper_id": "R25655", "text": "Comparing order picking assisted by head-up display versus pick-by-light with explicit pick confirmation Manual order picking is an important part of distribution. Many techniques have been proposed to improve pick efficiency and accuracy. Previous studies compared pick-by-HUD (Head-Up Display) with pick-by-light but without the explicit pick confirmation that is typical in industrial environments. We compare a pick-by-light system designed to emulate deployed systems with a pick-by-HUD system using Google Glass. The pick-by-light system tested 50% slower than pick-by-HUD and required a higher workload. The number of errors committed and picker preference showed no statistically significant difference." }, { "instance_id": "R25663xR25659", "comparison_id": "R25663", "paper_id": "R25659", "text": "Exploring the role of picker personality in predicting picking performance with pick by voice, pick to light and RF-terminal picking Order pickers and individual differences between them could have a substantial impact on picking performance, but are largely ignored in studies on order picking. This paper explores the role of individual differences in picking performance with various picking tools (pick by voice, RF-terminal picking and pick to light) and methods (parallel, zone and dynamic zone picking). A unique realistic field experiment with 101 participants (academic students, vocational students and professional pickers) is employed to investigate the influence of individual differences, especially the Big Five personality traits, on picking performance in terms of productivity and quality. The results suggest that (PbV) performs better than RF-terminal picking, and that Neuroticism, Extraversion, Conscientiousness and the age of the picker play a significant role in predicting picking performance with voice and RF-terminals. Furthermore, achieving higher productivity appears to be possible without sacrificing quality. Managers can increase picking performance by incorporating the insights in assigning the right pickers to work with a particular picking tool or method, leading to increased picking performance and reduced warehousing costs." }, { "instance_id": "R25663xR25641", "comparison_id": "R25663", "paper_id": "R25641", "text": "An empirical task analysis of warehouse order picking using head-mounted displays Evaluations of task guidance systems often focus on evaluations of new technologies rather than comparing the nuances of interaction across the various systems. One common domain for task guidance systems is warehouse order picking. We present a method involving an easily reproducible ecologically motivated order picking environment for quantitative user studies designed to reveal differences in interactions. Using this environment, we perform a 12 participant within-subjects experiment demonstrating the advantages of a head-mounted display based picking chart over a traditional text-based pick list, a paper-based graphical pick chart, and a mobile pick-by-voice system. The test environment proved sufficiently sensitive, showing statistically significant results along several metrics with the head-mounted display system performing the best. 
We also provide a detailed analysis of the strategies adopted by our participants." }, { "instance_id": "R25694xR6511", "comparison_id": "R25694", "paper_id": "R6511", "text": "SemLens: visual analysis of semantic data with scatter plots and semantic lenses Querying the Semantic Web and analyzing the query results are often complex tasks that can be greatly facilitated by visual interfaces. A major challenge in the design of these interfaces is to provide intuitive and efficient interaction support without limiting too much the analytical degrees of freedom. This paper introduces SemLens, a visual tool that combines scatter plots and semantic lenses to overcome this challenge and to allow for a simple yet powerful analysis of RDF data. The scatter plots provide a global overview on an object collection and support the visual discovery of correlations and patterns in the data. The semantic lenses add dimensions for local analysis of subsets of the objects. A demo accessing DBpedia data is used for illustration." }, { "instance_id": "R25726xR25724", "comparison_id": "R25726", "paper_id": "R25724", "text": "graphVizdb: A scalable platform for interactive large graph visualization We present a novel platform for the interactive visualization of very large graphs. The platform enables the user to interact with the visualized graph in a way that is very similar to the exploration of maps at multiple levels. Our approach involves an offline preprocessing phase that builds the layout of the graph by assigning coordinates to its nodes with respect to a Euclidean plane. The respective points are indexed with a spatial data structure, i.e., an R-tree, and stored in a database. Multiple abstraction layers of the graph based on various criteria are also created offline, and they are indexed similarly so that the user can explore the dataset at different levels of granularity, depending on her particular needs. Then, our system translates user operations into simple and very efficient spatial operations (i.e., window queries) in the backend. This technique allows for a fine-grained access to very large graphs with extremely low latency and memory requirements and without compromising the functionality of the tool. Our web-based prototype supports three main operations: (1) interactive navigation, (2) multi-level exploration, and (3) keyword search on the graph metadata." }, { "instance_id": "R25726xR25722", "comparison_id": "R25726", "paper_id": "R25722", "text": "Visualizing ontologies with VOWL The Visual Notation for OWL Ontologies (VOWL) is a well-specified visual language for the user-oriented representation of ontologies. It defines graphical depictions for most elements of the Web Ontology Language (OWL) that are combined to a force-directed graph layout visualizing the ontology. In contrast to related work, VOWL aims for an intuitive and comprehensive representation that is also understandable to users less familiar with ontologies. This article presents VOWL in detail and describes its implementation in two different tools: ProtegeVOWL and WebVOWL. The first is a plugin for the ontology editor Protege, the second a standalone web application. Both tools demonstrate the applicability of VOWL by means of various ontologies. In addition, the results of three user studies that evaluate the comprehensibility and usability of VOWL are summarized. 
They are complemented by findings from an interview with experienced ontology users and from testing the visual scope and completeness of VOWL with a benchmark ontology. The evaluations helped to improve VOWL and confirm that it produces comparatively intuitive and comprehensible ontology visualizations." }, { "instance_id": "R25726xR25720", "comparison_id": "R25726", "paper_id": "R25720", "text": "A Visual Summary for Linked Open Data sources In this paper we propose LODeX, a tool that produces a representative summary of a Linked open Data (LOD) source starting from scratch, thus supporting users in exploring and understanding the contents of a dataset. The tool takes in input the URL of a SPARQL endpoint and launches a set of predefined SPARQL queries, from the results of the queries it generates a visual summary of the source. The summary reports statistical and structural information of the LOD dataset and it can be browsed to focus on particular classes or to explore their properties and their use. LODeX was tested on the 137 public SPARQL endpoints contained in Data Hub (formerly CKAN), one of the main Open Data catalogues. The statistical and structural information extraction was successfully performed on 107 sources, among these the most significant ones are included in the online version of the tool." }, { "instance_id": "R25762xR25748", "comparison_id": "R25762", "paper_id": "R25748", "text": "Efficient mining of weighted association rules (WAR) In this paper, we extend the traditional association rule problem by allowing a weight to be associated with each item in a transaction, to reflect interest/intensity of the item within the transaction. This provides us in turn with an opportunity to associate a weight parameter with each item in the resulting association rule. We call it weighted association rule (WAR). WAR not only improves the confidence of the rules, but also provides a mechanism to do more effective target marketing by identifying or segmenting customers based on their potential degree of loyalty or volume of purchases. Our approach mines WARs by first ignoring the weight and finding the frequent itemsets (via a traditional frequent itemset discovery algorithm), and is followed by introducing the weight during the rule generation. It is shown by experimental results that our approach not only results in shorter average execution times, but also produces higher quality results than the generalization of previously known methods on quantitative association rules." }, { "instance_id": "R25762xR25732", "comparison_id": "R25762", "paper_id": "R25732", "text": "Fast Algorithms for Mining Association Rules We consider the problem of discovering association rules between items in a large database of sales transactions. We present two new algorithms for solving this problem that are fundamentally different from the known algorithms. Empirical evaluation shows that these algorithms outperform the known algorithms by factors ranging from three for small problems to more than an order of magnitude for large problems. We also show how the best features of the two proposed algorithms can be combined into a hybrid algorithm, called AprioriHybrid. Scale-up experiments show that AprioriHybrid scales linearly with the number of transactions. AprioriHybrid also has excellent scale-up properties with respect to the transaction size and the number of items in the database."
}, { "instance_id": "R25762xR25760", "comparison_id": "R25762", "paper_id": "R25760", "text": "Experiences of Using a Quantitative Approach for Mining Association Rules In recent years interest has grown in \u201cmining\u201d large databases to extract novel and interesting information. Knowledge Discovery in Databases (KDD) has been recognised as an emerging research area. Association rules discovery is an important KDD technique for better data understanding. This paper proposes an enhancement with a memory efficient data structure of a quantitative approach to mine association rules from data. The best features of the three algorithms (the Quantitative Approach, DHP, and Apriori) were combined to constitute our proposed approach. The obtained results accurately reflected knowledge hidden in the datasets under examination. Scale-up experiments indicated that the proposed algorithm scales linearly as the size of the dataset increases." }, { "instance_id": "R25762xR25758", "comparison_id": "R25762", "paper_id": "R25758", "text": "PRICES: An Efficient Algorithm for Mining Association Rules In this paper, we present PRICES, an efficient algorithm for mining association rules, which first identifies all large itemsets and then generates association rules. Our approach reduces large itemset generation time, known to be the most time-consuming step, by scanning the database only once and using logical operations in the process. Experimental results and comparisons with the state of the art algorithm Apriori shows that PRICES very efficient and in some cases up to ten times as fast as Apriori." }, { "instance_id": "R25762xR25756", "comparison_id": "R25762", "paper_id": "R25756", "text": "A trie-based APRIORI implementation for mining frequent item sequences In this paper we investigate a trie-based APRIORI algorithm for mining frequent item sequences in a transactional database. We examine the data structure, implementation and algorithmic features mainly focusing on those that also arise in frequent itemset mining. In our analysis we take into consideration modern processors' properties (memory hierarchies, prefetching, branch prediction, cache line size, etc.), in order to better understand the results of the experiments." }, { "instance_id": "R25857xR25826", "comparison_id": "R25857", "paper_id": "R25826", "text": "Supported gold catalysts for selective hydrogenation of 1,3-butadiene in the presence of an excess of alkenes Supported gold catalysts were investigated in the selective gas phase hydrogenation of 1,3-butadiene in an excess of propene (0.3% butadiene, 30% propene and 20% hydrogen), in order to simulate the process required for the purification of industrial alkenes streams to prevent poisoning of the polymerisation catalysts used for polyalkene production. Gold catalysts containing small gold particles (between 2 to 5 nm in average) are less active than commercial palladium catalysts, but they are much more selective. Under our experimental conditions, 100% of butadiene can be converted at \u2248170\u00b0C into 100% butenes with 1-butene as the main product, and with only very small amount of alkanes formed (\u2248100 ppm). The absence or presence of propene does not drastically modify the rate of hydrogenation of butadiene.Parameters directly related to the nature of the gold catalysts were also investigated. 
For a given preparation method (deposition-precipitation with urea (DPU)), gold particle size and gold loading, the nature of the oxide support (alumina, titania, zirconia, ceria) does not influence the gold reactivity. The variations of gold particle size and gold loading do not induce changes in the TOF (expressed per surface gold atoms). The method of preparation has an influence when it leaves chlorine in the samples (impregnation in excess of solution and anionic adsorption). In such a case, the gold catalysts are less active." }, { "instance_id": "R25857xR25853", "comparison_id": "R25857", "paper_id": "R25853", "text": "Semihydrogenation of Acetylene on Indium Oxide: Proposed Single-Ensemble Catalysis Indium oxide catalyzes acetylene hydrogenation with high selectivity to ethylene (>85 %); even with a large excess of the alkene. In situ characterization reveals the formation of oxygen vacancies under reaction conditions, while an in-depth theoretical analysis links the surface reduction with the creation of well-defined vacancies and surrounding In3O5 ensembles, which are considered responsible for this outstanding catalytic function. This behavior, which differs from that of other common reducible oxides, originates from the presence of four crystallographically inequivalent oxygen sites in the indium oxide surface. These resulting ensembles are 1) stable against deactivation, 2) homogeneously and densely distributed, and 3) spatially isolated and confined against transport; thereby broadening the scope of oxides in hydrogenation catalysis." }, { "instance_id": "R25857xR25791", "comparison_id": "R25857", "paper_id": "R25791", "text": "Performance of Cu-Alloyed Pd Single-Atom Catalyst for Semihydrogenation of Acetylene under Simulated Front-End Conditions Selective hydrogenation of acetylene to ethylene is an industrially important reaction. Pd-based catalysts have been proved to be efficient for the acetylene conversion, while enhancing the selectivity to ethylene is challenging. Here, we chose Cu as the partner of Pd, fabricated an alloyed Pd single-atom catalyst (SAC), and investigated its catalytic performance for the selective hydrogenation of acetylene to ethylene under a simulated front-end hydrogenation process in industry: that is, with a high concentration of hydrogen and ethylene. The Cu-alloyed Pd SAC showed \u223c85% selectivity to ethylene and 100% acetylene elimination. In comparison with the Au- or Ag-alloyed Pd SAC, the Cu-alloyed analogue exceeded both of them in conversion, while the selectivity rivaled that of the Ag-alloyed Pd SAC and surpassed that of the Au-alloyed Pd SAC. As Cu is a low-cost metal, Cu-alloyed Pd SAC would minimize the noble-metal usage and possess high utilization potential for industry. The Cu-alloyed Pd SAC was verifie..." }, { "instance_id": "R25857xR25846", "comparison_id": "R25857", "paper_id": "R25846", "text": "Al13Fe4 as a low-cost alternative for palladium in heterogeneous hydrogenation Replacing noble metals in heterogeneous catalysts by low-cost substitutes has driven scientific and industrial research for more than 100 years. Cheap and ubiquitous iron is especially desirable, because it does not bear potential health risks like, for example, nickel. To purify the ethylene feed for the production of polyethylene, the semi-hydrogenation of acetylene is applied (80 \u00d7 10^6 tons per annum; refs 1-3).
The presence of small and separated transition-metal atom ensembles (so-called site-isolation), and the suppression of hydride formation are beneficial for the catalytic performance. Iron catalysts necessitate at least 50 bar and 100 \u00b0C for the hydrogenation of unsaturated C-C bonds, showing only limited selectivity towards semi-hydrogenation. Recent innovation in catalytic semi-hydrogenation is based on computational screening of substitutional alloys to identify promising metal combinations using scaling functions and the experimental realization of the site-isolation concept employing structurally well-ordered and in situ stable intermetallic compounds of Ga with Pd (refs 15-19). The stability enables a knowledge-based development by assigning the observed catalytic properties to the crystal and electronic structures of the intermetallic compounds. Following this approach, we identified the low-cost and environmentally benign intermetallic compound Al13Fe4 as an active and selective semi-hydrogenation catalyst. This knowledge-based development might prove applicable to a wide range of heterogeneously catalysed reactions." }, { "instance_id": "R25857xR25776", "comparison_id": "R25857", "paper_id": "R25776", "text": "Selective hydrogenation of mixed alkyne/alkene streams at elevated pressure over a palladium sulfide catalyst The Pd4S phase of palladium sulfide is known to be a highly selective alkyne hydrogenation catalyst at atmospheric pressure. Results presented here demonstrate that high selectivity can be retained at the elevated pressures required in industrial application. For example, in a mixed acetylene/ethylene feed, 100% conversion of acetylene was attained with a selectivity to ethylene in excess of 80% at 18 bar pressure. Similarly, almost 85% selectivity can be obtained with mixed C3 feeds containing methyl acetylene, propadiene, propylene and propane at 18 bar pressure. Using a low loaded sample (0.1 wt% Pd) it was possible to estimate the TOF to be 27 s\u22121. High selectivity was related to the crystal structure of Pd4S with the unique spatial arrangement thought to favour Pd atoms acting in isolation from one another. Based on these results, it is proposed that this catalyst could be a potential replacement for PdAg alloys currently used by industry." }, { "instance_id": "R25857xR25824", "comparison_id": "R25857", "paper_id": "R25824", "text": "50 ppm of Pd dispersed on Ni(OH)2 nanosheets catalyzing semi-hydrogenation of acetylene with high activity and selectivity We report a highly efficient Pd/Ni(OH)2 catalyst loaded with ultra-low levels of palladium (50 ppm Pd by mass) for the selective hydrogenation of acetylene to ethylene. The turnover frequency for acetylene conversion over the 0.005% Pd/Ni(OH)2 catalyst is twice that of the equivalent 0.8% Pd/Ni(OH)2 catalyst. Notably, an acetylene-to-ethylene selectivity of 80% was achieved over a wide range of temperatures. Aberration-corrected high-angle annular dark-field scanning transmission electron microscopy was used to reveal the atomically dispersed nature of palladium in the 0.005% Pd/Ni(OH)2 catalyst. The excellent selectivity of this catalyst is attributed to its atomically dispersed Pd sites, while the abundant hydroxyl groups of the support significantly enhance the acetylene conversion activity. This work opens up innovative opportunities for new types of highly efficient catalysts with trace noble-metal loadings for a wide variety of reactions."
}, { "instance_id": "R25857xR25832", "comparison_id": "R25857", "paper_id": "R25832", "text": "Selective hydrogenation of acetylene in excess ethylene over SiO2 supported Au\u2013Ag bimetallic catalyst Abstract Supported gold nanocatalysts have been reported to be active in selective hydrogenation of acetylene. In this work, SiO 2 supported Au\u2013Ag bimetallic catalyst is studied in the selective hydrogenation of acetylene in excess ethylene. Au and Ag were reductively deposited on a silica surface functionalized by APTES (3-aminopropyltriethoxysilane). They form Au\u2013Ag alloy nanoparticles of very small size. The catalytic activity of Au\u2013Ag bimetallic system showed better catalytic activity at high temperature than that of monometallic gold catalyst. According to the TEM and XRD results, Ag stabilized the nanoparticles against sintering during high temperature calcinations. Non-thermal O 2 plasma was applied to remove the APTES under mild conditions instead of high temperature calcination. The results showed that the conversion of acetylene was much higher over Au\u2013Ag/SiO 2 catalyst pretreated by O 2 plasma than that of pretreated by calcination at 500 \u00b0C, although the latter catalyst had similar particle size." }, { "instance_id": "R25857xR25770", "comparison_id": "R25857", "paper_id": "R25770", "text": "A Highly Selective Catalyst for Partial Hydrogenation of 1,3-Butadiene: MgO-Supported Rhodium Clusters Selectively Poisoned with CO The research was supported by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n8 PIOF-GA-2009-253129 (P.S.) and by DOE Basic Energy Sciences (Contract No. FG02-04ER15513) (DY). We thank the DOE Division of Materials Sciences for its role in the operation and development of beam lines 4-1 at the Stanford Synchrotron Radiation Lightsource. We thank the beamline staff for valuable support." }, { "instance_id": "R25857xR25781", "comparison_id": "R25857", "paper_id": "R25781", "text": "Pd catalyst promoted by two metal oxides with different reducibilities: Properties and performance in the selective hydrogenation of acetylene Abstract La oxide is known to be the best promoter among reducible metal oxides for acetylene hydrogenation. However, it requires high-temperature reduction, which is not feasible in commercial processes. To maintain the enhanced catalytic performance by La oxide addition while lowering the reduction temperature for application in commercial process, we added Ti oxide as a second promoter, which has a higher reducibility than La oxide. The Ti oxide is added to the Pd surface, which has been partially covered by La oxide, and maintains the modified geometric and electronic structures of the Pd catalyst induced by the high-temperature-reduced La oxide even after low-temperature reduction, as confirmed by H2 chemisorption and X-ray photoelectron spectroscopy. Surprisingly, Ti oxide further modifies the electronic structure of Pd, even for low-temperature reduction, due to its high reducibility, leading to higher ethylene selectivity than when La oxide is used exclusively. We also confirmed that a similar additive effect also applies to other metal oxides, i.e., Nb2O5." 
}, { "instance_id": "R25857xR25828", "comparison_id": "R25857", "paper_id": "R25828", "text": "Selective Hydrogenation of 1,3-Butadiene in the Presence of an Excess of Alkenes over Supported Bimetallic Gold\u2212Palladium Catalysts Supported Au catalysts modified by the addition of low amount of Pd (Au/Pd atomic ratio \u226510), prepared by either codeposition\u2212precipitation (DP) or coimpregnation in excess of solution (IES), result in the formation of bimetallic Au\u2212Pd particles that promote the selective hydrogenation of butadiene in the presence of propene. The DP method appears more appropriate to obtain reproducible and homogeneous bimetallic catalysts with smaller nanoparticles than the IES method, although it does not allow one to perfectly control the Au/Pd ratio. Playing with the Au/Pd ratio, it is possible to modulate the catalytic properties, especially in the case of the DP samples, and to reach a satisfying compromise between activity and selectivity, with a very low amount of alkanes formed at complete conversion of butadiene. CO adsorption followed by DRIFTS indicates that bimetallic Au/Pd alloy particles are formed, except for the catalyst prepared by IES with the lowest Au/Pd atomic ratio of 10. Pd segregation to the surfa..." }, { "instance_id": "R25857xR25822", "comparison_id": "R25857", "paper_id": "R25822", "text": "Pd/ZnO catalysts with different origins for high chemoselectivity in acetylene semi-hydrogenation The heterogeneity of active sites is the main obstacle for selectivity control in heterogeneous catalysis. Single atom catalysts (SACs) with homogeneous isolated active sites are highly desired in chemoselective transformations. In this work, a Pd1/ZnO catalyst with single-atom dispersion of Pd active sites was achieved by decreasing the Pd loading and reducing the sample at a relatively low temperature. The Pd1/ZnO SAC exhibited excellent catalytic performance in the chemoselective hydrogenation of acetylene with comparable chemoselectivity to that of PdZn intermetallic catalysts and a greatly enhanced utilization of Pd metal. Such unusual behaviors of the Pd1/ZnO SAC in acetylene semi-hydrogenation were ascribed to the high-valent single Pd active sites, which could promote electrostatic interactions with acetylene but restrain undesired ethylene hydrogenation via the spatial restrictions of \u03c3-chemical bonding toward ethylene." }, { "instance_id": "R25857xR25830", "comparison_id": "R25857", "paper_id": "R25830", "text": "Room temperature O2 plasma treatment of SiO2 supported Au catalysts for selective hydrogenation of acetylene in the presence of large excess of ethylene Abstract Supported gold nanoparticles have been proven to be active in the hydrogenation of the acetylene. In this work, we applied gold nanoparticles supported on silica for the selective hydrogenation of acetylene in excess ethylene that was close to current industrial practices. Amine group surface-functionalized silica was used to absorb the gold precursor AuCl 4 - , and small size-controlled gold nanoparticles are formed after chemical reduction. For the first time, O 2 plasma was employed to remove the APTES grafted on the surface of silica for preparing the highly dispersed gold nanoparticles (\u223c3 nm). The results of IR, TGA, XRD, and TEM showed that O 2 plasma working under mild conditions (room temperature and low pressure) can efficiently remove the organic compounds without causing the aggregation of the gold nanoparticles. 
The plasma-treated catalyst Au/SiO 2 gave excellent low-temperature selective catalytic activity in the hydrogenation of acetylene. The effects of the reduction temperatures on the catalytic performances of the Au/SiO 2 were investigated. The enhanced performance of gold nanoparticles supported on silica as pretreated by O 2 plasma was ascribed to two effects: (1) the small size of the gold nanoparticles supported on silica, (2) the nearly neutral charge on the Au nanoparticle, which is favorable to the activation of hydrogen." }, { "instance_id": "R25857xR25787", "comparison_id": "R25857", "paper_id": "R25787", "text": "Promotional effect of Pd single atoms on Au nanoparticles supported on silica for the selective hydrogenation of acetylene in excess ethylene A Pd single-atom alloy (SAA) structure was constructed by alloying Pd with Au supported on silica. The XRD and HRTEM results demonstrated that the addition of a small amount of Pd efficiently prevented the sintering of Au nanoparticles. The DRIFTS and EXAFS results confirmed that the Pd SAA structure was formed when the atomic ratios of Pd/Au were lower than 0.025. The Pd SAA structure exhibits a much better catalytic performance for the selective hydrogenation of acetylene in excess ethylene than the corresponding monometallic Au or Pd systems." }, { "instance_id": "R25900xR25894", "comparison_id": "R25900", "paper_id": "R25894", "text": "Single atom alloy surface analogs in Pd0.18Cu15 nanoparticles for selective hydrogenation reactions We report a novel synthesis of nanoparticle Pd-Cu catalysts, containing only trace amounts of Pd, for selective hydrogenation reactions. Pd-Cu nanoparticles were designed based on model single atom alloy (SAA) surfaces, in which individual, isolated Pd atoms act as sites for hydrogen uptake, dissociation, and spillover onto the surrounding Cu surface. Pd-Cu nanoparticles were prepared by addition of trace amounts of Pd (0.18 atomic (at)%) to Cu nanoparticles supported on Al2O3 by galvanic replacement (GR). The catalytic performance of the resulting materials for the partial hydrogenation of phenylacetylene was investigated at ambient temperature in a batch reactor under a head pressure of hydrogen (6.9 bar). The bimetallic Pd-Cu nanoparticles have over an order of magnitude higher activity for phenylacetylene hydrogenation when compared to their monometallic Cu counterpart, while maintaining a high selectivity to styrene over many hours at high conversion. Greater than 94% selectivity to styrene is observed at all times, which is a marked improvement when compared to monometallic Pd catalysts with the same Pd loading, at the same total conversion. X-ray photoelectron spectroscopy and UV-visible spectroscopy measurements confirm the complete uptake and alloying of Pd with Cu by GR. Scanning tunneling microscopy and thermal desorption spectroscopy of model SAA surfaces confirmed the feasibility of hydrogen spillover onto an otherwise inert Cu surface. These model studies addressed a wide range of Pd concentrations related to the bimetallic nanoparticles." }, { "instance_id": "R25900xR25858", "comparison_id": "R25900", "paper_id": "R25858", "text": "Beyond the use of modifiers in selective alkyne hydrogenation: silver and gold nanocatalysts in flow mode for sustainable alkene production
Supported silver and gold nanoparticles are highly stereo and chemoselective catalysts for the three-phase hydrogenation of alkynes in continuous mode.
" }, { "instance_id": "R25900xR25880", "comparison_id": "R25900", "paper_id": "R25880", "text": "Palladium nanoparticles supported on mpg-C3N4 as active catalyst for semihydrogenation of phenylacetylene under mild conditions Pd-nanoparticles supported on mesoporous graphitic carbon nitride is found to be an effective, heterogeneous catalyst for the liquid-phase semihydrogenation of phenylacetylenes under mild conditions." }, { "instance_id": "R25900xR25872", "comparison_id": "R25900", "paper_id": "R25872", "text": "Metal-Ligand Core-Shell Nanocomposite Catalysts for the Selective Semihydrogenation of Alkynes In recent years, hybrid nanocomposites with core\u2013shell structures have increasingly attracted enormous attention in many important research areas such as quantum dots, optical, magnetic, and electronic devices, and catalysts. In the catalytic applications of core\u2013shell materials, core-metals having magnetic properties enable easy separation of the catalysts from the reaction mixtures by a magnet. The core-metals can also affect the active shell-metals, delivering significant improvements in their activities and selectivities. However, it is difficult for core-metals to act directly as the catalytic active species because they are entirely covered by the shell. Thus, few successful designs of core\u2013shell nanocomposite catalysts having active metal species in the core have appeared to date. Recently, we have demonstrated the design of a core\u2013shell catalyst consisting of active metal nanoparticles (NPs) in the core and closely assembled oxides with nano-gaps in the shell, allowing the access of substrates to the core-metal. The shell acted as a macro ligand (shell ligand) for the core-metal and the core\u2013shell structure maximized the metal\u2013ligand interaction (ligand effect), promoting highly selective reactions. The design concept of core\u2013shell catalysts having core-metal NPs with a shell ligand is highly useful for selective organic transformations owing to the ideal structure of these catalysts for maximizing the ligand effect, leading to superior catalytic performances compared to those of conventional supported metal NPs. Semihydrogenation of alkynes is a powerful tool to synthesize (Z)-alkenes which are important building blocks for fine chemicals, such as bioactive molecules, flavors, and natural products. In this context, the Lindlar catalyst (Pd/ CaCO3 treated with Pb(OAc)2) has been widely used. [13] Unfortunately, the Lindlar catalyst has serious drawbacks including the requirement of a toxic lead salt and the addition of large amounts of quinoline to suppress the over-hydrogenation of the product alkenes. Furthermore, the Lindlar catalyst has a limited substrate scope; terminal alkynes cannot be converted selectively into terminal alkenes because of the rapid over-hydrogenation of the resulting alkenes to alkanes. Aiming at the development of environmentally benign catalyst systems, a number of alternative lead-free catalysts have been reported. 15] Recently, we also developed a leadfree catalytic system for the selective semihydrogenation consisting of SiO2-supported Pd nanoparticles (PdNPs) and dimethylsulfoxide (DMSO), in which the addition of DMSO drastically suppressed the over-hydrogenation and isomerization of the alkene products even after complete consumption of the alkynes. This effect is due to the coordination of DMSO to the PdNPs. 
DMSO adsorbed on the surface of PdNPs inhibits the coordination of alkenes to the PdNPs, while alkynes can adsorb onto the PdNPs surface because they have a higher coordination ability than DMSO. This phenomenon inspired us to design PdNPs coordinated with a DMSO-like species in a solid matrix. If a core\u2013shell structured nanocomposite involving PdNPs encapsulated by a shell having a DMSO-like species could be constructed, it would act as an efficient and functional solid catalyst for the selective semihydrogenation of alkynes. Herein, we successfully synthesized core\u2013shell nanocomposites of PdNPs covered with a DMSO-like matrix on the surface of SiO2 (Pd@MPSO/SiO2). The shell, consisting of an alkyl sulfoxide network, acted as a macroligand and allowed the selective access of alkynes to the active center of the PdNPs, promoting the selective semihydrogenation of not only internal but also terminal alkynes without any additives. Moreover, these catalysts were reusable while maintaining high activity and selectivity. Pd@MPSO/SiO2 catalysts were synthesized as follows. Pd/SiO2 prepared according to our procedure [16] was stirred in n-heptane with small amounts of 3,5-di-tert-butyl-4-hydroxytoluene (BHT) and water at room temperature. Next, methyl-3-trimethoxysilylpropylsulfoxide (MPSO) was added to the mixture and the mixture was heated. The slurry obtained was collected by filtration, washed, and dried in vacuo, affording Pd@MPSO/SiO2 as a gray powder. Altering the molar ratios of MPSO to Pd gave two kinds of catalysts: Pd@MPSO/SiO2-1 (MPSO:Pd = 7:1), and Pd@MPSO/SiO2-2 (MPSO:Pd = 100:1)." }, { "instance_id": "R25900xR25888", "comparison_id": "R25900", "paper_id": "R25888", "text": "Formation and Characterization of PdZn Alloy: A Very Selective Catalyst for Alkyne Semihydrogenation The formation of a PdZn alloy from a 4.3% Pd/ZnO catalyst was characterized by combined in situ high-resolution X-ray diffraction (HRXRD) and X-ray absorption spectroscopy (XAS). Alloy formation started already at around 100 \u00b0C, likely at the surface, and reached the bulk with increasing temperature. The structure of the catalyst was close to the bulk value of a 1:1 PdZn alloy with an L10 structure (RPd\u2212Pd = 2.9 \u00c5, RPd\u2212Zn = 2.6 \u00c5, CNPd\u2212Zn = 8, CNPd\u2212Pd = 4) after reduction at 300 \u00b0C and above. The activity of the gas-phase hydrogenation of 1-pentyne decreased with the formation of the PdZn alloy. In contrast to Pd/SiO2, no full hydrogenation occurred over Pd/ZnO. Over time, only slight decomposition of the alloy occurred under reaction conditions." }, { "instance_id": "R25900xR25861", "comparison_id": "R25900", "paper_id": "R25861", "text": "One-step Synthesis of Core-Gold/Shell-Ceria Nanomaterial and Its Catalysis for Highly Selective Semi-hydrogenation of Alkynes We report a facile synthesis of new core-Au/shell-CeO2 nanoparticles (Au@CeO2) using a redox-coprecipitation method, where the Au nanoparticles and the nanoporous shell of CeO2 are simultaneously formed in one step. The Au@CeO2 catalyst enables the highly selective semihydrogenation of various alkynes at ambient temperature under additive-free conditions.
The core-shell structure plays a crucial role in providing the excellent selectivity for alkenes through the selective dissociation of H2 in a heterolytic manner by maximizing interfacial sites between the core-Au and the shell-CeO2." }, { "instance_id": "R25999xR25995", "comparison_id": "R25999", "paper_id": "R25995", "text": "Information theoretic analysis of postal address fields for automatic address interpretation This paper concerns a study of information content in postal address fields for automatic address interpretation. Information provided by a combination of address components and information interaction among components is characterized in terms of Shannon's entropy. The efficiency of assignment strategies for determining a delivery point code can be compared by the propagation of uncertainty in address components. The quantity of redundancy between components can be computed from the information provided by these components. This information is useful in developing a strategy for selecting a useful component for recovering the value of an uncertain component. The uncertainty of a component based on another known component can be measured by conditional entropy. By ranking the uncertainty quantity, the effective processing flow for determining the value of a candidate component can be constructed." }, { "instance_id": "R26063xR26059", "comparison_id": "R26063", "paper_id": "R26059", "text": "Strength of adhesive joints with adherend yielding: I. Analytical model A sandwich element can be isolated in all two-dimensional adhesive joints, thereby simplifying the analysis of strain and stress. An adhesive sandwich model has been developed that accommodates arbitrary loading, a bilinear adherend stress-strain response, and any form of nonlinear adhesive behavior. The model accounts for both the bending deformation and the shear deformation of the adherends. Stress and strain distributions in the adhesive were obtained by solving a system of six differential equations using a finite-difference method. For a sample adhesive sandwich, the adhesive strains and stresses from the new model were compared with those of other models. Finally, the model was coupled with an analytical solution for the detached section of an adhesive joint in peel. The stress and strain distributions in the adhesive and the root curvature of the peel adherend were then compared with finite element results. An accompanying article in this issue uses the model with experimental peel data to investigate the suitability of various adhesive failure criteria." }, { "instance_id": "R26063xR26035", "comparison_id": "R26063", "paper_id": "R26035", "text": "Stresses in Adhesively Bonded Joints: A Closed-Form Solution In this paper the general plane strain problem of adhesively bonded struc tures which consist of two different orthotropic adherends is considered. Assuming that the thicknesses of the adherends are constant and are small in relation to the lateral dimensions of the bonded region, the adherends are treated as plates. Also, assuming that the thickness of the adhesive is small compared to that of the adherends, the thickness variation of the stresses in the adhesive layer is neglected. However, the transverse shear effects in the adherends and the in-plane normal strain in the adhesive are taken into ac count. The problem is reduced to a system of differential equations for the adhesive stresses which is solved in closed form. 
A single lap joint and a stiffened plate under various loading conditions are considered as examples. To verify the basic trend of the solutions obtained from the plate theory and to give some idea about the validity of the plate assumption itself, a sample problem is solved by using the finite element method and by treating the adherends and the adhesive as elastic continua. It is found that the plate theory used in the analysis not only predicts the correct trend for the adhesive stresses but also gives rather surprisingly accurate results. The solution is obtained by assuming linear stress-strain relations for the adhesive. In the Appendix the problem is formulated by using a nonlinear material for the adhesive and by following two different approaches." }, { "instance_id": "R26063xR26051", "comparison_id": "R26063", "paper_id": "R26051", "text": "Analysis of Adhesive-Bonded Joints, Square-End, and Spew-Fillet\u2014High-Order Theory Approach The analysis of adhesive-bonded joints using a closed-form high-order theory (CFHO theory) is presented, and its capabilities are demonstrated numerically for the case of single lap joints with and without a \u201cspew-fillet.\u201d The governing equations based on the CFHO theory are presented along with the appropriate boundary/continuity conditions at the free edges. The joints consist of two metallic or composite laminated adherents that are interconnected through equilibrium and compatibility requirements by a 2D linear elastic adhesive layer. The CFHO theory predicts that the distributions of the displacements through the thickness of the adhesive layer are nonlinear in general (high-order effects) and are a result of \u201cnot presumed\u201d displacement patterns. The spew-fillet is modeled through an equivalent tensile bar, which enables quantification of the effects of the spew-fillet size on the stress fields. Satisfactory comparisons with two-parameter elastic foundation solution (Goland-Reissner type) results and finite-element results are presented." }, { "instance_id": "R26063xR26023", "comparison_id": "R26063", "paper_id": "R26023", "text": "Two Dimensional Displacement-Stress Distributions in Adhesive Bonded Composite Structures Abstract Computerized analysis of composite structures formed by the adhesive bonding of materials is presented. The adhesive is considered to be a part of a linearly elastic system whose components are individually characterized by two bulk property elastic constants. Solution is obtained by finite difference minimization of the internal energy distribution in a discretized, piecewise homogeneous continuum. The plane-stress, plane-strain problems are considered, and yield displacement and stress distributions for the composite system. Displacement and/or stress boundary conditions are allowed. Acute contour angles are not allowed. This is the only restriction for otherwise arbitrary plane geometries. Results are presented for typical lap shear specimens as well as for a particular case of a butt joint in which a void exists in the adhesive layer." }, { "instance_id": "R26063xR26027", "comparison_id": "R26063", "paper_id": "R26027", "text": "The efficient design of adhesive bonded joints Abstract A concise method of analysis is used to study the numerous parameters influencing the stress distribution within the adhesive of a single lap joint. The formulation includes transverse shear and normal strain deformations. Both isotropic or anisotropic material systems of similar or dissimilar adherends are analysed.
Results indicate that the primary Young's modulus of the adherend, the overlap length, and the adhesive's material properties are the parameters most influential in optimizing the design of a single lap joint." }, { "instance_id": "R26107xR26084", "comparison_id": "R26107", "paper_id": "R26084", "text": "Thermal comfort in residential buildings \u2013 Failure to predict by Standard model Abstract A field study, conducted in 189 dwellings in winter and 205 dwellings in summer, included measurement of hygro-thermal conditions and documentation of occupant responses and behavior patterns. Both samples included both passive and actively space-conditioned dwellings. Predicted mean votes (PMV) computed using Fanger's model yielded significantly lower-than-reported thermal sensation (TS) values, especially for the winter heated and summer air-conditioned groups. The basic model assumption of a proportional relationship between thermal response and thermal load proved to be inadequate, with actual thermal comfort achieved at substantially lower loads than predicted. Survey results also refuted the model's second assumption that symmetrical responses in the negative and positive directions of the scale represent similar comfort levels. Results showed that the model's curve of predicted percentage of dissatisfied (PPD) substantially overestimated the actual percentage of dissatisfied within the partial group of respondents who voted TS > 0 in winter as well as within the partial group of respondents who voted TS" }, { "instance_id": "R26107xR26095", "comparison_id": "R26107", "paper_id": "R26095", "text": "Field study on occupant comfort and the office thermal environment in rooms with displacement ventilation UNLABELLED A field survey of occupants' response to the indoor environment in 10 office buildings with displacement ventilation was performed. The response of 227 occupants was analyzed. About 24% of the occupants in the survey complained that they were daily bothered by draught, mainly at the lower leg. Vertical air temperature difference measured between head and feet levels was less than 3 degrees C at all workplaces visited. Combined local discomfort because of draught and vertical temperature difference does not seem to be a serious problem in rooms with displacement ventilation. Almost one half (49%) of the occupants reported that they were daily bothered by an uncomfortable room temperature. Forty-eight per cent of the occupants were not satisfied with the air quality. PRACTICAL IMPLICATIONS The PMV and the Draught Rating indices as well as the specifications for local discomfort because of the separate impact of draught and vertical temperature difference, as defined in the present standards, are relevant for the design of a thermal environment in rooms with displacement ventilation and for its assessment in practice. Increasing the supply air temperature in order to counteract draught discomfort is a measure that should be considered carefully; even if the desired stratification of pollution in the occupied zone is preserved, an increase of the inhaled air temperature may have a negative effect on perceived air quality." }, { "instance_id": "R26107xR26099", "comparison_id": "R26107", "paper_id": "R26099", "text": "Linking indoor environment conditions to job satisfaction: a field study Physical and questionnaire data were collected from 95 workstations at an open-plan office building in Michigan, US. 
The physical measurements encompassed thermal, lighting, and acoustic variables, furniture dimensions, and an assessment of potential exterior view. Occupants answered a detailed questionnaire concerning their environmental and job satisfaction, and aspects of well-being. These data were used to test, via mediated regression, a model linking the physical environment, through environmental satisfaction, to job satisfaction and other related measures. In particular, a significant link was demonstrated between overall environmental satisfaction and job satisfaction, mediated by satisfaction with management and with compensation. Analysis of physical data was limited to the lighting domain. Results confirmed the important role of window access at the desk in satisfaction with lighting, particularly through its effect on satisfaction with outside view. Keywords: environmental satisfaction, job satisfaction, lighting, occupant perception, offices, organizational productivity, view, well-being." }, { "instance_id": "R26127xR26111", "comparison_id": "R26127", "paper_id": "R26111", "text": "Underground activity and institutional change: Productive, protective and predatory behavior in transition economies This paper examines why some transitions are more successful than others by focusing attention on the role of productive, protective and predatory behaviors from the perspective of the new institutional economics. Many transition economies are characterized by a fundamental inconsistency between formal and informal institutions. When formal and informal rules clash, noncompliant behaviors proliferate, among them, tax evasion, corruption, bribery, organized criminality, and theft of government property.
These wealth-redistributing protective and predatory behaviors absorb resources that could otherwise be used for wealth production, resulting in huge transition costs. Noncompliant behaviors--evasion, avoidance, circumvention, abuse, and/or corruption of institutional rules--comprise what can be termed underground economies. A variety of underground economies can be differentiated according to the types of rules violated by the noncompliant behaviors. The focus of the new institutional economics is on the consequences of institutions--the rules that structure and constrain economic activity--for economic outcomes. Underground economics is concerned with instances in which the rules are evaded, circumvented, and violated. It seeks to determine the conditions likely to foster rule violations, and to understand the various consequences of noncompliance with institutional rules. Noncompliance with \u201cbad\u201d rules may actually foster development, whereas noncompliance with \u201cgood\u201d rules will hinder development. Since rules differ, both the nature and consequences of rule violations will therefore depend on the particular rules violated. Institutional economics and underground economics are therefore highly complementary. The former examines the rules of the game, the latter the strategic responses of individuals and organizations to those rules. Economic performance depends on both the nature of the rules and the extent of compliance with them. Institutions therefore do affect economic performance, but it is not always obvious which institutional rules dominate. Where formal and informal institutions are coherent and consistent, the incentives produced by the formal rules will affect economic outcomes. Under these circumstances, the rule of law typically secures property rights, reduces uncertainty, and lowers transaction costs. In regimes of discretionary authority where formal institutions conflict with informal norms, noncompliance with the formal rules becomes pervasive, and underground economic activity is consequential for economic outcomes." }, { "instance_id": "R26146xR26140", "comparison_id": "R26146", "paper_id": "R26140", "text": "Integrating the unofficial economy into the dynamics of post-socialist economies: A framework of analysis and evidence Over a third of economic activity in the former Soviet countries was estimated to occur in the unofficial economy by the mid-1990s; in Central and Eastern Europe, the average is close to one-quarter. Intraregional variations are great: in some countries 10 to 15 percent of economic activity is unofficial, and in some more than half of it. The growth of unofficial activity in most post-socialist countries, and its mitigating effect on the decline in official output during the early stages of the transition, have been marked. In this paper, the authors challenge the conventional view of how post-socialist economies function by incorporating the unofficial economy into an analysis of the full economy. Then they advance a simple framework for understanding the evolution of the unofficial economy, and the links between both economies, highlighting the main characteristics of \"officialdom,\" contrasting conventional notions of \"informal\" or \"shadow\" economies, and focusing on what determines the decision to cross over from one segment to another. The initial empirical results seem to support hypothetical explanations of what determines the dynamics of the unofficial economy.
The authors emphasize the speedy liberalization of markets, macro stability, and a stable and moderate tax regime. Although widespread, most\"unofficialdom\"in the region is found to be relatively shallow--subject to reversal by appropriate economic policies. The framework and evidence presented here have implications for measurement, forecasting, and policymaking--calling for even faster liberalization and privatization than already advocated. And the lessons in social protection and taxation policy differ from conventional advice." }, { "instance_id": "R26194xR26173", "comparison_id": "R26194", "paper_id": "R26173", "text": "An Integrated Inventory Allocation and Vehicle Routing Problem We address the problem of distributing a limited amount of inventory among customers using a fleet of vehicles so as to maximize profit. Both the inventory allocation and the vehicle routing problems are important logistical decisions. In many practical situations, these two decisions are closely interrelated, and therefore, require a systematic approach to take into account both activities jointly. We formulate the integrated problem as a mixed integer program and develop a Lagrangian-based procedure to generate both good upper bounds and heuristic solutions. Computational results show that the procedure is able to generate solutions with small gaps between the upper and lower bounds for a wide range of cost structures." }, { "instance_id": "R26194xR26167", "comparison_id": "R26194", "paper_id": "R26167", "text": "An Allocation and Distribution Model for Perishable Products This paper presents an allocation model for a perishable product, distributed from a regional center to a given set of locations with random demands. We consider the combined problem of allocating the available inventory at the center while deciding how these deliveries should be performed. Two types of delivery patterns are analyzed: the first pattern assumes that all demand points receive individual deliveries; the second pattern subsumes the frequently occurring case in which deliveries are combined in multistop routes traveled by a fleet of vehicles. Computational experience is reported." }, { "instance_id": "R26262xR26244", "comparison_id": "R26262", "paper_id": "R26244", "text": "A branch-and-cut algorithm for a vendor-managed inventory-routing problem We consider a distribution problem in which a product has to be shipped from a supplier to several retailers over a given time horizon. Each retailer defines a maximum inventory level. The supplier monitors the inventory of each retailer and determines its replenishment policy, guaranteeing that no stockout occurs at the retailer (vendor-managed inventory policy). Every time a retailer is visited, the quantity delivered by the supplier is such that the maximum inventory level is reached (deterministic order-up-to level policy). Shipments from the supplier to the retailers are performed by a vehicle of given capacity. The problem is to determine for each discrete time instant the quantity to ship to each retailer and the vehicle route. We present a mixed-integer linear programming model and derive new additional valid inequalities used to strengthen the linear relaxation of the model. We implement a branch-and-cut algorithm to solve the model optimally. We then compare the optimal solution of the problem with the optimal solution of two problems obtained by relaxing in different ways the deterministic order-up-to level policy. 
Computational results are presented on a set of randomly generated problem instances." }, { "instance_id": "R26262xR26222", "comparison_id": "R26262", "paper_id": "R26222", "text": "Deterministic Order-Up-To Level Policies in an Inventory Routing Problem We consider a distribution problem in which a set of products has to be shipped from a supplier to several retailers in a given time horizon. Shipments from the supplier to the retailers are performed by a vehicle of given capacity and cost. Each retailer determines a minimum and a maximum level of the inventory of each product, and each must be visited before its inventory reaches the minimum level. Every time a retailer is visited, the quantity of each product delivered by the supplier is such that the maximum level of the inventory is reached at the retailer. The problem is to determine for each discrete time instant the retailers to be visited and the route of the vehicle. Various objective functions corresponding to different decision policies, and possibly to different decision makers, are considered. We present a heuristic algorithm and compare the solutions obtained with the different objective functions on a set of randomly generated problem instances." }, { "instance_id": "R26262xR26201", "comparison_id": "R26262", "paper_id": "R26201", "text": "An interactive, computer-aided ship scheduling system Abstract This paper is concerned with a fleet scheduling and inventory resupply problem faced by an international chemical operation. The firm uses a fleet of small ocean-going tankers to deliver bulk fluid to warehouses all over the world. The scheduling problem centers around decisions on routes, arrival/departure times, and inventory replenishment quantities. An interactive computer system was developed and implemented at the firm, and was successfully used to address daily scheduling issues as well as longer range planning problems. The purpose of this paper is to first present how the underlying decision problem was analyzed using both a network flow model and a mixed integer programming model, and then to describe the components of the decision support system developed to generate schedules. The use of the system in various decision making applications is also described." }, { "instance_id": "R26262xR26228", "comparison_id": "R26262", "paper_id": "R26228", "text": "A Periodic Inventory Routing Problem at a Supermarket Chain Albert Heijn, BV, a supermarket chain in the Netherlands, faces a vehicle routing and delivery scheduling problem once every three to six months. Given hourly demand forecasts for each store, travel times and distances, cost parameters, and various transportation constraints, the firm seeks to determine a weekly delivery schedule specifying the times when each store should be replenished from a central distribution center, and to determine the vehicle routes that service these requirements at minimum cost. We describe the development and implementation of a system to solve this problem at Albert Heijn. The system resulted in savings of 4% of distribution costs in its first year of implementation and is expected to yield 12%-20% savings as the firm expands its usage. It also has tactical and strategic advantages for the firm, such as in assessing the cost impact of various logistics and marketing decisions, in performance measurement, and in competing effectively through reduced lead time and increased frequency of replenishment." 
}, { "instance_id": "R26262xR26209", "comparison_id": "R26262", "paper_id": "R26209", "text": "Solving An Integrated Logistics Problem Arising In Grocery Distribution AbstractA complex allocation-routing problem arising in grocery distribution is described. It is solved by means of a heuristic that alternates between these two components. Tests on real and artificial data confirm the efficiency and the robustness of the proposed approach." }, { "instance_id": "R26352xR26292", "comparison_id": "R26352", "paper_id": "R26292", "text": "Minimization of logistic costs with given frequencies We study the problem of shipping products from one origin to several destinations, when a given set of possible shipping frequencies is available. The objective of the problem is the minimization of the transportation and inventory costs. We present different heuristic algorithms and test them on a set of randomly generated problem instances. The heuristics are based upon the idea of solving, in a first phase, single link problems, and of locally improving the solution in subsequent phases." }, { "instance_id": "R26352xR26346", "comparison_id": "R26352", "paper_id": "R26346", "text": "Replenishment routing problems between a single supplier and multiple retailers with direct delivery We consider the replenishment routing problems of one supplier who can replenish only one of multiple retailers per period, while different retailers need different periodical replenishment. For simple cases satisfying certain conditions, we obtain the simple routing by which the supplier can replenish each retailer periodically so that shortage will not occur. For complicated cases, using number theory, especially the Chinese remainder theorem, we present an algorithm to calculate a feasible routing so that the supplier can replenish the selected retailers on the selected periods without shortages." }, { "instance_id": "R26352xR26272", "comparison_id": "R26352", "paper_id": "R26272", "text": "On the Effectiveness of Direct Shipping Strategy for the One-Warehouse Multi-Retailer R-Systems We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of one depot and many geographically dispersed retailers. All stock enters the system through the depot and is distributed to the retailers by vehicles of limited constant capacity. We assume that each one of the retailers faces a constant, retailer specific, demand rate and that inventory is charged only at the retailers but not at the depot. We provide a lower bound on the long run average cost over all inventory-routing strategies. We use this lower bound to show that the effectiveness of direct shipping over all inventory-routing strategies is at least 94% whenever the Economic Lot Size of each of the retailers is at least 71% of vehicle capacity. The effectiveness deteriorates as the Economic Lot Sizes become smaller. These results are important because they provide useful guidelines as to when to embark into the much more difficult task of finding cost-effective routes. Additional advantages of direct shipping are lower in-transit inventory and ease of coordination." 
}, { "instance_id": "R26352xR26290", "comparison_id": "R26352", "paper_id": "R26290", "text": "Direct shipping and the dynamic single-depot/ multi-retailer inventory system In this paper we study a single-depot/multi-retailer system with independent stochastic stationary demands, linear inventory costs, and backlogging at the retailers over an infinite horizon. In addition, we also consider the transportation cost between the depot and the retailers. Orders are placed each period by the depot. The orders arrive at the depot and are allocated and delivered to the retailers. No inventory is held at the depot. We consider a specific policy of direct shipments. That is, a lower bound on the long run average cost per period for the system over all order/delivery strategies is developed. The simulated long term average cost per period of the delivery strategy of direct shipping with fully loaded trucks is examined via comparison to the derived lower bound. Simulation studies demonstrate that very good results can be achieved by a direct shipping policy." }, { "instance_id": "R26352xR26324", "comparison_id": "R26352", "paper_id": "R26324", "text": "Modeling inventory routing problems in supply chains of high consumption products Given a distribution center and a set of sales-points with their demand rates, the objective of the inventory routing problem (IRP) is to determine a distribution plan that minimizes fleet operating and average total distribution and inventory holding costs without causing a stock-out at any of the sales-points during a given planning horizon. We propose a new model for the long-term IRP when demand rates are stable and economic order quantity-like policies are used to manage inventories of the sales-points. The proposed model extends the concept of vehicle routes (tours) to vehicle multi-tours. To solve the nonlinear mixed integer formulation of this problem, a column generation based approximation method is suggested. The resulting sub-problems are solved using a savings-based approximation method. The approach is tested on randomly generated problems with different settings of some critical factors to compare our model using multi-tours as basic constructs to the model using simple tours as basic constructs." }, { "instance_id": "R26352xR26313", "comparison_id": "R26352", "paper_id": "R26313", "text": "The Stochastic Inventory Routing Problem with Direct Deliveries Vendor managed inventory replenishment is a business practice in which vendors monitor their customers' inventories, and decide when and how much inventory should be replenished. The inventory routing problem addresses the coordination of inventory management and transportation. The ability to solve the inventory routing problem contributes to the realization of the potential savings in inventory and transportation costs brought about by vendor managed inventory replenishment. The inventory routing problem is hard, especially if a large number of customers is involved. We formulate the inventory routing problem as a Markov decision process, and we propose approximation methods to find good solutions with reasonable computational effort. Computational results are presented for the inventory routing problem with direct deliveries." 
}, { "instance_id": "R26352xR26330", "comparison_id": "R26352", "paper_id": "R26330", "text": "On the Interactions Between Routing and Inventory-Management Policies in a One-WarehouseN-Retailer Distribution System This paper examines the interactions between routing and inventory-management decisions in a two-level supply chain consisting of a cross-docking warehouse and N retailers. Retailer demand is normally distributed and independent across retailers and over time. Travel times are fixed between pairs of system sites. Every m time periods, system inventory is replenished at the warehouse, whereupon an uncapacitated vehicle departs on a route that visits each retailer once and only once, allocating all of its inventory based on the status of inventory at the retailers who have not yet received allocations. The retailers experience newsvendor-type inventory-holding and backorder-penalty costs each period; the vehicle experiences in-transit inventory-holding costs each period. Our goal is to determine a combined system inventory-replenishment, routing, and inventory-allocation policy that minimizes the total expected cost/period of the system over an infinite time horizon. Our analysis begins by examining the determination of the optimal static route, i.e., the best route if the vehicle must travel the same route every replenishment-allocation cycle. Here we demonstrate that the optimal static route is not the shortest-total-distance (TSP) route, but depends on the variance of customer demands, and, if in-transit inventory-holding costs are charged, also on mean customer demands. We then examine dynamic-routing policies, i.e., policies that can change the route from one system-replenishment-allocation cycle to another, based on the status of the retailers' inventories. Here we argue that in the absence of transportation-related cost, the optimal dynamic-routing policy should be viewed as balancing management's ability to respond to system uncertainties (by changing routes) against system uncertainties that are induced by changing routes. We then examine the performance of a change-revert heuristic policy. Although its routing decisions are not fully dynamic, but determined and fixed for a given cycle at the time of each system replenishment, simulation tests with N = 2 and N = 6 retailers indicate that its use can substantially reduce system inventory-related costs even if most of the time the chosen route is the optimal static route." }, { "instance_id": "R26352xR26350", "comparison_id": "R26352", "paper_id": "R26350", "text": "A practical solution approach for the cyclic inventory routing problem Vendor managed inventory (VMI) is an example of effective cooperation and partnering practices between up- and downstream stages in a supply chain. In VMI, the supplier takes the responsibility for replenishing his customers' inventories based on their consumption data, with the aim of optimizing the over all distribution and inventory costs throughout the supply chain. This paper discusses the challenging optimization problem that arises in this context, known as the inventory routing problem (IRP). The objective of this IRP problem is to determine a distribution plan that minimizes average distribution and inventory costs without causing any stock-out at the customers. Deterministic constant customer demand rates are assumed and therefore, a long-term cyclical approach is adopted, integrating fleet sizing, vehicle routing, and inventory management. 
Further, realistic side-constraints such as limited storage capacities, driving time restrictions and constant replenishment intervals are taken into account. A heuristic solution approach is proposed, analyzed and evaluated against a comparable state-of-the-art heuristic." }, { "instance_id": "R26352xR26343", "comparison_id": "R26352", "paper_id": "R26343", "text": "Scenario Tree-Based Heuristics for Stochastic Inventory-Routing Problems In vendor-managed inventory replenishment, the vendor decides when to make deliveries to customers, how much to deliver, and how to combine shipments using the available vehicles. This gives rise to the inventory-routing problem in which the goal is to coordinate inventory replenishment and transportation to minimize costs. The problem tackled in this paper is the stochastic inventory-routing problem, where stochastic demands are specified through general discrete distributions. The problem is formulated as a discounted infinite-horizon Markov decision problem. Heuristics based on finite scenario trees are developed. Computational results confirm the efficiency of these heuristics." }, { "instance_id": "R26421xR26405", "comparison_id": "R26421", "paper_id": "R26405", "text": "Purification, Characterization, and Gene Analysis of a Chitosanase (ChoA) from Matsuebacter chitosanotabidus3001 ABSTRACT The extracellular chitosanase (34,000 M r ) produced by a novel gram-negative bacterium Matsuebacter chitosanotabidus 3001 was purified. The optimal pH of this chitosanase was 4.0, and the optimal temperature was between 30 and 40\u00b0C. The purified chitosanase was most active on 90% deacetylated colloidal chitosan and glycol chitosan, both of which were hydrolyzed in an endosplitting manner, but this did not hydrolyze chitin, cellulose, or their derivatives. Among potential inhibitors, the purified chitosanase was only inhibited by Ag + . Internal amino acid sequences of the purified chitosanase were obtained. A PCR fragment corresponding to one of these amino acid sequences was then used to screen a genomic library for the entire choA gene encoding chitosanase. Sequencing of the choA gene revealed an open reading frame encoding a 391-amino-acid protein. The N-terminal amino acid sequence had an excretion signal, but the sequence did not show any significant homology to other proteins, including known chitosanases. The 80-amino-acid excretion signal of ChoA fused to green fluorescent protein was functional in Escherichia coli . Taken together, these results suggest that we have identified a novel, previously unreported chitosanase." }, { "instance_id": "R26421xR26391", "comparison_id": "R26421", "paper_id": "R26391", "text": "Biochemical and Genetic Properties ofPaenibacillusGlycosyl Hydrolase Having Chitosanase Activity and Discoidin Domain Cells of \u201cPaenibacillus fukuinensis\u201d D2 produced chitosanase into surrounding medium, in the presence of colloidal chitosan or glucosamine. The gene of this enzyme was cloned, sequenced, and subjected to site-directed mutation and deletion analyses. The nucleotide sequence indicated that the chitosanase was composed of 797 amino acids and its molecular weight was 85,610. Unlike conventional family 46 chitosanases, the enzyme has family 8 glycosyl hydrolase catalytic domain, at the amino-terminal side, and discoidin domain at the carboxyl-terminal region. Expression of the cloned gene in Escherichia coli revealed \u03b2-1,4-glucanase function, besides chitosanase activity. 
Analyses by zymography and immunoblotting suggested that the active enzyme was, after removal of signal peptide, produced from inactive 81-kDa form by proteolysis at the carboxyl-terminal region. Replacements of Glu115 and Asp176, highly conserved residues in the family 8 glycosylase region, with Gln and Asn caused simultaneous loss of chitosanase and glucanase activities, suggesting that these residues formed part of the catalytic site. Truncation experiments demonstrated indispensability of an amino-terminal region spanning 425 residues adjacent to the signal peptide." }, { "instance_id": "R26421xR26399", "comparison_id": "R26421", "paper_id": "R26399", "text": "Production of Two Chitosanases from a Chitosan-Assimilating Bacterium, Acinetobacter sp. Strain CHB101. A bacterial strain capable of utilizing chitosan as a sole carbon source was isolated from soil and was identified as a member of the genus Acinetobacter. This strain, designated CHB101, produced extracellular chitosan-degrading enzymes in the absence of chitosan. The chitosan-degrading activity in the culture fluid increased when cultures reached the early stationary phase, although the level of activity was low in the exponential growth phase. Two chitosanases, chitosanases I and II, which had molecular weights of 37,000 and 30,000, respectively, were purified from the culture fluid. Chitosanase I exhibited substrate specificity for chitosan that had a low degree of acetylation (10 to 30%), while chitosanase II degraded colloidal chitin and glycol chitin, as well as chitosan that had a degree of acetylation of 30%. Rapid decreases in the viscosities of chitosan solutions suggested that both chitosanases catalyzed an endo type of cleavage reaction; however, chitosan oligomers (molecules smaller than pentamers) were not produced after a prolonged reaction." }, { "instance_id": "R26421xR26379", "comparison_id": "R26421", "paper_id": "R26379", "text": "Purification and characterization of an extracellular chitosanase produced by Amycolatopsis sp. CsO-2 Abstract Extracellular chitosanase produced by Amycolatopsis sp. CsO-2 was purified to homogeneity by precipitation with ammonium sulfate followed by cation exchange chromatography. The molecular weight of the chitosanase was estimated to be about 27,000 using SDS-polyacrylamide gel electrophoresis and gel filtration. The maximum velocity of chitosan degradation by the enzyme was attained at 55\u00b0C when the pH was maintained at 5.3. The enzyme was stable over a temperature range of 0\u201350\u00b0C and a pH range of 4.5\u20136.0. About 50% of the initial activity remained after heating at 100\u00b0C for 10 min, indicating a thermostable nature of the enzyme. The isoelectric point of the enzyme was about 8.8. The enzyme degraded chitosan with a range of deacetylation degree from 70% to 100%, but not chitin or CM-cellulose. The most susceptible substrate was 100% deacetylated chitosan. The enzyme degraded glucosamine tetramer to dimer, and pentamer to dimer and trimer, but did not hydrolyze glucosamine dimer and trimer." }, { "instance_id": "R26421xR26417", "comparison_id": "R26421", "paper_id": "R26417", "text": "Purification and Mode of Action of a Chitosanase from Penicillium islandicum Penicillium islandicum produced an inducible extracellular chitosanase when grown on chitosan. Large-scale production of the enzyme was obtained using Rhizopus rhizopodiformis hyphae as substrate. 
Chitosanase was purified 38-fold to homogeneity by ammonium sulphate fractionation and sequential chromatography on DEAE-Biogel A, Biogel P60 and hydroxyl-apatite. Crude enzyme was unstable at 37\u00b0C, but was stabilized by 1.0 mM Ca2+. The pH optimum for activity was broad and dependent on the solubility of the chitosan substrate. Various physical and chemical properties of the purified enzyme were determined. Penicillium islandicum chitosanase cleaved chitosan in an endo-splitting manner with maximal activity on polymers of 30 to 60% acetylation. No activity was found on chitin (100% acetylated chitosan) or trimers and tetramers of N-acetylglucosamine. The latter two oligomers and all small oligomers of glucosamine inhibited the activity of chitosanase on 30% acetylated chitosan. The pentamer of N-acetylglucosamine and glucosamine oligomers were slowly cleaved by the enzyme. Analysis of the reaction products from 30% acetylated chitosan indicated that the major oligomeric product was a trimer; with 60% acetylated chitosan as substrate a dimer was also found. The new terminal reducing groups produced by chitosanase hydrolysis of 30% acetylated chitosan were reduced by sodium boro[3H]hydride. The new end residues were found to be N-acetylglucosamine. The analyses strongly indicated that P. islandicum chitosanase cleaved chitosan between N-acetylglucosamine and glucosamine. Both residues were needed for cleavage, and polymers containing equal proportions of acetylated and non-acetylated sugars were optimal for chitosanase activity. The products of reaction depended on the degree of acetylation of the polymer." }, { "instance_id": "R26421xR26411", "comparison_id": "R26421", "paper_id": "R26411", "text": "In Vitro Suppression of Mycelial Growth of Fusarium oxysporum by Extracellular Chitosanase of Sphingobacterium multivorum and Cloning of the Chitosanase Gene csnSM1 A chitosan-degrading bacterium, isolated from field soil that had been amended with chitin, was identified as Sphingobacterium multivorum KST-009 on the basis of its bacteriological characteristics. The extracellular chitosanase (SM1) secreted by KST-009 was a 34-kDa protein and could be purified through ammonium sulfate precipitation, gel permeation column chromatography and SDS polyacrylamide gel electrophoresis. A chitosanase gene (csnSM1) was isolated from genomic DNA of the bacteria, and the entire nucleotide sequence of the gene and the partial N-terminal amino acid sequence of the purified SM1 were determined. The csnSM1 gene was found to encode 383 amino acids; 72 N-terminal amino acid residues were processed to produce the mature enzyme during the secretion process. Germinated microconidia of four formae speciales (lycopersici, radicis-lycopersici, melonis, and fragariae) of Fusarium oxysporum were treated with SM1. Chitosanase treatment caused morphological changes, such as swelling of hyphal cells or indistinctness of hyphal cell tips and cessation or reduction of mycelial elongation." }, { "instance_id": "R26421xR26407", "comparison_id": "R26421", "paper_id": "R26407", "text": "The bifunctional enzyme chitosanase-cellulase produced by the gram-negative microorganism Myxobacter sp. AL-1 is highly similar to Bacillus subtilis endoglucanases Abstract The gram-negative bacterium Myxobacter sp. AL-1 produces chitosanase-cellulase activity that is maximally excreted during the stationary phase of growth.
Carboxymethylcellulase zymogram analysis revealed that the enzymatic activity was correlated with two bands of 32 and 35 kDa. Ion-exchange-chromatography-enriched preparations of the 32-kDa enzyme were capable of degrading the cellulose fluorescent derivatives 4-methylumbelliferyl-\u03b2-d-cellobioside and 4-methylumbelliferyl-\u03b2-d-cellotrioside. These enzymatic preparations also showed a greater capacity at 70\u00b0C than at 42\u00b0C to degrade chitosan oligomers of a minimum size of six units. Conversely, the \u03b2-1,4 glucanolytic activity was more efficient at attacking carboxymethylcellulose and methylumbelliferyl-cellotrioside at 42\u00b0C than at 70\u00b0C. The 32-kDa enzyme was purified more than 800-fold to apparent homogeneity by a combination of ion-exchange and molecular-exclusion chromatography. Amino-terminal sequencing indicated that mature chitosanase-cellulase shares more than 70% identity with endocellulases produced by strains DLG, PAP115, and 168 of the gram-positive microorganism Bacillus subtilis." }, { "instance_id": "R26550xR26538", "comparison_id": "R26550", "paper_id": "R26538", "text": "Use of Chitosan as Coagulant to Treat Wastewater from Milk Processing Plant Chitosan is a natural high molecular polymer made from crab, shrimp and lobster shells. When used as a coagulant in water treatment, unlike aluminum and synthetic polymers, chitosan has no harmful effect on human health, and the problem of disposing of waste from the seafood processing industry can also be solved. In this study, the wastewater from the cleaning-in-place (CIP) system, which contains high levels of fat and protein, was coagulated using chitosan so that the fat and the protein can be recycled. Because chitosan is a natural material, the dehydrated sludge cake from the coagulation could be used directly as a feed supplement, not only saving the cost of waste disposal but also recycling useful material. The results show that the optimal result was reached at pH 7 with a coagulant dosage of 25 mg/l. The cost-effectiveness analysis shows that there is no extra cost to using chitosan as a coagulant in the wastewater treatment, and it is an expanded application for chitosan." }, { "instance_id": "R26550xR26444", "comparison_id": "R26550", "paper_id": "R26444", "text": "Colon-specific delivery of peptide drugs and antiinflammatory drugs using chitosan capsules We studied the colon-specific delivery of peptide and anti-inflammatory drugs using chitosan capsules. Little in vitro release of 5(6)-carboxyfluorescein was observed in phosphate buffer. However, release increased markedly in the presence of the micro-organisms that are abundant in the colon. The intestinal absorption of insulin, chosen as a model peptide drug, was evaluated by measuring plasma insulin levels and the hypoglycemic effect after oral administration of chitosan capsules. A marked increase in insulin and a corresponding decrease in blood glucose levels were observed after oral administration of capsules containing 20 IU of insulin and sodium glycocholate (pharmacological availability: 3.49%) compared with capsules containing only 20 IU of insulin (pharmacological availability: 1.62%). The hypoglycemic effect began 8 h after administration of the chitosan capsules, when the capsules enter the colon.
We also studied the colon-specific release of a drug active against colonic ulcers, 5-aminosalicylic acid (5-ASA), from chitosan capsules, in order to accelerate the healing of colitis induced in rats by the sodium salt of 2,4,6-trinitrobenzene sulfonic acid. The 5-ASA concentrations in the mucosa of the large intestine after administration were higher than those obtained from a suspension in carboxymethyl cellulose (CMC). Moreover, the therapeutic effect of 5-ASA was significantly improved by the use of 5-ASA-loaded chitosan capsules compared with a 5-ASA suspension in CMC. These results suggest that chitosan capsules may be useful carriers for the colon-specific delivery of peptide drugs, including insulin, as well as anti-inflammatory drugs, including 5-ASA." }, { "instance_id": "R26550xR26522", "comparison_id": "R26550", "paper_id": "R26522", "text": "Chitin Biotechnology Applications This review article describes the current status of the production and consumption of chitin and chitosan, and their current practical applications in biotechnology with some attempted uses. The applications include: 1) cationic agents for polluted waste-water treatment, 2) agricultural materials, 3) food and feed additives, 4) hypocholesterolemic agents, 5) biomedical and pharmaceutical materials, 6) wound-healing materials, 7) blood anticoagulant, antithrombogenic and hemostatic materials, 8) cosmetic ingredients, 9) textile, paper, film and sponge sheet materials, 10) chromatographic and immobilizing media, and 11) analytical reagents." }, { "instance_id": "R26550xR26519", "comparison_id": "R26550", "paper_id": "R26519", "text": "Effects of natural products on soil organisms and plant health enhancement TerraPy, Magic Wet and Chitosan are soil and plant revitalizers based on natural renewable raw materials. These products stimulate microbial activity in the soil and promote plant growth. Their importance to practical agriculture can be seen in their ability to improve soil health, especially where intensive cultivation has shifted the biological balance in the soil ecosystem to high numbers of plant pathogens. The objective of this study was to investigate the plant beneficial capacities of TerraPy, Magic Wet and Chitosan and to evaluate their effect on bacterial and nematode communities in soils. Tomato seedlings (Lycopersicum esculentum cv. Hellfrucht Fr\u00fchstamm) were planted into pots containing a sand/soil mixture (1:1, v/v) and were treated with TerraPy, Magic Wet and Chitosan at 200 kg/ha. At 0, 1, 3, 7 and 14 days after inoculation the following soil parameters were evaluated: soil pH, bacterial and fungal population density (cfu/g soil), total number of saprophytic and plant-parasitic nematodes. At the final sampling date tomato shoot and root fresh weight as well as Meloidogyne infestation was recorded. Plant growth was lowest and nematode infestation was highest in the control. Soil bacterial population densities increased within 24 hours after treatment between 4-fold (Magic Wet) and 19-fold (Chitosan). Bacterial richness and diversity were not significantly altered.
Dominant bacterial genera were Acinetobacter (41%) and Pseudomonas (22%) for TerraPy, Pseudomonas (30%) and Acinetobacter (13%) for Magic Wet, Acinetobacter (8.9%) and Pseudomonas (81%) for Chitosan and Bacillus (42%) and Pseudomonas (32%) for the control. Increased microbial activity also was associated with higher numbers of saprophytic nematodes. The results demonstrated the positive effects of natural products in stimulating soil microbial activity and thereby the antagonistic potential in soils, leading to a reduction in nematode infestation and improved plant growth." }, { "instance_id": "R26550xR26483", "comparison_id": "R26550", "paper_id": "R26483", "text": "Chitosan: A New Hemostatic Chitosan is a deacetylated derivative of arthropod chitin. We found that it formed a coagulum in contact with defibrinated blood, heparinized blood, and washed red cells. When knitted DeBakey grafts were treated with chitosan, they were impermeable to blood. Examination of these grafts at 24 hours revealed no rebleeding. Examination at one, two, three, and four months showed the grafts to be encased in smooth muscle with a living endothelial lining and an abundant vasa vasorum. Control grafts showed the usual fibrous healing." }, { "instance_id": "R26550xR26533", "comparison_id": "R26550", "paper_id": "R26533", "text": "The antimicrobial activity of cotton fabrics treated with different crosslinking agents and chitosan Abstract Cotton fabrics were treated with two different crosslinking agents [butanetetracarboxylic acid (BTCA) and Arcofix NEC (low formaldehyde content)] in the presence of chitosan to provide the cotton fabrics with a durable press finish and antimicrobial properties by chemical linking of chitosan to the cellulose structure. Both type and concentration of finishing agent in the presence of chitosan as well as the treatment conditions significantly affected the performance properties and antimicrobial activity of treated cotton fabrics. The treated cotton fabrics showed broad-spectrum antimicrobial activity against gram-positive and gram-negative bacteria and fungi tested. Treatment of cotton fabrics with BTCA in the presence of chitosan strengthened the antimicrobial activity more than the fabrics treated with Arcofix NEC. The maximum antimicrobial activity was obtained when the cotton fabrics were treated with 0.5\u20130.75% chitosan of molecular weight 1.5\u20135 kDa, and cured at 160 \u00b0C for 2\u20133 min. Application of different metal ions to cotton fabrics treated with finishing agent and chitosan showed a negligible effect on the antimicrobial activity. Partial replacement of Arcofix NEC with BTCA enhanced antimicrobial activity of the treated fabrics in comparison with that of Arcofix NEC alone. Transmission electron microscopy showed that the exposure of bacteria and yeast to chitosan-treated fabrics resulted in deformation and shrinkage of cell membranes. The site of chitosan action is probably the microbial membrane, with subsequent death of the cell." }, { "instance_id": "R26550xR26450", "comparison_id": "R26550", "paper_id": "R26450", "text": "Chitosan as a novel nasal delivery system for vaccines A variety of different types of nasal vaccine systems has been described, including cholera toxin, microspheres, nanoparticles, liposomes, attenuated virus and cells, and outer membrane proteins (proteosomes). The present review describes our work on the use of the cationic polysaccharide chitosan as a delivery system for nasally administered vaccines.
Several animal studies have been carried out on influenza, pertussis and diphtheria vaccines with good results. After nasal administration of the chitosan-antigen nasal vaccines it was generally found that the nasal formulation induced significant serum IgG responses similar to and secretory IgA levels superior to what was induced by a parenteral administration of the vaccine. Animals vaccinated via the nasal route with the various chitosan-antigen vaccines were also found to be protected against the appropriate challenge. So far the nasal chitosan vaccine delivery system has been tested for vaccination against influenza in human subjects. The results of the study showed that the nasal chitosan influenza vaccine was both effective and protective according to the CPMP requirements. The mechanism of action of the chitosan nasal vaccine delivery system is also discussed." }, { "instance_id": "R26550xR26541", "comparison_id": "R26550", "paper_id": "R26541", "text": "Low-cost adsorbents for heavy metals uptake from contaminated water: a review In this article, the technical feasibility of various low-cost adsorbents for heavy metal removal from contaminated water has been reviewed. Instead of using commercial activated carbon, researchers have worked on inexpensive materials, such as chitosan, zeolites, and other adsorbents, which have high adsorption capacity and are locally available. The results of their removal performance are compared to that of activated carbon and are presented in this study. It is evident from our literature survey of about 100 papers that low-cost adsorbents have demonstrated outstanding removal capabilities for certain metal ions as compared to activated carbon. Adsorbents that stand out for high adsorption capacities are chitosan (815, 273, 250 mg/g of Hg(2+), Cr(6+), and Cd(2+), respectively), zeolites (175 and 137 mg/g of Pb(2+) and Cd(2+), respectively), waste slurry (1030, 560, 540 mg/g of Pb(2+), Hg(2+), and Cr(6+), respectively), and lignin (1865 mg/g of Pb(2+)). These adsorbents are suitable for inorganic effluent treatment containing the metal ions mentioned previously. It is important to note that the adsorption capacities of the adsorbents presented in this paper vary, depending on the characteristics of the individual adsorbent, the extent of chemical modifications, and the concentration of adsorbate." }, { "instance_id": "R26550xR26432", "comparison_id": "R26550", "paper_id": "R26432", "text": "Basic study for stabilization of w/o/w emulsion and its application to transcatheter arterial embolization therapy Stabilization of w/o/w emulsion and its application to transcatheter arterial embolization (TAE) therapy are reviewed. W/o/w emulsion was stabilized by making inner aqueous phase hypertonic, addition of chitosan in inner phase, and techniques of phase-inversion with porous membrane. Lipiodol w/o/w emulsion for TAE therapy was prepared by using a two-step pumping emulsification procedure. The procedure is so easy that the emulsion could be prepared even during the surgical operation. The deposition after hepatic arterial administration of the emulsion was detected by an X-ray CT scanner. The concentration of epirubicin hydrochloride (EPI) in liver was increased and its residence was prolonged by encapsulating it in the w/o/w emulsion. The toxic effects of EPI and lipiodol on the normal hepatic cells were reduced. The w/o/w emulsion prepared by us is a suitable formulation for the TAE therapy." 
}, { "instance_id": "R26550xR26471", "comparison_id": "R26550", "paper_id": "R26471", "text": "Antidiabetic Effects of Chitosan Oligosaccharides in Neonatal Streptozotocin-Induced Noninsulin-Dependent Diabetes Mellitus in Rats The antidiabetic effect of chitosan oligosaccharide (COS) was investigated in neonatal streptozotocin (STZ)-induced noninsulin-dependent diabetes mellitus rats. The fasting glucose level was reduced by about 19% in diabetic rats after treatment with 0.3% COS. Glucose tolerance was lower in the diabetic group compared with the normal group. After diabetic rats had been treated with 0.3% COS for 4 weeks, glucose tolerance increased significantly versus the diabetic control group, and glucose-inducible insulin expression increased significantly. In addition, fed-triglyceride (TG) levels in diabetic rats drinking 0.3% COS were reduced by 49% compared with those in diabetic control rats. The cholesterol levels of animals treated with COS were reduced by about 10% in fed or fasting conditions versus the corresponding controls, although the difference was not statistically significant. It was found that COS has a TG-lowering effect in diabetic rats, and that COS reduces signs of diabetic cardiomyopathy such as vacuolation of mitochondria and the separation and degeneration of myofibrils. In conclusion, these results indicate that COS can be used as an antidiabetic agent because it increases glucose tolerance and insulin secretion and decreases TG." }, { "instance_id": "R26550xR26441", "comparison_id": "R26550", "paper_id": "R26441", "text": "Transdermal permeation enhancement of N-trimethyl chitosan for testosterone The aim of this study was to evaluate the transdermal permeation enhancement of N-trimethyl chitosan (TMC) with different degrees of quaternization (DQ). TMCs with DQ of 40 and 60% (TMC40 and TMC60) were synthesized and characterized by (1)H NMR. Testosterone (TS) used as an effective drug, four different gels were prepared without enhancer, with 5% TMC40, 5% TMC60 or 2% Azone, respectively as enhancer. The effect of TMC60 on the stratum corneum was studied by Attenuated Total Reflection-Fourier Transform Infrared Spectroscopy (ATR-FTIR) combined with the technique of deconvolution. The results showed that TMC60 could significantly affect the secondary structure of keratin in stratum corneum. In vitro permeation studies were carried out using Franz-diffusion cells and in vivo studies were performed in rabbits. Both in vitro and in vivo permeation studies suggested the transdermal permeation enhancement of TMCs. Compared to the TS gel without enhancer, TS gels with enhancers all showed significant enhancing effect on transdermal permeation of TS (P<0.05). Meanwhile, compared to 2% Azone, 5% TMC60 had a stronger enhancement (P<0.05) while 5% TMC40 had a similar effect (P>0.05). The results suggested that the enhancement of TMCs increased with the increase of DQ." }, { "instance_id": "R26654xR26643", "comparison_id": "R26654", "paper_id": "R26643", "text": "Energy-Driven Adaptive Clustering Hierarchy (EDACH) for Wireless Sensor Networks Wireless sensor network consists of small battery powered sensors. Therefore, energy consumption is an important issue and several schemes have been proposed to improve the lifetime of the network. In this paper we propose a new approach called energy-driven adaptive clustering hierarchy (EDACH), which evenly distributes the energy dissipation among the sensor nodes to maximize the network lifetime. 
This is achieved by using proxy node replacing the cluster-head of low battery power and forming more clusters in the region relatively far from the base station. Comparison with the existing schemes such as LEACH (Low-Energy Adaptive Clustering Hierarchy) and PEACH (Proxy-Enabled Adaptive Clustering Hierarchy) reveals that the proposed EDACH approach significantly improves the network lifetime." }, { "instance_id": "R26654xR26646", "comparison_id": "R26654", "paper_id": "R26646", "text": "A clustering method for energy efficient routing in wireless sensor networks Low-Energy Adaptive Clustering Hierarchy (LEACH) is one of the most popular distributed cluster-based routing protocols in wireless sensor networks. Clustering algorithm of the LEACH is simple but offers no guarantee about even distribution of cluster heads over the network. And it assumes that each cluster head transmits data to sink over a single hop. In this paper, we propose a new method for selecting cluster heads to evenly distribute cluster heads. It avoids creating redundant cluster heads within a small geographical range. Simulation results show that our scheme reduces energy dissipation and prolongs network lifetime as compared with LEACH." }, { "instance_id": "R26654xR26628", "comparison_id": "R26654", "paper_id": "R26628", "text": "The Concentric Clustering Scheme for Efficient Energy Consumption in the PEGASIS The wireless sensor network is a type of the wireless ad-hoc networks. It is composed of a collection of sensor nodes. Sensor nodes collect and deliver necessary data in response to user's specific requests. It is expected to apply the wireless sensor network technology to various application areas such as the health, military and home. However, because of several limitations of sensor nodes, the routing protocols used in the wireless ad-hoc network are not suitable for the wireless sensor networks. For this reasons, many novel routing protocols for the wireless sensor networks are proposed recently. One of these protocols, the PEGASIS (power-efficient gathering in sensor information systems) protocol is a chain-based protocol. In general, the PEGASIS protocol presents twice or more performance in comparison with the LEACH (low energy adaptive clustering hierarchy) protocol. However, the PEGASIS protocol causes the redundant data transmission since one of nodes on the chain is selected as the head node regardless of the base station's location. In this paper, we propose the enhanced PEGASIS protocol based on the concentric clustering scheme to solve this problem. The main idea of the concentric clustering scheme is to consider the location of the base station to enhance its performance and to prolong the lifetime of the wireless sensor networks. As simulation results, the enhanced PEGASIS protocol using the concentric clustering scheme performs better than the current PEGASIS protocol by about 35%." }, { "instance_id": "R26654xR26649", "comparison_id": "R26654", "paper_id": "R26649", "text": "An Adaptive Data Dissemination Strategy for Wireless Sensor Networks Future large-scale sensor networks may comprise thousands of wirelessly connected sensor nodes that could provide an unimaginable opportunity to interact with physical phenomena in real time. However, the nodes are typically highly resource-constrained. Since the communication task is a significant power consumer, various attempts have been made to introduce energy-awareness at different levels within the communication stack. 
Clustering is one such attempt to control energy dissipation for sensor data dissemination in a multihop fashion. The Time-Controlled Clustering Algorithm (TCCA) is proposed to realize a network-wide energy reduction. A realistic energy dissipation model is derived probabilistically to quantify the sensor network's energy consumption using the proposed clustering algorithm. A discrete-event simulator is developed to verify the mathematical model and to further investigate TCCA in other scenarios. The simulator is also extended to include the rest of the communication stack to allow a comprehensive evaluation of the proposed algorithm." }, { "instance_id": "R26729xR26711", "comparison_id": "R26729", "paper_id": "R26711", "text": "EACLE : Energy-Aware Clustering Scheme with Transmission Power Control for Sensor Networks In this paper, we propose a new energy efficient clustering scheme with transmission power control named \u201cEACLE\u201d (Energy-Aware CLustering scheme with transmission power control for sEnsor networks) for wireless sensor networks, which are composed of the following three components; \u201cEACLE clustering\u201d is a distributed clustering method by means of transmission power control, \u201cEACLE routing\u201d builds a tree rooted at a sink node and sets the paths from sensor nodes taking energy saving into consideration, and \u201cEACLE transmission timing control\u201d changes the transmission timing with different levels of transmission power to avoid packet collisions and facilitates packet binding.With an indoor wireless channel model which we obtained from channel measurement campaigns in rooms and corridors and an energy consumption model which we obtained from a measurement of a chipset, we performed computer simulations to investigate the performance of EACLE in a realistic environment. Our simulation results indicate that EACLE outperforms a conventional scheme such as EAD (Energy-Aware Data-centric routing) in terms of communication success rate and energy consumption. Furthermore, we fully discuss the impact of transmission power and timing control on the performance of EACLE." }, { "instance_id": "R26729xR26679", "comparison_id": "R26729", "paper_id": "R26679", "text": "Distributed clustering with directional antennas for wireless sensor networks This paper proposes a decentralized algorithm for organizing an ad hoc sensor network into clusters with directional antennas. The proposed autonomous clustering scheme aims to reduce the sensing redundancy and maintain sufficient sensing coverage and network connectivity in sensor networks. With directional antennas, random waiting timers, and local criterions, cluster performance may be substantially improved and sensing redundancy can be drastically suppressed. The simulation results show that the proposed scheme achieves connected coverage and provides efficient network topology management." }, { "instance_id": "R26729xR26715", "comparison_id": "R26729", "paper_id": "R26715", "text": "Mobility-based clustering protocol for wireless sensor networks with mobile nodes In this study, the authors propose a mobility-based clustering (MBC) protocol for wireless sensor networks with mobile nodes. In the proposed clustering protocol, a sensor node elects itself as a cluster-head based on its residual energy and mobility. A non-cluster-head node aims at its link stability with a cluster head during clustering according to the estimated connection time. 
Each non-cluster-head node is allocated a timeslot for data transmission in ascending order in a time division multiple address (TDMA) schedule based on the estimated connection time. In the steady-state phase, a sensor node transmits its sensed data in its timeslot and broadcasts a joint request message to join in a new cluster and avoid more packet loss when it has lost or is going to lose its connection with its cluster head. Simulation results show that the MBC protocol can reduce the packet loss by 25% compared with the cluster-based routing (CBR) protocol and 50% compared with the low-energy adaptive clustering hierarchy-mobile (LEACH-mobile) protocol. Moreover, it outperforms both the CBR protocol and the LEACH-mobile protocol in terms of average energy consumption and average control overhead, and can better adapt to a highly mobile environment." }, { "instance_id": "R26775xR26742", "comparison_id": "R26775", "paper_id": "R26742", "text": "An energy-efficient distributed unequal clustering protocol for wireless sensor networks Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called \u201chot spot\u201d or \u201cenergy hole\u201d problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly." }, { "instance_id": "R26775xR26757", "comparison_id": "R26775", "paper_id": "R26757", "text": "Unequal clustering scheme based leach for wireless sensor networks Clustering technique is an effective topology control approach which can improve the scalability and lifetime in wireless sensor networks (WSNs). LEACH is a classical clustering algorithm for low energy scheme, however, it still have some deficiencies. This paper studies LEACH protocol, and put an Improved LEACH protocol which has more reasonable set-up phase. In the cluster heads election phase, we put the energy ratio and competition distance as two elements to join the cluster head election. Simulation results demonstrate that Improved LEACH algorithm has better energy balance and prolong network lifetime." }, { "instance_id": "R26775xR26751", "comparison_id": "R26775", "paper_id": "R26751", "text": "Luca: an energy-efficient unequal clustering algorithm using location information for wireless sensor networks Over the last several years, various clustering algorithms for wireless sensor networks have been proposed to prolong network lifetime. Most clustering algorithms provide an equal cluster size using node\u2019s ID, degree and etc. However, many of these algorithms heuristically determine the cluster size, even though the cluster size significantly affects the energy consumption of the entire network. In this paper, we present a theoretical model and propose a simple clustering algorithm called Location-based Unequal Clustering Algorithm (LUCA), where each cluster has a different cluster size based on its location information which is the distance between a cluster head and a sink. 
In LUCA, in order to minimize the energy consumption of the entire network, a cluster has a larger cluster size as the distance from the sink increases. Simulation results show that LUCA achieves better energy efficiency than a conventional equal clustering algorithm." }, { "instance_id": "R26775xR26763", "comparison_id": "R26775", "paper_id": "R26763", "text": "UHEED - An Unequal Clustering Algorithm for Wireless Sensor Networks Prolonging the lifetime of wireless sensor networks has always been a determining factor when designing and deploying such networks. Clustering is one technique that can be used to extend the lifetime of sensor networks by grouping sensors together. However, there exists the hot spot problem which causes an unbalanced energy consumption in equally formed clusters. In this paper, we propose UHEED, an unequal clustering algorithm which mitigates this problem and which leads to a more uniform residual energy in the network and improves the network lifetime. Furthermore, from the simulation results presented, we were able to deduce the most appropriate unequal cluster size to be used." }, { "instance_id": "R26850xR26846", "comparison_id": "R26850", "paper_id": "R26846", "text": "A deterministic tabu search algorithm for the fleet size and mix vehicle routing problem The fleet size and mix vehicle routing problem consists of defining the types of vehicles, the number of vehicles of each type, as well as the order in which to serve the customers with each vehicle when a company has to distribute goods to a set of geographically scattered customers, with the objective of minimizing the total costs. In this paper, a heuristic algorithm based on tabu search is proposed and tested on several benchmark instances. The computational results show that the proposed algorithm produces high quality results within a reasonable computing time. Some new best solutions are reported for a set of test problems used in the literature." }, { "instance_id": "R26850xR26841", "comparison_id": "R26850", "paper_id": "R26841", "text": "A column generation approach to the heterogeneous fleet vehicle routing problem We consider a vehicle routing problem with a heterogeneous fleet of vehicles having various capacities, fixed costs and variable costs. An approach based on column generation (CG), hitherto successful only in the vehicle routing problem with time windows, is applied for its solution. A tight integer programming model is presented, the linear programming relaxation of which is solved by the CG technique. A couple of dynamic programming schemes developed for the classical vehicle routing problem are emulated with some modifications to efficiently generate feasible columns. With the tight lower bounds thereby obtained, the branch-and-bound procedure is activated to obtain an integer solution. Computational experience with the benchmark test instances confirms that our approach outperforms all the existing algorithms both in terms of the quality of solutions generated and the solution time." }, { "instance_id": "R26850xR26778", "comparison_id": "R26850", "paper_id": "R26778", "text": "A comparison of techniques for solving the fleet size and mix vehicle routing problem Summary In the fleet size and mix vehicle routing problem, one decides upon the composition and size of a possibly heterogeneous fleet of vehicles so as to minimize the sum of fixed vehicle acquisition costs and routing costs for customer deliveries.
This paper reviews some existing heuristics for this problem as well as a lower bound procedure. Based on the latter, a new heuristic is presented. Computational results are provided for a number of benchmark problems in order to compare the performance of the different solution methods." }, { "instance_id": "R26850xR26848", "comparison_id": "R26850", "paper_id": "R26848", "text": "An effective genetic algorithm for the fleet size and mix vehicle routing problems This paper studies the fleet size and mix vehicle routing problem (FSMVRP), in which the fleet is heterogeneous and its composition to be determined. We design and implement a genetic algorithm (GA) based heuristic. On a set of twenty benchmark problems it reaches the best-known solution 14 times and finds one new best solution. It also provides a competitive performance in terms of average solution." }, { "instance_id": "R26850xR26780", "comparison_id": "R26850", "paper_id": "R26780", "text": "The fleet size and mix vehicle routing problem Abstract In this paper, we address the problem of routing a fleet of vehicles from a central depot to customers with known demand. Routes originate and terminate at the central depot and obey vehicle capacity restrictions. Typically, researchers assume that all vehicles are identical. In this work, we relax the homogeneous fleet assumption. The objective is to determine optimal fleet size and mix by minimizing a total cost function which includes fixed cost and variable cost components. We describe several efficient heuristic solution procedures as well as techniques for generating a lower bound and an underestimate of the optimal solution. Finally, we present some encouraging computational results and suggestions for further study." }, { "instance_id": "R26850xR26784", "comparison_id": "R26850", "paper_id": "R26784", "text": "A new heuristic for determining fleet size and composition In the fleet size and composition vehicle routing problem (FSCVRP), one decides upon the composition of a possibly heterogeneous fleet of vehicles so as to minimize the sum of fixed vehicle acquisition costs and routing costs for customer deliveries. In this note, we build upon a previously-described lower bound procedure for the FSCVRP in order to present a new heuristic. Computational results to date have been encouraging." }, { "instance_id": "R26850xR26791", "comparison_id": "R26850", "paper_id": "R26791", "text": "Adaptation of some vehicle fleet mix heuristics Standard models used for the combined vehicle routing and vehicle fleet composition problem use the same value for the unit running cost across vehicles. In practice such a parameter depends on several factors and particularly on the capacity of the vehicle.
The purpose of this paper is to show simple modifications of some well known methods to allow for variable running costs; and also to assess the effect of neglecting such variability. Interesting numerical results, measured in terms of changes in total cost or/and fleet configuration, are found at no extra computational effort." }, { "instance_id": "R26850xR26805", "comparison_id": "R26850", "paper_id": "R26805", "text": "A tabu search heuristic for the heterogeneous fleet vehicle routing problem Abstract The Heterogeneous Fleet Vehicle Routing Problem (HVRP) is a variant of the classical Vehicle Routing Problem in which customers are served by a heterogeneous fleet of vehicles with various capacities, fixed costs, and variable costs. This article describes a tabu search heuristic for the HVRP. On a set of benchmark instances, it consistently produces high-quality solutions, including several new best-known solutions. Scope and purpose In distribution management, it is often necessary to determine a combination of least cost vehicle routes through a set of geographically scattered customers, subject to side constraints. The case most frequently studied is where all vehicles are identical. This article proposes a solution methodology for the case where the vehicle fleet is heterogeneous. It describes an efficient tabu search heuristic capable of producing high-quality solutions on a series of benchmark test problems." }, { "instance_id": "R26850xR26816", "comparison_id": "R26850", "paper_id": "R26816", "text": "Tabu search variants for the mix fleet vehicle routing problem The Mix Fleet Vehicle Routing Problem (MFVRP) involves the design of a set of minimum cost routes, originating and terminating at a central depot, for a fleet of heterogeneous vehicles with various capacities, fixed costs and variable costs to service a set of customers with known demands. This paper develops new variants of a tabu search meta-heuristic for the MFVRP. These variants use a mix of different components, including reactive tabu search concepts; variable neighbourhoods, special data memory structures and hashing functions. The reactive concept is used in a new way to trigger the switch between simple moves for intensification and more complex ones for diversification of the search strategies. The special data structures are newly introduced to efficiently search the various neighbourhood spaces. The combination of data structures and strategic balance between intensification and diversification generates an efficient and robust implementation, which is very competitive with other algorithms in the literature on a set of benchmark instances for which some new best-known solutions are provided." }, { "instance_id": "R26881xR26861", "comparison_id": "R26881", "paper_id": "R26861", "text": "A threshold accepting metaheuristic for the heterogeneous fixed fleet vehicle routing problem Abstract The purpose of this paper is to present a new metaheuristic, termed the backtracking adaptive threshold accepting algorithm, for solving the heterogeneous fixed fleet vehicle routing problem (HFFVRP). The HFFVRP is a variant of the classical vehicle routing problem (VRP) and has attracted much less attention in the operational research (OR) literature than the classical VRP. It involves the design of a set of minimum cost routes, originating and terminating at a depot, for a fleet with fixed number of vehicles of each type, with various capacities, and variable costs to service a set of customers with known demands. 
The numerical results show that the proposed algorithm is robust and efficient. New best solutions are reported over a set of published benchmark problems." }, { "instance_id": "R26881xR26875", "comparison_id": "R26881", "paper_id": "R26875", "text": "A flexible adaptive memory-based algorithm for real-life transportation operations: Two case studies from dairy and construction sector Abstract Effective routing of vehicles remains a focal goal of all modern enterprises, striving for excellence in project management with minimal investment and operational costs. This paper proposes a metaheuristic methodology for solving a practical variant of the well-known Vehicle Routing Problem, called Heterogeneous Fixed Fleet VRP (HFFVRP). Using a two-phase construction heuristic, called GEneralized ROute Construction Algorithm (GEROCA), the proposed metaheuristic approach enhances its flexibility to easily adopt various operational constraints. Via this approach, two real-life distribution problems faced by a dairy and a construction company were tackled and formulated as HFFVRP. Computational results on the aforementioned case studies show that the proposed metaheuristic approach (a) consistently outperforms previously published metaheuristic approaches we have developed to solve the HFFVRP, and (b) substantially improves upon the current practice of the company. The key result that impressed both companies\u2019 management was the improvement over the bi-objective character of their problems: the minimization of the total distribution cost as well as the minimization of the number of the given heterogeneous vehicles used." }, { "instance_id": "R26881xR26869", "comparison_id": "R26881", "paper_id": "R26869", "text": "A heuristic for the routing and carrier selection problem We consider the problem of simultaneously selecting customers to be served by external carriers and routing a heterogeneous internal fleet. Very little attention has been devoted to this problem. A recent paper proposed a heuristic solution procedure. Our paper shows that better results can be obtained by a simple method and corrects some erroneous results presented in the previous paper." }, { "instance_id": "R26918xR26896", "comparison_id": "R26918", "paper_id": "R26896", "text": "A hybrid simulated annealing for capacitated vehicle routing problems with the independent route length This paper presents a linear integer model of capacitated vehicle routing problems (VRP) with independent route length to minimize the heterogeneous fleet cost and maximize capacity utilization. In the proposed model, the fleet cost is independent of the route length and there is a hard time window at the depot. In some real-world situations, the cost of routes is independent of their length but depends on the type and capacity of the vehicles allocated to the routes, where the fleet is mainly heterogeneous. In this case, the route length or travel time is expressed as a restriction, which implies a hard time window at the depot. The proposed model is solved by a hybrid simulated annealing (SA) based on the nearest neighborhood. It is shown that the proposed model makes it possible to establish routes that serve all given customers with the minimum number of vehicles and the maximum capacity utilization. Also, the proposed heuristic can find good solutions in reasonable time. A number of small- and large-scale problems are solved and the associated results are reported."
}, { "instance_id": "R26918xR26883", "comparison_id": "R26918", "paper_id": "R26883", "text": "Lagrangian Relaxation Methods for Solving the Minimum Fleet Size Multiple Traveling Salesman Problem with Time Windows We consider the problem of finding the minimum number of vehicles required to visit once a set of nodes subject to time window constraints, for a homogeneous fleet of vehicles located at a common depot. This problem can be formulated as a network flow problem with additional time constraints. The paper presents an optimal solution approach using the augmented Lagrangian method. Two Lagrangian relaxations are studied. In the first one, the time constraints are relaxed producing network subproblems which are easy to solve, but the bound obtained is weak. In the second relaxation, constraints requiring that each node be visited are relaxed producing shortest path subproblems with time window constraints and integrality conditions. The bound produced is always excellent. Numerical results for several actual school busing problems with up to 223 nodes are discussed. Comparisons with a set partitioning formulation solved by column generation are given." }, { "instance_id": "R26918xR26900", "comparison_id": "R26918", "paper_id": "R26900", "text": "Economic Heuristic Optimization for Heterogeneous Fleet VRPHESTW A three-step local search algorithm based on a probabilistic variable neighborhood search is presented for the vehicle routing problem with a heterogeneous fleet of vehicles and soft time windows (VRPHESTW). A generation mechanism based on a greedy randomized adaptive search procedure, a diversification procedure using an extinctive selection evolution strategy, and a postoptimization method based on a threshold algorithm with restarts are considered to solve the problem. The results show the convenience of using an economic objective function to analyze the influence of the changes in the economic environment on the transportation average profit of vehicle routing problems. Near real-world vehicle routing problems need (1) an economic objective function to measure the quality of the solutions as well as (2) an appropriate guide function, which may be different from the economic objective function, for each heuristic method and for each economic scenario." }, { "instance_id": "R26918xR26913", "comparison_id": "R26918", "paper_id": "R26913", "text": "A well-scalable metaheuristic for the fleet size and mix vehicle routing problem with time windows This paper presents an efficient and well-scalable metaheuristic for fleet size and mix vehicle routing with time windows. The suggested solution method combines the strengths of well-known threshold accepting and guided local search metaheuristics to guide a set of four local search heuristics. The computational tests were done using the benchmarks of [Liu, F.-H., & Shen, S.-Y. (1999). The fleet size and mix vehicle routing problem with time windows. Journal of the Operational Research Society, 50(7), 721-732] and 600 new benchmark problems suggested in this paper. The results indicate that the suggested method is competitive and scales almost linearly up to instances with 1000 customers." }, { "instance_id": "R26982xR26938", "comparison_id": "R26982", "paper_id": "R26938", "text": "A decision support system for vehicle fleet planning Abstract A decision support system (DSS) is developed to solve the fleet planning problem. The system can be used by fleet managers to plan fleet size and mix. 
The decision support system was designed to assist managers in every step of the planning process: (i) To forecast demand; (ii) to determine relevant criteria; (iii) to generate alternative plans; (iv) to assess alternative plans with respect to the criteria determined in ii); and (v) to choose \u2018the best\u2019 plan. Emphasis of the decision support system is on flexibility. Another important feature of the decision support system is that it uses both a multicriteria approach to evaluate alternative plans and a stochastic programming model to generate plans. The system can be used to answer a wide variety of \u2018What if\u2019 questions with potentially significant cost impacts. The example provided shows how the DSS can be useful to improve vehicle fleet planning." }, { "instance_id": "R26982xR26955", "comparison_id": "R26982", "paper_id": "R26955", "text": "Global and Local Moves in Tabu Search: A Real-Life Mail Collecting Application The problem we deal with is the optimization of mail collecting at several customer sites that are scattered around an urban area. It involves the design of a set of minimum cost routes, originating and terminating at a central depot, for a fleet of vehicles that service those customer sites with known demands. We develop a Tabu Search approach where at each iteration the best move is selected among a large variety of possible moves. This new version of the metaheuristic Tabu leads us to determine a good vehicle fleet mix (cheapest cost incorporating routing and fixed vehicle costs) without violating constraints such as time restrictions and capacity." }, { "instance_id": "R26982xR26952", "comparison_id": "R26982", "paper_id": "R26952", "text": "The multi-trip vehicle routing problem The basic vehicle routing problem is concerned with the design of a set of routes to serve a given number of customers, minimising the total distance travelled. In that problem, each vehicle is assumed to be used only once during a planning period, which is typically a day, and therefore is unrepresentative of many practical situations, where a vehicle makes several journeys during a day. The present authors have previously published an algorithm which outperformed an experienced load planner working on the complex, real-life problems of Burton's Biscuits, where vehicles make more than one trip each day. This present paper uses a simplified version of that general algorithm, in order to compare it with a recently published heuristic specially designed for the theoretical multi-trip vehicle routing problem." }, { "instance_id": "R26982xR26962", "comparison_id": "R26982", "paper_id": "R26962", "text": "A robust optimization model for a cross-border logistics problem with fleet composition in an uncertain environment Since the implementation of the open-door policy in China, many Hong Kong-based manufacturers' production lines have moved to China to take advantage of the lower production cost, lower wages, and lower rental costs, and thus, the finished products must be transported from China to Hong Kong. It has been discovered that logistics management often encounters uncertainty and noisy data. In this paper, a robust optimization model is proposed to solve a cross-border logistics problem in an environment of uncertainty. 
By adjusting penalty parameters, decision-makers can determine an optimal long-term transportation strategy, including the optimal delivery routes and the optimal vehicle fleet composition to minimize total expenditure under different economic growth scenarios. We demonstrate the robustness and effectiveness of our model using the example of a Hong Kong-based manufacturing company. The analysis of the trade-off between model robustness and solution robustness is also presented." }, { "instance_id": "R26982xR26946", "comparison_id": "R26982", "paper_id": "R26946", "text": "A Tabu Search Approach for Delivering Pet Food and Flour in Switzerland In this paper, we consider a real-life vehicle routeing problem that occurs in a major Swiss company producing pet food and flour. In contrast with usual hypothetical problems, a large variety of restrictions has to be considered. The main constraints are relative to the accessibility and the time windows at customers, the carrying capacities of vehicles, the total duration of routes and the drivers' breaks. To find good solutions to this problem, we propose two heuristic methods: a fast straightforward insertion procedure and a method based on tabu search techniques. Next, the produced solutions are compared with the routes actually covered by the company. Our outcomes indicate that the total distance travelled can be reduced significantly when such methods are used." }, { "instance_id": "R27039xR27021", "comparison_id": "R27039", "paper_id": "R27021", "text": "Sizing the US destroyer fleet Abstract For the US Navy to be successful, it must make good investments in combatant ships. Historically a vital component in these decisions is expert opinion. This paper illustrates that the use of quantitative methods in conjunction with expert opinion can add considerable insight. We use the analytic hierarchy process (AHP) to gather expert opinions. Then, distributions are derived based on these expert opinions, and integrated into a mixed integer programming model to derive a distribution for the \u201ceffectiveness\u201d of a fleet with a particular mix of ships. These ideas are applied to the planning scenario for the 2015 conflict on the Korean Peninsula, one of the two key scenarios that the Department of Defense uses for planning." }, { "instance_id": "R27039xR27037", "comparison_id": "R27039", "paper_id": "R27037", "text": "Model Integrating Fleet Design and Ship Routing Problems for Coal Shipping In this paper, an integrated optimization model is developed to improve the efficiency of coal shipping. The objective is (1) to determine the types of ships and the number of each type, (2) to optimize the ship routing, therefore, to minimize the total coal shipping cost. Meanwhile, an algorithm based on two-phase tabu search is designed to solve the model. Numerical tests show that the proposed method can decrease the unit shipping cost and the average ship delay, and improve the reliability of the coal shipping system." }, { "instance_id": "R27039xR27027", "comparison_id": "R27039", "paper_id": "R27027", "text": "Ship Routing and Scheduling: Status and Perspectives The objective of this paper is to review the current status of ship routing and scheduling. We focus on literature published during the last decade. Because routing and scheduling problems are closely related to many other fleet planning problems, we have divided this review into several parts. 
We start at the strategic fleet planning level and discuss the design of fleets and sea transport systems. We continue with the tactical and operational fleet planning level and consider problems that comprise various ship routing and scheduling aspects. Here, we separately discuss the different modes of operations: industrial, tramp, and liner shipping. Finally, we take a glimpse at naval applications and other related problems that do not naturally fall into these categories. The paper also presents some perspectives regarding future developments and use of optimization-based decision-support systems for ship routing and scheduling. Several of the trends indicate both accelerating needs for and benefits from such systems and, hopefully, this paper will stimulate further research in this area." }, { "instance_id": "R27039xR26987", "comparison_id": "R27039", "paper_id": "R26987", "text": "An Industrial Ocean-Cargo Shipping Problem This paper reports the modeling and solution of an industrial ocean-cargo shipping problem. The problem involves the delivery of bulk products from an overseas port to transshipment ports on the Atlantic Coast, and then over land to customers. The decisions made include the number and the size of ships to charter in each time period during the planning horizon, the number and location of transshipment ports to use, and transportation from ports to customers. The complexity of this problem is compounded by the cost structure, which includes fixed charges in both ship charters and port operations. Such a large scale, dynamic, and stochastic problem is reduced to a solvable stationary, deterministic, and cyclical model. The process of modeling the problem and the solution of the resultant mixed integer program are described in detail. Recommendations from this study have been implemented." }, { "instance_id": "R27039xR26998", "comparison_id": "R27039", "paper_id": "R26998", "text": "Modeling the Increased Complexity of New York City's Refuse Marine Transport System The New York City Department of Sanitation operates the world's largest refuse marine transport system. Waste trucks unload their cargo at land-based transfer stations where refuse is placed in barges and then towed by tugboats to the Fresh Kills Landfill in Staten Island. In the early 1980s, the city commissioned the development of a computer-based model for use in fleet sizing and operations planning. As a result of the complexities introduced by environmental regulation and technological innovation, the marine transport system operations changed and the existing model became obsolete. Based on the success achieved with the first model in 1993, the city commissioned the development of a new model. In this paper, we present a PC-based model developed to meet the increased complexity of the system. Analysis performed for validation and calibration of the model demonstrates that it tracks well the operations of the real system. We illustrate through a detailed design exercise how to use the model to configure the system in a way that meets the requirements of the refuse marine transport system." }, { "instance_id": "R27061xR27049", "comparison_id": "R27061", "paper_id": "R27049", "text": "Smart City Components Architicture The research is essentially to modularize the structure of utilities and develop a system for following up the activities electronically on the city scale. 
The GIS operational platform will be the base for managing the infrastructure development components, with systems interoperability across the available city infrastructure related systems. The concentration will be on the available utility networks in order to develop comprehensive, common, standardized geospatial data models. The construction operations for utility networks such as electricity, water, gas, district cooling, irrigation, sewerage and communication networks need to be fully monitored on a daily basis in order to utilize the huge resources and manpower involved. These resources are allocated only to convey the operational status to the construction and execution sections that perform the required maintenance. A system that serves the decision makers in following up these activities with a proper geographical representation will definitely reduce the operational cost over the long term." }, { "instance_id": "R27061xR27059", "comparison_id": "R27061", "paper_id": "R27059", "text": "Using cloud technologies for large-scale house data in smart city In the smart city environment, a wide variety of data are collected from sensors and devices to achieve value-added services. In this paper, we especially focus on data taken from smart houses in the smart city, and propose a platform, called Scallop4SC, that stores and processes the large-scale house data. The house data is classified into log data or configuration data. Since the amount of the log is extremely large, we introduce Hadoop/MapReduce with a multi-node cluster. On top of this, we use the HBase key-value store to manage heterogeneous log data in a schemaless manner. On the other hand, to manage the configuration data, we choose MySQL to process various queries to the house data efficiently. We propose practical data models of the log data and the configuration data on HBase and MySQL, respectively. We then show how Scallop4SC works as an efficient data platform for smart city services. We implement a prototype with 12 Linux servers. We conduct an experimental evaluation to calculate device-wise energy consumption, using actual house logs recorded for one year in our smart house. Based on the result, we discuss the applicability of Scallop4SC to city-scale data processing." }, { "instance_id": "R27061xR27057", "comparison_id": "R27061", "paper_id": "R27057", "text": "Smart City Development: A Business Process-centric Conceptualisation Smart city development has been proposed as a response to urbanisation challenges and changing citizen needs in the cities. It allows the city as a complex system of systems to be efficient and integrated, in order to work as a whole, and provide effective services to citizens through its inter-connected sectors. This research attempts to conceptualise the smart city by looking at its requirements and components from a process change perspective, rather than as a merely technology-led innovation within a city. In view of that, the research also gains benefits from the principles of smart city development such as the systems thinking approach, the city as a system of systems, and the necessity of systems integration. The outcome of this study emphasises the significance of considering a city as a system of systems and the necessity of city systems integration and city process change for smart city development. Consequently, the research offers a city process-centric conceptualisation of the smart city." 
}, { "instance_id": "R27235xR27206", "comparison_id": "R27235", "paper_id": "R27206", "text": "Exchange-rate volatility, exchange-rate regime, and trade volume: evidence from the UK\u2013US export function (1889\u20131999) Abstract This paper investigated the impact of exchange-rate volatility and exchange-rate regime on the British exports to the United States using data for the period 1889\u20131999. The empirical findings suggest that neither exchange-rate volatility nor the different exchange-rate regimes that spanned the last century had an effect on export volume." }, { "instance_id": "R27235xR27168", "comparison_id": "R27235", "paper_id": "R27168", "text": "Exchange Rate Volatility and International Prices We examine how exchange rate volatility affects exporter's pricing decisions in the presence of optimal forward covering. By taking account of forward covering, we are able to derive an expression for the risk premium in the foreign exchange market, which is then estimated as a generalized ARCH model to obtain the time-dependent variance of the exchange rate. Our theory implies a connection between the estimated risk premium equation, and the influence of exchange rate volatility on export prices. In particular, we argue that if there is no risk premium, then exchange rate variance can only have a negative impact on export prices. In the presence of a risk premium, however, the effect of exchange rate variance on export prices is ambiguous, and may be statistically insignificant with aggregate data. These results are supported using data on aggregate U.S. imports and exchange rates of the dollar against the pound. yen and mark." }, { "instance_id": "R27235xR27188", "comparison_id": "R27235", "paper_id": "R27188", "text": "The Impact of Exchange Rate Volatility on International Trade: Reduced Form Estimates using the GARCH-in-mean Model Abstract In this paper, we use a multivariate GARCH-in-mean model of the reduced form of multilateral exports to examine the relationship between nominal exchange rate volatility and export flows and prices. The model imposes rationality on perceived exchange rate volatility, unlike conventional, two-step strategies. Tests are performed for five industrialized countries over the post-Bretton Woods era. We find that the GARCH conditional variance has a statistically significant impact on the reduced form equations for all countries. For most of the countries, the magnitude of the effect is stronger for export prices than quantities. In addition, the estimated magnitude of the impact of volatility on exports is not robust to using the conventional estimation strategy. (JEL F41, F31)." }, { "instance_id": "R27235xR27180", "comparison_id": "R27235", "paper_id": "R27180", "text": "Unanticipated exchange rate variability and the growth of international trade ZusammenfassungUnerwartete Wechselkursschwankungen und das Wachstum des internationalen Handels. - Der Verfasser untersucht die oft zitierte These, die Wechselkursvariabilit\u00e4t habe den internationalen Handel beeintr\u00e4chtigt. Im Gegensatz zu fr\u00fcheren Arbeiten formuliert und sch\u00e4tzt er ein Modell mit zwei Gleichungen. Davon sch\u00e4tzt die erste die Bestimmungsgr\u00fcnde der Variabilit\u00e4t der realen Wechselkurse mit dem Ziel, zwischen den erwarteten und den unerwarteten Komponenten dieser Variabilit\u00e4t unterscheiden zu k\u00f6nnen. Die zweite ist eine Gleichung in reduzierter Form f\u00fcr die Bestimmungsgr\u00fcnde des Wachstums realer Exporte. 
Diese wird zum Testen der Hypothese benutzt, da\u00df nur die unerwarteten Schwankungen der realen Wechselkurse das Wachstum der realen Exporte signifikant beeinflussen. Die Ergebnisse best\u00e4tigen diese Hypothese.R\u00e9sum\u00e9La variabilit\u00e9 non-pr\u00e9vue des taux de change et l\u2019accroissement du commerce international. - Dans cette \u00e9tude l\u2019auteur examine l\u2019hypoth\u00e8se souvent-cit\u00e9e que la variabilit\u00e9 des taux de change a emp\u00each\u00e9 l\u2019accroissement du commerce international. Contraire aux \u00e9tudes ant\u00e9rieures, il formule et estime un mod\u00e8le \u00e0 deux \u00e9quations. La premi\u00e8re \u00e9quation \u00e9value les facteurs d\u00e9terminants de la variabilit\u00e9 des taux de change r\u00e9els pour diff\u00e9rencier entre les \u00e9l\u00e9ments pr\u00e9vus et non-pr\u00e9vus de la variabilit\u00e9 des taux de change r\u00e9els. La deuxi\u00e8me est une \u00e9quation \u00e0 forme r\u00e9duite et contient les facteurs d\u00e9terminants de l\u2019accroissement des exportations r\u00e9elles. Ce mod\u00eble est utilis\u00e9 pour v\u00e9rifier l\u2019hypoth\u00e8se que seulement la variabilit\u00e9 non-pr\u00e9vue des taux de change r\u00e9els a un effet significatif sur l\u2019accroissement des exportations r\u00e9elles. Les r\u00e9sultats confirment l\u2019hypoth\u00e8se.ResumenVariabilidad no anticipada de la tasa de cambio y el crecimiento del comercio international. - En este trabajo se investiga la muy citada hip\u00f3tesis de que la variabilidad de la tasa de cambio ha inhibido el crecimiento del comercio internacional. A diferencia de trabajos previos, se formula y estima un modelo biecuacional. La primera ecuaci\u00f3n estima las determinantes de la variabilidad de la tasa de cambio real (REER), con el fin de distinguir entre los componentes anticipados y no anticipados de la variabilidad de la REER. La segunda es una ecuaci\u00f3n en forma reducida para las d\u00e9terminantes del crecimiento real de las exportaciones. Se utiliza este modelo para llevar a cabo un test de la hip\u00f3tesis de que s\u00f3lo la variabilidad no anticipada de la REER afecta significativamente el crecimiento real del volumen de exportaciones. Los resultados indican que la variabilidad no anticipada de la REER ha inhibido el crecimiento de las exportaciones, mientras que la variabilidad anticipada no ha tenido efecto alguno." }, { "instance_id": "R27235xR27190", "comparison_id": "R27235", "paper_id": "R27190", "text": "Does Exchange Rate Volatility Depress Trade Flows? Evidence from Error- Correction Models This paper examines the impact of exchange rate volatility on the trade flows of the G-7 countries in the context of a multivariate error-correction model. The error-correction models do not show any sign of parameter instability. The results indicate that the exchange rate volatility has a significant negative impact on the volume of exports in each of the G-7 countries. Assuming market participants are risk averse, these results imply that exchange rate uncertainty causes them to reduce their activities, change prices, or shift sources of demand and supply in order to minimize their exposure to the effects of exchange rate volatility. This, in turn, can change the distribution of output across many sectors in these countries. 
It is quite possible that the surprisingly weak relationship between trade flows and exchange rate volatility reported in several previous studies is due to insufficient attention to the stochastic properties of the relevant time series. Copyright 1993 by MIT Press." }, { "instance_id": "R27235xR27153", "comparison_id": "R27235", "paper_id": "R27153", "text": "Exchange rate uncertainty and foreign trade Abstract This paper starts with reviewing the existing literature on exchange rate uncertainty and trade flows. It then argues that potential costs of medium term uncertainty in exchange rates and competitiveness are likely to be much larger than that of exchange risk which has been the focus of the existing literature. Two measures of medium term exchange rate uncertainty are constructed. One is a weighted function of the magnitude of past movements in nominal exchange rates and the current deviation of the exchange rate from \u2018equilibrium\u2019, while the second depends on both the duration and the amplitude of misalignment from \u2018equilibrium\u2019 exchange rates. The empirical evidence reported in the paper suggests that when exchange rate uncertainty is defined over a medium term period it does adversely affect the trade flows of the industrial countries under review, with the notable exception of the United States." }, { "instance_id": "R27235xR27195", "comparison_id": "R27235", "paper_id": "R27195", "text": "The impact of exchange rate volatility on German-US trade flows Abstract This paper analyses the effect of exchange rate volatility on Germany-US bilateral trade flows for the period 1973:4\u20131992:9. ARCH models are used to generate a measure of exchange rate volatility and are then tested against Germany's exports to, and imports from, the US. This paper differs from many papers previously published as the effects of volatility are found to be positive and statistically significant for the period under review. The debate over the use of real or nominal exchange rate data in the derivation of volatility estimation is also addressed." }, { "instance_id": "R27235xR27144", "comparison_id": "R27235", "paper_id": "R27144", "text": "Exchange Rate Risk, Exchange Rate Regime and the Volume of International Trade The authors examine the effect of exchange-rate regimes on the volume of international trade. Bilateral trade flows among countries with floating exchange rates are higher than those among countries with fixed rates. While exchange-rate risk does reduce the volume of trade among countries regardless of the nature of their exchange-rate regime, the greater risk faced by traders in floating exchange-rate countries is more than offset by the trade-reducing effects of restrictive commercial policies imposed by fixed exchange rate countries. Copyright 1988 by WWZ and Helbing & Lichtenhahn Verlag AG" }, { "instance_id": "R27235xR27149", "comparison_id": "R27235", "paper_id": "R27149", "text": "Real Exchange Rate Volatility and U.S. Bilateral Trade: A VAR Approach This paper uses VAR models to investigate the impact of real exchange rate volatility on U.S. bilateral imports from the United Kingdom, France, Germany, Japan and Canada. The VAR systems include U.S. and foreign macro variables, and are estimated separately for each country. 
The major results suggest that the effect of volatility on imports is weak, although permanent shocks to volatility do have a negative impact on this measure of trade, and those effects are relatively more important over the flexible rate period. Copyright 1989 by MIT Press." }, { "instance_id": "R27235xR27131", "comparison_id": "R27235", "paper_id": "R27131", "text": "Exchange-rate variability and trade performance: evidence for the big seven industrial countries This paper presents empirical results on the relationship between exchange-rate variability and the trade of the seven major OECD countries. In contrast to other studies, the influence of the real export earnings of the oil-producing countries on the exports of these seven countries is taken into account. In addition, foreign income is measured at both \u201chigh\u201d and \u201clow\u201d dollar exchange rates to ensure that the results are not biased by the particular exchange-rate level chosen for the US dollar. Finally, both the contemporaneous and the lagged effects of exchange-rate variability on exports are tested. The results indicate that exchange-rate variability did not adversely affect the exports of any of the seven major countries during the period of flexible exchange rates." }, { "instance_id": "R27264xR27238", "comparison_id": "R27264", "paper_id": "R27238", "text": "Middleware for Robotics: A Survey The field of robotics relies heavily on various technologies such as mechatronics, computing systems, and wireless communication. Given the fast growing technological progress in these fields, robots can offer a wide range of applications. However, real world integration and application development for such a distributed system composed of many robotic modules and networked robotic devices is very difficult. Therefore, middleware services provide a novel approach offering many possibilities and drastically enhancing the application development for robots. This paper surveys the current state of middleware approaches in this domain. It discusses middleware challenges in these systems and presents some representative middleware solutions specifically designed for robots. The selection of the studied methods tries to cover most of the middleware platforms, objectives and approaches that have been proposed by researchers in this field." }, { "instance_id": "R27264xR27251", "comparison_id": "R27264", "paper_id": "R27251", "text": "An introduction to robot component model for opros(open platform for robotic services) The OPRoS (Open Platform for Robotic Service) is a platform for network based intelligent robots supported by the IT R&D program of the Ministry of Knowledge Economy of KOREA. The OPRoS technology aims at establishing a component based standard software platform for the robot which enables complicated functions to be developed easily by using standardized COTS components. The OPRoS provides a software component model for supporting reusability and compatibility of robot software components in a heterogeneous communication network. In this paper, we will introduce the OPRoS component model and its background." 
As a new attempt, we studied the fading of residual stresses under repeated stressing in two successive stress levels. The results obtained are summarized as follows: (1) Residual stresses produced by plastic torsion are of the thermal stress type near the surface, being negative at the surface layers. (2) Residual stresses subjected to repeated stressing fade noticeably in the first stage of fading and then gradually with the repetition of stress cycles. In the second stage of fading, the relation obtained between the ratio of surface residual stresses \u03c4r/\u03c4o (\u03c4r is the current value and \u03c4o is the initial value of surface residual stress) and the logarithm of cycle ratio n/N formed straight lines, and experimental formulas concerning the fading of residual stresses were established. (3) In repeated stressing under two successive stress levels, the fading of residual stresses is larger in the case of descending stressing than in the case of ascending stressing, when the same numbers of stress cycles are given to each stress level, respectively. Hardness also shows the same tendency as the residual stress." }, { "instance_id": "R27380xR27316", "comparison_id": "R27380", "paper_id": "R27316", "text": "Modelling of the Shot Peening Residual Stress Relaxation in Steel Structure Under Cyclic Loading With the help of a new description of the material cyclic softening law [1] and the elastoplastic calculation method proposed by Zarka et al [2], a theoretical model is developed for calculating shot peening residual stress relaxation under cyclic loadings. This model can take into account the modification of material mechanical properties due to shot peening, material cyclic softening and real local loading conditions. An application of this model to a shot peened plate in the steel SEA4135 subjected to repeated plane bending is presented. The calculated results predict well the experimental ones obtained by X-ray diffractometer." }, { "instance_id": "R27380xR27359", "comparison_id": "R27380", "paper_id": "R27359", "text": "Residual Stress Relaxation and Fatigue Strength of AISI 4140 under Torsional Loading after Conventional Shot Peening, Stress Peening and Warm Peening Cylindrical rods of 450\u00b0C quenched and tempered AISI 4140 were conventionally shot peened, stress peened and warm peened while rotating in the peening device. Warm peening at Tpeen = 310\u00b0C was conducted using a modified air blast shot peening machine with an electric air flow heater system. To perform stress peening using a torsional pre-stress, a device was conceived which allowed rotating pre-stressed samples without having material of the pre-loading gadget between the shot and the samples. Thus, the same peening conditions were ensured for all peening procedures. The residual stress distributions present after the different peening procedures were evaluated and compared with results obtained after peening of flat material of the same steel. The differently peened samples were subjected to torsional pulsating stresses (R = 0) at different loadings to investigate their residual stress relaxation behavior. Additionally, the pulsating torsional strengths for the differently peened samples were determined." 
}, { "instance_id": "R27380xR27283", "comparison_id": "R27380", "paper_id": "R27283", "text": "X-ray diffraction study of residual macrostresses in shot-peened and fatiqued 4130 steel A study has been made of the effects of shot peening and fatigue cycling on the residual macrostresses determined by X-ray methods in an austenitized and tempered AISI 4130 steel (150\u2013170 ksi). The results show that the effect of shot peening is to produce a residual compressive macrostress layer 0.014-in. deep. The residual-stress profile (stress vs. depth) exhibits a small negative stress gradient at and near the surface and a large positive stress gradient in the interior. Stress relaxation (due to fatique cycling) which occurred early in the fatigue history of the specimen was found greater at the surface than in the subsurface layers. Stress gradients of the stress profile increased with continued cycling and varied with depth. A correlation appears to exist between stress relaxation and stress gradients at the surface." }, { "instance_id": "R27380xR27378", "comparison_id": "R27380", "paper_id": "R27378", "text": "High temperature fatigue behavior and residual stress stability of laser-shock peened and deep rolled austenitic steel AISI 304 Abstract In this paper, we investigate how laser-shock peening and deep rolling affect the cyclic deformation and S/N-behavior of austenitic stainless steel AISI 304 at elevated temperatures (up to 600 \u00b0C). The results demonstrate that laser shock peening can produce similar amounts of lifetime enhancements as deep rolling. The cycle, stress amplitude and temperature-dependent relaxation of compressive residual stresses is more pronounced than the decrease of near-surface work hardening." }, { "instance_id": "R27380xR27285", "comparison_id": "R27380", "paper_id": "R27285", "text": "Compressive Residual Stress on Fatigue Fractured Surface X-ray fractography is a technique for analysing the cause and mechanism of fracture from the information obtained by X-ray irradiation on the fractured surface. It has been shown that a good correlation exists between the residual stress or the half value breadth of diffraction profile and the stress intensity factor that had caused the fracture. X-ray fractography has been successfully applied for the in-service failure of many types of fracture. However, in some cases the residual stresses on the fatigue fractured surface in service are compressive, which have not been found in the laboratory experiments so far. In the present study, fatigue experiments were carried out on 0.5% carbon steel to investigate the stress condition that produces compressive residual stress on the fractured surface. The specimen was a centre notched rectangular plate of 8mm thick, and a wide range of stress ratio R= o min/ o max was applied from tensile to compressive, namely R=0.50, 0.25, 0.20, 0.00, -1.67,-2.33, -2.40, and -3.Q0. From the results of experiments, it was found that, when the stress ratio was -3.00 and the minimum stress was -150MPa, the residual stress on the fractured surface became compressive. Since the minimum stress was far smaller than the compressive yield stress, the cause of the compressive residual stress was considered to be the result of crack closure. In this case, the crack opening ratio U=(\u03c3max-\u03c3op)/\u039b\u03c3, where \u03c3 op is the crack opening stress, was about 0.3 and almost constant." 
}, { "instance_id": "R27380xR27357", "comparison_id": "R27380", "paper_id": "R27357", "text": "Experimental measurement and finite element simulation of the interaction between residual stresses and mechanical loading Abstract Residual stresses, which can be produced during the manufacturing process, play an important role in an industrial environment. Residual stresses can and do change in service. In this paper, measurements of the statistical distribution of the initial residual stress in shot blast bars of En15R steel are presented. Also measured was the relaxation of the residual stresses after simple tensile and cyclic tension\u2013compression loading. Results from an elastic\u2013plastic finite element (FE) analysis of the interaction between residual stresses and mechanical loading are given. Two material hardening models were used in an FE analyses: simple linear kinematic hardening and multilinear hardening. It is shown that residual stress relaxation occurs when the applied strains are below the elastic limit. Furthermore, the results from the simulations were found to depend on the type of material model. Using the complex multilinear model led to greater residual stress relaxation compared to the simple linear model. Agreement between measurements and predictions was poor for cyclic loading, and good for simple tensile loading." }, { "instance_id": "R27461xR27413", "comparison_id": "R27461", "paper_id": "R27413", "text": "Approximate Correlations for Chevron-Type Plate Heat Exchangers There exists very little useful data representing the performance of industrial plate heat exchangers (PHEs) in the open literature. As a result, it has been difficult to arrive at any generalized correlations. While every PHE manufacturer is believed to have a comprehensive set of performance curves for their own designs, there exists the need to generate an approximate set of generalized correlations for the heat-transfer community. Such correlations can be used for preliminary designs and analytical studies. This paper attempts to develop such a set of generalized correlations to quantify the heat-transfer and pressure-drop performance of chevron-type PHEs. For this purpose, the experimental data reported by Heavner et al. were used for the turbulent region. For the laminar region, a semi-theoretical approach was used to express, for example, the friction factor as a function of the Reynolds number and the chevron angle. Asymptotic curves were used for the transitional region. Physical explanations are provided for the trends shown by the generalized correlations. The correlations are compared against the open-literature data, where appropriate. These correlations are expected to be improved in the future when more data become available." }, { "instance_id": "R27461xR27447", "comparison_id": "R27461", "paper_id": "R27447", "text": "The effect of the corrugation inclination angle on the thermohydraulic performance of plate heat exchangers Abstract It is well established that the inclination angle between plate corrugations and the overall flow direction is a major parameter in the thermohydraulic performance of plate heat exchangers. Application of an improved flow visualization technique has demonstrated that at angles up to about 80\u00b0 the fluid flows mainly along the furrows on each plate. A secondary, swirling motion is imposed on the flow along a furrow when its path is crossed by streams flowing along furrows on the opposite wall. 
Through the use of the electrochemical mass transfer analogue, it is proved that this secondary motion determines the transfer process; as a consequence of this motion the transfer is fairly uniformly distributed across the width of the plates. The observed maximum transfer rate at an angle of about 80\u00b0 is explained from the observed flow patterns. At higher angles the flow pattern becomes less effective for transfer; in particular at 90\u00b0 marked flow separation is observed." }, { "instance_id": "R27620xR27514", "comparison_id": "R27620", "paper_id": "R27514", "text": "Energy consumption, employment and causality in Japan: a multivariate approach Using Hsiao's version of Granger causality and cointegration, this study finds that employment (EP), energy consumption (EC), Real GNP (RGNP) and capital are not cointegrated. EC is found to negatively cause EP whereas EP and RGNP are found to directly cause EC. It is also found that capital negatively Granger-causes EP while RGNP and EP are found to strongly influence EC. The findings of this study seem to suggest that a policy of energy conservation may not be detrimental to a country such as Japan. In addition, the finding that energy and capital are substitutes implies that energy conservation will promote capital formation, holding output constant." }, { "instance_id": "R27620xR27602", "comparison_id": "R27620", "paper_id": "R27602", "text": "Energy consumption and economic growth: a causality analysis for Greece This paper investigates the causal relationship between aggregated and disaggregated levels of energy consumption and economic growth for Greece for the period 1960-2006 through the application of a later development in the methodology of time series proposed by Toda and Yamamoto (1995). At aggregated levels of energy consumption, empirical findings suggest the presence of a uni-directional causal relationship running from total energy consumption to real GDP. At disaggregated levels, empirical evidence suggests that there is a bi-directional causal relationship between industrial and residential energy consumption and real GDP, but this is not the case for transport energy consumption, with a causal relationship being identified in neither direction. The importance of these findings lies in their policy implications and their adoption in structural policies affecting energy consumption in Greece, suggesting that in order to address energy import dependence and environmental concerns without hindering economic growth, emphasis should be put on the demand side and energy efficiency improvements." }, { "instance_id": "R27620xR27491", "comparison_id": "R27620", "paper_id": "R27491", "text": "The relationship between energy and GNP: further results This paper reexamines the causality between GNP and energy consumption by using updated US data for the period 1947\u20131979. As a secondary contribution, we investigate the causal relationship between energy consumption and employment. Applying Sims' technique, we find no causal relationship between GNP and energy consumption. We find further that there is a slight unidirectional flow running from employment to energy consumption. Economic interpretations of the empirical results are also presented." 
}, { "instance_id": "R27620xR27558", "comparison_id": "R27620", "paper_id": "R27558", "text": "The impact of energy consumption on economic growth: evidence from linear and nonlinear models in Taiwan This paper considers the possibility of both a linear effect and nonlinear effect of energy consumption on economic growth, using data for the period 1955\u20132003 in Taiwan. We find evidence of a level-dependent effect between the two variables. Allowing for a nonlinear effect of energy consumption growth sheds new light on the explanation of the characteristics of the energy-growth link. We also provide evidence that the relationship between energy consumption and economic growth in Taiwan is characterized by an inverse U-shape. Some previous studies support the view that energy consumption may promote economic growth. However, the conclusion drawn from the empirical findings suggests that such a relationship exists only where there is a low level of energy consumption in Taiwan. We show that a threshold regression provides a better empirical model than the standard linear model and that policy-makers should seek to capture economic structures associated with different stages of economic growth. It is also worth noting that the energy consumption threshold was reached in the case of Taiwan in the world energy crises periods of 1979 and 1982." }, { "instance_id": "R27620xR27503", "comparison_id": "R27620", "paper_id": "R27503", "text": "Energy and economic growth in the USA. A multivariate approach Abstract This paper examines the causal relationship between GDP and energy use for the period 1947-90 in the USA. The relationship between energy use and economic growth has been examined by both biophysical and neoclassical economists. In particular, several studies have tested for the presence of a causal relationship (in the Granger sense) between energy use and economic growth. However, these tests do not allow a direct test of the relative explanatory powers of the neoclassical and biophysical models. A multivariate adaptation of the test-vector autoregression (VAR) does allow such a test. A VAR of GDP, energy use, capital stock and employment is estimated and Granger tests for causal relationships between the variables are carried out. Although there is no evidence that gross energy use Granger causes GDP, a measure of final energy use adjusted for changing fuel composition does Granger cause GDP." }, { "instance_id": "R27620xR27541", "comparison_id": "R27620", "paper_id": "R27541", "text": "Energy use and output growth in Canada: a multivariate cointegration analysis Using a neo-classical one-sector aggregate production technology where capital, labor and energy are treated as separate inputs, this paper develops a vector error-correction (VEC) model to test for the existence and direction of causality between output growth and energy use in Canada. Using the Johansen cointegration technique, the empirical findings indicate that the long-run movements of output, labor, capital and energy use in Canada are related by two cointegrating vectors. Then using a VEC specification, the short-run dynamics of the variables indicate that Granger-causality is running in both directions between output growth and energy use. Hence, an important policy implication of the analysis is that energy can be considered as a limiting factor to output growth in Canada." 
}, { "instance_id": "R27620xR27566", "comparison_id": "R27620", "paper_id": "R27566", "text": "Energy consumption and economic activities in Iran Abstract The causal relationship between overall GDP, industrial and agricultural value added and consumption of different kinds of energy are investigated using vector error correction model for the case of Iran within 1967\u20132003. A long-run unidirectional relationship from GDP to total energy and bidirectional relationship between GDP and gas as well as GDP and petroleum products consumption for the whole economy was discovered. Causality is running from value added to total energy, electricity, gas and petroleum products consumption and from gas consumption to value added in industrial sector. The long-run bidirectional relations hold between value added and total energy, electricity and petroleum products consumption in the agricultural sector. The short-run causality runs from GDP to total energy and petroleum products consumption, and also industrial value added to total energy and petroleum products consumption in this sector." }, { "instance_id": "R27620xR27560", "comparison_id": "R27620", "paper_id": "R27560", "text": "Sectoral energy consumption by source and economic growth in Turkey This paper provides a detailed analysis of the energy consumption in Turkey during the last 40 years. It investigates the causal relationships between income and energy consumption in two ways: first, the relationship is studied at the aggregate level; then, we focus on the industrial sector. Previous findings suggest that, in the case of Turkey, there is a unidirectional causality running from energy consumption to growth. However, our findings suggest that in the long run, income and energy consumption appear to be neutral with respect to each other both at the aggregate and at the industrial level. We also find a strong evidence of instantaneous causality, which means that contemporaneous values of energy consumption and income are correlated. Furthermore, a descriptive analysis is conducted in order to reveal the differences in the use of energy resources. We conclude that energy conservation policies are necessary for environmental concerns and our empirical results imply that such policies would not impede economic growth in the long term." }, { "instance_id": "R27620xR27507", "comparison_id": "R27620", "paper_id": "R27507", "text": "An investigation of co-integration and causality between energy consumption and economic activity in Taiwan Applying Hsiao'a version of the Granger causality method, this paper examines the causality between energy and GNP and energy and employment by applying recently developed techniques of co-integration and Hsiao's version of the Granger causality to Taiwanese data for the 1955\u20131993 period. The Phillips-Perron tests reveal that the series with the exception of GNP are not stationary and therefore differencing is performed to secure stationarity. The study finds causality running from GDP to energy consumption without feedback in Taiwan. It is also found that causality runs from GDP to energy but not vice versa." }, { "instance_id": "R27620xR27543", "comparison_id": "R27620", "paper_id": "R27543", "text": "Causal relationship between energy consumption and GDP: the case of Korea 1970-1999 Abstract Causal relationship between energy consumption and economic growth is investigated applying a multivariate model of capital, labor, energy and GDP. 
The usual BTU energy aggregate is substituted with a Divisia aggregate in an attempt to mitigate aggregation bias. To test for Granger causality in the presence of cointegration among the variables, we employ a vector error correction model rather than a vector autoregressive model. Empirical results for Korea over the period 1970\u20131999 suggest a long run bidirectional causal relationship between energy and GDP, and short run unidirectional causality running from energy to GDP. The source of causation in the long run is found to be the error correction terms in both directions." }, { "instance_id": "R27620xR27568", "comparison_id": "R27620", "paper_id": "R27568", "text": "Energy consumption and GDP in Turkey: is there a co-integration relationship? Energy consumption and GDP are expected to grow by 5.9% and 7% annually until 2025 in Turkey. This paper tries to unfold the linkage between energy consumption and GDP by undertaking a co-integration analysis for Turkey with annual data over the period 1970-2003. The analysis shows that energy consumption and GDP are co-integrated. This means that there is a (possibly bi-directional) causality relationship between the two. We establish that there is a unidirectional causality running from GDP to energy consumption, indicating that energy saving would not harm economic growth in Turkey. In addition, we find that energy consumption keeps on growing as long as the economy grows in Turkey." }, { "instance_id": "R27705xR27659", "comparison_id": "R27705", "paper_id": "R27659", "text": "The electricity consumption and GDP nexus dynamic Fiji Islands Fiji is a small open island economy dependent on energy for its growth and development; hence, the relationship between energy consumption and economic growth is crucial for Fiji's development. In this paper, we investigate the nexus between electricity consumption and economic growth for Fiji within a multivariate framework through including the labour force variable. We use the bounds testing approach to cointegration and find that electricity consumption, GDP and labour force are only cointegrated when GDP is the endogenous variable. We use the Granger causality F-test and find that in the long-run causality runs from electricity consumption and labour force to GDP, implying that Fiji is an energy dependent country and thus energy conservation policies will have an adverse effect on Fiji's economic growth." }, { "instance_id": "R27705xR27653", "comparison_id": "R27705", "paper_id": "R27653", "text": "Causality relationship between electricity consumption and GDP in Bangladesh In this paper, we examine the causal relationship between the per capita electricity consumption and the per capita GDP for Bangladesh using cointegration and a vector error correction model. Our results show that there is unidirectional causality from per capita GDP to per capita electricity consumption. However, the per capita electricity consumption does not cause per capita GDP in the case of Bangladesh. The finding has significant implications from the point of view of energy conservation, emission reduction and economic development." }, { "instance_id": "R27705xR27664", "comparison_id": "R27705", "paper_id": "R27664", "text": "Disaggregated energy consumption and GDP in Taiwan: a threshold co-integration analysis Energy consumption growth is much higher than economic growth for Taiwan in recent years, worsening its energy efficiency. 
This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test developed with asymmetric dynamic adjusting processes proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integrations between GDP and disaggregated energy consumptions are confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment process of energy consumption toward equilibrium is highly persistent when an appropriate threshold is reached. There is mean-reverting behavior when the threshold is reached, making aggregate and disaggregated energy consumptions grow faster than GDP in Taiwan." }, { "instance_id": "R27705xR27674", "comparison_id": "R27705", "paper_id": "R27674", "text": "Electricity consumption and economic growth in Nigeria: evidence from cointegration and co-feature analysis The paper investigates the causality relationship between energy consumption and economic growth for Nigeria during the period 1980-2006. The results of our estimation show that real gross domestic product (rGDP) and electricity consumption (ele) are cointegrated and there is only unidirectional Granger causality running from electricity consumption (ele) to rGDP. Then we applied the Hodrick-Prescott (HP) filter to decompose the trend and the fluctuation components of the rGDP and electricity consumption (ele) series. The estimation results show that there is cointegration between the trend and the cyclical components of the two series, which seems to suggest that the Granger causality is possibly related to the business cycle. The paper suggests that investing more and reducing inefficiency in the supply and use of electricity can further stimulate economic growth in Nigeria. The results should, however, be interpreted with caution because of the possibility of loss in power associated with the small sample size and the danger of omitted variable bias that could result from the use of bi-variate analysis." }, { "instance_id": "R27705xR27690", "comparison_id": "R27705", "paper_id": "R27690", "text": "Co-integration and causality relationship between energy consumption and economic growth: further empirical evidence for Nigeria The paper re-examined the co\u2010integration and causality relationship between energy consumption and economic growth for Nigeria using data covering the period 1970 to 2005. Unlike previous related studies for Nigeria, different proxies of energy consumption (electricity demand, domestic crude oil consumption and gas utilization) were used for the estimation. It also included government activities proxied by health expenditure and monetary policy proxied by broad money supply, though the emphasis was on energy consumption. Using the Johansen co\u2010integration technique, it was found that there existed a long run relationship among the series. It was also found that all the variables used for the study were I(1). Furthermore, unidirectional causality was established between electricity consumption and economic growth, domestic crude oil production and economic growth as well as between gas utilization and economic growth in Nigeria. 
While causality runs from electricity consumption to economic growth as well as from gas utilization to economic growth, it was found that causality runs from economic growth to domestic crude oil production. Therefore, conservation policy regarding electricity consumption and gas utilization would harm economic growth in Nigeria while energy conservation policy as regards domestic crude oil consumption would not." }, { "instance_id": "R27705xR27670", "comparison_id": "R27705", "paper_id": "R27670", "text": "Electricity consumption and economic growth, the case of Lebanon In this paper we investigate the causal relationship between electricity consumption and economic growth for Lebanon, using monthly data for Lebanon covering the period January 1995 to December 2005. Empirical results of the study confirm the absence of a long-term equilibrium relationship between electricity consumption and economic growth in Lebanon but the existence of unidirectional causality running from electricity consumption to economic growth when examined in a bivariate vector autoregression framework with change in temperature and relative humidity as exogenous variables. Thus, the policy makers in Lebanon should place priority in early stages of reconstruction on building capacity additions and infrastructure development of the electric power sector of Lebanon, as this would propel the economic growth of the country." }, { "instance_id": "R27705xR27679", "comparison_id": "R27705", "paper_id": "R27679", "text": "Electricity consumption and economic growth in South Africa: a trivariate causality test In this paper we examine the causal relationship between electricity consumption and economic growth in South Africa. We incorporate the employment rate as an intermittent variable in the bivariate model between electricity consumption and economic growth--thereby creating a simple trivariate causality framework. Our empirical results show that there is a distinct bidirectional causality between electricity consumption and economic growth in South Africa. In addition, the results show that employment in South Africa Granger-causes economic growth. The results apply irrespective of whether the causality is estimated in the short-run or in the long-run formulation. The study, therefore, recommends that policies geared towards the expansion of the electricity infrastructure should be intensified in South Africa in order to cope with the increasing demand exerted by the country's strong economic growth and rapid industrialisation programme. 
This will certainly enable the country to avoid unprecedented power outages similar to those experienced in the country in mid-January 2008." }, { "instance_id": "R27705xR27698", "comparison_id": "R27705", "paper_id": "R27698", "text": "Electricity consumption and economic growth nexus in Portugal using cointegration and causality approaches The aim of this paper is to re-examine the relationship between electricity consumption, economic growth, and employment in Portugal using the cointegration and Granger causality frameworks. This study covers the sample period from 1971 to 2009. We examine the presence of a long-run equilibrium relationship using the bounds testing approach to cointegration within the Unrestricted Error-Correction Model (UECM). Moreover, we examine the direction of causality between electricity consumption, economic growth, and employment in Portugal using the Granger causality test within the Vector Error-Correction Model (VECM). As a summary of the empirical findings, we find that electricity consumption, economic growth, and employment in Portugal are cointegrated and there is bi-directional Granger causality between the three variables in the long-run. With the exception of the Granger causality between electricity consumption and economic growth, the rest of the variables are also bi-directional Granger causality in the short-run. Furthermore, we find that there is unidirectional Granger causality running from economic growth to electricity consumption, but no evidence of reversal causality." }, { "instance_id": "R27835xR27767", "comparison_id": "R27835", "paper_id": "R27767", "text": "What Do Students Learn When Collaboratively Using A Computer Game in the Study of Historical Disease Epidemics, and Why? The use of computer games and virtual environments has been shown to engage and motivate students and can provide opportunities to visualize the historical period and make sense of complex visual information. This article presents the results of a study in which university students were asked to collaboratively solve inquiry-based problems related to historical disease epidemics using game-based learning. A multimethod approach to the data collection was used. Initial results indicated that students attended to visual information with more specificity than text-based information when using a virtual environment. Models of student\u2019s decision-making processes when interacting with the world confirmed that students were making decisions related to these visual elements, and not the inquiry process. Building on theories from the learning sciences, such as learning from animations/visualizations and computer-supported collaborative learning, in this article, the authors begin to answer the question of why students learned what they did about historical disease epidemics." }, { "instance_id": "R27835xR27739", "comparison_id": "R27835", "paper_id": "R27739", "text": "Digital Game-Based Learning in high school Computer Science education: Impact on educational effectiveness and student motivation The aim of this study was to assess the learning effectiveness and motivational appeal of a computer game for learning computer memory concepts, which was designed according to the curricular objectives and the subject matter of the Greek high school Computer Science (CS) curriculum, as compared to a similar application, encompassing identical learning objectives and content but lacking the gaming aspect. 
The study also investigated potential gender differences in the game's learning effectiveness and motivational appeal. The sample was 88 students, who were randomly assigned to two groups, one of which used the gaming application (Group A, N=47) and the other one the non-gaming one (Group B, N=41). A Computer Memory Knowledge Test (CMKT) was used as the pretest and posttest. Students were also observed during the interventions. Furthermore, after the interventions, students' views on the application they had used were elicited through a feedback questionnaire. Data analyses showed that the gaming approach was both more effective in promoting students' knowledge of computer memory concepts and more motivational than the non-gaming approach. Despite boys' greater involvement with, liking of and experience in computer gaming, and their greater initial computer memory knowledge, the learning gains that boys and girls achieved through the use of the game did not differ significantly, and the game was found to be equally motivational for boys and girls. The results suggest that within high school CS, educational computer games can be exploited as effective and motivational learning environments, regardless of students' gender." }, { "instance_id": "R27835xR27804", "comparison_id": "R27835", "paper_id": "R27804", "text": "Outdoor natural science learning with an RFID-supported immersive ubiquitous learning environment Despite their successful use in many conscientious studies involving outdoor learning applications, mobile learning systems still have certain limitations. For instance, because students cannot obtain real-time, contextaware content in outdoor locations such as historical sites, endangered animal habitats, and geological landscapes, they are unable to search, collect, share, and edit information by using information technology. To address such concerns, this work proposes an environment of ubiquitous learning with educational resources (EULER) based on radio frequency identification (RFID), augmented reality (AR), the Internet, ubiquitous computing, embedded systems, and database technologies. EULER helps teachers deliver lessons on site and cultivate student competency in adopting information technology to improve learning. To evaluate its effectiveness, we used the proposed EULER for natural science learning at the Guandu Nature Park in Taiwan. The participants were elementary school teachers and students. The analytical results revealed that the proposed EULER improves student learning. Moreover, the largely positive feedback from a post-study survey confirms the effectiveness of EULER in supporting outdoor learning and its ability to attract the interest of students." }, { "instance_id": "R27835xR27755", "comparison_id": "R27835", "paper_id": "R27755", "text": "The effects of computer games on primary school students\u2019 achievement and motivation in geography learning The implementation of a computer game for learning about geography by primary school students is the focus of this article. Researchers designed and developed a three-dimensional educational computer game. Twenty four students in fourth and fifth grades in a private school in Ankara, Turkey learnt about world continents and countries through this game for three weeks. The effects of the game environment on students' achievement and motivation and related implementation issues were examined through both quantitative and qualitative methods. 
An analysis of pre and post achievement tests showed that students made significant learning gains by participating in the game-based learning environment. When comparing their motivations while learning in the game-based learning environment and in their traditional school environment, it was found that students demonstrated statistically significant higher intrinsic motivations and statistically significant lower extrinsic motivations learning in the game-based environment. In addition, they had decreased focus on getting grades and they were more independent while participating in the game-based activities. These positive effects on learning and motivation, and the positive attitudes of students and teachers suggest that computer games can be used as an ICT tool in formal learning environments to support students in effective geography learning." }, { "instance_id": "R27835xR27753", "comparison_id": "R27835", "paper_id": "R27753", "text": "International Evaluation of a Localized Geography Educational Software A report on the implementation and evaluation of an intelligent learning system; the multimedia geography tutor and game software titled Lainos World SM was localized into English, French, Spanish, German, Portuguese, Russian and Simplified Chinese. Thereafter, multilingual online surveys were setup to which High school students were globally invited via mails to schools, targeted adverts and recruitment on Facebook, Google, etc. 1125 respondents from selected nations completed both the initial and final surveys. The effect of the software on students\u2019 geographical knowledge was analyzed through pre and post achievement test scores. In general, the mean score were higher after exposure to the educational software for fifteen days and it was established that the score differences were statistically significant. This positive effect and other qualitative data show that the localized software from students\u2019 perspective is a widely acceptable and effective educational tool for learning geography in an interactive and gaming environment.." }, { "instance_id": "R27835xR27781", "comparison_id": "R27835", "paper_id": "R27781", "text": "Gameplaying for maths learning: cooperative or not? This study investigated the effects of gameplaying on fifth-graders\u2019 maths performance and attitudes. One hundred twenty five fifth graders were recruited and assigned to a cooperative Teams-Games-Tournament (TGT), interpersonal competitive or no gameplaying condition. A state standards-based maths exam and an inventory on attitudes towards maths were used for the pretest and posttest. The students\u2019 gender, socio-economic status and prior maths ability were examined as the moderating variables and covariate. Multivariate analysis of covariance (MANCOVA) indicated that gameplaying was more effective than drills in promoting maths performance, and cooperative gameplaying was most effective for promoting positive maths attitudes regardless of students\u2019 individual differences." }, { "instance_id": "R27835xR27772", "comparison_id": "R27835", "paper_id": "R27772", "text": "The virtual playground: an educational virtual reality environment for evaluating interactivity and conceptual learning The research presented in this paper aims at investigating user interaction in immersive virtual learning environments, focusing on the role and the effect of interactivity on conceptual learning. The goal has been to examine if the learning of young users improves through interacting in (i.e. 
exploring, reacting to, and acting upon) an immersive virtual environment (VE) compared to non-interactive or non-immersive environments. Empirical work was carried out with more than 55 primary school students between the ages of 8 and 12, in different between-group experiments: an exploratory study, a pilot study, and a large-scale experiment. The latter was conducted in a virtual environment designed to simulate a playground. In this \u201cVirtual Playground,\u201d each participant was asked to complete a set of tasks designed to address arithmetical \u201cfractions\u201d problems. Three different conditions, two experimental virtual reality (VR) conditions and a non-VR condition, that varied the levels of activity and interactivity, were designed to evaluate how children accomplish the various tasks. Pre-tests, post-tests, interviews, video, audio, and log files were collected for each participant, and analysed both quantitatively and qualitatively. This paper presents a selection of case studies extracted from the qualitative analysis, which illustrate the variety of approaches taken by children in the VEs in response to visual cues and system feedback. Results suggest that the fully interactive VE aided children in problem solving but did not provide strong evidence of conceptual change as expected; rather, it was the passive VR environment, where activity was guided by a virtual robot, that seemed to support student reflection and recall, leading to indications of conceptual change." }, { "instance_id": "R27835xR27779", "comparison_id": "R27835", "paper_id": "R27779", "text": "The Effect of Using Exercise-Based Computer Games during the Process of Learning on Academic Achievement among Education Majors The aim of this study is to determine whether using exercise-based games increases the performance of learning. For this reason, two basic questions were addressed in the study. First, is there any difference in learning between the group that was given exercise-based games and the group that was not? Second, is there any difference in learning between the group that used exercise-based games at the end of the learning process and the group that did not use the games but answered the same exercise questions from the game material? This research was conducted within the subject of Testing and Evaluation in the program of Kocaeli University Primary Maths Teacher\u2019s College. An experimental design with a pretest-posttest control group was used in this study. The experimental process based on the game material was applied for 120 minutes at the end of a 3-week teaching period. The reliability values (KR-20) of the two tests used to evaluate learning level were found to be .79 and .71. The study concludes that the game materials used at the end of the learning process increased the learning levels of the teacher candidates. However, similar learning levels were observed among students who were given printed exercises instead of the learning game to reinforce the traditional instruction. This means that, when teaching games are applied in addition to traditional teaching, there is no difference in learning efficiency between the students who answered the questions in a competitive and fun format and the group who only answered the questions. This study is expected to contribute to defining in which situations games are effective." 
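The pre- and post-tests above are characterized by their KR-20 reliability coefficients (.79 and .71). As a quick reference, the sketch below shows how KR-20 is typically computed from a matrix of dichotomous item scores; the function name and the random sample data are illustrative and are not taken from the study.

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson Formula 20 reliability for dichotomous (0/1) test items.

    responses: array of shape (n_examinees, n_items); illustrative helper only.
    """
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                          # number of items
    p = responses.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Illustrative usage with random data (real test data would give a much higher value):
scores = np.random.default_rng(1).integers(0, 2, size=(30, 20))
print(round(kr20(scores), 3))
```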
}, { "instance_id": "R27835xR27764", "comparison_id": "R27835", "paper_id": "R27764", "text": "Mobile game-based learning in secondary education: engagement, motivation and learning in a mobile city game Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular project-based instruction. No significant differences were found between the two groups with respect to motivation for History or the Middle Ages. The impact of location-based technology and game-based learning on pupil knowledge and motivation are discussed along with suggestions for future research." }, { "instance_id": "R27835xR27757", "comparison_id": "R27835", "paper_id": "R27757", "text": "Combining software games with education: Evaluation of its educational effectiveness Computer games are very popular among children and adolescents. In this respect, they could be exploited by educational software designers to render educational software more attractive and motivating. However, it remains to be explored what the educational scope of educational software games is. In this paper, we explore several issues concerning the educational effectiveness, appeal and scope of educational software games through an evaluation study of an Intelligent Tutoring System (ITS) that operates as a virtual reality educational game. The results of the evaluation show that educational virtual reality games can be very motivating while retaining or even improving the educational effects on students. Moreover, one important finding of the study was that the educational effectiveness of the game was particularly high for students who used to have poor performance in the domain taught prior to their learning experience with the game." }, { "instance_id": "R27835xR27745", "comparison_id": "R27835", "paper_id": "R27745", "text": "Successful implementation of user- centered game based learning in higher education: An example from civil engineering Goal: The use of an online game for learning in higher education aims to make complex theoretical knowledge more approachable. Permanent repetition will lead to a more in-depth learning. Objective: To gain insight into whether and to what extent, online games have the potential to contribute to student learning in higher education. Experimental setting: The online game was used for the first time during a lecture on Structural Concrete at Master's level, involving 121 seventh semester students. Methods: Pre-test/post-test experimental control group design with questionnaires and an independent online evaluation. Results: The minimum learning result of playing the game was equal to that achieved with traditional methods. 
A factor called ''joy'' was introduced, according to [Nielsen, J. (2002): User empowerment and the fun factor. In Jakob Nielsen's Alertbox, July 7, 2002. Available from http://www.useit.com/alertbox/20020707.html.], which was amazingly high. Conclusion: The experimental findings support the efficacy of game playing. Students enjoyed this kind of e-learning." }, { "instance_id": "R27835xR27800", "comparison_id": "R27835", "paper_id": "R27800", "text": "Principles underlying the design of \u201cThe Number Race\u201d, an adaptive computer game for remediation of dyscalculia Abstract Background Adaptive game software has been successful in remediation of dyslexia. Here we describe the cognitive and algorithmic principles underlying the development of similar software for dyscalculia. Our software is based on current understanding of the cerebral representation of number and the hypotheses that dyscalculia is due to a \"core deficit\" in number sense or in the link between number sense and symbolic number representations. Methods \"The Number Race\" software trains children on an entertaining numerical comparison task, by presenting problems adapted to the performance level of the individual child. We report full mathematical specifications of the algorithm used, which relies on an internal model of the child's knowledge in a multidimensional \"learning space\" consisting of three difficulty dimensions: numerical distance, response deadline, and conceptual complexity (from non-symbolic numerosity processing to increasingly complex symbolic operations). Results The performance of the software was evaluated both by mathematical simulations and by five weeks of use by nine children with mathematical learning difficulties. The results indicate that the software adapts well to varying levels of initial knowledge and learning speeds. Feedback from children, parents and teachers was positive. A companion article [1] describes the evolution of number sense and arithmetic scores before and after training. Conclusion The software, open-source and freely available online, is designed for learning disabled children aged 5\u20138, and may also be useful for general instruction of normal preschool children. The learning algorithm reported is highly general, and may be applied in other domains." }, { "instance_id": "R28099xR27906", "comparison_id": "R28099", "paper_id": "R27906", "text": "A revisit to cost aggregation in stereo matching: How far can we reduce its computational redundancy? This paper presents a novel method for performing an efficient cost aggregation in stereo matching. The cost aggregation problem is re-formulated with a perspective of a histogram, and it gives us a potential to reduce the complexity of the cost aggregation significantly. Different from the previous methods which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy which exists among the search range, caused by a repeated filtering for all disparity hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The trade-off between accuracy and complexity is extensively investigated into parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity. This work provides new insights into complexity-constrained stereo matching algorithm design." 
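The cost aggregation abstract above removes the redundancy of re-filtering the cost volume once for every disparity hypothesis. For reference, the sketch below shows the plain fixed-window baseline that this line of work improves on: a box-filtered sum-of-absolute-differences cost followed by winner-takes-all selection. It is a generic illustration under assumed parameter names, not the histogram-based method of the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def wta_disparity(left, right, max_disp=16, radius=3):
    """Fixed-window SAD aggregation plus winner-takes-all disparity selection.

    left, right: rectified grayscale images of identical shape (h, w), as floats.
    """
    h, w = left.shape
    box = np.ones((2 * radius + 1, 2 * radius + 1))
    volume = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Pixel-wise matching cost for disparity hypothesis d.
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # The same box filter is re-run for every hypothesis d; this repeated
        # filtering over the search range is the redundancy targeted above.
        volume[d, :, d:] = convolve2d(diff, box, mode="same")
    return np.argmin(volume, axis=0)   # winner-takes-all disparity map
```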
}, { "instance_id": "R28099xR27857", "comparison_id": "R28099", "paper_id": "R27857", "text": "Adaptive support-weight approach for correspondence search In this paper, we present a new area-based method for visual correspondence search that focuses on the dissimilarity computation. Local and area-based matching methods generally measure the similarity (or dissimilarity) between the image pixels using local support windows. In this approach, an appropriate support window should be selected adaptively for each pixel to make the measure reliable and certain. Finding the optimal support window with an arbitrary shape and size is, however, very difficult and generally known as an NP-hard problem. For this reason, unlike the existing methods that try to find an optimal support window, we adjusted the support-weight of each pixel in a given support window. The adaptive support-weight of a pixel is computed based on the photometric and geometric relationship with the pixel under consideration. Dissimilarity is then computed using the raw matching costs and support-weights of both support windows, and the correspondence is finally selected by the WTA (winner-takes-all) method. The experimental results for the rectified real images show that the proposed method successfully produces piecewise smooth disparity maps while preserving sharp depth discontinuities accurately." }, { "instance_id": "R28099xR27896", "comparison_id": "R28099", "paper_id": "R27896", "text": "Real-time disparity estimation algorithm for stereo camera systems This paper proposes a real-time stereo matching algorithm using GPU programming. The likelihood model is implemented using GPU programming for real-time operation. And the prior model is proposed to improve the accuracy of disparity estimation. First, the likelihood matching based on rank transform is implemented in GPU programming. The shared memory handling in graphic hardware is introduced in calculating the likelihood model. The prior model considers the smoothness of disparity map and is defined as a pixel-wise energy function using adaptive interaction among neighboring disparities. The disparity is determined by minimizing the joint energy function which combines the likelihood model with prior model. These processes are performed in the multi-resolution approach. The disparity map is interpolated using the reliability of likelihood model and color-based similarity in the neighborhood. This paper evaluates the proposed approach with the Middlebury stereo images. According to the experiments, the proposed algorithm shows good estimation accuracy over 30 frames/second for 640\u00d7480 image and 60 disparity range. The proposed disparity estimation algorithm is applied to real-time stereo camera system such as 3-D image display, depth-based object extraction, 3-D rendering, and so on." }, { "instance_id": "R28099xR27947", "comparison_id": "R28099", "paper_id": "R27947", "text": "A non-local cost aggregation method for stereo matching Matching cost aggregation is one of the oldest and still popular methods for stereo correspondence. While effective and efficient, cost aggregation methods typically aggregate the matching cost by summing/averaging over a user-specified, local support region. This is obviously only locally-optimal, and the computational complexity of the full-kernel implementation usually depends on the region size. In this paper, the cost aggregation problem is re-examined and a non-local solution is proposed. 
The matching cost values are aggregated adaptively based on pixel similarity on a tree structure derived from the stereo image pair to preserve depth edges. The nodes of this tree are all the image pixels, and the edges are all the edges between the nearest neighboring pixels. The similarity between any two pixels is decided by their shortest distance on the tree. The proposed method is non-local as every node receives supports from all other nodes on the tree. As can be expected, the proposed non-local solution outperforms all local cost aggregation methods on the standard (Middlebury) benchmark. Besides, it has great advantage in extremely low computational complexity: only a total of 2 addition/subtraction operations and 3 multiplication operations are required for each pixel at each disparity level. It is very close to the complexity of unnormalized box filtering using integral image which requires 6 addition/subtraction operations. Unnormalized box filter is the fastest local cost aggregation method but blurs across depth edges. The proposed method was tested on a MacBook Air laptop computer with a 1.8 GHz Intel Core i7 CPU and 4 GB memory. The average runtime on the Middlebury data sets is about 90 milliseconds, and is only about 1.25\u00d7 slower than unnormalized box filter. A non-local disparity refinement method is also proposed based on the non-local cost aggregation method." }, { "instance_id": "R28099xR28093", "comparison_id": "R28099", "paper_id": "R28093", "text": "A stereo matching approach based on particle filters and scattered control landmarks In robot localization, particle filtering can estimate the position of a robot in a known environment with the help of sensor data. In this paper, we present an approach based on particle filtering, for accurate stereo matching. The proposed method consists of three parts. First, we utilize multiple disparity maps in order to acquire a very distinctive set of features called landmarks, and then we use segmentation as a grouping technique. Secondly, we apply scan line particle filtering using the corresponding landmarks as a virtual sensor data to estimate the best disparity value. Lastly, we reduce the computational redundancy of particle filtering in our stereo correspondence with a Markov chain model, given the previous scan line values. More precisely, we assist particle filtering convergence by adding a proportional weight in the predicted disparity value estimated by Markov chains. In addition to this, we optimize our results by applying a plane fitting algorithm along with a histogram technique to refine any outliers. This work provides new insights into stereo matching methodologies by taking advantage of global geometrical and spatial information from distinctive landmarks. Experimental results show that our approach is capable of providing high-quality disparity maps comparable to other well-known contemporary techniques. Display Omitted A stereo matching approach motivated by the particle filter framework in robot localization.Highly accurate GCPs, acquired by the computation of multiple cost efficient disparity maps.A Markov chain model has been introduced in the process to reduce the computational complexity of particle filtering.Application of RANSAC algorithm along with a histogram technique to refine any outliers." 
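The non-local aggregation abstract above sums matching costs over a spanning tree of the image pixels using two sequential passes. Assuming the tree is already built and is available as parent pointers listed in root-first order, one common formulation of those two passes is sketched below; the exponential similarity weight follows the distance-based description in the abstract, but the exact constants and tree construction of the paper are not reproduced here.

```python
import numpy as np

def tree_aggregate(cost, parent, order, edge_dist, sigma=0.1):
    """Two-pass cost aggregation over a spanning tree of the image pixels.

    cost:      (n,) raw matching cost per node for one disparity hypothesis.
    parent:    (n,) index of each node's parent; the root has parent -1.
    order:     (n,) node indices sorted so every parent precedes its children.
    edge_dist: (n,) distance (e.g. colour difference) to the parent; ignored at the root.
    """
    s = np.exp(-edge_dist / sigma)       # similarity between a node and its parent
    up = cost.astype(float).copy()
    # Pass 1 (leaf-to-root): children push weighted support up to their parents.
    for v in order[::-1]:
        p = parent[v]
        if p >= 0:
            up[p] += s[v] * up[v]
    agg = up.copy()
    # Pass 2 (root-to-leaf): parents push the remaining support back down.
    for v in order:
        p = parent[v]
        if p >= 0:
            agg[v] = s[v] * agg[p] + (1.0 - s[v] ** 2) * up[v]
    return agg
```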
}, { "instance_id": "R28099xR27942", "comparison_id": "R28099", "paper_id": "R27942", "text": "Real-time stereo matching based on fast belief propagation In this paper, a global optimum stereo matching algorithm based on improved belief propagation is presented which is demonstrated to generate high quality results while maintaining real-time performance. These results are achieved using a foundation based on the hierarchical belief propagation architecture combined with a novel asymmetric occlusion handling model, as well as parallel graphical processing. Compared to the other real-time methods, the experimental results on Middlebury data show the efficiency of our approach." }, { "instance_id": "R28099xR28016", "comparison_id": "R28099", "paper_id": "R28016", "text": "Domain Transformation-Based Efficient Cost Aggregation for Local Stereo Matching Binocular stereo matching is one of the most important algorithms in the field of computer vision. Adaptive support-weight approaches, the current state-of-the-art local methods, produce results comparable to those generated by global methods. However, excessive time consumption is the main problem of these algorithms since the computational complexity is proportionally related to the support window size. In this paper, we present a novel cost aggregation method inspired by domain transformation, a recently proposed dimensionality reduction technique. This transformation enables the aggregation of 2-D cost data to be performed using a sequence of 1-D filters, which lowers computation and memory costs compared to conventional 2-D filters. Experiments show that the proposed method outperforms the state-of-the-art local methods in terms of computational performance, since its computational complexity is independent of the input parameters. Furthermore, according to the experimental results with the Middlebury dataset and real-world images, our algorithm is currently one of the most accurate and efficient local algorithms." }, { "instance_id": "R28099xR27928", "comparison_id": "R28099", "paper_id": "R27928", "text": "Real-time stereo on GPGPU using progressive multi-resolution adaptive windows We introduce a new GPGPU-based real-time dense stereo matching algorithm. The algorithm is based on a progressive multi-resolution pipeline which includes background modeling and dense matching with adaptive windows. For applications in which only moving objects are of interest, this approach effectively reduces the overall computation cost quite significantly, and preserves the high definition details. Running on an off-the-shelf commodity graphics card, our implementation achieves a 36 fps stereo matching on 1024x768 stereo video with a fine 256 pixel disparity range. This is effectively same as 7200M disparity evaluations per second. For scenes where the static background assumption holds, our approach outperforms all published alternative algorithms in terms of the speed performance, by a large margin. We envision a number of potential applications such as real-time motion capture, as well as tracking, recognition and identification of moving objects in multi-camera networks." 
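The domain-transformation abstract above replaces 2-D cost aggregation with sequences of edge-aware 1-D filters, which decouples the cost from the support-window size. The sketch below gives a simplified horizontal recursive pass of that style of filtering, guided by a reference image; it illustrates the separable, edge-aware idea under assumed parameter names and is not the exact filter of the paper.

```python
import numpy as np

def edge_aware_horizontal_pass(cost, guide, sigma_s=10.0, sigma_r=0.1):
    """One horizontal, edge-aware recursive filtering pass over a cost slice.

    cost:  (h, w) matching cost for a single disparity hypothesis.
    guide: (h, w) grayscale guidance image scaled to [0, 1].
    The feedback weight shrinks across strong guide-image edges, so support
    does not leak over depth discontinuities.
    """
    out = cost.astype(float).copy()
    # Per-pixel-pair distance in the transformed domain: 1 plus the scaled gradient.
    d = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(guide, axis=1))
    a = np.exp(-np.sqrt(2.0) / sigma_s)           # base feedback coefficient
    w = a ** d                                    # edge-aware feedback per pixel pair
    for x in range(1, cost.shape[1]):             # left-to-right (causal) pass
        out[:, x] = (1.0 - w[:, x - 1]) * out[:, x] + w[:, x - 1] * out[:, x - 1]
    for x in range(cost.shape[1] - 2, -1, -1):    # right-to-left (anti-causal) pass
        out[:, x] = (1.0 - w[:, x]) * out[:, x] + w[:, x] * out[:, x + 1]
    return out

# A full 2-D aggregation would alternate this pass with the same pass applied to
# the transposed arrays (a vertical pass), once per disparity hypothesis.
```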
}, { "instance_id": "R28099xR28088", "comparison_id": "R28099", "paper_id": "R28088", "text": "Real-time stereo to multi-view conversion system based on adaptive meshing The stereo to multi-view conversion technology plays an important role in the development and promotion of three-dimensional television, which can provide adequate supply of high-quality 3D content for autostereoscopic displays. This paper focuses on a real-time implementation of the stereo to multi-view conversion system, the major parts of which are adaptive meshing, sparse stereo correspondence, energy equation construction and virtual-view rendering. To achieve the real-time performance, we make three main contributions. First, we introduce adaptive meshing to reduce the computational complexity at the expense of slight decrease in quality. Second, we use a simple and effective method based on block matching algorithm to generate the sparse disparity map. Third, for the module of block-saliency calculation, sparse stereo correspondence and view synthesis, novel parallelization strategies and fine-grained optimization techniques based on graphic processing units are used to accelerate the executing speed. Experimental results show that the system can achieve real-time and semi-real-time performance when rendering 8 views with the image resolution of 1280 \u00d7 720 and 1920 \u00d7 1080 on Tesla K20. The images and videos presented finally are both visually realistic and comfortable." }, { "instance_id": "R28099xR27909", "comparison_id": "R28099", "paper_id": "R27909", "text": "Real-time stereo matching using memory-efficient Belief Propagation for high-definition 3D telepresence systems Highlights: A real-time and high-definition stereo matching algorithm is presented. The proposal is an improved Belief Propagation algorithm with pixel classification. It also includes a message compression technique that reduces memory traffic. The total memory traffic reduction is about 90%. The algorithm improves the overall performance by more than 6%. New generations of telecommunications systems will include high-definition 3D video that provides a telepresence feeling. These systems require high-quality depth maps to be generated in a very short time (very low latency, typically about 40ms). Classical Belief Propagation algorithms (BP) generate high-quality depth maps but they require huge memory bandwidths that limit low-latency implementations of stereo-vision systems with high-definition images. This paper proposes a real-time (latency below 40ms) high-definition (1280\u00d7720) stereo matching algorithm using Belief Propagation with good immersive feeling (80 disparity levels). There are two main contributions. The first is an improved BP algorithm with pixel classification that outperforms classical BP while reducing the number of memory accesses. The second is an adaptive message compression technique with a low performance penalty that greatly reduces the memory traffic. The combination of these techniques outperforms classical BP by about 6.0% while reducing the memory traffic by more than 90%." }, { "instance_id": "R28099xR27924", "comparison_id": "R28099", "paper_id": "R27924", "text": "Disparity map refinement and 3D surface smoothing via directed anisotropic diffusion We propose a new binocular stereo algorithm and 3D reconstruction method from multiple disparity images. First, we present an accurate binocular stereo algorithm. 
In our algorithm, we use neither color segmentation nor plane fitting methods, which are common techniques among many algorithms nominated in the Middlebury ranking. These methods assume that the 3D world consists of a collection of planes and that each segment of a disparity map obeys a plane equation. We exclude these assumptions and introduce a Directed Anisotropic Diffusion technique for refining a disparity map. Second, we show a method to fill some holes in a distance map and smooth the reconstructed 3D surfaces by using another type of Anisotropic Diffusion technique. The evaluation results on the Middlebury datasets show that our stereo algorithm is competitive with other algorithms that adopt plane fitting methods. We present an experiment that shows the high accuracy of a reconstructed 3D model using our method, and the effectiveness and practicality of our proposed method in a real environment." }, { "instance_id": "R28099xR27902", "comparison_id": "R28099", "paper_id": "R27902", "text": "On building an accurate stereo matching system on graphics hardware This paper presents a GPU-based stereo matching system with good performance in both accuracy and speed. The matching cost volume is initialized with an AD-Census measure, aggregated in dynamic cross-based regions, and updated in a scanline optimization framework to produce the disparity results. Various errors in the disparity results are effectively handled in a multi-step refinement process. Each stage of the system is designed with parallelism considerations such that the computations can be accelerated with CUDA implementations. Experimental results demonstrate the accuracy and the efficiency of the system: currently it is the top performer in the Middlebury benchmark, and the results are achieved on GPU within 0.1 seconds. We also provide extra examples on stereo video sequences and discuss the limitations of the system." }, { "instance_id": "R28099xR28070", "comparison_id": "R28099", "paper_id": "R28070", "text": "Acceleration of stereomatching on multi-core CPU and GPU This paper presents an accelerated version of a dense stereo-correspondence algorithm for two different parallelism enabled architectures, multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot-head in the context of the CloPeMa research project. This research project focuses on the conception of a new clothes folding robot with real-time and high resolution requirements for the vision system. The performance analysis shows that the parallelised stereo-matching algorithm has been significantly accelerated, maintaining 12\u00d7 and 176\u00d7 speed-up respectively for multi-core CPU and GPU, compared with SISD (Single Instruction, Single Data) single-thread CPU. To analyse the origin of the speed-up and gain deeper understanding about the choice of the optimal hardware, the algorithm was broken into key sub-tasks and the performance was tested for four different hardware architectures." }, { "instance_id": "R28099xR28059", "comparison_id": "R28099", "paper_id": "R28059", "text": "Fast and Accurate Stereo Vision System on FPGA In this article, we present a fast and high quality stereo matching algorithm on FPGA using cost aggregation (CA) and fast locally consistent (FLC) dense stereo. In many software programs, global matching algorithms are used in order to obtain accurate disparity maps. 
Although their error rates are considerably low, their processing speeds are far from those required for real-time processing because of their complex processing sequences. In order to realize real-time processing, many hardware systems have been proposed to date. They have achieved considerably high processing speeds; however, their error rates are not as good as those of software programs, because simple local matching algorithms have been widely used in those systems. In our system, sophisticated local matching algorithms (CA and FLC) that are suitable for FPGA implementation are used to achieve a low error rate while maintaining a high processing speed. We evaluate the performance of our circuit on Xilinx Virtex-6 FPGAs. Its error rate is comparable to that of top-level software algorithms, and its processing speed is nearly 2 clock cycles per pixel, which reaches 507.9 fps for 640\u00d7480 pixel images." }, { "instance_id": "R28099xR27960", "comparison_id": "R28099", "paper_id": "R27960", "text": "Efficient Disparity Estimation Using Hierarchical Bilateral Disparity Structure Based Graph Cut Algorithm With a Foreground Boundary Refinement Mechanism The disparity estimation problem is commonly solved using graph cut (GC) methods, in which the disparity assignment problem is transformed into one of minimizing a global energy function. Although such an approach yields an accurate disparity map, the computational cost is relatively high. Accordingly, this paper proposes a hierarchical bilateral disparity structure (HBDS) algorithm in which the efficiency of the GC method is improved without any loss in the disparity estimation performance by dividing all the disparity levels within the stereo image hierarchically into a series of bilateral disparity structures of increasing fineness. To address the well-known foreground fattening effect, a disparity refinement process is proposed comprising a fattening foreground region detection procedure followed by a disparity recovery process. The efficiency and accuracy of the HBDS-based GC algorithm are compared with those of the conventional GC method using benchmark stereo images selected from the Middlebury dataset. In addition, the general applicability of the proposed approach is demonstrated using several real-world stereo images." }, { "instance_id": "R28099xR28074", "comparison_id": "R28099", "paper_id": "R28074", "text": "Hardware implementation of a full HD real-time disparity estimation algorithm Disparity estimation is a common task in stereo vision and usually requires a high computational effort. High resolution disparity maps are necessary to provide a good image quality on autostereoscopic displays which deliver stereo content without the need for 3D glasses. In this paper, an FPGA architecture for a disparity estimation algorithm is proposed that is capable of processing high-definition content in real-time. The resulting architecture is efficient in terms of power consumption and can be easily scaled to support higher resolutions." }, { "instance_id": "R28099xR27884", "comparison_id": "R28099", "paper_id": "R27884", "text": "Accurate and Efficient Cost Aggregation Strategy for Stereo Correspondence Based on Approximated Joint Bilateral Filtering Recent local state-of-the-art stereo algorithms based on variable cost aggregation strategies allow for inferring disparity maps comparable to those yielded by algorithms based on global optimization schemes. 
Unfortunately, though these results are excellent, they are obtained at the expense of high computational requirements that are comparable to or even higher than those required by global approaches. In this paper, we propose a cost aggregation strategy based on joint bilateral filtering and incremental calculation schemes that allow for efficient and accurate inference of disparity maps. Experimental comparison with state-of-the-art techniques shows the effectiveness of our proposal." }, { "instance_id": "R28099xR28044", "comparison_id": "R28099", "paper_id": "R28044", "text": "A modified census transform based on the neighborhood information for stereo matching algorithm Census transform is a non-parametric local transform. Its weakness is that the results rely too heavily on the center pixel. This paper proposes a modified Census transform based on the neighborhood information for stereo matching. By improving the classic Census transform, the new technique utilizes more bits to represent the differences between the pixel and its neighborhood information. The result image of the modified Census transform has more detailed information at depth discontinuities. After stereo correspondence, sub-pixel interpolation and disparity refinement, a better dense disparity map can be obtained. The experiments show that the proposed algorithm has a simple mechanism and strong robustness. It can improve the accuracy of matching and is applicable to hardware systems." }, { "instance_id": "R28140xR28129", "comparison_id": "R28140", "paper_id": "R28129", "text": "Gangliocytic Paraganglioma: Case Report and Review of the Literature Gangliocytic paraganglioma is a rare tumor, which occurs nearly exclusively in the second portion of the duodenum. Generally, this tumor has a benign clinical course, although rarely, it may recur or metastasize to regional lymph nodes. Only one case with distant metastasis has been reported. We present a case of duodenal gangliocytic paraganglioma treated first by local resection followed by pylorus-preserving pancreaticoduodenectomy. Examination of the first specimen revealed focal nuclear pleomorphism and mitotic activity, in addition to the presence of three characteristic histologic components: epithelioid, ganglion, and spindle cell. In the subsequent pancreaticoduodenectomy specimen, there was no residual tumor identified in the periampullary area, but metastatic gangliocytic paraganglioma was present in two of seven lymph nodes. This case report confirms the malignant potential of this tumor. We review the published literature on gangliocytic paragangliomas pursuing a malignant course. We conclude that surgical therapy of these neoplasms should not be limited to local resection, as disease recurrence, lymph node involvement, and rarely distant metastasis may occur." }, { "instance_id": "R28140xR28104", "comparison_id": "R28140", "paper_id": "R28104", "text": "A Metastatic Endocrine-Neurogenic Tumor of the Ampulla of Vater with Multiple Endocrine Immunoreaction Malignant Paraganglioma? The present case report demonstrates the history of a 50-year-old man with a mixed endocrine-neurogenous tumor of the ampulla of Vater. The tumor was localized endoscopically after an attack of melena. There were no signs of endocrinopathy. A local resection with suturing of the pancreatic duct was performed. 
Morphologically, there were two different tissue types (neurogenous and carcinoid-like) with numerous cells and nerve fibers reacting immunohistochemically with somatostatin and neurotensin antisera: some immunoreactivity to PP-antibodies was observed. Still, after 20 months, the patient seems to have been cured by local resection." }, { "instance_id": "R28140xR28132", "comparison_id": "R28140", "paper_id": "R28132", "text": "An unusual case of duodenal obstruction-gangliocytic paraganglioma Gangliocytic paragangliomas are rare tumors located in the gastrointestinal tract that are considered to be benign. They are composed of spindle-shaped cells, epithelioid cells, and ganglion-like cells. They usually present with abdominal pain, and/or gastrointestinal bleeding, and occasionally with obstructive jaundice. We report a case of obstruction in a 17-year-old female, which on histology was found to be a gangliocytic paraganglioma, with an extremely unusual presentation. Intraoperatively, the patient was found to have local tumor extension and regional lymph node invasion, and so she underwent a pylorus-preserving pancreaticoduodenectomy, with local lymph node clearance. We discuss the management of this unusual case and review the literature." }, { "instance_id": "R28140xR28124", "comparison_id": "R28140", "paper_id": "R28124", "text": "Paraganglioma of the ampulla of vater: a potentially malignant neoplasm Paragangliomas are rare tumours originating from neuroectodermic remnants and are usually considered as benign. We present two cases of paraganglioma of the ampulla of Vater that were treated surgically by pancreaticoduodenectomy. In one case, histopathology revealed malignant characteristics of the tumour with invasion of the pancreas and simultaneous duodenal lymph\u2010node metastases. Both patients had a favourable outcome without disease recurrence at 40 and 44 months postoperatively. Only 21 cases of ampullary paraganglioma have been reported in the literature, 7 of them with malignant characteristics. In conclusion, paragangliomas of the ampulla of Vater have malignant potential. Surgical therapy of these tumours should not be limited to local resection, as disease recurrence and lymph node involvement have been reported. We propose that paragangliomas of the ampulla of Vater should be operated by cephalic pancreaticoduodenectomy, which allows long\u2010term and disease\u2010free survival." }, { "instance_id": "R28191xR28179", "comparison_id": "R28191", "paper_id": "R28179", "text": "An optimal containership slot allocation for liner shipping revenue management In the competitive liner shipping market, carriers may utilize revenue management systems to increase profits by using slot allocation and pricing. In this paper, related research on revenue management for transportation industries is reviewed. A conceptual model for liner shipping revenue management (LSRM) is proposed and a slot allocation model is formulated through mathematical programming to maximize freight contribution. We illustrate this slot allocation model with a case study of a Taiwan liner shipping company and the results show the applicability and better performances than the previous allocation used in practice." 
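The slot allocation abstract above casts ship-slot allocation as a mathematical program that maximizes freight contribution subject to capacity constraints. A toy, single-leg LP relaxation of that kind of model is sketched below using scipy; the revenue, demand, and capacity figures are made up for illustration, and the formulation is far simpler than the multi-leg integer model of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: three cargo types competing for slots on one voyage leg.
revenue = np.array([900.0, 650.0, 400.0])    # contribution per allocated slot (TEU)
demand = np.array([120, 200, 300])           # forecast bookings (upper bounds)
capacity = 450                               # slots available on the leg

# linprog minimizes, so negate the revenue vector to maximize freight contribution.
res = linprog(
    c=-revenue,
    A_ub=np.ones((1, 3)),                    # total allocated slots <= vessel capacity
    b_ub=[capacity],
    bounds=[(0, d) for d in demand],         # cannot allocate more than the demand
    method="highs",
)
print("slot allocation:", res.x)             # LP relaxation of the allocation IP
print("freight contribution:", -res.fun)
```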
}, { "instance_id": "R28191xR28144", "comparison_id": "R28191", "paper_id": "R28144", "text": "A frequency-based maritime container assignment model This paper transfers the classic frequency-based transit assignment method of Spiess and Florian to containers demonstrating its promise as the basis for a global maritime container assignment model. In this model, containers are carried by shipping lines operating strings (or port rotations) with given service frequencies. An origin-destination matrix of full containers is assigned to these strings to minimize sailing time plus container dwell time at the origin port and any intermediate transhipment ports. This necessitated two significant model extensions. The first involves the repositioning of empty containers so that a net outflow of full containers from any port is balanced by a net inflow of empty containers, and vice versa. As with full containers, empty containers are repositioned to minimize the sum of sailing and dwell time, with a facility to discount the dwell time of empty containers in recognition of the absence of inventory. The second involves the inclusion of an upper limit to the maximum number of container moves per unit time at any port. The dual variable for this constraint provides a shadow price, or surcharge, for loading or unloading a container at a congested port. Insight into the interpretation of the dual variables is given by proposition and proof. Model behaviour is illustrated by a simple numerical example. The paper concludes by considering the next steps toward realising a container assignment model that can, amongst other things, support the assessment of supply chain vulnerability to maritime disruptions." }, { "instance_id": "R28191xR28166", "comparison_id": "R28191", "paper_id": "R28166", "text": "Seasonal slot allocation planning for a container liner shipping service This research addresses a slot allocation planning problem of the container shipping company for satisfying the estimated seasonal demands on a liner service. We explore in detail the influenced factors of planning and construct a quantitative model for the optimum allocation of the ship\u2019s slot spaces. An integer programming model is formulated to maximize the potential profits per round trip voyage for a liner company, and a real life example of an eastern Asia short sea service has been studied. Analysis results reveal that containers with the higher contributions like reefers and 40 feet dry containers have priorities to be allocated more than others, but not all because of satisfying necessary operational constraints. Our model is not only providing a higher space utilization rate and more detailed allocation results, but also helpful for the ship size assessment in long-term planning." }, { "instance_id": "R28191xR28185", "comparison_id": "R28191", "paper_id": "R28185", "text": "Robust optimization model for resource allocation of container shipping lines Abstract The operating efficiency of container shipping lines depends on proper resource allocation of container shipping. A deterministic model was developed for shipping lines based on the equilibrium principle. The objective was to optimize the resource allocation for container lines considering ship size, container deployment, and slot allocation. The deterministic model was then expanded to a robust optimization model accounting for the uncertain factors, while ship size was treated as the design variable and slot allocation as the control variable. 
The effectiveness of the proposed model is demonstrated using a pendulum shipping line as an example. The results indicate that infeasible solutions will increase and the model robustness will be enhanced by an increased penalty coefficient and the solution robustness will be enhanced by increasing the preference coefficient. The optimization model simultaneously considers demand uncertainty, model robustness, and risk preference of the decision maker to agree better with actual practices." }, { "instance_id": "R28235xR28205", "comparison_id": "R28235", "paper_id": "R28205", "text": "Maritime repositioning of empty containers under uncertain port disruptions This paper addresses the problem of repositioning empty containers in maritime networks under possible port disruptions. Since drastically different futures may occur, the decision making process for dealing with this problem cannot ignore the uncertain nature of its parameters. In this paper, we consider the uncertainty of relevant problem data by a stochastic programming approach, in which different scenarios are included in a multi-scenario optimization model and linked by non-anticipativity conditions. Numerical experiments show that the multi-scenario solutions provide a hedge against uncertainty when compared to deterministic decisions and exhibit some forms of robustness, which mitigate the risks of not meeting empty container demand." }, { "instance_id": "R28235xR28220", "comparison_id": "R28235", "paper_id": "R28220", "text": "Empty Container Management in Cyclic Shipping Routes This paper considers the empty container management problem in a cyclic shipping route. The objective is to seek the optimal empty container repositioning policy in a dynamic and stochastic situation by minimising the expected total costs consisting of inventory holding costs, demand lost-sale costs, lifting-on and lifting-off charges, and container transportation costs. A three-phase threshold control policy is developed to reposition empty containers in cyclic routes. The threshold values are determined based on the information of average customer demands and their variability. A non-repositioning policy and three other heuristic policies are introduced for purposes of comparison. Simulation is used to evaluate the performance of empty repositioning policies. A range of numerical examples with different demand patterns; degrees of uncertainty, and fleet sizes demonstrate that the threshold policy significantly outperforms the heuristic policies." }, { "instance_id": "R28235xR28202", "comparison_id": "R28235", "paper_id": "R28202", "text": "The effect of multi-scenario policies on empty container repositioning This study addresses a repositioning problem where some ports impose several restrictions on the storage of empty containers, sailing distances are short, information becomes available close to decision times and decisions are made in a highly uncertain environment. Although point forecasts are available and probabilistic distributions can be derived from the historical database, specific changes in the operational environment may give rise to the realization of parameters that were never observed in the past. Since historical statistics are useless for decision-making processes, we propose a time- extended multi-scenario optimization model in which scenarios are generated by shipping company opinions. We then show the importance of adopting multi-scenario policies compared to standard deterministic ones." 
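Several of the repositioning abstracts above evaluate threshold-type control policies whose threshold values are set from the mean and the variability of demand. The sketch below simulates a generic two-threshold rule at a single port; it is not the three-phase policy of the cyclic-route paper, and every parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_port(mean_demand=40.0, std_demand=12.0, weeks=52,
                  start_stock=100.0, safety_factor=1.5):
    """Generic two-threshold empty-container repositioning rule at one port."""
    lower = mean_demand + safety_factor * std_demand        # bring-in threshold
    upper = 2.0 * mean_demand + safety_factor * std_demand  # ship-out threshold
    stock, moved_in, moved_out, lost = start_stock, 0.0, 0.0, 0.0
    for _ in range(weeks):
        demand = max(0.0, rng.normal(mean_demand, std_demand))
        served = min(stock, demand)
        lost += demand - served      # demand lost when empties run out
        stock -= served
        if stock < lower:            # reposition empties in, up to the lower threshold
            moved_in += lower - stock
            stock = lower
        elif stock > upper:          # evacuate the surplus above the upper threshold
            moved_out += stock - upper
            stock = upper
    return {"lost_demand": lost, "repositioned_in": moved_in,
            "repositioned_out": moved_out, "end_stock": stock}

print(simulate_port())
```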
}, { "instance_id": "R28235xR28226", "comparison_id": "R28235", "paper_id": "R28226", "text": "Cargo routing and empty container repositioning in multiple shipping service routes This paper considers the problem of joint cargo routing and empty container repositioning at the operational level for a shipping network with multiple service routes, multiple deployed vessels and multiple regular voyages. The objective is to minimize the total relevant costs in the planning horizon including: container lifting on/off costs at ports, customer demand backlog costs, the demurrage (or waiting) costs at the transhipment ports for temporarily storing laden containers, the empty container inventory costs at ports, and the empty container transportation costs. The laden container routing from the original port to the destination port is limited with at most three service routes. Two solution methods are proposed to solve the optimization problem. The first is a two-stage shortest-path based integer programming method, which combines a cargo routing algorithm with an integer programming of the dynamic system. The second is a two-stage heuristic-rules based integer programming method, which combines an integer programming of the static system with a heuristic implementation algorithm in dynamic system. The two solution methods are applied to two case studies with 30 different scenarios and compared with a practical policy. The results show that two solution methods perform substantially better than the practical policy. The shortest-path based method is preferable for relatively small-scale problems as it yields slightly better solution than the heuristic-rules based method. However, the heuristic-rules based method has advantages in its applicability to large-scale realistic systems while producing good performance, to which the shortest-path based method may be computationally inapplicable. Moreover, the heuristic-rules based method can also be applied to stochastic situations because its second stage is rule-based and dynamical." }, { "instance_id": "R28333xR28313", "comparison_id": "R28333", "paper_id": "R28313", "text": "Liner ship route schedule design with sea contingency time and port time uncertainty This paper deals with a tactical-level liner ship route schedule design problem which aims to determine the arrival time of a ship at each portcall on a ship route and the sailing speed function on each voyage leg by taking into account time uncertainties at sea and at port. It first derives the optimality condition for the sailing speed function with sea contingency and subsequently demonstrates the convexity of the bunker consumption function. A mixed-integer non-linear stochastic programming model is developed for the proposed liner ship route schedule design problem by minimizing the ship cost and expected bunker cost while maintaining a required transit time service level. In view of the special structure of the model, an exact cutting-plane based solution algorithm is proposed. Numerical experiments on real data provided by a global liner shipping company demonstrate that the proposed algorithm can efficiently solve real-case problems." }, { "instance_id": "R28333xR28250", "comparison_id": "R28333", "paper_id": "R28250", "text": "Container vessel scheduling with bi-directional flows We consider a strongly NP-hard container vessel scheduling problem with bi-directional flows. We show that a special case of it is solvable as a linear program. 
This property is then used to design a heuristic for the general case." }, { "instance_id": "R28333xR28320", "comparison_id": "R28333", "paper_id": "R28320", "text": "Bunker consumption optimization methods in shipping: A critical review and extensions It is crucial nowadays for shipping companies to reduce bunker consumption while maintaining a certain level of shipping service in view of the high bunker price and concerned shipping emissions. After introducing the three bunker consumption optimization contexts: minimization of total operating cost, minimization of emission and collaborative mechanisms between port operators and shipping companies, this paper presents a critical and timely literature review on mathematical solution methods for bunker consumption optimization problems. Several novel bunker consumption optimization methods are subsequently proposed. The applicability, optimality, and efficiency of the existing and newly proposed methods are also analyzed. This paper provides technical guidelines and insights for researchers and practitioners dealing with the bunker consumption issues." }, { "instance_id": "R28333xR28260", "comparison_id": "R28333", "paper_id": "R28260", "text": "Analysis of an exact algorithm for the vessel speed optimization problem Increased fuel costs together with environmental concerns have led shipping companies to consider the optimization of vessel speeds. Given a fixed sequence of port calls, each with a time window, and fuel cost as a convex function of vessel speed, we show that optimal speeds can be found in quadratic time. \u00a9 2013 Wiley Periodicals, Inc. NETWORKS, 2013" }, { "instance_id": "R28333xR28272", "comparison_id": "R28333", "paper_id": "R28272", "text": "Liner shipping service network design with empty container repositioning This paper proposes a liner shipping service network design problem with combined hub-and-spoke and multi-port-calling operations and empty container repositioning. It first introduces a novel concept - segment - defined as a pair of ordered ports served by one shipping line and subsequently develops a mixed-integer linear programming model for the proposed problem. Extensive numerical experiments based on realistic Asia-Europe-Oceania shipping operations show that the proposed model can be efficiently solved by CPLEX for real-case problems. They also demonstrate the potential for large cost-savings over pure hub-and-spoke or pure multi-port-calling network, or network without considering empty container repositioning." }, { "instance_id": "R28333xR28280", "comparison_id": "R28333", "paper_id": "R28280", "text": "Ship assignment with hub and spoke constraints As the shipping industry enters the future, an increasing number of technological developments are being introduced into this market. This has led to a significant change in business operations, such as the innovative design of hub and spoke systems, resulting in cargo consolidation and a better use of the ship's capacity. In the light of this new scenario, the authors present a successful application of integer linear programming to support the decision-making process of assigning ships to previously defined voyages \u2014 the rosters. The tool used to build the final models was the MS-Excel Solver (Microsoft\u00ae Excel 97 SR-2, 1997), a package that enabled the real case studies addressed to be solved. 
The results of the experiment prompted the authors to favour the assignment of very small fleets, as opposed to the existing high number of ships employed in such real trades," }, { "instance_id": "R28333xR28317", "comparison_id": "R28333", "paper_id": "R28317", "text": "Containership scheduling with transit-time-sensitive container shipment demand This paper examines the optimal containership schedule with transit-time-sensitive demand that is assumed to be a decreasing continuous function of transit time. A mixed-integer nonlinear non-convex optimization model is first formulated to maximize the total profit of a ship route. In view of the problem structure, a branch-and-bound based holistic solution method is developed. It is rigorously demonstrated that this solution method can obtain an e-optimal solution in a finite number of iterations for general forms of transit-time-sensitive demand. Computational results based on a trans-Pacific liner ship route demonstrate the applicability and efficiency of the solution method." }, { "instance_id": "R28333xR28266", "comparison_id": "R28333", "paper_id": "R28266", "text": "Tactical planning models for managing container flow and ship deployment This paper addresses two practical problems from a liner shipping company, i.e. the container flow management problem and the ship deployment problem, at the tactical planning level. A sequential model and a joint optimisation model are formulated to solve the problems. Our results show that the company should implement the joint optimisation model at the tactical planning level to improve the shipping capacity utilisation rather than the sequential model used in the current practice. Repositioning empty containers also need to be considered jointly with the nonempty container flow at the tactical planning level. Some important managerial insights into the operational and business processes are gained." }, { "instance_id": "R28333xR28298", "comparison_id": "R28333", "paper_id": "R28298", "text": "Ship scheduling and cost analysis for route planning in liner shipping Liner shipping companies can benefit significantly by improving ship scheduling and cost analysis in service route planning by systematic methods. This paper proposes a dynamic programming (DP) model for ship scheduling and identifies cost items relevant to the planning of a service route, which can help planners make better scheduling decisions under berth time-window constraints, as well as estimate more accurately voyage fixed costs and freight variable costs in liner service route planning. The proposed model pursues an optimal scheduling strategy including cruising speed and quay crane dispatching decisions, vis \u00e0 vis tentative and rough schedule arrangements. Additionally, the model can be extended to cases of integrating one company\u2019s \u2013 or strategic alliance \u2013 partners\u2019 service networks, in order to gain more efficient hub-and-spoke operations, tighter transshipment and better level-of-service." }, { "instance_id": "R28369xR28364", "comparison_id": "R28369", "paper_id": "R28364", "text": "Container shipping on the Northern Sea Route Since the beginning of the 20th century, the principal commercial maritime routes have changed very little. With global warming, the Northern Sea Route (NSR) has opened up as a possible avenue of trade in containerized products between Asia and Europe. This paper verifies the technical and economic feasibility of regular container transport along the NSR. 
By adopting a model schedule between Shanghai and Hamburg, we are able to analyze the relative costs of various axes in the Asia\u2013Europe transport network, including the NSR. While shipping through the Suez Canal is still by far the least expensive option, the NSR and Trans-Siberian Railway appear to be roughly equivalent second-tier alternatives." }, { "instance_id": "R28369xR28361", "comparison_id": "R28369", "paper_id": "R28361", "text": "Studying port selection on liner routes: An approach from logistics perspective The research aims to study the port selection in liner shipping. The central work is to set up a model to deal with port choice decisions. The model solves three matters: ports on a ship\u2019s route; the order of selected ports and loading/unloading ports for each shipment. Its objective is to minimize total cost including ship cost, port tariff, inland transport cost and inventory cost. The model has been applied in real data, with cargo flows between the USA and Northern Europe. Afterwards, two sensitive analyses are considered. The first assesses the impact of a number of port calls on the total cost which relates closely to the viability of two service patterns: multi ports and hub & spoke. The second analyzes the efficiency of large vessels in the scope of a logistics network. The overriding result of this research is to indicate the influence of logistics factors in the decision of port choice. The research emphasizes the necessity to combine different factors when dealing with this topic, or else a result can be one-sided." }, { "instance_id": "R28369xR28346", "comparison_id": "R28369", "paper_id": "R28346", "text": "Planning the route of container ships: A fuzzy genetic approach Nowadays, liner shipping has become a constant operation model for shipping companies, and scheduling is an important issue for operation. It is well-known that a nice plan for route of container ships will bring long-term profit to companies. In the earlier works, the market demand is assumed to be crisp. However, the market demand could be uncertain in real world. Fuzzy sets theory is frequently used to deal with the uncertainty problem. On the other hand, genetic algorithm owns powerful multi-objective searching capability and it can extensively find optimal solutions through continuous copy, crossover, and mutation. Due to these advantages, in this paper, a fuzzy genetic algorithm for liner shipping planning is proposed. This algorithm not only takes market demand, shipping and berthing time of container ships into account simultaneously but also is capable of finding the most suitable route of container ships." }, { "instance_id": "R28369xR28344", "comparison_id": "R28369", "paper_id": "R28344", "text": "Designing container shipping network under changing demand and freight rates This paper focuses on the optimization of container shipping network and its operations under changing cargo demand and freight rates. The problem is formulated as a mixed integer non-linear programming problem (MINP) with an objective of maximizing the average unit ship-slot profit at three stages using analytical methodology. The issues such as empty container repositioning, ship-slot allocating, ship sizing, and container configuration are simultaneously considered based on a series of the matrices of demand for a year. To solve the model, a bi-level genetic algorithm based method is proposed. Finally, numerical experiments are provided to illustrate the validity of the proposed model and algorithms. 
The obtained results show that the suggested model can provide a more realistic solution to the issues on the basis of changing demand and freight rates and arrange a more effective approach to the optimization of container shipping network structures and operations than does the model based on the average demand." }, { "instance_id": "R28407xR28403", "comparison_id": "R28407", "paper_id": "R28403", "text": "Study on a Liner Shipping Network Design Considering Empty Container Reposition Empty container allocation problems arise due to imbalance on trades. Imbalanced trade is a common fact in the liner shipping,creating the necessity of repositioning empty containers from import-dominant ports to export-dominant ports in an economic and efficient way. The present work configures a liner shipping network, by performing the routes assignment and their integration to maximize the profit for a liner shipping company. The empty container repositioning problem is expressly taken into account in whole process. By considering the empty container repositioning problem in the network design, the choice of routes will be also influenced by the empty container flow, resulting in an optimum network, both for loaded and empty cargo. The Liner Shipping Network Design Program (LS-NET program) will define the best set of routes among a set of candidate routes, the best composition of the fleet for the network and configure the empty container repositioning network. Further, a network of Asian ports was studied and the results obtained show that considering the empty container allocation problem in the designing process can influence the final configuration of the network." }, { "instance_id": "R28407xR28383", "comparison_id": "R28407", "paper_id": "R28383", "text": "A Base Integer Programming Model and Benchmark Suite for Liner-Shipping Network Design The liner-shipping network design problem is to create a set of nonsimple cyclic sailing routes for a designated fleet of container vessels that jointly transports multiple commodities. The objective is to maximize the revenue of cargo transport while minimizing the costs of operation. The potential for making cost-effective and energy-efficient liner-shipping networks using operations research OR is huge and neglected. The implementation of logistic planning tools based upon OR has enhanced performance of airlines, railways, and general transportation companies, but within the field of liner shipping, applications of OR are scarce. We believe that access to domain knowledge and data is a barrier for researchers to approach the important liner-shipping network design problem. The purpose of the benchmark suite and the paper at hand is to provide easy access to the domain and the data sources of liner shipping for OR researchers in general. We describe and analyze the liner-shipping domain applied to network design and present a rich integer programming model based on services that constitute the fixed schedule of a liner shipping company. We prove the liner-shipping network design problem to be strongly NP-hard. A benchmark suite of data instances to reflect the business structure of a global liner shipping network is presented. The design of the benchmark suite is discussed in relation to industry standards, business rules, and mathematical programming. The data are based on real-life data from the largest global liner-shipping company, Maersk Line, and supplemented by data from several industry and public stakeholders. 
Computational results yielding the first best known solutions for six of the seven benchmark instances are provided using a heuristic combining tabu search and heuristic column generation." }, { "instance_id": "R28407xR28394", "comparison_id": "R28407", "paper_id": "R28394", "text": "Planning and scheduling for efficiency in liner shipping ANALYSIS OF THE CAPACITY REQUIRED TO SERVE A SPECIFIC TRADE ROUTE, WITH APPLICATION TO AUSTRALIA-NORTH AMERICAN WEST COAST TRADE" }, { "instance_id": "R28407xR28380", "comparison_id": "R28407", "paper_id": "R28380", "text": "A Matheuristic for the Liner Shipping Network Design Problem with Transit Time Restrictions We present a mathematical model for the liner shipping network design problem with transit time restrictions on the cargo flow. We extend an existing matheuristic for the liner shipping network design problem to consider transit time restrictions. The matheuristic is an improvement heuristic, where an integer program is solved iteratively as a move operator in a large-scale neighborhood search. To assess the effects of insertions/removals of port calls, flow and revenue changes are estimated for relevant commodities along with an estimation of the change in the vessel cost. Computational results on the benchmark suite LINER-LIB are reported, showing profitable networks for most instances. We provide insights on causes for rejecting demand and the average speed per vessel class in the solutions obtained." }, { "instance_id": "R28446xR28428", "comparison_id": "R28446", "paper_id": "R28428", "text": "From multi-porting to a hub port configuration: the South African container port system in transition This paper addresses the tension that exists between multi-porting and a hub configuration in the South African container port system. We apply a generalised cost model to two alternative network configurations: the actual situation of multi-porting and an alternative hub port configuration. The results demonstrate that South African import and export flows are likely to face small cost increases when the port system moves to a hub port configuration. However, from a ship operator's perspective, the hub configuration is more attractive given considerable cost reductions in marine charges, port dues and ship costs. The paper concludes by underlining Transnet's pivotal role in the attractiveness of the hub option and the need for a wider Sub-Saharan strategy in view of making the hub port concept work." }, { "instance_id": "R28446xR28439", "comparison_id": "R28446", "paper_id": "R28439", "text": "Dynamic programming of port position and scale in the hierarchized container ports network A hierarchized container ports network, with several super hubs and many multilevel hub ports, will be established, mainly serving transshipment and carrying out most of its business in the hub-spoke mode. This paper sums up a programming model, in which the elementary statistic units, cost and expense of every phase of any shipment are the straight objects, and the minimum cost of the whole network is taken as the objective. This is established based on a dynamic system to make out the hierarchical structure of the container ports network, i.e. the trunk hub and feeder hubs can be planned in an economic zone, then the optimal scale vector can also be obtained for all container ports concerned with the network. The vector is a standard measurement to decide a port's position and their scale distribution in the whole network." 
}, { "instance_id": "R28446xR28421", "comparison_id": "R28446", "paper_id": "R28421", "text": "The impact of hub and spoke networks in the Mediterranean perculiarity The tendency towards consolidation of the liner shipping companies requires the development of the 'hub and spokes' model in the Mediterranean as well. In this framework, we argue that the network model will be reinforced as compared to the 'point to point' model. However, only a few studies are available to confirm the advantageousness of this choice. We therefore propose a methodological analysis of transport costs in a 'hub and spokes' system in comparison to a 'point to point' system, both in general terms and in the specific framework of the Mediterranean. The relative costs of the two alternatives were simulated utilizing the experience of the Mediterranean hub port of Gioia Tauro and with reference to the most recent literature on this subject." }, { "instance_id": "R28487xR28464", "comparison_id": "R28487", "paper_id": "R28464", "text": "A Containerized Liner Routing in Eastern Asia New partnerships has been made in containerized liner services. This likely results in drastic changes in ship size and hub location in Eastern Asia. In this study we address strategies of the containerized liner services by using a mathematical programming with two objectives of shipping company and customer." }, { "instance_id": "R28487xR28472", "comparison_id": "R28487", "paper_id": "R28472", "text": "Multi-port vs. Hub-and-Spoke port calls by containerships This paper addresses the design of container liner shipping networks taking into consideration container management issues including empty container repositioning. We examine two typical service networks with different ship sizes: multi-port calling by conventional ship size and hub-and-spoke by mega-ship. The entire solution process is performed in two phases: the service network design and container distribution. A wide variety of numerical experiments are conducted for the Asia-Europe and Asia-North America trade lanes. In most scenarios the multi-port calling is superior in terms of total cost, while the hub-and-spoke is more advantageous in the European trade for a costly shipping company." }, { "instance_id": "R28487xR28481", "comparison_id": "R28487", "paper_id": "R28481", "text": "The containership feeder network design problem: the new Izmir port as hub in the Black sea Global containership liners design their transportation service as hub-and-spoke networks to increase the market linkages and reduce the average operational costs by using indirect connections. These indirect connections from the hub ports to the feeder ports called feeder networks are serviced by feeder ships. The feeder network design (FND) problem determines the smallest feeder ship fleet size with routes to minimize operational costs. Therefore, this problem could be described as capacitated vehicle routing problem with simultaneous pick-ups and deliveries with time limit. In our investigation, a perturbation based variable neighborhood search (PVNS) approach is developed to solve the FND problem which determines the fleet mix and sequence of port calls. The proposed model implementation has been tested using a case study from the Black Sea region with the new Izmir port (Candarli port) as hub. Moreover, a range of scenarios and parameter values are used in order to test the robustness of the approach through sensitivity analyses. 
Numerical results show that the new Izmir port has great potential as hub port in the Black Sea region." }, { "instance_id": "R28487xR28475", "comparison_id": "R28487", "paper_id": "R28475", "text": "Containership routing with time deadlines and simultaneous deliveries and pick-ups In this paper we seek to determine optimal routes for a containership fleet performing pick-ups and deliveries between a hub and several spoke ports. A capacitated vehicle routing problem with pick-ups, deliveries and time deadlines is formulated and solved using a hybrid genetic algorithm for establishing routes for a dedicated containership fleet. Results on the performance of the algorithm and the feasibility of the approach show that a relatively small fleet of containerships could provide efficient services within deadlines. Moreover, through sensitivity analysis we discuss performance robustness and consistency of the developed algorithm under a variety of problem settings and parameters values." }, { "instance_id": "R28614xR28535", "comparison_id": "R28614", "paper_id": "R28535", "text": "Undifferentiated (embryonal) sarcoma of the liver.Report of 31 cases Thirty\u2010one cases of undifferentiated (embryonal) sarcoma of the liver are presented. The tumor is found predominantly in the pediatric age group, the majority of patients (51.6%) being between 6 and 10 years of age. An abdominal mass and pain are the usual presenting symptoms. Radiographic examination is nonspecific except to demonstrate a space\u2010occupying lesion of the liver. The tumors are large, single, usually globular and well demarcated, and have multiple cystic areas of hemorrhage, necrosis, and gelatinous degeneration. Histologic examination shows a pseudocapsule partially separating the normal liver from undifferentiated sarcomatous cells that, near the periphery of the tumor, surround entrapped hyperplastic or degenerating bile duct\u2010like structures. Eosinophilic globules that are PAS positive are usually found within and adjacent to tumor cells. Areas of necrosis and hemorrhage are prominent. The prognosis is poor, with a median survival of less than 1 year following diagnosis." }, { "instance_id": "R28614xR28574", "comparison_id": "R28614", "paper_id": "R28574", "text": "Undifferentiated sarcoma of the liver in childhood Undifferentiated (embryonal) sarcoma of the liver (UESL) is a rare childhood hepatic tumor, and it is generally considered an aggressive neoplasm with an unfavorable prognosis." }, { "instance_id": "R28614xR28591", "comparison_id": "R28614", "paper_id": "R28591", "text": "Pregnancy and Delivery in a Patient With Metastatic Embryonal Sarcoma of the Liver BACKGROUND Embryonal sarcoma of the liver is a rare undifferentiated mesenchymal neoplasm with a grave prognosis. CASE We report a spontaneous uneventful pregnancy in a young woman with recurrent metastatic embryonal sarcoma of the liver after hepatectomy (twice) and radiochemotherapy. The patient chose to continue pregnancy despite her general condition and possible hazards to maternal and fetal well-being. She gave birth to a full-term healthy infant. Her disease recurred shortly after the delivery. CONCLUSION According to a computerized search of the National Library of Medicine database, from 1966 to present, using the search terms \u201cembryonal (or undifferentiated) sarcoma\u201d and \u201cliver\u201d without language restrictions, this is the first reported case of pregnancy and delivery in a patient with embryonal sarcoma of the liver. 
It illustrates the clinical and ethical dilemmas associated with this complicated condition." }, { "instance_id": "R28614xR28583", "comparison_id": "R28614", "paper_id": "R28583", "text": "Hepatic Undifferentiated (Embryonal) Sarcoma Arising in a Mesenchymal Hamartoma We report the case of a hepatic undifferentiated (embryonal) sarcoma (UES) arising within a mesenchymal hamartoma (MH) in a 15-year-old girl. Mapping of the tumor demonstrated a typical MH transforming gradually into a UES composed of anaplastic stromal cells. When evaluated by flow cytometry, the MH was diploid and the UES showed a prominent aneuploid peak. Karyotypic analysis of the UES showed structural alterations of chromosome 19, which have been implicated as a potential genetic marker of MH. The histogenesis of MH and UES is still debated, and reports of a relationship between them, although suggested on the basis of histomorphologic similarities, have never been convincing. The histologic, flow cytometric, and cytogenetic evidence reported herein suggests a link between these two hepatic tumors of the pediatric population." }, { "instance_id": "R28614xR28593", "comparison_id": "R28614", "paper_id": "R28593", "text": "Undifferentiated (embryonal) sarcoma of the liver in middle-aged adults: Smooth muscle differentiation determined by immunohistochemistry and electron microscopy Undifferentiated (embryonal) sarcoma of the liver (UESL) is a rare pediatric liver malignancy that is extremely uncommon in middle-aged individuals. We studied 2 cases of UESL in middle-aged adults (1 case in a 49-year-old woman and the other in a 62-year-old man) by histology, immunohistochemistry, and electron microscopy to clarify the cellular characteristics of this peculiar tumor. One tumor showed a mixture of spindle cells, polygonal cells, and multinucleated giant cells within a myxoid matrix and also revealed focal areas of a storiform pattern in a metastatic lesion. The other tumor was composed mainly of anaplastic large cells admixed with few fibrous or spindle-shaped components and many multinucleated giant cells. In both cases, some tumor cells contained eosinophilic hyaline globules that were diastase resistant and periodic acid-Schiff positive. Immunohistochemically, the tumor cells showed positive staining for smooth muscle markers, such as desmin, alpha-smooth muscle actin, and muscle-specific actin, and also for histiocytic markers, such as alpha-1-antitrypsin, alpha-1-antichymotrypsin, and CD68. Electron microscope examination revealed thin myofilaments with focal densities and intermediate filaments in the cytoplasm of tumor cells. Our studies suggest that UESL exhibits at least a partial smooth muscle phenotype in middle-aged adults, and this specific differentiation may be more common in this age group than in children. Tumor cells of UESL with smooth muscle differentiation in middle-aged adults show phenotypic diversity comparable to those of malignant fibrous histiocytoma with myofibroblastic differentiation." }, { "instance_id": "R28614xR28609", "comparison_id": "R28614", "paper_id": "R28609", "text": "Undifferentiated embryonal sarcoma of the liver mimicking acute appendicitis. Case report and review of the literature Abstract Background Undifferentiated embryonal sarcoma (UES) of liver is a rare malignant neoplasm, which affects mostly the pediatric population accounting for 13% of pediatric hepatic malignancies, a few cases have been reported in adults. 
Case presentation We report a case of undifferentiated embryonal sarcoma of the liver in a 20-year-old Caucasian male. The patient was referred to us for further investigation after a laparotomy in a district hospital for spontaneous abdominal hemorrhage, which was due to a liver mass. After a through evaluation with computed tomography scan and magnetic resonance imaging of the liver and taking into consideration the previous history of the patient, it was decided to surgically explore the patient. Resection of I\u2013IV and VIII hepatic lobe. Patient developed disseminated intravascular coagulation one day after the surgery and died the next day. Conclusion It is a rare, highly malignant hepatic neoplasm, affecting almost exclusively the pediatric population. The prognosis is poor but recent evidence has shown that long-term survival is possible after complete surgical resection with or without postoperative chemotherapy." }, { "instance_id": "R28614xR28554", "comparison_id": "R28614", "paper_id": "R28554", "text": "Hepatic undifferentiated (embryonal) sarcoma and rhabdomyosarcoma in children. Results of therapy From July 1972 through September 1984, 8 of 44 children diagnosed as having primary malignant hepatic tumors, who were treated at St. Jude Children's Research Hospital, had undifferentiated (embryonal) sarcoma (five patients) or rhabdomyosarcoma (three patients). The natural history and response to multimodal therapy of these rare tumors are described. The pathologic material was reviewed and evidence for the differentiating potential of undifferentiated (embryonal) sarcoma is presented. At diagnosis, disease was restricted to the right lobe of the liver in three patients, was bilobar in four patients, and extended from the left lobe into the diaphragm in one patient. Lung metastases were present in two patients at diagnosis. All three patients with rhabdomyosarcoma had intrahepatic lesions without involvement of the biliary tree. Survival ranged from 6 to 73 months from diagnosis (median, 19.5 months); two patients are surviving disease\u2010free for 55+ and 73+ months, and one patient recently underwent resection of a recurrent pulmonary nodule 22 months from initial diagnosis. Three patients died of progressive intrahepatic and extrahepatic abdominal tumors, and two patients, who died of progressive pulmonary tumor, also had bone or brain metastasis but no recurrence of intra\u2010abdominal tumor. Six patients had objective evidence of response to chemotherapy. The authors suggest an aggressive multimodal approach to the treatment of these rare tumors in children." }, { "instance_id": "R28614xR28542", "comparison_id": "R28614", "paper_id": "R28542", "text": "Primary sarcoma of the liver Abstract A case of primary undifferentiated sarcoma of the liver in a 66-year-old woman is reported. The similarity of the presenting symptoms to those of a hepatic abscess is emphasized. The patient's response to palliative treatment by means of hepatic artery ligation and chemotherapy was excellent. We believe that this is the first report of such a response in a sarcoma localized to the liver. This therapy may well be useful in symptomatic patients in whom partial hepatectomy is not feasible." }, { "instance_id": "R28889xR28853", "comparison_id": "R28889", "paper_id": "R28853", "text": "Faster Fault Finding at Google using Multi-Objective Regression Test Optimisation Companies such as Google tend to develop products from one continually evolving core of code. 
Software is neither shipped, nor released in the traditional sense. It is simply made available, with dramatically compressed release cycles. This large scale rapid release environment creates challenges for the application of regression test optimisation techniques. This paper reports initial results from a partnership between Google and the CREST centre at UCL aimed at transferring techniques from the regression test optimisation literature into industrial practice. The results illustrate the industrial potential for these techniques: regression test time can be reduced by between 33%\u201382%, while retaining fault detection capability. Our experience also highlights the importance of a multi objective approach: optimising for coverage and time alone is insufficient; we have, at least, to additionally prioritise historical fault revelation." }, { "instance_id": "R28889xR28875", "comparison_id": "R28889", "paper_id": "R28875", "text": "Multiobjective Simulation Optimisation in Software Project Management Traditionally, simulation has been used by project managers in optimising decision making. However, current simulation packages only include simulation optimisation which considers a single objective (or multiple objectives combined into a single fitness function). This paper aims to describe an approach that consists of using multiobjective optimisation techniques via simulation in order to help software project managers find the best values for initial team size and schedule estimates for a given project so that cost, time and productivity are optimised. Using a System Dynamics (SD) simulation model of a software project, the sensitivity of the output variables regarding productivity, cost and schedule using different initial team size and schedule estimations is determined. The generated data is combined with a well-known multiobjective optimisation algorithm, NSGA-II, to find optimal solutions for the output variables. The NSGA-II algorithm was able to quickly converge to a set of optimal solutions composed of multiple and conflicting variables from a medium size software project simulation model. Multiobjective optimisation and SD simulation modeling are complementary techniques that can generate the Pareto front needed by project managers for decision making. Furthermore, visual representations of such solutions are intuitive and can help project managers in their decision making process." }, { "instance_id": "R28889xR28619", "comparison_id": "R28889", "paper_id": "R28619", "text": "The Multi-Objective Next Release Problem This paper is concerned with the Multi-Objective Next Release Problem (MONRP), a problem in search-based requirements engineering. Previous work has considered only single objective formulations. In the multi-objective formulation, there are at least two (possibly conflicting) objectives that the software engineer wishes to optimize. It is argued that the multi-objective formulation is more realistic, since requirements engineering is characterised by the presence of many complex and conflicting demands, for which the software engineer must find a suitable balance. The paper presents the results of an empirical study into the suitability of weighted and Pareto optimal genetic algorithms, together with the NSGA-II algorithm, presenting evidence to support the claim that NSGA-II is well suited to the MONRP. The paper also provides benchmark data to indicate the size above which the MONRP becomes non-trivial." 
}, { "instance_id": "R28889xR28870", "comparison_id": "R28889", "paper_id": "R28870", "text": "Software Project Portfolio Optimization with Advanced Multiobjective Evolutionary Algorithms Large software companies have to plan their project portfolio to maximize potential portfolio return and strategic alignment, while balancing various preferences, and considering limited resources. Project portfolio managers need methods and tools to find a good solution for complex project portfolios and multiobjective target criteria efficiently. However, software project portfolios are challenging to describe for optimization in a practical way that allows efficient optimization. In this paper we propose an approach to describe software project portfolios with a set of multiobjective criteria for portfolio managers using the COCOMO II model and introduce a multiobjective evolutionary approach, mPOEMS, to find the Pareto-optimal front efficiently. We evaluate the new approach with portfolios choosing from a set of 50 projects that follow the validated COCOMO II model criteria and compare the performance of the mPOEMS approach with state-of-the-art multiobjective optimization evolutionary approaches. Major results are as follows: the portfolio management approach was found usable and useful; the mPOEMS approach outperformed the other approaches." }, { "instance_id": "R28889xR28861", "comparison_id": "R28889", "paper_id": "R28861", "text": "A Multi-Objective Software Quality Classification Model Using Genetic Programming A key factor in the success of a software project is achieving the best-possible software reliability within the allotted time & budget. Classification models which provide a risk-based software quality prediction, such as fault-prone & not fault-prone, are effective in providing a focused software quality assurance endeavor. However, their usefulness largely depends on whether all the predicted fault-prone modules can be inspected or improved by the allocated software quality-improvement resources, and on the project-specific costs of misclassifications. Therefore, a practical goal of calibrating classification models is to lower the expected cost of misclassification while providing a cost-effective use of the available software quality-improvement resources. This paper presents a genetic programming-based decision tree model which facilitates a multi-objective optimization in the context of the software quality classification problem. The first objective is to minimize the \"Modified Expected Cost of Misclassification\", which is our recently proposed goal-oriented measure for selecting & evaluating classification models. The second objective is to optimize the number of predicted fault-prone modules such that it is equal to the number of modules which can be inspected by the allocated resources. Some commonly used classification techniques, such as logistic regression, decision trees, and analogy-based reasoning, are not suited for directly optimizing multi-objective criteria. In contrast, genetic programming is particularly suited for the multi-objective optimization problem. 
An empirical case study of a real-world industrial software system demonstrates the promising results, and the usefulness of the proposed model" }, { "instance_id": "R28889xR28629", "comparison_id": "R28889", "paper_id": "R28629", "text": "Today/Future Importance Analysis SBSE techniques have been widely applied to requirements selection and prioritization problems in order to ascertain a suitable set of requirements for the next release of a system. Unfortunately, it has been widely observed that requirements tend to be changed as the development process proceeds and what is suitable for today, may not serve well into the future. Though SBSE has been widely applied to requirements analysis, there has been no previous work that seeks to balance the requirements needs of today with those of the future. This paper addresses this problem. It introduces a multi-objective formulation of the problem which is implemented using multi-objective Pareto optimal evolutionary algorithms. The paper presents the results of experiments on both synthetic and real world data." }, { "instance_id": "R28889xR28859", "comparison_id": "R28889", "paper_id": "R28859", "text": "Evolutionary Algorithms for the Multi-objective Test Data Generation Problem Automatic test data generation is a very popular domain in the field of search\u2010based software engineering. Traditionally, the main goal has been to maximize coverage. However, other objectives can be defined, such as the oracle cost, which is the cost of executing the entire test suite and the cost of checking the system behavior. Indeed, in very large software systems, the cost spent to test the system can be an issue, and then it makes sense by considering two conflicting objectives: maximizing the coverage and minimizing the oracle cost. This is what we did in this paper. We mainly compared two approaches to deal with the multi\u2010objective test data generation problem: a direct multi\u2010objective approach and a combination of a mono\u2010objective algorithm together with multi\u2010objective test case selection optimization. Concretely, in this work, we used four state\u2010of\u2010the\u2010art multi\u2010objective algorithms and two mono\u2010objective evolutionary algorithms followed by a multi\u2010objective test case selection based on Pareto efficiency. The experimental analysis compares these techniques on two different benchmarks. The first one is composed of 800 Java programs created through a program generator. The second benchmark is composed of 13 real programs extracted from the literature. In the direct multi\u2010objective approach, the results indicate that the oracle cost can be properly optimized; however, the full branch coverage of the system poses a great challenge. Regarding the mono\u2010objective algorithms, although they need a second phase of test case selection for reducing the oracle cost, they are very effective in maximizing the branch coverage. Copyright \u00a9 2011 John Wiley & Sons, Ltd." }, { "instance_id": "R28889xR28880", "comparison_id": "R28889", "paper_id": "R28880", "text": "Single and Multi Objective Genetic Programming for Software Development Effort Estimation The idea of exploiting Genetic Programming (GP) to estimate software development effort is based on the observation that the effort estimation problem can be formulated as an optimization problem. Indeed, among the possible models, we have to identify the one providing the most accurate estimates. 
To this end a suitable measure to evaluate and compare different models is needed. However, in the context of effort estimation there does not exist a unique measure that allows us to compare different models but several different criteria (e.g., MMRE, Pred(25), MdMRE) have been proposed. Aiming at getting an insight on the effects of using different measures as fitness function, in this paper we analyzed the performance of GP using each of the five most used evaluation criteria. Moreover, we designed a Multi-Objective Genetic Programming (MOGP) based on Pareto optimality to simultaneously optimize the five evaluation measures and analyzed whether MOGP is able to build estimation models more accurate than those obtained using GP. The results of the empirical analysis, carried out using three publicly available datasets, showed that the choice of the fitness function significantly affects the estimation accuracy of the models built with GP and the use of some fitness functions allowed GP to get estimation accuracy comparable with the ones provided by MOGP." }, { "instance_id": "R28889xR28633", "comparison_id": "R28889", "paper_id": "R28633", "text": "A Multiobjective Optimization Approach to the Software Release Planning with Undefined Number of Releases and Interdependent Requirements Release Planning is an important and complex activity in software development. It involves several aspects related to which functionalities are going to be developed in each release of the system. Consistent planning must meet the customers\u2019 needs and comply with existing constraints. Optimization techniques have been successfully applied to solve problems in the Software Engineering field, including the Software Release Planning Problem. In this context, this work presents an approach based on multiobjective optimization for the problem when the number of releases is not known a priori or when the number of releases is a value expected by stakeholders. The strategy regards on the stakeholders\u2019 satisfaction, business value and risk management, as well as provides ways for handling requirements interdependencies. Experiments show the feasibility of the proposed approach." }, { "instance_id": "R28889xR28654", "comparison_id": "R28889", "paper_id": "R28654", "text": "A Multiobjective Module-Order Model for Software Quality Enhancement The knowledge, prior to system operations, of which program modules are problematic is valuable to a software quality assurance team, especially when there is a constraint on software quality enhancement resources. A cost-effective approach for allocating such resources is to obtain a prediction in the form of a quality-based ranking of program modules. Subsequently, a module-order model (MOM) is used to gauge the performance of the predicted rankings. From a practical software engineering point of view, multiple software quality objectives may be desired by a MOM for the system under consideration: e.g., the desired rankings may be such that 100% of the faults should be detected if the top 50% of modules with highest number of faults are subjected to quality improvements. Moreover, the management team for the same system may also desire that 80% of the faults should be accounted if the top 20% of the modules are targeted for improvement. 
Existing work related to MOM(s) use a quantitative prediction model to obtain the predicted rankings of program modules, implying that only the fault prediction error measures such as the average, relative, or mean square errors are minimized. Such an approach does not provide a direct insight into the performance behavior of a MOM. For a given percentage of modules enhanced, the performance of a MOM is gauged by how many faults are accounted for by the predicted ranking as compared with the perfect ranking. We propose an approach for calibrating a multiobjective MOM using genetic programming. Other estimation techniques, e.g., multiple linear regression and neural networks cannot achieve multiobjective optimization for MOM(s). The proposed methodology facilitates the simultaneous optimization of multiple performance objectives for a MOM. Case studies of two industrial software systems are presented, the empirical results of which demonstrate a new promise for goal-oriented software quality modeling." }, { "instance_id": "R28889xR28805", "comparison_id": "R28889", "paper_id": "R28805", "text": "Software Module Clustering as a Multi-Objective Search Problem Software module clustering is the problem of automatically organizing software units into modules to improve program structure. There has been a great deal of recent interest in search-based formulations of this problem in which module boundaries are identified by automated search, guided by a fitness function that captures the twin objectives of high cohesion and low coupling in a single-objective fitness function. This paper introduces two novel multi-objective formulations of the software module clustering problem, in which several different objectives (including cohesion and coupling) are represented separately. In order to evaluate the effectiveness of the multi-objective approach, a set of experiments was performed on 17 real-world module clustering problems. The results of this empirical study provide strong evidence to support the claim that the multi-objective approach produces significantly better solutions than the existing single-objective approach." }, { "instance_id": "R28889xR28647", "comparison_id": "R28889", "paper_id": "R28647", "text": "Software Requirements Selection using Quantum-inspired Elitist Multi- objective Evolutionary Algorithm This paper presents a Quantum-inspired Multi-objective Differential Evolution Algorithm (QMDEA) for the selection of software requirements, an issue in Requirements engineering phase of software development life cycle. Generally the software development process is iterative or incremental in nature, as request for new requirements keep coming from the customers from time to time for inclusion in the next release of the software. Due to the feasibility reasons it is not possible for a company to incorporate all the requirements in the software product. Consequently, it becomes a challenging task for the company to select a subset of the requirements to be included, by keeping the business goals in view. The problem is to identify a set of requirements to be included in the next release of the product, by minimizing the cost and maximizing the customer satisfaction. As minimizing the cost and maximizing the customer satisfaction are contradictory objectives, the problem is multi-objective and is also NP-hard in nature. Therefore it cannot be solved efficiently using traditional optimization techniques especially for the large problem instances. 
QMDEA combines the preeminent features of Differential Evolution and Quantum Computing. The features of QMDEA help in achieving quality Pareto-optimal front solutions with faster convergence. The performance of QMDEA is tested on six benchmark problems derived from the literature. The comparison of the obtained results indicates superior performance over the other methods reported in the literature." }, { "instance_id": "R28889xR28637", "comparison_id": "R28889", "paper_id": "R28637", "text": "Simulating and Optimising Design Decisions in Quantitative Goal Models Making decisions among a set of alternative system designs is an essential activity of requirements engineering. It involves evaluating how well each alternative satisfies the stakeholders' goals and selecting one alternative that achieves some optimal tradeoffs between possibly conflicting goals. Quantitative goal models support such activities by describing how alternative system designs \u2014 expressed as alternative goal refinements and responsibility assignments \u2014 impact on the levels of goal satisfaction specified in terms of measurable objective functions. Analyzing large numbers of alternative designs in such models is an expensive activity for which no dedicated tool support is currently available. This paper takes a first step towards providing such support by presenting automated techniques for (i) simulating quantitative goal models so as to estimate the levels of goal satisfaction contributed by alternative system designs and (ii) optimising the system design by applying a multi-objective optimisation algorithm to search through the design space. These techniques are presented and validated using a quantitative goal model for a well-known ambulance service system." }, { "instance_id": "R28889xR28790", "comparison_id": "R28889", "paper_id": "R28790", "text": "Optimal Web Service Selection based on Multi-Objective Genetic Algorithm Considering that there are three aspects of constraints in the service selection process, such as control structure within a composition plan, relationship between concrete services, and tradeoff among multiple QoS indexes, a QoS based optimal Web services selection method by multi-objective genetic algorithm is presented. First we design a chromosome coding method to represent a feasible service selection solution, and then develop genetic operators and strategies for maintaining diversity of population and avoiding getting trapped in local optima. Experimental results show that within a finite number of evolving generations this algorithm can generate a set of nondominated Pareto optimal solutions which satisfy the user's QoS requirements." }, { "instance_id": "R28889xR28797", "comparison_id": "R28889", "paper_id": "R28797", "text": "Interactive, evolutionary search in upstream object-oriented class design Although much evidence exists to suggest that early life cycle software engineering design is a difficult task for software engineers to perform, current computational tool support for software engineers is limited. To address this limitation, interactive search-based approaches using evolutionary computation and software agents are investigated in experimental upstream design episodes for two example design domains. Results show that interactive evolutionary search, supported by software agents, appears highly promising. As an open system, search is steered jointly by designer preferences and software agents. 
Directly traceable to the design problem domain, a mass of useful and interesting class designs is arrived at which may be visualized by the designer with quantitative measures of structural integrity, such as design coupling and class cohesion. The class designs are found to be of equivalent or better coupling and cohesion when compared to a manual class design for the example design domains, and by exploiting concurrent execution, the runtime performance of the software agents is highly favorable." }, { "instance_id": "R28889xR28657", "comparison_id": "R28889", "paper_id": "R28657", "text": "Identifying \"Good\" Architectural Design Alternatives with Multi-Objective Optimization Strategies Architecture trade-off analysis methods are appropriate techniques to evaluate design decisions and design alternatives with respect to conflicting quality requirements. However, the identification of good design alternatives is a time consuming task, which is currently performed manually. To automate this task, this paper proposes to use evolutionary algorithms and multi-objective optimization strategies based on architecture refactorings to identify a sufficient set of design alternatives. This approach will reduce development costs and improve the quality of the final system, because an automated and systematic search will identify more and better design alternatives." }, { "instance_id": "R29012xR28984", "comparison_id": "R29012", "paper_id": "R28984", "text": "From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions." }, { "instance_id": "R29012xR28992", "comparison_id": "R29012", "paper_id": "R28992", "text": "XM2VTSDB: The extended M2VTS database Keywords: vision Reference EPFL-CONF-82502 URL: ftp://ftp.idiap.ch/pub/papers/vision/avbpa99.pdf Record created on 2006-03-10, modified on 2017-05-10" }, { "instance_id": "R29012xR28990", "comparison_id": "R29012", "paper_id": "R28990", "text": "Multi-PIE A close relationship exists between the advancement of face recognition algorithms and the availability of face databases varying factors that affect facial appearance in a controlled manner. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success the PIE database has several shortcomings: a limited number of subjects, a single recording session and only few expressions captured. To address these issues we collected the CMU Multi-PIE database. 
It contains 337 subjects, imaged under 15 view points and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE." }, { "instance_id": "R29012xR28988", "comparison_id": "R29012", "paper_id": "R28988", "text": "Comprehensive database for facial expression analysis Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive testbed to date for comparative studies of facial expression analysis." }, { "instance_id": "R29012xR28986", "comparison_id": "R29012", "paper_id": "R28986", "text": "The FERET evaluation methodology for face-recognition algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance." }, { "instance_id": "R29034xR29021", "comparison_id": "R29034", "paper_id": "R29021", "text": "Face alignment by coarse-to-fine shape searching We present a novel face alignment framework based on coarse-to-fine shape searching. Unlike the conventional cascaded regression approaches that start with an initial shape and refine the shape in a cascaded manner, our approach begins with a coarse search over a shape space that contains diverse shapes, and employs the coarse solution to constrain subsequent finer search of shapes. The unique stage-by-stage progressive and adaptive search i) prevents the final solution from being trapped in local optima due to poor initialisation, a common problem encountered by cascaded regression approaches; and ii) improves the robustness in coping with large pose variations. The framework demonstrates real-time performance and state-of-the-art results on various benchmarks including the challenging 300-W dataset." 
}, { "instance_id": "R29034xR28973", "comparison_id": "R29034", "paper_id": "R28973", "text": "Hyperface: A deep multi-task learning framework for face detection, land- mark localization, pose estimation, and gender recognition We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method called, HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and performs significantly better than many competitive algorithms for each of these four tasks." }, { "instance_id": "R29080xR29078", "comparison_id": "R29080", "paper_id": "R29078", "text": "Continuous conditional neural fields for structured regression An increasing number of computer vision and pattern recognition problems require structured regression techniques. Problems like human pose estimation, unsegmented action recognition, emotion prediction and facial landmark detection have temporal or spatial output dependencies that regular regression techniques do not capture. In this paper we present continuous conditional neural fields (CCNF) \u2013 a novel structured regression model that can learn non-linear input-output dependencies, and model temporal and spatial output relationships of varying length sequences. We propose two instances of our CCNF framework: Chain-CCNF for time series modelling, and Grid-CCNF for spatial relationship modelling. We evaluate our model on five public datasets spanning three different regression problems: facial landmark detection in the wild, emotion prediction in music and facial action unit recognition. Our CCNF model demonstrates state-of-the-art performance on all of the datasets used." }, { "instance_id": "R29080xR29036", "comparison_id": "R29080", "paper_id": "R29036", "text": "Locating facial features with an extended active shape model We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using two- instead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series." }, { "instance_id": "R29080xR29050", "comparison_id": "R29080", "paper_id": "R29050", "text": "Detector of facial landmarks learned by the structured output SVM In this paper we describe a detector of facial landmarks based on the Deformable Part Models. We treat the task of landmark detection as an instance of the structured output classification problem. 
We propose to learn the parameters of the detector from data by the Structured Output Support Vector Machines algorithm. In contrast to the previous works, the objective function of the learning algorithm is directly related to the performance of the resulting detector which is controlled by a user-defined loss function. The resulting detector is real-time on a standard PC, simple to implement and it can be easily modified for detection of a different set of landmarks. We evaluate performance of the proposed landmark detector on a challenging \u201cLabeled Faces in the Wild\u201d (LFW) database. The empirical results demonstrate that the proposed detector is consistently more accurate than two public domain implementations based on the Active Appearance Models and the Deformable Part Models. We provide an open-source implementation of the proposed detector and the manual annotation of the facial landmarks for all images in the LFW database." }, { "instance_id": "R29153xR29129", "comparison_id": "R29153", "paper_id": "R29129", "text": "A survey on the recent research literature on ERP systems The research literature on ERP systems has exponentially grown in recent years. In a domain, where new concepts and techniques are constantly introduced, it is therefore, of interest to analyze the recent trends of this literature, which is only partially included in the research papers published. Therefore, we have chosen to primarily analyze the literature of the last 2 years (2003 and 2004), on the basis of a classification according to six categories: implementation of ERP; optimisation of ERP; management through ERP; the ERP software; ERP for supply chain management; case studies. This survey confirms that the research on ERP systems is still a growing field, but has reached some maturity. Different research communities address this area from various points of view. Among the research axes that are now active, we can, especially, notice a growing interest on the post-implementation phase of the projects, on the customization of ERP systems, on the sociological aspects of the implementation, on the interoperability of the ERP with other systems and on the return on investment of the implementations." }, { "instance_id": "R29153xR29133", "comparison_id": "R29153", "paper_id": "R29133", "text": "Enterprise resource planning research: where are we now and where should we go from here? ABSTRACT The research related to Enterprise Resource Planning (ERP) has grown over the past several years. This growing body of ERP research results in an increased need to review this extant literature with the intent of identifying gaps and thus motivate researchers to close this breach. Therefore, this research was intended to critique, synthesize and analyze both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature, and then enumerates and discusses an agenda for future research efforts. To accomplish this, we analyzed 49 ERP articles published (1999-2004) in top Information Systems (IS) and Operations Management (OM) journals. We found an increasing level of activity during the 5-year period and a slightly biased distribution of ERP articles targeted at IS journals compared to OM. We also found several research methods either underrepresented or absent from the pool of ERP research. We identified several areas of need within the ERP literature, none more prevalent than the need to analyze ERP within the context of the supply chain. 
INTRODUCTION Davenport (1998) described the strengths and weaknesses of using Enterprise Resource Planning (ERP). He called attention to the growth of vendors like SAP, Baan, Oracle, and PeopleSoft, and defined this software as \"...the seamless integration of all the information flowing through a company: financial and accounting information, human resource information, supply chain information, and customer information.\" (Davenport, 1998). Since the time of that article, there has been a growing interest among researchers and practitioners in how organizations implement and use ERP systems (Amoako-Gyampah and Salam, 2004; Bendoly and Jacobs, 2004; Gattiker and Goodhue, 2004; Lander, Purvis, McCray and Leigh, 2004; Luo and Strong, 2004; Somers and Nelson, 2004; Zoryk-Schalla, Fransoo and de Kok, 2004). This interest is a natural continuation of trends in Information Technology (IT), such as MRP II (Olson, 2004; Teltumbde, 2000; Toh and Harding, 1999), and in business practice improvement research, such as continuous process improvement and business process reengineering (Markus and Tanis, 2000; Ng, Ip and Lee, 1999; Reijers, Limam and van der Aalst, 2003; Toh and Harding, 1999). This growing body of ERP research results in an increased need to review this extant literature with the intent of \"identifying critical knowledge gaps and thus motivate researchers to close this breach\" (Webster and Watson, 2002). Also, as noted by Scandura & Williams (2000), in order for research to advance, the methods used by researchers must periodically be evaluated to provide insights into the methods utilized and thus the areas of need. These two interrelated needs provide the motivation for this paper. In essence, this research critiques, synthesizes and analyzes both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature and then enumerates and discusses an agenda for future research efforts. The remainder of the paper is organized as follows: Section 2 describes the approach to the analysis of the ERP research. Section 3 contains the results and a review of the literature. Section 4 discusses our findings and the needs relative to future ERP research efforts. Finally, section 5 summarizes the research. RESEARCH STUDY We captured the trends pertaining to (1) the number and distribution of ERP articles published in the leading journals, (2) methodologies employed in ERP research, and (3) emphasis relative to topic of ERP research. During the analysis of the ERP literature, we identified gaps and needs in the research and therefore enumerate and discuss a research agenda which allows the progression of research (Webster and Watson, 2002). In short, we sought to paint a representative landscape of the current ERP literature base in order to influence the direction of future research efforts relative to ERP. \u2026" }, { "instance_id": "R29153xR29140", "comparison_id": "R29153", "paper_id": "R29140", "text": "An Updated ERP Systems Annotated Bibliography: 2001-2005 This study provides an updated annotated bibliography of ERP publications published in the main IS conferences and journals during the period 2001-2005, categorizing them through an ERP lifecycle-based framework that is structured in phases. The first version of this bibliography was published in 2001 (Esteves and Pastor, 2001c). However, so far, we have extended the bibliography with a significant number of new publications in all the categories used in this paper.
We also reviewed the categories and some incongruities were eliminated." }, { "instance_id": "R29153xR29114", "comparison_id": "R29153", "paper_id": "R29114", "text": "Enterprise Resource Planning Systems Research: An Annotated Bibliography Despite growing interest, publications on ERP systems within the academic Information Systems community, as reflected by contributions to journals and international conferences, are only now emerging. This article provides an annotated bibliography of the ERP publications published in the main Information Systems journals and conferences and reviews the state of the ERP art. The publications surveyed are categorized through a framework that is structured in phases that correspond to the different stages of an ERP system lifecycle within an organization. We also present topics for further research in each phase." }, { "instance_id": "R29184xR29159", "comparison_id": "R29184", "paper_id": "R29159", "text": "Planning for ERP systems: analysis and future trend The successful implementation of various enterprise resource planning (ERP) systems has provoked considerable interest over the last few years. Management has recently been enticed to look toward these new information technologies and philosophies of manufacturing for the key to survival or competitive edges. Although there is no shortage of glowing reports on the success of ERP installations, many companies have tossed millions of dollars in this direction with little to show for it. Since many of the ERP failures today can be attributed to inadequate planning prior to installation, we choose to analyze several critical planning issues including needs assessment and choosing the right ERP system, matching business process with the ERP system, understanding the organizational requirements, and economic and strategic justification. In addition, this study also identifies new windows of opportunity as well as challenges facing companies today as enterprise systems continue to evolve and expand." }, { "instance_id": "R29184xR29179", "comparison_id": "R29184", "paper_id": "R29179", "text": "The Future of ERP Systems: look backward before moving forward This paper explores the enterprise resource planning (ERP) systems literature in an attempt to elucidate knowledge to help us see the future of ERP systems\u2019 research. The main purpose of this research is to study the development of ERP systems and other related areas in order to reach the constructs of mainstream literature. The analysis of literature has helped us to reach the key constructs of an as-is scenario, those are: history and development of ERP systems, the implementation life cycle, critical success factors and project management, and benefits and costs. However, the to-be scenario calls for more up-to-date research constructs of ERP systems integrating the following constructs: social networks, cloud computing, enterprise 2.0, and decision 2.0. In the end, the conclusion section will establish the link between the as-is and to-be scenarios opening the door for more novel ERP research areas." }, { "instance_id": "R29184xR29161", "comparison_id": "R29184", "paper_id": "R29161", "text": "Enterprise resource planning (ERP) systems: a research agenda The continuing development of enterprise resource planning (ERP) systems has been considered by many researchers and practitioners as one of the major IT innovations in this decade.
ERP solutions seek to integrate and streamline business processes and their associated information and work flows. What makes this technology more appealing to organizations is its increasing capability to integrate with the most advanced electronic and mobile commerce technologies. However, as is the case with any new IT field, research in the ERP area is still lacking and the gap in the ERP literature is huge. This paper attempts to fill this gap by proposing a novel taxonomy for ERP research. It also presents the current status with some major themes of ERP research relating to ERP adoption, technical aspects of ERP and ERP in IS curricula. The discussion presented on these issues should be of value to researchers and practitioners. Future research work will continue to survey other major areas presented in the taxonomy framework." }, { "instance_id": "R29240xR29203", "comparison_id": "R29240", "paper_id": "R29203", "text": "Critical success factors in ERP implementation: a review ERP systems have become vital strategic tools in today\u2019s competitive business environment. This ongoing research study presents a review of recent research work in ERP systems. It attempts to identify the main benefits of ERP systems, the drawbacks and the critical success factors for implementation discussed in the relevant literature. The findings revealed that although some organizations have faced challenges undertaking ERP implementations, many others have enjoyed the benefits that the systems have brought to the organizations. An ERP system facilitates the smooth flow of common functional information and practices across the entire organization. In addition, it improves the performance of the supply chain and reduces the cycle times. However, without top management support, an appropriate business plan and vision, re-engineering of business processes, effective project management, user involvement and education and training, organizations cannot embrace the full benefits of such a complex system, and the risk of failure might be high." }, { "instance_id": "R29240xR29224", "comparison_id": "R29240", "paper_id": "R29224", "text": "Comparing risk and success factors in ERP projects: a literature review Although research and practice have attributed considerable attention to Enterprise Resource Planning (ERP) projects, their failure rate is still high. There are two main fields of research, which aim at increasing the success rate of ERP projects: Research on risk factors and research on success factors. Despite their topical relatedness, efforts to integrate these two fields have been rare. Against this background, this paper analyzes 68 articles dealing with risk and success factors and categorizes all identified factors into twelve categories. Though some topics are equally important in risk and success factor research, the literature on risk factors emphasizes topics which ensure achieving budget, schedule and functionality targets. In contrast, the literature on success factors concentrates more on strategic and organizational topics. We argue that both fields of research cover important aspects of project success. The paper concludes with the presentation of a possible holistic consideration to integrate both the understanding of risk and success factors."
}, { "instance_id": "R29240xR29210", "comparison_id": "R29240", "paper_id": "R29210", "text": "Identification and assessment of risks associated with ERP post-implementation in China Purpose \u2013 The purpose of this paper is to identify, assess and explore potential risks that Chinese companies may encounter when using, maintaining and enhancing their enterprise resource planning (ERP) systems in the post\u2010implementation phase.Design/methodology/approach \u2013 The study adopts a deductive research design based on a cross\u2010sectional questionnaire survey. This survey is preceded by a political, economic, social and technological analysis and a set of strength, weakness, opportunity and threat analyses, from which the researchers refine the research context and select state\u2010owned enterprises (SOEs) in the electronic and telecommunications industry in Guangdong province as target companies to carry out the research. The questionnaire design is based on a theoretical risk ontology drawn from a critical literature review process. The questionnaire is sent to 118 selected Chinese SOEs, from which 42 (84 questionnaires) valid and usable responses are received and analysed.Findings \u2013 The findings ident..." }, { "instance_id": "R29240xR29201", "comparison_id": "R29240", "paper_id": "R29201", "text": "Risk management in ERP project introduction: Review of the literature In recent years ERP systems have received much attention. However, ERP projects have often been found to be complex and risky to implement in business enterprises. The organizational relevance and risk of ERP projects make it important for organizations to focus on ways to make ERP implementation successful. We collected and analyzed a number of key articles discussing and analyzing ERP implementation. The different approaches taken in the literature were compared from a risk management point of view to highlight the key risk factors and their impact on project success. Literature was further classified in order to address and analyze each risk factor and its relevance during the stages of the ERP project life cycle." }, { "instance_id": "R29240xR29198", "comparison_id": "R29240", "paper_id": "R29198", "text": "ERP implementation: a compilation and analysis of critical success factors Purpose \u2013 To explore the current literature base of critical success factors (CSFs) of ERP implementations, prepare a compilation, and identify any gaps that might exist.Design/methodology/approach \u2013 Hundreds of journals were searched using key terms identified in a preliminary literature review. Successive rounds of article abstract reviews resulted in 45 articles being selected for the compilation. CSF constructs were then identified using content analysis methodology and an inductive coding technique. A subsequent critical analysis identified gaps in the literature base.Findings \u2013 The most significant finding is the lack of research that has focused on the identification of CSFs from the perspectives of key stakeholders. Additionally, there appears to be much variance with respect to what exactly is encompassed by change management, one of the most widely cited CSFs, and little detail of specific implementation tactics.Research limitations/implications \u2013 There is a need to focus future research efforts..." 
}, { "instance_id": "R29240xR29194", "comparison_id": "R29240", "paper_id": "R29194", "text": "Critical successful factors of ERP implementation: a review Recently e -business has become the focus of management interest both in academics and in business. Among the major components of e -business, ERP (Enterprise Resource Planning) is the backbone of other applications. Therefore more and more enterprises attempt to adopt this new application in order to improve their business competitiveness. Owing to the specific characteristics of ERP, its implementation is more difficult than that of traditional information systems. For this reason, how to implement ERP successfully becomes an important issue for both academics and practitioners. In this paper, a review on critical successful factors of ERP in important MIS publications will be presented. Additionally traditional IS implementatio n and ERP implementation will be compared and the findings will be served as the basis for further research." }, { "instance_id": "R29351xR29320", "comparison_id": "R29351", "paper_id": "R29320", "text": "ERP systems business value: a critical review of empirical literature The business value generated by information and communication technologies (ICT) has been for long time a major research topic. Recently there is a growing research interest in the business value generated by particular types of information systems (IS). One of them is the enterprise resource planning (ERP) systems, which are increasingly adopted by organizations for supporting and integrating key business and management processes. The current paper initially presents a critical review of the existing empirical literature concerning the business value of the ERP systems, which investigates the impact of ERP systems adoption on various measures of organizational performance. Then is critically reviewed the literature concerning the related topic of critical success factors (CSFs) in ERP systems implementation, which aims at identifying and investigating factors that result in more successful ERP systems implementation that generate higher levels of value for organizations. Finally, future directions of research concerning ERP systems business value are proposed." }, { "instance_id": "R29351xR29326", "comparison_id": "R29351", "paper_id": "R29326", "text": "Enterprise resource planning systems (ERP) and user performance: a literature review Organizations spend billions of dollars and countless hours implementing Enterprise Resources Planning systems (ERPs) to attain better performance. However, the failure rate of ERP implementation is very high, with subsequent research interests focussing mainly on understanding the failure factors. With the spotlight of prior research mainly focussed on success and failure factors other important aspects have not been given enough attention. This paper starts from the proposition that users can evaluate the benefits of the ERP systems and users can judge whether or not ERPs provide reasonable payoff and outcomes for organizations. This premise is based on the view that the user creates the benefits through the accomplishment of tasks leading to the achievement of goals. The study consists of comprehensive literature review bringing to light previous investigations on the impacts of ERP on user performance and presents how ERP research utilises IS theory to investigate ERP in different settings." 
}, { "instance_id": "R29351xR29310", "comparison_id": "R29351", "paper_id": "R29310", "text": "Understanding the impact of enterprise systems on management decision making: an agenda for future research Enterprise systems have been widely sold on the basis that they reduce costs through process efficiency and enhance decision making by providing accurate and timely enterprise wide information. Although research shows that operational efficiencies can be achieved, ERP systems are notoriously poor at delivering management information in a form that would support effective decision-making. Research suggests managers are not helped in their decision-making abilities simply by increasing the flow of information. This paper calls for a new approach to researching the impact of ERP implementations on global organizations by examining decision making processes at 3 levels in the organisation (corporate, core implementation team and local site)." }, { "instance_id": "R29351xR29344", "comparison_id": "R29351", "paper_id": "R29344", "text": "The effects of management information and ERP systems on strategic knowledge management and decision-making resources, to shorten delivery time, to increase quality and product variety, in other words, are obligated to develop \"an integrated information system\". Enterprise Resource Planning (ERP) systems help unleash the true potential of companies by integrating business and management processes. In this study, how and in what direction Enterprise Resource Planning Systems affect the decision of the upper and middle level managers of businesses together with the effects of ERP systems on strategic knowledge management to make enterprises more innovative and competitively advantaged, transformable, and decisions based on ERP systems will be investigated. As a result of literature review study, the role and the impacts of these systems on strategic information management and decisionmaking will be investigated with both global and local business application examples. 2013 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of the 9 th" }, { "instance_id": "R29351xR29298", "comparison_id": "R29351", "paper_id": "R29298", "text": "Potential impact of cultural differences on enterprise resource planning (ERP) projects Over the last ten years, there has been a dramatic growth in the acquisition of Enterprise Resource Planning (ERP) systems, where the market leader is the German company, SAP AG. However, more recently, there has been an increase in reported ERP failures, suggesting that the implementation issues are not just technical, but encompass wider behavioural factors." }, { "instance_id": "R29351xR29316", "comparison_id": "R29351", "paper_id": "R29316", "text": "Organisations and vanilla software: what do we know about ERP systems and competitive advantage? Enterprise Resource Planning (ERP) systems have become a de facto standard for integrating business functions. But an obvious question arises: if every business is using the same socalled \u201cVanilla\u201d software (e.g. an SAP ERP system) what happens to the competitive advantage from implementing IT systems? If we discard our custom-built legacy systems in favour of enterprise systems do we also jettison our valued competitive advantage from IT? While for some organisations ERPs have become just a necessity for conducting business, others want to exploit them to outperform their competitors. 
In the last few years, researchers have begun to study the link between ERP systems and competitive advantage. This link will be the focus of this paper. We outline a framework summarizing prior research and suggest two researchable questions. A future article will develop the framework with two empirical case studies from within part of the European food industry." }, { "instance_id": "R29351xR29332", "comparison_id": "R29351", "paper_id": "R29332", "text": "Taking knowledge management on the ERP road: a two-dimensional analysis\u201d In today's fierce business competition, companies face the tremendous challenge of expanding markets, improving their products, services and processes and exploiting their intellectual capital in a dynamic network of knowledge-intensive relations inside and outside their borders. In order to accomplish these objectives, more and more companies are turning to the Enterprise Resource Planning systems (ERP). On the other hand, Knowledge Management (KM) has received considerable attention in the last decade and is continuously gaining interest by industry, enterprises and academia. As we are moving into an era of \u201cknowledge capitalism\u201d, knowledge management will play a fundamental role in the success of today's businesses. This paper aims at throwing light on the role of KM in the ERP success first and on their possible integration second. A wide range of academic and practitioner literature related to KM and ERP is reviewed. On the basis of this review, the paper gives answers to specific research questions and analyses future research directions." }, { "instance_id": "R30476xR30284", "comparison_id": "R30476", "paper_id": "R30284", "text": "The Relationship between CO2 Emission, Energy Consumption, Urbanization and Trade Openness for Selected CEECs This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 for selected Central and Eastern European Countries (CEECs), including, Albania, Bulgaria, Croatia, Czech Republic, Macedonia, Hungary, Poland, Romania, Slovak Republic and Slovenia for the period of 1991\u20132011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a %1.0863 increase in CO2 emissions. Results for the existence and direction of panel Vector Error Correction Model (VECM) Granger causality method show that there is bidirectional causal relationship between CO2 emissions - real GDP and energy consumption-real GDP as well." }, { "instance_id": "R30476xR29900", "comparison_id": "R30476", "paper_id": "R29900", "text": "Environmental Kuznet\u2019s curve for India: evidence from tests for cointegration with unknown structural breaks This study revisits the cointegrating relationship between carbon emission, energy use, economic activity and trade openness for India using threshold cointegration tests with a view to testing the environmental Kuznet\u2019s curve hypothesis in the presence of possible regime shift in long run relationship of the variables for the period 1971 to 2008. The article confirms the existence of \u2018regime-shift\u2019 or \u2018threshold\u2019 cointegration among the variables and environmental Kuznet\u2019s curve for India. 
It challenges previous empirical works for India which fail to establish a cointegrating relationship among these variables, and explains the logical and econometric reasons for this. The study finds that the carbon emission is highly elastic with respect to real per capita income and energy use in India. This finding is critical for the successful design and execution of an energy and environmental policy framework which would pave the low carbon sustainable growth path in India." }, { "instance_id": "R30476xR29404", "comparison_id": "R30476", "paper_id": "R29404", "text": "Richer and cleaner? A study on carbon dioxide emissions in developing countries The Climate Change debate has drawn attention to the problem of greenhouse gases emissions into the atmosphere. One of the most important issues in the policy debate is the role that should be played by developing countries in joining the commitment of developed countries to reduce GHG emissions, and particularly CO2 emissions. This debate calls into play the relationship between energy consumption, CO2 emissions and economic development. In this paper we use a panel data model for 110 world countries to estimate the relationship between CO2 emissions and GDP and to produce emission forecasts. The paper contains three major results: (i) the empirical relationship between carbon dioxide and income is well described by non-linear Gamma and Weibull specifications as opposed to more usual linear and log-linear functional forms; (ii) our single equation reduced form model is comparable in terms of forecasted emissions with other more complex, less data driven models; (iii) despite the decreasing marginal propensity to pollute, our forecasts show that future global emissions will rise. The average world growth of CO2 emissions between 2000 and 2020 is about 2.2% per year, while that of Non Annex 1 countries is posted at 3.3% per year." }, { "instance_id": "R30476xR30272", "comparison_id": "R30476", "paper_id": "R30272", "text": "Estimating the environmental Kuznets curve for Spain by considering fuel oil prices (1874\u20132011) We perform a structural analysis on an environmental Kuznets curve (EKC) for Spain by exploiting long time series (1874\u20132011) and by using real oil prices as an indicator of variations in fuel energy consumption. This empirical strategy allows us to both capture the effect of the most pollutant energy on carbon dioxide (CO2) emissions and, at the same time, preclude potential endogeneity problems derived from the direct inclusion of fuel consumption in the econometric specification. Knowing the extent to which oil prices affect CO2 emissions has a straightforward application for environmental policy. The dynamics estimates of the long and short-term relationships among CO2, economic growth and oil prices are built through an autoregressive distributed lag (ARDL) model. Our test results support the EKC hypothesis. Moreover, real oil prices are clearly revealed as a valuable indicator of pollutant energy consumption." }, { "instance_id": "R30476xR30189", "comparison_id": "R30476", "paper_id": "R30189", "text": "CO2 emissions, economic growth, energy consumption, trade and urbanization in new EU member and candidate countries: A panel data analysis This paper investigates the causal relationship between energy consumption, carbon dioxide emissions, economic growth, trade openness and urbanization for a panel of new EU member and candidate countries over the period 1992\u20132010.
Panel unit root tests, panel cointegration methods and panel causality tests are used to investigate this relationship. The main results provide evidence supporting the Environmental Kuznets Curve hypothesis. Hence, there is an inverted U-shaped relationship between environment and income for the sampled countries. The results also indicate that there is a short-run unidirectional panel causality running from energy consumption, trade openness and urbanization to carbon emissions, from GDP to energy consumption, from GDP, energy consumption and urbanization to trade openness, from urbanization to GDP, and from urbanization to trade openness. As for the long-run causal relationship, the results indicate that estimated coefficients of lagged error correction term in the carbon dioxide emissions, energy consumption, GDP, and trade openness equations are statistically significant, implying that these four variables could play an important role in adjustment process as the system departs from the long-run equilibrium." }, { "instance_id": "R30476xR29984", "comparison_id": "R30476", "paper_id": "R29984", "text": "Fossil & renewable energy consumption, GHGs (greenhouse gases) and economic growth: Evidence from a panel of EU (European Union) countries Recently a great number of empirical research studies have been conducted on the relationship between certain indicators of environmental degradation and income. The EKC (Environmental Kuznets Curve) hypothesis has been tested for various types of environmental degradation. The EKC hypothesis states that the relationship between environmental degradation and income per capita takes the form of an inverted U shape. In this paper the EKC hypothesis was investigated with regards to the relationship between carbon emissions, income and energy consumption in 16 EU (European Union) countries. We conducted panel data analysis for the period of 1990\u20132008 by fixing the multicollinearity problem between the explanatory variables using their centered values. The main contribution of this paper is that the EKC hypothesis has been investigated by separating final energy consumption into renewable and fossil fuel energy consumption. Unfortunately, the inverted U-shape relationship (EKC) does not hold for carbon emissions in the 16 EU countries. The other important finding is that renewable energy consumption contributes around 1/2 less per unit of energy consumed than fossil energy consumption in terms of GHG (greenhouse gas) emissions in EU countries. This implies that a shift in energy consumption mix towards alternative renewable energy technologies might decrease the GHG emissions." }, { "instance_id": "R30476xR30151", "comparison_id": "R30476", "paper_id": "R30151", "text": "Public budgets for energy RD&D and the effects on energy intensity and pollution levels This study, based on the N-shaped cubic model of the environmental Kuznets curve, analyzes the evolution of per capita greenhouse gas emissions (GHGpc) using not just economic growth but also public budgets dedicated to energy-oriented research development and demonstration (RD&D) and energy intensity. The empirical evidence, obtained from an econometric model of fixed effects for 28 OECD countries during 1994\u20132010, suggests that energy innovations help reduce GHGpc levels and mitigate the negative impact of energy intensity on environmental quality. When countries develop active energy RD&D policies, they can reduce both the rates of energy intensity and the level of GHGpc emissions. 
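Several of the panel studies compiled here, such as the fixed-effects model for 28 OECD countries just described, control for unobserved country heterogeneity with country fixed effects; a minimal sketch of that estimator in its least-squares dummy variable form, run on synthetic data with statsmodels, might look as follows (country count, years, variable names and coefficients are all made up for illustration).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for c in [f"C{i}" for i in range(10)]:
    alpha = rng.normal(0, 0.5)                      # unobserved country effect
    for year in range(1994, 2011):
        ln_gdp = rng.normal(9, 0.3)
        energy_int = rng.normal(5, 0.5)
        ln_ghg = alpha + 0.6 * ln_gdp + 0.3 * energy_int + rng.normal(0, 0.1)
        rows.append((c, year, ln_ghg, ln_gdp, energy_int))
df = pd.DataFrame(rows, columns=["country", "year", "ln_ghg", "ln_gdp", "energy_int"])

# country dummies absorb the fixed effects; the slopes on ln_gdp and energy_int are the quantities of interest
fe = smf.ols("ln_ghg ~ ln_gdp + energy_int + C(country)", data=df).fit()
print(fe.params[["ln_gdp", "energy_int"]])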
This paper incorporates a moderating variable to the econometric model that emphasizes the effect that GDP has on energy intensity. It also adds a variable that reflects the difference between countries that have made a greater economic effort in energy RD&D, which in turn corrects the GHG emissions resulting from the energy intensity of each country." }, { "instance_id": "R30476xR30161", "comparison_id": "R30476", "paper_id": "R30161", "text": "Causal relationship between CO2 emissions, real GDP, energy consumption, financial development, trade openness, and urbanization in Tunisia The aim of this paper is to examine the causal relationship between CO2 emissions, real GDP, energy consumption, financial development, trade openness, and urbanization in Tunisia over the period of 1971\u20132012. The long-run relationship is investigated by the auto-regressive distributed lag (ARDL) bounds testing approach to cointegration and error correction method (ECM). The results of the analysis reveal a positive sign for the coefficient of financial development, suggesting that the financial development in Tunisia has taken place at the expense of environmental pollution. The Tunisian case also shows a positive monotonic relationship between real GDP and CO2 emissions. This means that the results do not support the validity of environmental Kuznets curve (EKC) hypothesis. In addition, the paper explores causal relationship between the variables by using Granger causality models and it concludes that financial development plays a vital role in the Tunisian economy." }, { "instance_id": "R30476xR30426", "comparison_id": "R30476", "paper_id": "R30426", "text": "The impact of financial development and trade on environmental quality in Iran Undesirable changes in the environment such as global warming and emissions of greenhouse gases have elicited worldwide attention in recent decades. Environmental problems emanating from economic activities targeted at achieving higher economic growth rate have become a controversial issue. In this study, the effects of financial development and trade on environmental quality in Iran were investigated. To this purpose, statistical data collected between the periods of 1970 and 2011 were used. In addition to using the autoregressive distributed lag model (ARDL), the short-term and long-term relationships between the variables were estimated and analyzed. Moreover, the environmental Kuznets curve (EKC) hypothesis was evaluated using various pollutants. The results show that financial development accelerates the degradation of the environment; however, an increase in trade openness reduces the damage to the environment in Iran. Furthermore, the results did not agree with the EKC hypothesis in Iran. Error correction coefficient showed that in each period, 49% of imbalances were justified and approached their long-run procedure. Structural stability tests showed that the estimated coefficients were stable over the period." }, { "instance_id": "R30476xR30280", "comparison_id": "R30476", "paper_id": "R30280", "text": "Estimating the relationship between economic growth and environmental quality for the brics economies - a dynamic panel data approach It has been forecasted by many economists that in the next couple of decades the BRICS economies are going to experience an unprecedented economic growth. 
This massive economic growth would definitely have a detrimental impact on the environment since these economies, like others, would extract their environmental and natural resource to a larger scale in the process of their economic growth. Therefore, maintaining environmental quality while growing has become a major challenge for these economies. However, the proponents of Environmental Kuznets Curve (EKC) Hypothesis - an inverted U shape relationship between income and emission per capita, suggest BRICS economies need not bother too much about environmental quality while growing because growth would eventually take care of the environment once a certain level of per capita income is achieved. In this backdrop, the present study makes an attempt to estimate EKC type relationship, if any, between income and emission in the context of the BRICS countries for the period 1997 to 2011. Therefore, the study first adopts fixed effect (FE) panel data model to control time constant country specific effects, and then uses Generalized Method of Moments (GMM) approach for dynamic panel data to address endogeneity of income variable and dynamism in emission per capita. Apart from income, we also include variables related to financial sector development and energy utilization to explain emission. The fixed effect model shows a significant EKC type relation between income and emission supporting the previous literature. However, GMM estimates for the dynamic panel model show the relationship between income and emission is actually U shaped with the turning point being out of sample. This out of sample turning point indicates that emission has been growing monotonically with growth in income. Factors like, net energy imports and share of industrial output in GDP are found to be significant and having detrimental impact on the environment in the dynamic panel model. However, these variables are found to be insignificant in FE model. Capital account convertibility shows significant and negative impact on the environment irrespective of models used. The monotonically increasing relationship between income and emission suggests the BRICS economies must adopt some efficiency oriented action plan so that they can grow without putting much pressure on the environment. These findings can have important policy implications as BRICS countries are mainly depending on these factors for their growth but at the same time they can cause serious threat to the environment." }, { "instance_id": "R30476xR30202", "comparison_id": "R30476", "paper_id": "R30202", "text": "Is there an environmental Kuznets curve for South Africa? A co-summability approach using a century of data There exists a huge international literature on the, so-called, Environmental Kuznets Curve (EKC) hypothesis, which in turn, postulates an inverted u-shaped relationship between environmental pollutants and output. The empirical literature on EKC has mainly used test for cointegration, based on polynomial relationships between pollution and income. Motivated by the fact that, measured in per capita CO2 equivalent emissions, South Africa is the world's most carbon-intensive non-oil-producing developing country, this paper aims to test the validity of the EKC for South Africa. For this purpose, we use a century of data (1911\u20132010), to capture the process of development better compared to short sample-based research; and the concept of co-summability, which is designed to analyze non-linear long-run relations among persistent processes. 
Our results, however, provide no support of the EKC for South Africa, both for the full-sample and sub-samples (determined by tests of structural breaks), implying that to reduce emissions without sacrificing growth, policies should be aimed at promoting energy efficiency." }, { "instance_id": "R30476xR29581", "comparison_id": "R30476", "paper_id": "R29581", "text": "Environment Kuznets curve for CO2 emissions: A cointegration analysis for China This study examines the long-run relationship between carbon emissions and energy consumption, income and foreign trade in the case of China by employing time series data of 1975-2005. In particular the study aims at testing whether environmental Kuznets curve (EKC) relationship between CO2 emissions and per capita real GDP holds in the long run or not. Auto regressive distributed lag (ARDL) methodology is employed for empirical analysis. A quadratic relationship between income and CO2 emission has been found for the sample period, supporting EKC relationship. The results of Granger causality tests indicate one way causality runs through economic growth to CO2 emissions. The results of this study also indicate that the carbon emissions are mainly determined by income and energy consumption in the long run. Trade has a positive but statistically insignificant impact on CO2 emissions." }, { "instance_id": "R30476xR30295", "comparison_id": "R30476", "paper_id": "R30295", "text": "CO2 emissions, real output, energy consumption, trade, urbanization and financial development: testing the EKC hypothesis for the USA This study aims to investigate the relationship between carbon dioxide (CO2) emissions, energy consumption, real output (GDP), the square of real output (GDP2), trade openness, urbanization, and financial development in the USA for the period 1960\u20132010. The bounds testing for cointegration indicates that the analyzed variables are cointegrated. In the long run, energy consumption and urbanization increase environmental degradation while financial development has no effect on it, and trade leads to environmental improvements. In addition, this study does not support the validity of the environmental Kuznets curve (EKC) hypothesis for the USA because real output leads to environmental improvements while GDP2 increases the levels of gas emissions. The results from the Granger causality test show that there is bidirectional causality between CO2 and GDP, CO2 and energy consumption, CO2 and urbanization, GDP and urbanization, and GDP and trade openness while no causality is determined between CO2 and trade openness, and gas emissions and financial development. In addition, we have enough evidence to support one-way causality running from GDP to energy consumption, from financial development to output, and from urbanization to financial development. In light of the long-run estimates and the Granger causality analysis, the US government should take into account the importance of trade openness, urbanization, and financial development in controlling for the levels of GDP and pollution. Moreover, it should be noted that the development of efficient energy policies likely contributes to lower CO2 emissions without harming real output." }, { "instance_id": "R30476xR29415", "comparison_id": "R30476", "paper_id": "R29415", "text": "An Exploration of the Conceptual and Empirical Basis of the Environmental Kuznets Curve We examine the conceptual and empirical basis of the environmental Kuznets curve. 
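Many of the abstracts in this block test the EKC by regressing emissions on income and its square (the USA study above, for instance, includes both GDP and GDP2); a minimal sketch of that quadratic specification and the implied turning point, run on synthetic data, is given below and is not the estimation of any particular paper.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ln_gdp = np.linspace(6, 11, 200) + rng.normal(0, 0.1, 200)                  # log per-capita income (synthetic)
ln_co2 = -2 + 1.8 * ln_gdp - 0.1 * ln_gdp ** 2 + rng.normal(0, 0.05, 200)   # built-in inverted U

X = sm.add_constant(np.column_stack([ln_gdp, ln_gdp ** 2]))
res = sm.OLS(ln_co2, X).fit()
b1, b2 = res.params[1], res.params[2]

# the EKC requires b1 > 0 and b2 < 0; the income turning point is exp(-b1 / (2 * b2))
print(res.params)
print("turning point (income level):", np.exp(-b1 / (2 * b2)))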
From both perspectives, the relationship lacks firm foundations. In particular, the empirical relationship is shown to be highly sensitive to the choice of pollutant, sample of countries and time period. This strongly suggests that there is an omitted variables problem. We find that two important omitted variables are education and inequality. Also, we show that the observed relationship is sensitive to the measure of income/welfare used. The paper concludes with a discussion of some policy implications of our findings. Copyright 2002 by Blackwell Publishers Ltd/University of Adelaide and Flinders University of South Australia" }, { "instance_id": "R30476xR29437", "comparison_id": "R30476", "paper_id": "R29437", "text": "The impact of population pressure on global carbon dioxide emissions, 1975\u20131996: evidence from pooled cross-country data In assessing and forecasting the impact of population change on carbon dioxide emissions, most previous studies have assumed a unitary elasticity of emissions with respect to population change, i.e. that a 1% increase in population results in a 1% increase in emissions. This study finds that global population change over the last two decades is more than proportionally associated with growth in carbon dioxide emissions, and that the impact of population change on emissions is much more pronounced in developing countries than in developed countries. The empirical findings are based on data for 93 countries over the period 1975\u20131996." }, { "instance_id": "R30476xR29384", "comparison_id": "R30476", "paper_id": "R29384", "text": "Are environmental Kuznets curves misleading us? The case of CO2 emissions Environmental Kuznets curve (EKC) analysis links changes in environmental quality to national economic growth. The reduced form models, however, do not provide insight into the underlying processes that generate these changes. We compare EKC models to structural transition models of per capita CO2 emissions and per capita GDP, and find that, for the 16 countries which have undergone such a transition, the initiation of the transition correlates not with income levels but with historic events related to the oil price shocks of the 1970s and the policies that followed them. In contrast to previous EKC studies of CO2, the transition away from positive emissions elasticities for these 16 countries is found to occur as a sudden, discontinuous transition rather than as a gradual change. We also demonstrate that the third order polynomial 'N' dependence of emissions on income is the result of data aggregation. We conclude that neither the 'U'- nor the 'N'-shaped relationship between CO2 emissions and income provides a reliable indication of future behaviour." }, { "instance_id": "R30476xR29976", "comparison_id": "R30476", "paper_id": "R29976", "text": "The environmental Kuznets curve and the role of coal consumption in India: cointegration and causality analysis in an open economy This study investigates the dynamic relationship between coal consumption, economic growth, trade openness and CO2 emissions for the Indian economy. In doing so, the Narayan and Popp structural break unit root test is applied to test the order of integration of the variables. The long run relationship between the variables is tested by applying the ARDL bounds testing approach to cointegration developed by Pesaran et al. (2001). The results confirm the existence of a long run cointegration relationship between coal consumption, economic growth, trade openness and CO2 emissions.
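The India coal study above establishes a long-run relationship with the ARDL bounds test of Pesaran et al. (2001); as a simpler illustrative stand-in (not the bounds test itself), the sketch below runs ADF unit-root checks on the levels and an Engle-Granger cointegration test on synthetic series with statsmodels.

import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(1)
n = 200
ln_energy = np.cumsum(rng.normal(0, 1, n))           # I(1) regressor (synthetic random walk)
ln_co2 = 0.8 * ln_energy + rng.normal(0, 0.5, n)     # cointegrated with it by construction

print("ADF p-value, ln_co2:", adfuller(ln_co2)[1])       # high p-value: non-stationary in levels
print("ADF p-value, ln_energy:", adfuller(ln_energy)[1])

t_stat, p_value, _ = coint(ln_co2, ln_energy)            # null hypothesis: no cointegration
print("Engle-Granger t-stat:", t_stat, "p-value:", p_value)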
Our empirical exercise indicates the presence of the Environmental Kuznets Curve (EKC) in the long run as well as the short run. Coal consumption as well as trade openness contributes to CO2 emissions. The causality results report the feedback hypothesis between economic growth and CO2 emissions, and the same inference is drawn between coal consumption and CO2 emissions. Moreover, trade openness Granger causes economic growth, coal consumption and CO2 emissions." }, { "instance_id": "R30476xR29779", "comparison_id": "R30476", "paper_id": "R29779", "text": "Modeling the CO2 emissions, energy use, and economic growth in Russia This paper applies the co-integration technique and causality test to examine the dynamic relationships between pollutant emissions, energy use, and real output during the period between 1990 and 2007 for Russia. The empirical results show that in the long-run equilibrium, emissions appear to be energy use elastic and output inelastic. This elasticity suggests high energy use responsiveness to changes in emissions. The output exhibits a negative significant impact on emissions and does not support the EKC hypothesis. These indicate that both economic growth and energy conservation policies can reduce emissions with no negative impact on economic development. The causality results indicate that there is a bidirectional strong Granger-causality running between output, energy use and emissions, and whenever a shock occurs in the system, each variable makes a short-run adjustment to restore the long-run equilibrium. The average speed of adjustment is as low as just over 0.26 years. Hence, in order to reduce emissions, the best environmental policy is to increase infrastructure investment to improve energy efficiency, and to step up energy conservation policies to reduce any unnecessary waste of energy. That is, energy conservation is expected to improve energy efficiency, thereby promoting economic growth." }, { "instance_id": "R30476xR29973", "comparison_id": "R30476", "paper_id": "R29973", "text": "The environmental Kuznets curve in Asia: the case of sulphur and carbon emissions The present study examines whether the Race to the Bottom and Revised EKC scenarios presented by Dasgupta and others (2002) are, with regard to the analytical framework of the Environmental Kuznets Curve (EKC), applicable in Asia to representative environmental indices, such as sulphur emissions and carbon emissions. To carry out this study, a generalized method of moments (GMM) estimation was performed, using panel data of 19 economies for the period 1950-2009. The main findings of the analysis on the validity of the EKC indicate that sulphur emissions follow the expected inverted U-shape pattern, while carbon emissions tend to increase in line with per capita income in the observed range. As for the Race to the Bottom and Revised EKC scenarios, the latter was verified in sulphur emissions, as their EKC trajectories represent a linkage of the later development of the economy with the lower level of emissions, while the former was not present in either sulphur or carbon emissions." }, { "instance_id": "R30476xR30085", "comparison_id": "R30476", "paper_id": "R30085", "text": "Non-renewable and renewable energy consumption and CO2 emissions in OECD countries: A comparative analysis This paper attempts to explore the determinants of CO2 emissions using the STIRPAT model and data from 1980 to 2011 for OECD countries.
The empirical results show that non-renewable energy consumption increases CO2 emissions, whereas renewable energy consumption decreases CO2 emissions. Further, the results support the existence of an environmental Kuznets curve between urbanisation and CO2 emissions, implying that at higher levels of urbanisation, the environmental impact decreases. Therefore, the overall evidence suggests that policy makers should focus on urban planning as well as clean energy development to make substantial contributions to both reducing non-renewable energy use and mitigating climate change." }, { "instance_id": "R30476xR29627", "comparison_id": "R30476", "paper_id": "R29627", "text": "The emissions, energy consumption, and growth nexus: Evidence from the commonwealth of independent states This study examines the causal relationship between carbon dioxide emissions, energy consumption, and real output within a panel vector error correction model for eleven countries of the Commonwealth of Independent States over the period 1992-2004. In the long-run, energy consumption has a positive and statistically significant impact on carbon dioxide emissions while real output follows an inverted U-shape pattern associated with the Environmental Kuznets Curve (EKC) hypothesis. The short-run dynamics indicate unidirectional causality from energy consumption and real output, respectively, to carbon dioxide emissions along with bidirectional causality between energy consumption and real output. In the long-run there appears to be bidirectional causality between energy consumption and carbon dioxide emissions." }, { "instance_id": "R30476xR29843", "comparison_id": "R30476", "paper_id": "R29843", "text": "An econometric study of carbon dioxide (CO2) emissions, energy consumption, and economic growth of Pakistan Purpose The purpose of this paper is to examine the relationship among environmental pollution, economic growth and energy consumption per capita in the case of Pakistan. The per capita carbon dioxide (CO2) emission is used as the environmental indicator, the commercial energy use per capita as the energy consumption indicator, and the per capita gross domestic product (GDP) as the economic indicator. Design/methodology/approach The investigation is made on the basis of the environmental Kuznets curve (EKC), using time series data from 1971 to 2006, by applying different econometric tools like ADF unit root, Johansen cointegration, VECM and Granger causality tests. Findings The Granger causality test shows that there is a long term relationship between these three indicators, with bidirectional causality between per capita CO2 emission and per capita energy consumption. A monotonically increasing curve between GDP and CO2 emission has been found for the sample period, rejecting the EKC relationship, implying that as per capita GDP increases, a linear increase will be observed in per capita CO2 emission. Research limitations/implications Future research should replace the economic growth variable, i.e. GDP, by an industrial growth variable because the industrial sector is a major contributor of pollution by emitting CO2. Practical implications The empirical findings will help the policy makers of Pakistan in understanding the severity of the CO2 emissions issue and in developing new standards and monitoring networks for reducing CO2 emissions.
Originality/value Energy consumption is the major cause of environmental pollution in Pakistan but no substantial work has been done in this regard with reference to Pakistan." }, { "instance_id": "R30476xR29957", "comparison_id": "R30476", "paper_id": "R29957", "text": "Does financial instability increase environmental degradation? Fresh evidence from Pakistan The present study explores the relationship between financial instability and environmental degradation within the multivariate framework using time series data over the period of 1971\u20132009 in case of Pakistan. The long run relationship is investigated by the ARDL bounds testing approach to cointegration, and error correction method (ECM) is applied to examine the short run dynamics. The stationary properties of the variables are investigated by applying Saikkonen and Lutkepohl unit root test. Empirical evidence confirms that there exists a long run relationship between both variables and financial instability increases environmental degradation." }, { "instance_id": "R30476xR29893", "comparison_id": "R30476", "paper_id": "R29893", "text": "The impacts of transport energy consumption, foreign direct investment and income on CO2 emissions in ASEAN-5 economies In this study, we incorporate new variables and assess the impact of transportation sector's energy consumption and foreign direct investment on CO2 emissions for ASEAN-5 economies using the cointegration and Granger causality methods. This study also attempts to validate the Environmental Kuznets Curve (EKC) hypothesis. Our results reveal that the CO2 emissions and their determinants are co-integrated only in Indonesia, Malaysia and Thailand. The long-run elasticity estimation suggests that income and transport energy consumption significantly influence CO2 emissions whereas FDI is not significant. Economic growth plays a greater role in contributing to CO2 emission in ASEAN-5. Nonetheless, we find that the inverted U-shape EKC hypothesis is not applicable to the ASEAN-5 economies, especially in Indonesia, Malaysia and Thailand. In the long run, the bi-directional causality between economic growth and CO2 emissions is detected in Indonesia and Thailand, while we find unidirectional causality running from GDP to CO2 emissions in Malaysia. We also observe bi-directional causality between transport energy consumption, FDI and CO2 emissions in Thailand and Malaysia. As an immediate policy option, controlling energy consumption in transportation sector may result in a significant reduction in CO2 emissions. However, this may slow the process of economic growth in Malaysia and Indonesia. Alternatively, we suggest policymakers to place more emphasis on energy efficient transportation system and policies to minimise fossil fuel consumption. Thus, the quality of environment can be improved with less deleterious impact on economic growth." }, { "instance_id": "R30476xR30381", "comparison_id": "R30476", "paper_id": "R30381", "text": "Atmospheric consequences of trade and human development: A case of BRIC countries This paper looks into the causal association between economic growth, CO2 emission, trade volume, and human development indicator for Brazil, Russia, India, and China (BRIC countries) during 1980-2013. Following a generalized method of moments (GMM) technique, we have found out that bidirectional causality exists between CO2 emissions and economic growth. 
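Causality claims such as the bidirectional CO2-growth link reported for the BRIC countries are typically backed by Granger causality tests; the sketch below illustrates the pairwise version in statsmodels on synthetic growth series, testing both directions (lag length and data are arbitrary).

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 120
gdp_growth = rng.normal(0.03, 0.01, n)
co2_growth = np.zeros(n)
for t in range(1, n):
    co2_growth[t] = 0.5 * gdp_growth[t - 1] + rng.normal(0, 0.005)   # CO2 growth lags GDP growth by construction

df = pd.DataFrame({"co2": co2_growth, "gdp": gdp_growth})
# null hypothesis: the second column does NOT Granger-cause the first
grangercausalitytests(df[["co2", "gdp"]], maxlag=2)
grangercausalitytests(df[["gdp", "co2"]], maxlag=2)      # reverse direction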
Feedback hypothesis is supported between CO2 emissions and human development, trade volume and human development, economic growth, and human development, and CO2 emissions and trade volume. Apart from finding out the unidirectional association from trade volume to economic growth, this study also validated the existence of Environmental Kuznets curve. Empirical findings of the study substantiate that the policymakers of the BRIC nations must focus on the green energy initiatives, either by in-house development or by technology transfer. This movement will allow them to control the ambient air pollutions prevalent in these nations." }, { "instance_id": "R30476xR29633", "comparison_id": "R30476", "paper_id": "R29633", "text": "Economic growth and pollutant emissions in Tunisia: an empirical analysis of the environmental Kuznets curve This paper investigates the relationship between economic growth and pollutant emissions for a small and open developing country, Tunisia, during the period 1961-2004. The investigation is made on the basis of the environmental Kuznets curve hypothesis, using time series data and cointegration analysis. Carbon dioxide (CO2) and sulfur dioxide (SO2) are used as the environmental indicators, and GDP as the economic indicator. Our results show that there is a long-run cointegrating relationship between the per capita emissions of two pollutants and the per capita GDP. An inverted U relationship between SO2 emissions and GDP has been found, with income turning point approximately equals to $1200 (constant 2000 prices) or to $3700 (in PPP, constant 2000 prices). However, a monotonically increasing relationship with GDP is found more appropriate for CO2 emissions. Furthermore, the causality results show that the relationship between income and pollution in Tunisia is one of unidirectional causality with income causing environmental changes and not vice versa, both in the short-run and long-run. This implies that an emission reduction policies and more investment in pollution abatement expense will not hurt economic growth. It could be a feasible policy tool for Tunisia to achieve its sustainable growth in the long-run." }, { "instance_id": "R30476xR30347", "comparison_id": "R30476", "paper_id": "R30347", "text": "Testing environmental Kuznets curve hypothesis: the role of renewable and non-renewable energy consumption and trade in OECD countries This paper investigates the causal relationships between per capita CO2 emissions, gross domestic product (GDP), renewable and non-renewable energy consumption, and international trade for a panel of 25 OECD countries over the period 1980\u20132010. Short-run Granger causality tests show the existence of bidirectional causality between: renewable energy consumption and imports, renewable and non-renewable energy consumption, non-renewable energy and trade (exports or imports); and unidirectional causality running from: exports to renewable energy, trade to CO2 emissions, output to renewable energy. There are also long-run bidirectional causalities between all our considered variables. Our long-run fully modified ordinary least squares (FMOLS) and dynamic ordinary least squares (DOLS) estimates show that the inverted U-shaped environmental Kuznets curve (EKC) hypothesis is verified for this sample of OECD countries. They also show that increasing non-renewable energy increases CO2 emissions. Interestingly, increasing trade or renewable energy reduces CO2 emissions. 
According to these results, more trade and more use of renewable energy are efficient strategies to combat global warming in these countries." }, { "instance_id": "R30476xR29676", "comparison_id": "R30476", "paper_id": "R29676", "text": "Environmental Kuznets curves, carbon emissions, and public choice ABSTRACT Concern about global climate change has elicited responses from governments around the world. These responses began with the 1997 Kyoto Protocol and have continued with other negotiations, including the 2009 Copenhagen Summit. These negotiations raised important questions about whether countries will reduce greenhouse gas emissions and, if so, how the burden of emissions reductions will be shared. To investigate these questions, we utilize environmental Kuznets curves for carbon emissions for the G8 plus five main developing countries. Our findings raise doubts about the feasibility of reducing global carbon emissions and shed light on the different positions taken by countries on the distribution of emissions reductions." }, { "instance_id": "R30476xR29711", "comparison_id": "R30476", "paper_id": "R29711", "text": "A panel data heterogeneous Bayesian estimation of environmental Kuznets curves for CO2 emissions This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by the US, has been driving currently observed EKC-like shapes. The latter is associated with monotonic income\u2013CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions." }, { "instance_id": "R30476xR29410", "comparison_id": "R30476", "paper_id": "R29410", "text": "Economic growth and atmospheric pollution in Spain: discussing the environmental Kuznets curve hypothesis The environmental Kuznets curve (EKC) hypothesis posits an inverted U relationship between environmental pressure and per capita income. Recent research has examined this hypothesis for different pollutants in different countries. Although certain empirical evidence shows that some environmental pressures have diminished in developed countries, the hypothesis could not be generalized to the global relationship between economy and environment at all. In this article we contribute to this debate by analyzing the trends of annual emission flux of six atmospheric pollutants in Spain. The study presents evidence that there is no correlation between higher income level and smaller emissions, except for SO2, whose evolution might be compatible with the EKC hypothesis. The authors argue that the relationship between income level and diverse types of emissions depends on many factors. 
Thus it cannot be thought that economic growth, by itself, will solve environmental problems." }, { "instance_id": "R30476xR30450", "comparison_id": "R30476", "paper_id": "R30450", "text": "Carbon dioxide (CO2) emissions during urbanization: A comparative study between China and Japan As the world's largest emitter of CO2, China is facing increasing domestic and international pressures on emissions mitigation. Considering Japan's leading position in energy conservation, this paper conducts a comparative study between China and Japan at the urbanization stages to analyze the similarities as well as differences of influencing factors of CO2 emissions. Results indicate that although CO2 emissions in Japan and China showed the similar characteristics of rigid growth during the urbanization processes, significant differences exist in factors such as CO2 emissions per capita, energy structure and energy intensity between the two countries, which are the determinants for CO2 emissions growth. A cointegration model is constructed to examine the long-run equilibrium relationship between CO2 emissions and factors including GDP, urbanization level, energy intensity and cement manufacture. Empirical results show that there is a quadratic relationship between income growth and CO2 emissions. Granger causality test is then adopted to explore the causal relationships among the variables. The future CO2 emissions and emission reduction potentials in China are estimated based on scenario analysis. Finally, Japan's valuable experience of reducing energy intensity and optimizing energy structure is concluded to provide important policy implications for China's strategic low-carbon planning and low carbon transition." }, { "instance_id": "R30476xR30082", "comparison_id": "R30476", "paper_id": "R30082", "text": "An empirical examination of environmental Kuznets curve (EKC) in West Africa This study aims to examine the relationship between income and environmental degradation in West Africa and ascertain the validity of EKC hypothesis in the region. The study adopted a panel data approach for fifteen West Africa countries for the period 1980-2012. The available results from our estimation procedure confirmed the EKC theory in the region. At early development stages, pollution rises with income and reaching a turning point, pollution dwindles with increasing income; as indicated by the significant inverse relation between income and environmental degradation. Consequently, literacy level and sound institutional arrangement were found to contribute significantly in mitigating the extent of environmental degradation. Among notable recommendation is the need for awareness campaign on environment abatement and adaptation strategies, strengthening of institutions to caution production and dumping pollution emitting commodities and encourage adoption of cleaner technologies." }, { "instance_id": "R30476xR29725", "comparison_id": "R30476", "paper_id": "R29725", "text": "On the Relationship Between CO 2 Emissions and Economic Growth: The Mauritian Experience This paper analyses the relationship between GDP and carbon dioxide emissions for Mauritius and vice-versa in a historical perspective. Using rigorous econometrics analysis, our results suggest that the carbon dioxide emission trajectory is closely related to the GDP time path. We show that emissions elasticity on income has been increasing over time. 
By estimating the EKC for the period 1975-2009, we were unable to prove the existence of a reasonable turning point and thus no EKC \u201cU\u201d shape was obtained. Our results suggest that Mauritius could not curb its carbon dioxide emissions in the last three decades. Thus, as hypothesized, the cost of degradation associated with GDP grows over time, suggesting that economic and human activities are having increasingly negative environmental impacts on the country as compared to their economic prosperity." }, { "instance_id": "R30476xR29443", "comparison_id": "R30476", "paper_id": "R29443", "text": "STIRPAT, IPAT and ImPACT: analytic tools for unpacking the driving forces of environmental impacts Abstract Despite the scientific consensus that humans have dramatically altered the global environment, we have a limited knowledge of the specific forces driving those impacts. One key limitation to a precise understanding of anthropogenic impacts is the absence of a set of refined analytic tools. Here we assess the analytic utility of the well-known IPAT identity, the newly developed ImPACT identity, and their stochastic cousin, the STIRPAT model. We discuss the relationship between these three formulations, their similar conceptual underpinnings and their divergent uses. We then refine the STIRPAT model by developing the concept of ecological elasticity (EE). To illustrate the application of STIRPAT and EE, we compute the ecological elasticities of population, affluence and other factors for cross-national emissions of carbon dioxide (CO2) from fossil fuel combustion and for the energy footprint, a composite measure comprising impacts from fossil fuel combustion, fuel wood, hydropower and nuclear power. Our findings suggest that population has a proportional effect (unitary elasticity) on CO2 emissions and the energy footprint. Affluence monotonically increases both CO2 emissions and the energy footprint. However, for the energy footprint the relationship between affluence and impact changes from inelastic to elastic as affluence increases, while for CO2 emissions the relationship changes from elastic to inelastic. Climate appears to affect both measures of impact, with tropical nations having considerably lower impact than non-tropical nations, controlling for other factors. Finally, indicators of modernization (urbanization and industrialization) are associated with high impacts. We conclude that the STIRPAT model, augmented with measures of ecological elasticity, allows for a more precise specification of the sensitivity of environmental impacts to the forces driving them. Such specifications not only inform the basic science of environmental change, but also point to the factors that may be most responsive to policy." }, { "instance_id": "R30476xR30185", "comparison_id": "R30476", "paper_id": "R30185", "text": "The role of renewable energy consumption and trade: environmental Kuznets curve analysis for Sub-Saharan Africa countries Based on the Environmental Kuznets Curve (EKC) hypothesis, this paper uses panel cointegration techniques to investigate the short- and long-run relationship between CO 2 emissions, gross domestic product (GDP), renewable energy consumption and international trade for a panel of 24 sub-Saharan Africa countries over the period 1980\u20132010. 
Short-run Granger causality results reveal that there is a bidirectional causality between emissions and economic growth; bidirectional causality between emissions and real exports; unidirectional causality from real imports to emissions; and unidirectional causality runs from trade (exports or imports) to renewable energy consumption. There is an indirect short-run causality running from emissions to renewable energy and an indirect short-run causality from GDP to renewable energy. In the long-run, the error correction term is statistically significant for emissions, renewable energy consumption and trade. The long-run estimates suggest that the inverted U-shaped EKC hypothesis is not supported for these countries; exports have a positive impact on CO 2 emissions, whereas imports have a negative impact on CO 2 emissions. As a policy recommendation, sub-Saharan Africa countries should expand their trade exchanges particularly with developed countries and try to maximize their benefit from technology transfer occurring when importing capital goods as this may increase their renewable energy consumption and reduce CO 2 emissions." }, { "instance_id": "R30476xR30441", "comparison_id": "R30476", "paper_id": "R30441", "text": "The environmental Kuznets curve for carbon dioxide in India and China: Growth and pollution at crossroad This study probes cointegration among carbon dioxide (CO2) emissions, economic activity, energy use, and trade, and examines the environmental Kuznets curve (EKC) hypothesis. We undertake a comparative analysis between India and China over the period 1971\u20132012 by using the autoregressive distributed lag model of Pesaran et al. (2001). This study establishes a long-run effect of economic activity and trade openness and a short-run effect of energy use on CO2 emissions. It shows the N-shaped relationship between CO2 emissions and economic activity, a departure from the EKC hypothesis. The study ends with policy advisory for balancing between growth and environment." }, { "instance_id": "R30476xR29907", "comparison_id": "R30476", "paper_id": "R29907", "text": "A panel estimation of the relationship between trade liberalization, economic growth and CO2 emissions in BRICS countries In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emissions, economic growth and trade liberalization based on econometric techniques of unit root tests, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emissions, economic growth and trade liberalization by applying the Kao panel cointegration test. The evidence indicates that in the long-run trade liberalization has a positive significant impact on CO2 emissions and the impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. 
The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of studied countries. Our estimation shows that the inflection point or optimal point real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels" }, { "instance_id": "R30476xR30128", "comparison_id": "R30476", "paper_id": "R30128", "text": "Testing Environmental Kuznets Curve hypothesis in Asian countries Abstract The aim of this study is to test the Environmental Kuznet Curve (EKC) hypothesis for 14 Asian countries spanning the period 1990\u20132011. We focused on how both income and policies in these countries affect the income\u2013emissions (environment) relationship. The GMM methodology using panel data is employed in a multivariate framework to test the EKC hypothesis. The multivariate framework includes: CO2 emissions, GDP per capita, population density, land, industry shares in GDP, and four indicators that measure the quality of institutions. In terms of the presence of an inverted U-shape association between emissions and income per capita, the estimates have the expected signs and are statistically significant, yielding empirical support to the presence of an Environmental Kuznets Curve hypothesis." }, { "instance_id": "R30476xR30055", "comparison_id": "R30476", "paper_id": "R30055", "text": "Environmental costs and renewable energy: Re-visiting the Environmental Kuznets Curve The environmental costs of economic development have received increasing attention during the last years. According to the World Energy Outlook (2013) sustainable energy policies should be promoted in order to spur economic growth and environmental protection in a global context, particularly in terms of reducing greenhouse gas emissions that contribute to climate change. Within this framework, the European Union aims to achieve the \"20-20-20\" targets, including a 20% reduction in EU greenhouse gas emissions from 1990 levels, a raise in the share of EU energy consumption produced from renewable resources to 20% and a 20% improvement in the EU's energy efficiency. Furthermore, the EU \"Energy Roadmap 2050\" has been recently adopted as a basis for developing a long-term European energy framework, fighting against climate change through the implementation of energy efficiency measures and the reduction of emissions. This paper focuses on the European context and attempts to explain the impact of economic growth on CO2 emissions through the estimation of an Environmental Kuznets Curve (EKC) using panel data. Moreover, since energy seems to be at the heart of the environmental problem it should also form the core of the solution, and therefore we provide some extensions of the EKC by including renewable energy sources as explanatory variables in the proposed models. Our data sets are referred to the 27 countries of the European Union during the period 1996-2010. With this information, our empirical results provide some interesting evidence about the significant impacts of renewable energies on CO2 emissions, suggesting the existence of an extended EKC." 
}, { "instance_id": "R30476xR30154", "comparison_id": "R30476", "paper_id": "R30154", "text": "CO2 emissions, energy consumption, economic and population growth in Malaysia This study investigates the dynamic impacts of GDP growth, energy consumption and population growth on CO2 emissions using econometric approaches for Malaysia. Empirical results from ARDL bounds testing approach show that over the period of 1970\u20131980, per capita CO2 emissions decreased with increasing per capita GDP (economic growth); however from 1980 to 2009, per capita CO2 emissions increased sharply with a further increase of per capita GDP. This is also supported by the dynamic ordinary least squared (DOLS) and the Sasabuchi\u2013Lind\u2013Mehlum U (SLM U test) tests. Consequently, the hypothesis of the EKC is not valid in Malaysia during the study period. The results also demonstrate that both per capita energy consumption and per capita GDP has a long term positive impacts with per capita carbon emissions, but population growth rate has no significant impacts on per capita CO2 emission. However, the study suggests that in the long run, economic growth may have an adverse effect on the CO2 emissions in Malaysia. Thus, significant transformation of low carbon technologies such as renewable energy and energy efficiency could contribute to reduce the emissions and sustain the long run economic growth." }, { "instance_id": "R30476xR30208", "comparison_id": "R30476", "paper_id": "R30208", "text": "The impact of foreign direct investment on environmental quality: A bounds testing and causality analysis for Turkey This study aims to investigate the impact of foreign direct investment (FDI), together with gross domestic product (GDP), the square of GDP, and energy consumption, on carbon dioxide (CO2) emissions in Turkey over the period 1974\u20132010. We employ both the bounds test approach which has superior properties especially in small samples and the Hatemi-J test which takes structural breaks into consideration in the co-integration analysis. Due to the co-integration relationship between CO2 emissions and other variables, the autoregressive distributed lag (ARDL) model is used in order to investigate short and long run elasticity between the variables. The long-run coefficients of the ARDL model indicate that the effect of FDI on CO2 emissions is positive but relatively small, while the effects of the GDP and energy consumption on CO2 emissions are quite considerable. Moreover, the short-run coefficients obtained by the error correction model (ECM) are found to be similar to those of the long-run model. The findings support the validity of the environmental Kuznets curve (EKC) hypothesis in both time-horizons. The vector ECM based Granger causality test is also applied to investigate the causal link. The causality test results indicate the existence of a causality running from all explanatory variables to CO2 emissions in the long run. Overall, the findings suggest that Turkey should promote energy efficiency with sustainable growth, and encourage more FDI inflows particularly in technology-intensive and environment-friendly industries to improve environmental quality." }, { "instance_id": "R30476xR29771", "comparison_id": "R30476", "paper_id": "R29771", "text": "Modeling and forecasting the CO2 emissions, energy consumption, and economic growth in Brazil This paper examines the dynamic relationships between pollutant emissions, energy consumption, and the output for Brazil during 1980\u20132007. 
The Grey prediction model (GM) is applied to predict three variables during 2008\u20132013. In the long-run equilibrium emissions appear to be both energy consumption and output inelastic, but energy is a more important determinant of emissions than output. This may be because Brazilian unsustainable land use and forestry contribute most to the country\u2019s greenhouse gas emissions. The findings of the inverted U-shaped relationships of both emissions\u2013income and energy consumption\u2013income imply that both environmental damage and energy consumption firstly increase with income, then stabilize, and eventually decline. The causality results indicate that there is a bidirectional strong causality running between income, energy consumption and emissions. In order to reduce emissions and to avoid a negative effect on the economic growth, Brazil should adopt the dual strategy of increasing investment in energy infrastructure and stepping up energy conservation policies to increase energy efficiency and reduce wastage of energy. The forecasting ability of GM is compared with the autoregressive integrated moving average (ARIMA) model over the out-of-sample period between 2002 and 2007. All of the optimal GMs and ARIMAs have a strong forecasting performance with MAPEs of less than 3%." }, { "instance_id": "R30476xR30368", "comparison_id": "R30476", "paper_id": "R30368", "text": "Economic growth and energy regulation in the environmental Kuznets curve This study establishes the existence of a pattern of behavior, between economic growth and environmental degradation, consistent with the environmental Kuznets curve (EKC) hypothesis for 17 Organization for Economic Cooperation and Development (OECD) countries between 1990 and 2012. Based on this EKC pattern, it shows that energy regulation measures help reduce per capita greenhouse gas (GHG) emissions. To validate this hypothesis, we also add the explanatory variables: renewable energy promotion, energy innovation processes, and the suppression effect of income level on the contribution of renewable energy sources to total energy consumption. It aims to be a tool for decision-making regarding energy policy. This paper provides a two-stage econometric analysis of instrumental variables with the aim of correcting the existence of endogeneity in the variable GDP per capita, verifying that the instrumental variables used in this research are appropriate for our aim. To this end, it first makes a methodological contribution before incorporating additional variables associated with environmental air pollution into the EKC hypothesis and showing how they positively affect the explanation of the correction in the GHG emission levels. This study concludes that air pollution will not disappear on its own as economic growth increases. Therefore, it is necessary to promote energy regulation measures to reduce environmental pollution." }, { "instance_id": "R30476xR30453", "comparison_id": "R30476", "paper_id": "R30453", "text": "Testing the EKC hypothesis by considering trade openness, urbanization, and financial development: the case of Turkey This study investigates the environmental Kuznets curve (EKC) hypothesis for the case of Turkey from 1960 to 2013 by considering energy consumption, trade, urbanization, and financial development variables. 
Although previous literature examines various aspects of the EKC hypothesis for the case of Turkey, our model augments the basic model with several covariates to develop a better understanding of the relationship among the variables and to refrain from omitted variable bias. The results of the bounds test and the error correction model under the autoregressive distributed lag mechanism suggest long-run relationships among the variables as well as proof of the EKC and the scale effect in Turkey. A conditional Granger causality test reveals that there are causal relationships among the variables. Our findings can have policy implications including the imposition of a \u201cpolluter pays\u201d mechanism, such as the implementation of a carbon tax for pollution trading, to raise the urban population\u2019s awareness about the importance of adopting renewable energy and to support clean, environmentally friendly technology." }, { "instance_id": "R30476xR29418", "comparison_id": "R30476", "paper_id": "R29418", "text": "An EKC-pattern in historical perspective: carbon dioxide emissions, technology, fuel prices and growth in Sweden 1870\u20131997 The environmental Kuznets curve (EKC) has been subject to research and debate since the early 1990s. This article examines the inverted-U trajectory of Swedish CO2 emissions during an extended time ..." }, { "instance_id": "R30476xR29504", "comparison_id": "R30476", "paper_id": "R29504", "text": "Democracy and environmental quality We develop and estimate an econometric model of the relationship between several local and global air pollutants and economic development while allowing for critical aspects of the socio-political-economic regime of a State. We obtain empirical support for our hypothesis that democracy and its associated freedoms provide the conduit through which agents can exercise their preferences for environmental quality more effectively than under an autocratic regime, thus leading to decreased concentrations or emissions of pollution. However, additional factors such as income inequality, age distribution, education, and urbanization may mitigate or exacerbate the net effect of the type of political regime on pollution, depending on the underlying societal preferences and the weights assigned to those preferences by the State." }, { "instance_id": "R30476xR30165", "comparison_id": "R30476", "paper_id": "R30165", "text": "Economic growth, CO2 emissions, and energy consumption in the five ASEAN countries This paper investigates the relationship between economic growth, carbon dioxide (CO2) emissions, and energy consumption with an aim to test the validity of the Environmental Kuznets Curve (EKC) hypothesis in five ASEAN (Association of South East Asian Nations) countries (Indonesia, Malaysia, Philippines, Singapore, and Thailand) by applying the panel smooth transition regression (PSTR) model as a new econometric technique. The PSTR model is more flexible and appropriate for describing cross-country heterogeneity and time instability. Our empirical results strongly rejected the null hypothesis of linearity, and the test for no remaining nonlinearity indicated a model with one transition function and two threshold parameters. The first regime (levels of GDP per capita below 4686 USD) showed that environmental degradation increases with economic growth while the trend was reversed in the second regime (GDP per capita above 4686 USD). The results also showed that energy consumption in either the first or the second regime leads to increased CO2. 
The overall results support the validity of the EKC hypothesis in the ASEAN countries." }, { "instance_id": "R30476xR30076", "comparison_id": "R30476", "paper_id": "R30076", "text": "Beyond the Environmental Kuznets Curve in Africa: Evidence from Panel Cointegration Abstract The main objective of this study is to establish the applicability of the environmental Kuznets curve (EKC) hypothesis in explaining the relationship between environmental pollution and development in Africa. The EKC has been used to explain such relationships in a variety of contexts, yet rarely applied in Africa, despite it hosting both the poorest countries in the world, 60% of those with extreme environmental pollution vulnerability and having a distinct socio-economic and institutional profile that tests the validity of such a model. This paper describes an empirical model that applies the EKC hypothesis and its modifications to 50 African countries, using data from 1995\u20132010. The empirical analysis suggests that there is a long-term relationship between CO2 and particulate matter emissions with per capita income and other variables, including institutional factors and trade, leading to specific recommendations on future strategies for sustainable development in an African context." }, { "instance_id": "R30476xR29494", "comparison_id": "R30476", "paper_id": "R29494", "text": "A Test for Parameter Homogeneity in CO2 Panel EKC Estimations This paper casts doubt on empirical results based on panel estimations of an \u201cinverted-U\u201d relationship between per capita GDP and pollution. Using a new dataset for OECD countries on carbon dioxide emissions for the period 1960\u20131997, we find that the crucial assumption of homogeneity across countries is problematic. Decisively rejected are model specifications that feature even weaker homogeneity assumptions than are commonly used. Furthermore, our results challenge the existence of an overall Environmental Kuznets Curve for carbon dioxide emissions." }, { "instance_id": "R30476xR30121", "comparison_id": "R30476", "paper_id": "R30121", "text": "Economic growth and environmental degradation in Saudi Arabia The objective of this paper is to examine the effects of economic growth, energy use and trade openness on carbon dioxide (CO 2 ) emissions for the case of the Kingdom of Saudi Arabia by estimating what is called the environmental Kuznets curve (EKC) model over the period 1970-2012. Our findings infirm the existence of EKC whereas they indicate that Saudi Arabia would be in its ascending phase of the environmental Kuznets curve. We notice that the per capita GDP and the per capita energy use increase CO 2 emissions whereas trade openness does not have a significant effect on CO 2 emissions. Our results suggest that growth targets should be accompanied by measures of adaptation and strategies of development which plan limits in energy use and CO 2 emissions in Saudi Arabia. Keywords: Economic growth; Energy use, CO 2 emissions; environmental Kuznets curve; Saudi Arabia." }, { "instance_id": "R30512xR30496", "comparison_id": "R30512", "paper_id": "R30496", "text": "Everywhere Run: a virtual personal trainer for supporting people in their running activity In recent years, much medical research has reported an increase in health problems in developed countries, mostly related to a sedentary lifestyle (such as obesity and linked pathologies like diabetes and cardiovascular diseases). As a consequence, 
many research efforts have been carried out to find strategies for motivating people to exercise regularly. In this paper we present an Android-based mobile application, called Everywhere Run [1], that aims at motivating and supporting people during their running activities, behaving as a virtual personal trainer. Everywhere Run fosters the interaction between users and real personal trainers, in order to make it easy for non-expert people to start working out in a healthy and safe way." }, { "instance_id": "R30512xR30492", "comparison_id": "R30512", "paper_id": "R30492", "text": "Maintaining levels of activity using a haptic personal training application This paper describes the development of a novel mobile phone-based application designed to monitor the walking habits of older adults. Haptic cues integrated within the prototype are designed to inform an individual of changes which should be made to maintain a prescribed level of activity. A pilot study was conducted with fifteen older adults walking at varying speeds, both with and without the presence of assistive haptic feedback from the prototype. The results confirm that more steps were taken when haptic feedback was provided while walking at normal and fast paces. However, results also indicate that further refinements would be needed to improve the identification of haptic cues while individuals are in motion." }, { "instance_id": "R30512xR30510", "comparison_id": "R30512", "paper_id": "R30510", "text": "Self-setting of physical activity goals and effects on perceived difficulty, importance and competence Goal setting can be a powerful method for persuading individuals to adopt an active lifestyle. In order for this to be the case, it is important to set concrete and challenging goals, and to strongly commit to them. In this study, we explored how people set goals for physical activity and how these goals were reflected in self-regulatory mechanisms to drive goal attainment. Our approach is novel in two ways: first, we used an unobtrusive wearable sensor to accurately measure physical activity throughout the day rather than rely on self-report, and second, we provided individuals with feedback about the contribution of their common daily activities (e.g., household activities) to their physical activity level. Our results showed that on the basis of this feedback, participants were able to indicate to what degree they intended to change their behavior. Nevertheless, they failed to set concrete goals that matched their intentions precisely. In particular, we observed that overall the set goals were in accordance with intentions (i.e., goals were set in the desired direction), but we saw a strong tendency to focus on enhancing vigorous activity at the cost of moderate intensity activity. This suggests that many individuals have intentions to change and goal setting support is needed to compose goals that accurately reflect these intentions. Technology-mediated interventions might be ideal to support individuals along that path." }, { "instance_id": "R30512xR30494", "comparison_id": "R30512", "paper_id": "R30494", "text": "Maintaining and modifying pace through tactile and multimodal feedback Older adults are recommended to remain physically active to reduce the risk of chronic diseases and to maintain psychological well-being. At the same time, research also suggests that levels of fitness can be raised among this group. 
This paper describes the development and evaluation of a mobile technology, which enables older adults to monitor and modify their walking habits, with the long-term aim of sustaining appropriate levels of physical activity. An empirical study was conducted with twenty older adults to determine the feasibility of the proposed solution, with results indicating that tactile signals could be perceived while in motion and could support participants in walking at a range of paces. However, the effects were difficult to discern due to limitations of the hardware. In response, a novel low-cost prototype was developed to amplify vibrations, and effectiveness of redundant auditory information was investigated with the goal of enhancing the perception of the cues. A second study was conducted to determine the impact of multimodal feedback on walking behavior. Findings revealed that participants were able to maintain a desired level of pace more consistently when redundant auditory information was presented alongside the tactile feedback. When the visual channel is not available, these results suggest that tactile cues presented via a mobile device should be augmented with auditory feedback. Our research also suggests that mobile devices could be made more effective for alternative applications if they are designed to allow for stronger tactile feedback." }, { "instance_id": "R30512xR30502", "comparison_id": "R30512", "paper_id": "R30502", "text": "A mobile health and fitness companion demonstrator Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. The paper presents a multimodal conversational Companion system focused on health and fitness, which has both a stationary and a mobile component." }, { "instance_id": "R30579xR30549", "comparison_id": "R30579", "paper_id": "R30549", "text": "Detection of malicious vehicles (DMV) through monitoring in vehicular ad-hoc networks Vehicular Ad Hoc Networks (VANETs) are appropriate networks that can be applied to intelligent transportation systems. In VANET, messages exchanged among vehicles may be damaged by attacker nodes. Therefore, security in message forwarding is an important factor. We propose the Detection of Malicious Vehicles (DMV) algorithm through monitoring to detect malicious nodes that drop or duplicate received packets and to isolate them from honest vehicles, where each vehicle is monitored by some of its trustier neighbors called verifier nodes. If a verifier vehicle observes an abnormal behavior from vehicle V, it increases the distrust value of vehicle V. The ID of vehicle V is then reported to its relevant Certificate Authority (CA) as a malicious node when its distrust value is higher than a threshold value. Performance evaluation shows that DMV can detect most existing abnormal and malicious vehicles even at high speeds." }, { "instance_id": "R30579xR30573", "comparison_id": "R30579", "paper_id": "R30573", "text": "Defense against Sybil attack in vehicular ad hoc network based on roadside unit support In this paper, we propose a timestamp series approach to defend against Sybil attack in a vehicular ad hoc network (VANET) based on roadside unit support. The proposed approach targets the initial deployment stage of VANET when basic roadside unit (RSU) support infrastructure is available and a small fraction of vehicles have network communication capability. 
Unlike previously proposed schemes that require a dedicated vehicular public key infrastructure to certify individual vehicles, in our approach RSUs are the only components issuing the certificates. Due to the differences in moving dynamics among vehicles, it is rare to have two vehicles passing by multiple RSUs at exactly the same time. By exploiting this spatial and temporal correlation between vehicles and RSUs, two messages will be treated as a Sybil attack issued by one vehicle if they have similar timestamp series issued by RSUs. The timestamp series approach needs neither vehicular-based public-key infrastructure nor Internet accessible RSUs, which makes it an economical solution suitable for the initial stage of VANET." }, { "instance_id": "R30579xR30575", "comparison_id": "R30579", "paper_id": "R30575", "text": "Distributed Misbehavior Detection in VANETs In any vehicular ad hoc network, there is always a possibility of incorrect messages being transmitted either due to faulty sensors and/or intentional malicious activities. Detecting and evicting sources of such misbehavior is an important problem. We observe that the performance of misbehavior detection schemes will depend on the application under consideration and the mobility dynamics of the detecting vehicle. Further, the underlying tradeoff in any such detection algorithm is the balance between False Positives and False Negatives; one would like to detect as many misbehaviors as possible, while at the same time ensuring that the genuine vehicles are not wrongly accused. In this work we propose and analyze (via simulations) the performance of a Misbehavior Detection Scheme (MDS) for the Post Crash Notification (PCN) application. We observe that the performance of this proposed scheme is not very sensitive to the exact dynamics of the vehicle on small scales, so that slight error in estimating the dynamics of the detecting vehicle does not degrade the performance of the MDS." }, { "instance_id": "R30579xR30561", "comparison_id": "R30579", "paper_id": "R30561", "text": "A novel secure communication scheme in vehicular ad hoc networks Vehicular networks are very likely to become the most pervasive and applicable of mobile ad hoc networks in this decade. Vehicular Ad hoc NETwork (VANET) has become a hot emerging research subject, but few academic publications describe its security infrastructure. In this paper, we review the secure infrastructure of VANET, some potential applications and interesting security challenges. To cope with these security challenges, we propose a novel secure scheme for vehicular communication on VANETs. The proposed scheme not only protects the privacy but also maintains the liability in the secure communications by using session keys. We also analyze the robustness of the proposed scheme." }, { "instance_id": "R30646xR30584", "comparison_id": "R30646", "paper_id": "R30584", "text": "Precise eye localization through a general-to-specific model definition We present a method for precise eye localization that uses two Support Vector Machines trained on properly selected Haar wavelet coefficients. The evaluation of our technique on many standard databases exhibits very good performance. Furthermore, we study the strong correlation between the eye localization error and the face recognition rate." }, { "instance_id": "R30646xR30598", "comparison_id": "R30646", "paper_id": "R30598", "text": "Projection functions for eye detection In this paper, the generalized projection function (GPF) is defined. 
Both the integral projection function (IPF) and the variance projection function (VPF) can be viewed as special cases of GPF. Another special case of GPF, i.e. the hybrid projection function (HPF), is developed through experimentally determining the optimal parameters of GPF. Experiments on three face databases show that IPF, VPF, and HPF are all effective in eye detection. Nevertheless, HPF is better than VPF, while VPF is better than IPF. Moreover, IPF is found to be more effective on occidental than on oriental faces, and VPF is more effective on oriental than on occidental faces. Analysis of the detections shows that this effect may be owed to the shadow of the noses and eyeholes of different races of people." }, { "instance_id": "R30646xR30606", "comparison_id": "R30646", "paper_id": "R30606", "text": "2D cascaded AdaBoost for eye localization In this paper, 2D cascaded AdaBoost, a novel classifier designing framework, is presented and applied to eye localization. By the term \"2D\", we mean that in our method there are two cascade classifiers in two directions: The first one is a cascade designed by bootstrapping the positive samples, and the second one, as the component classifiers of the first one, is cascaded by bootstrapping the negative samples. The advantages of the 2D structure include: (1) it greatly facilitates the classifier designing on huge-scale training set; (2) it can easily deal with the significant variations within the positive (or negative) samples; (3) both the training and testing procedures are more efficient. The proposed structure is applied to eye localization and evaluated on four public face databases, extensive experimental results verified the effectiveness, efficiency, and robustness of the proposed method" }, { "instance_id": "R30646xR30608", "comparison_id": "R30646", "paper_id": "R30608", "text": "A robust eye localization method for low quality face images Eye localization is an important part in face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on the face images with high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. There are: (1) extending feature set, and (2) stacking two classifiers in multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments." }, { "instance_id": "R30646xR30590", "comparison_id": "R30646", "paper_id": "R30590", "text": "Average of Synthetic Exact Filters This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. 
This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters are presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time." }, { "instance_id": "R30646xR30602", "comparison_id": "R30646", "paper_id": "R30602", "text": "Robust precise eye location under probabilistic framework Eye feature location is an important step in automatic visual interpretation and human face recognition. In this paper, a novel approach for locating eye centers in face areas under a probabilistic framework is devised. After grossly locating a face, we first find the areas in which the left and right eyes lie. Then an appearance-based eye detector is used to detect the possible left and right eye separately. According to their probabilities, the candidates are subsampled to merge those in near positions. Finally, the remaining left and right eye candidates are paired; each possible eye pair is normalized and verified. According to their probabilities, the precise eye positions are decided. The experimental results demonstrate that our method can effectively cope with different eye variations and achieve better location performance on diverse test sets than some newly proposed methods. The influence of the precision of eye location on face recognition is also probed. The location of other face organs such as mouth and nose can be incorporated in the framework easily." }, { "instance_id": "R30698xR30696", "comparison_id": "R30698", "paper_id": "R30696", "text": "Dental erosion in Cuban children associated with excessive consumption of oranges Marked erosion at the mesial edges of upper front teeth was observed during an examination of Cuban children. The preferential erosion of mesial edges produced characteristic V-shaped defects on upper central incisors, and the aim of the present study, carried out on 12-yr-old children (N = 1010) in 10 communities in the Province of Havana was to establish the frequency of dental erosion and explain its occurrence. The symmetrical erosion of teeth 11 and 21 (excluding crown injuries and attrition) was clinically classified into four grades: 0.5 = objectionable; 1 = abnormal mesial shortening of incisal edges; 2 = V-shaped defect of cutting edges; 3 = exposure of dentine and extension of the erosive defect to the lateral incisors. In four of the communities, children did not show or rarely showed incisal erosion. In the other six communities, the frequency was surprisingly high (16.6-40.9%). Overall, 17.4% of children exhibited erosion, and the occurrence was significantly higher in girls (20.7%) than in boys (15.0%). The typical V-shaped pattern of erosion seems to be a consequence of the manner in which citrus fruits are eaten. There was also a positive correlation between the frequency of dental erosion and the proximity of citrus plantations, which presumably related to the extent of (daily) orange consumption." 
}, { "instance_id": "R30698xR30662", "comparison_id": "R30698", "paper_id": "R30662", "text": "The prevalence of dental erosion in preschool children in China OBJECTIVE To describe the prevalence of dental erosion and associated factors in preschool children in Guangxi and Hubei provinces of China. METHODS Dental examinations were carried out on 1949 children aged 3-5 years. Measurement of erosion was confined to primary maxillary incisors. The erosion index used was based upon the 1993 UK National Survey of Children's Dental Health. The children's general information as well as social background and dietary habits were collected based on a structured questionnaire. RESULTS A total of 112 children (5.7%) showed erosion on their maxillary incisors. Ninety-five (4.9%) was scored as being confined to enamel and 17 (0.9%) as erosion extending into dentine or pulp. There was a positive association between erosion and social class in terms of parental education. A significantly higher prevalence of erosion was observed in children whose parents had post-secondary education than those whose parents had secondary or lower level of education. There was also a correlation between the presence of dental erosion and intake of fruit drink from a feeding bottle or consumption of fruit drinks at bedtime. CONCLUSION Erosion is not a serious problem for dental heath in Chinese preschool children. The prevalence of erosion is associated with social and dietary factors in this sample of children." }, { "instance_id": "R30698xR30666", "comparison_id": "R30698", "paper_id": "R30666", "text": "Caries trends 1996-2002 among 6- and 12-year-old children and erosive wear prevalence among 12-year-old children in The Hague In 2002 a dental survey amongst 6- and 12-year-old schoolchildren (n = 832) in The Hague was carried out. The caries findings were compared with findings from earlier studies in The Hague. Caries prevalence (% of caries-free children) and caries experience (mean dmfs scores) among 6-year-old children had not changed significantly in the period 1996\u20132002. However, a significant increase of caries-free 12-year-old children of low socio-economic status was found in the period 1996\u20132002. The proportions of caries-free 12-year-old Dutch, Turkish and Moroccan children of low socio-economic status were 88, 69 and 78%, respectively, in 2002. The average DMFT score of 12-year-olds reached a minimum of 0.2. In 2002, 24% of the 12-year-olds exhibited signs of erosion, indicating that the presence of erosive wear was high among youngsters in The Hague." }, { "instance_id": "R30698xR30654", "comparison_id": "R30698", "paper_id": "R30654", "text": "Dental erosion among children in an Istanbul public school The aim of this study was to evaluate the prevalence, clinical manifestations, and etiology of dental erosion among children. A total of 153 healthy, 11-year-old children were sampled from a downtown public school in Istanbul, Turkey comprised of middle-class children. Data were obtained via: (1) dinical examination; (2) questionnaire; and (3) standardized data records. A new dental erosion index for children designed by O'Sullivan (2000) was used. Twenty-eight percent (N=43) of the children exhibited dental erosion. Of children who consumed orange juice, 32% showed erosion, while 40% who consumed carbonated beverages showed erosion. Of children who consumed fruit yogurt, 36% showed erosion. Of children who swam professionally in swimming pools, 60% showed erosion. 
Multiple regression analysis revealed no relationship between dental erosion and related erosive sources (P > .05)." }, { "instance_id": "R30739xR30705", "comparison_id": "R30739", "paper_id": "R30705", "text": "Tooth wear in the elderly population in south east local government area in Ibadan It is the aim of this study to determine the pattern and degree of tooth wear in the elderly population in the South East Local Government Area in Ibadan. The study was carried out on 690 elderly individuals who were 65 years old and above, living in various wards in South East Local Government Area, in Ibadan. A multistage sampling technique was used to select elderly individuals for the study. Two interviewers, 2 record clerks and 2 examiners were trained for the study and the examiners were calibrated. The index of Eccles J.D. was used to determine the severity of tooth wear. The results highlight the high prevalence of tooth wear, mainly attrition, in the elderly in this local government area. Six hundred and forty (92.8%) of the elderly had tooth wear. Of these, 58.59% were males and 41.41% females. Attrition was observed in 618 (89.6%) elderly individuals. The mandible exhibited a higher prevalence of tooth wear than the maxilla, and the difference was statistically significant. Severe tooth wear was observed in only 5.74% of the teeth whilst moderate and mild tooth wear were observed in 26.91% and 30.88% respectively. Unlike in Western European countries, attrition being the most common type of tooth wear in these elderly individuals suggests that the aetiological factors responsible for tooth wear are different. Common habits such as crushing of bones and chewing of sticks for routine oral hygiene care could be contributing factors to tooth wear in this environment." }, { "instance_id": "R30739xR30721", "comparison_id": "R30739", "paper_id": "R30721", "text": "Retrospective long term monitoring of tooth wear using study models Objective Tooth wear is recognised as a common feature of European dentitions. However, little is known about its progression in susceptible patients. The aim of this study was to assess the degree and progression of tooth wear in patients by examining study casts taken of their teeth on two separate occasions. Design Over 500 sets of study casts taken during an 18-year period from patients referred for a variety of restorative procedures were examined at Guy's Dental Hospital. Of these, 34 cases were found to have consecutive models taken at two time intervals and these were used to assess the progression of tooth wear. Study models from 19 females and 16 males, with an average age of 26 years (range 18-60) at the time of their first presentation, were all examined by a single operator. The Smith and Knight tooth wear index was used to assess the degree of tooth wear at presentation and then at another time which was a median of 26 months (interquartile range 14 \u2013 50 months) later. Results The most common initial TWI score per surface was 1, with 54% of surfaces affected at the first assessment and 57% at the second. Score 2 was less common (14% at both assessments) and the scores for 3 and 4 combined were relatively uncommon with 5% of surfaces affected. Minimal progression of tooth wear was observed on study casts with only 7.3% of surfaces involved. Conclusion In this sample, tooth wear was a slow, minimally progressive process." 
}, { "instance_id": "R30739xR30717", "comparison_id": "R30739", "paper_id": "R30717", "text": "The prevalence of non-carious cervical lesions in permanent dentition A non-carious cervical lesion (NCCL) is the loss of hard dental tissue on the neck of the tooth, most frequently located on the vestibular plane. Causal agents are diverse and mutually interrelated. In the present study all vestibular NCCL were observed and recorded by the tooth wear index (TWI). The aim of the study was to determine the prevalence and severity of NCCL. For this purpose, 18555 teeth from the permanent dentition were examined in a population from the city of Rijeka, Croatia. Subjects were divided into six age groups. The teeth with most NCCL were the lower premolars, which also had the largest percentage of higher index levels, indicating the greater severity of the lesions. The most frequent index level was 1, and the prevalence and severity of the lesions increased with age." }, { "instance_id": "R30739xR30726", "comparison_id": "R30739", "paper_id": "R30726", "text": "Relationship between sports drinks and dental erosion in 304 university athletes in Columbus Acidic soft drinks, including sports drinks, have been implicated in dental erosion with limited supporting data in scarce erosion studies worldwide. The purpose of this study was to determine the prevalence of dental erosion in a sample of athletes at a large Midwestern state university in the USA, and to evaluate whether regular consumption of sports drinks was associated with dental erosion. A cross-sectional, observational study was done using a convenience sample of 304 athletes, selected irrespective of sports drinks usage. The Lussi Index was used in a blinded clinical examination to grade the frequency and severity of erosion of all tooth surfaces excluding third molars and incisal surfaces of anterior teeth. A self-administered questionnaire was used to gather details on sports drink usage, lifestyle, health problems, dietary and oral health habits. Intraoral color slides were taken of all teeth with erosion. Sports drinks usage was found in 91.8% athletes and the total prevalence of erosion was 36.5%. Nonparametric tests and stepwise regression analysis using history variables showed no association between dental erosion and the use of sports drinks, quantity and frequency of consumption, years of usage and nonsport usage of sports drinks. The most significant predictor of erosion was found to be not belonging to the African race (p < 0.0001). The results of this study reveal no relationship between consumption of sports drinks and dental erosion." }, { "instance_id": "R30739xR30730", "comparison_id": "R30739", "paper_id": "R30730", "text": "Tooth surface loss: does recreational drug use contribute? AbstractObjective. This pilot study was designed to measure tooth wear in a sample of 13 subjects who regularly use amphetamine-like drugs (Ecstasy, amphetamines) and compare the observed wear with a matched sample of nondrug users. Design. The two groups, both composed of 13 undergraduate students, were matched for age and sex. Other factors influencing tooth wear were controlled by matching the groups on their responses to a questionnaire asking about recognised common causes of tooth wear. The participants' teeth were examined and the degree of wear scored according to a tooth wear index. Results. Severity of occlusal tooth wear of the lower first molar teeth was significantly greater in the drug user group than in the control group (P<0.05). 
No other statistically significant differences between the groups were found. Conclusion. Regular use of amphetamine-like drugs could be associated with increased posterior tooth wear." }, { "instance_id": "R30739xR30710", "comparison_id": "R30739", "paper_id": "R30710", "text": "Silicone sealers, acetic acid vapors and dental erosion: a work-related risk? The aim of the present study was to investigate whether the acetic acid released by some silicone sealers during the curing process poses an increased risk for dental erosion, thus constituting an occupational hazard to exposed individuals. The material comprised 13 individuals (x=30 years, 10 men and 3 women) who had been exposed to an average of 4.2 years' (range 0.6-10 years) of working with silicone. Each had comprehensive medical and dental examinations carried out. A sex- and aged-matched group of 20 healthy, unexposed workers from the same company served as controls for the medical examination, while study models from randomly selected sex- and age-matched individuals were used as controls for assessing the severity of erosion. Using a questionnaire, an assessment of the role of various possible factors related to oral and general health, and to dental erosion in particular, was made for each participant in the exposed group. Clinical examination included recordings of severity of dental erosion, presence of \"cuppings\", DMFT, salivary secretion rate and buffer capacity, visible plaque index and gingival bleeding index. In addition, bitewing radiographs, study casts and intraoral colour transparencies were obtained for each individual. The severity of dental erosion was significantly higher in those exposed to silicone compared to controls. There was also a significant correlation between the period of exposure to silicone in the workplace and severity of erosion. Medical problems, especially with regard to upper respiratory tract symptoms, were significantly more common among exposed individuals than controls. In conclusion, a relationship between occupational exposure to acetic acid vapours from silicone sealers and development of dental erosion would appear to exist." }, { "instance_id": "R31174xR31165", "comparison_id": "R31174", "paper_id": "R31165", "text": "A modified particle swarm optimization for disaster relief logistics under uncertain environment Relief logistics is one of the most important elements of a relief operation. This paper investigates a relief chain design problem where not only demands but also supplies and the cost of procurement and transportation are considered as the uncertain parameters. Furthermore, the model considers uncertainty for the locations where those demands can arise and the possibility that a number of the facility could be partially destroyed by the disaster. The proposed model for this study is formulated as a mixed-integer nonlinear programming to minimize the sum of the expected total cost (which includes costs of location, procurement, transportation, holding, and shortage) and the variance of the total cost. The model simultaneously determines the location of relief distribution centers and the allocation of affected area to relief distribution centers. Furthermore, an efficient solution approach based on particle swarm optimization is developed in order to solve the proposed mathematical model. At last, computational results for several instances of the problem are presented to demonstrate the feasibility and effectiveness of the proposed model and algorithm." 
}, { "instance_id": "R31214xR31188", "comparison_id": "R31214", "paper_id": "R31188", "text": "The Spatially-Dispersed Genetic Algorithm Spatially structured population models improve the performance of genetic algorithms by assisting the selection scheme in maintaining diversity. A significant concern with these systems is that they need to be carefully configured in order to operate at their optimum. Failure to do so can often result in performance that is significantly under that of an equivalent non-spatial implementation. This paper introduces a GA that uses a population structure that requires no additional configuration. Early experimentation with this paradigm indicates that it is able to improve the searching abilities of the genetic algorithm on some problem domains." }, { "instance_id": "R31214xR31195", "comparison_id": "R31214", "paper_id": "R31195", "text": "A Religion-Based Spatial Model for Evolutionary Algorithms Traditionally, the population in Evolutionary Algorithms (EA) is modelled as a simple set of individuals. In recent years, several models have been proposed, which demonstrated that a structured population can improve the performance of EAs. We suggest that religions should be considered as an interesting new source of inspiration in this context, because of their important role in the organisation of human societies. Essential concepts of religions are commandments and religious customs, which influence the behaviour of the individuals. Typically, religious rules include the commitments to reproduce, to believe in no other religions, and to convert non-believers. In this paper, we used these concepts and the struggle between religions as an inspiration for a novel Religion-Based EA (RBEA) with a structured population. In the RBEA, individuals inhabit a two dimensional world in which they can move around and interact with each other. Additional ideas are the religion membership of the individuals, which restricts mating between individuals of different religions and the exchange of individuals between religions by conversion. The result of this design is a simple model with spatial diffusion of genes and self-organised subpopulations with varying population sizes. Our experiments on six numerical benchmark problems showed that the performance of the RBEA was clearly better than the performance of a standard EA and a Diffusion EA." }, { "instance_id": "R31214xR31182", "comparison_id": "R31214", "paper_id": "R31182", "text": "Advanced models of cellular genetic algorithms evaluated on SAT Cellular genetic algorithms (cGAs) are mainly characterized by their spatially decentralized population, in which individuals can only interact with their neighbors. In this work, we study the behavior of a large number of different cGAs when solving the well-known 3-SAT problem. These cellular algorithms differ in the policy of individuals update and the population shape, since these two features affect the balance between exploration and exploitation of the algorithm. We study in this work both synchronous and asynchronous cGAs, having static and dynamically adaptive shapes for the population. Our main conclusion is that the proposed adaptive cGAs outperform other more traditional genetic algorithms for a well known benchmark of 3-SAT." }, { "instance_id": "R31281xR31228", "comparison_id": "R31281", "paper_id": "R31228", "text": "Middle-Income Transitions: Trap or Myth? 
During the last few years, the newly coined term middle-income trap has been widely used by policymakers to refer to the middle-income economies that seem to be stuck in the middle-income range. However, there is no accepted definition of the term in the literature. In this paper, we study historical transitions across income groups to see whether there is any evidence that supports the claim that economies do not advance. Overall, the data rejects this proposition. Instead, we argue that what distinguishes economies in their transition from middle to high income is fast versus slow transitions. We find that, historically, it has taken a \u201ctypical\u201d economy 55 years to graduate from lower-middle income ($2,000 in 1990 purchasing power parity [PPP] $) to upper-middle income ($7,250 in 1990 PPP $). Likewise, we find that, historically, it has taken 15 years for an economy to graduate from upper-middle income to high income (above $11,750 in 1990 PPP $). Our analysis implies that as of 2013, there were 10 (out of 39) lower-middle-income economies and 4 (out of 15) upper-middle-income economies that were experiencing slow transitions (i.e., above 55 and 15 years, respectively). The historical evidence presented in this paper indicates that economies move up across income groups. Analyzing a large sample of economies over many decades indicates that experiences are wide, including many economies that today are high income that spent many decades traversing the middle-income segment." }, { "instance_id": "R31281xR31247", "comparison_id": "R31281", "paper_id": "R31247", "text": "On the Existence of a Middle-Income Trap. University of Western Australia Working The term \"middle income trap\" has been widely used in the literature, without having been clearly defined or formally tested. We propose a statistical definition of a middle income trap and derive a simple time-series test. We find that the concept survives a rigorous scrutiny of the data, with the growth patterns of 19 countries being consistent with our definition of a middle income trap." }, { "instance_id": "R31281xR31220", "comparison_id": "R31281", "paper_id": "R31220", "text": "Growth slowdowns redux We analyze the incidence and correlates of growth slowdowns in fast-growing middle-income countries, extending the analysis of an earlier paper (Eichengreen et al., 2012a). We continue to find dispersion in the per capita income at which slowdowns occur. But in contrast to our earlier analysis which pointed to the existence of a single mode at which slowdowns occur, in the neighborhood of $15,000\u201316,000 2005 purchasing power parity dollars, new data suggest the possibility of two modes, one at $10,000\u201311,000 and another at $15,000\u201316,000. A number of countries appear to have experienced two slowdowns, consistent with the existence of multiple modes. We suggest that growth in middle-income countries may slow down in a succession of stages rather than at a single point in time. This implies that a larger group of countries is at risk of a growth slowdown and that middle-income countries may find themselves slowing down at lower income levels than implied by our earlier estimates.
We also find that slowdowns are less likely in countries where the population has a relatively high level of secondary and tertiary education and where high-technology products accounts for a relatively large share of exports, consistent with our earlier emphasis of the importance of moving up the technology ladder in order to avoid the middle-income trap." }, { "instance_id": "R31669xR31556", "comparison_id": "R31669", "paper_id": "R31556", "text": "A fuzzy logic based diagnosis system for the on-line supervision of an anaerobic digestor pilot-plant Abstract This paper deals with the development of a fuzzy logic based diagnosis system and its application as a fault detection and isolation (FDI) procedure in a wastewater treatment plant. Different fault detection methods are tested and their advantages and limitations are highlighted. An aggregate FDI strategy is implemented and tested. Results using the fuzzy residual generation module are presented and discussed based on experimental data from a 1 m3 pilot-scale anaerobic digestion reactor for the treatment of raw industrial wine distillery vinasses." }, { "instance_id": "R31669xR31434", "comparison_id": "R31669", "paper_id": "R31434", "text": "Comparative study of black-box and hybrid estimation methods in fed-batch fermentation A neural network based softsensor is proposed for a PHB fed-batch fermentation process. The softsensor is designed to estimate the biomass concentration on-line. The design is based on the following model structures: 1. a feedforward neural network, 2. a RBFN (radial basis function neural network) and 3. hybrid models composed of either feedforward or RBFN neural network and the a priori known dilution term of the mass balance equations. The different designs are experimentally implemented and compared using Alcaligenes eutrophus as a model fed-batch system. Additionally, the possibility of directly inferring the substrate (glucose) concentration from the estimated biomass was investigated by assessing the variability of the corresponding yield coefficient. The combination of the neural network model and mechanistic differential equation provided the best results. Because of the variability in the yield coefficient, substrate concentration could not be inferred directly." }, { "instance_id": "R31669xR31453", "comparison_id": "R31669", "paper_id": "R31453", "text": "A new approach for the prediction of the heat transfer rate of the wire-on-tube type heat exchanger\u2013\u2013use of an artificial neural network model This study presents an application of artificial neural networks (ANNs) to predict the heat transfer rate of the wire-on-tube type heat exchanger. A back propagation algorithm, the most common learning method for ANNs, is used in the training and testing of the network. To solve this algorithm, a computer program was developed by using C++ programming language. The consistence between experimental and ANNs approach results was achieved by a mean absolute relative error <3%. It is suggested that the ANNs model is an easy modeling tool for heat engineers to obtain a quick preliminary assessment of heat transfer rate in response to the engineering modifications to the exchanger." 
}, { "instance_id": "R31669xR31567", "comparison_id": "R31669", "paper_id": "R31567", "text": "An expert system design for a crude oil distillation column with the neural networks model and the process optimization using genetic algorithm framework In this study an expert system of a crude oil distillation column is designed to predict the unknown values of required product flow and temperature in required input feed characteristics. The system is also capable to optimize the distillation process with minimizing the model output error and maximizing the required oil production rate with respect to control parameter values. The designed expert system uses the practical data of an operating refinery located in Abadan. The input operating variables of the column were operational parameters of crude oil such as flow and temperature, while the system output variables were defined as oil product qualities. We can make the knowledge database of these input-output values of plant with the aid of a neural networks model (NNM) to organize and collect all data related to this process and also to predict the unknown output values of required inputs. In addition we have made the ability of system's optimization with the use of genetic algorithm (GA) with the aim of error minimizing of expert system's output and also maximizing the required product rate with respect to its industrial importance. The built expert system can be used by operators and engineers to calculate and get some unknown data for operational values of this distillation column." }, { "instance_id": "R31669xR31416", "comparison_id": "R31669", "paper_id": "R31416", "text": "Development of real-time state estimators for reaction-separation processes: A continuous flash fermentation as a study case Abstract The development of reliable on-line state estimators applicable to reaction\u2013separation processes is addressed in this work. Artificial Neural Network-based software sensors (ANN-SS) are proposed to allow on-line measurement of key variables, with an estimation algorithm that uses secondary variables as inputs. A continuous laboratory-scale flash fermentation for bioethanol production is considered as a case study. The process consists of three interconnected units: fermentor, filter (tangential microfiltration for cell recycling) and vacuum flash vessel (for the continuous separation of ethanol from the broth). The concentrations of ethanol in the fermentor and of ethanol condensed from the flash are successfully monitored on-line using ANN-SS. The proposed model contributes to improve the understanding of the complex relationships between process variables in the reaction and separation units, which is of major importance to allow the operation of the ethanol production process near its optimum performance." }, { "instance_id": "R31669xR31484", "comparison_id": "R31669", "paper_id": "R31484", "text": "Artificial neural network-based prediction of hydrogen content of coal in power station boilers Abstract Artificial neural networks (ANN) are powerful tools that can be used to model and investigate various highly complex and non-linear phenomena. This paper describes the development and training of a feed-forward back-propagation artificial neural network (BPNN), which is used to predict the hydrogen content in coal from proximate analysis. The ultimate objective is to enhance the performance of the combustion control system with the aid of regularly obtained knowledge of the elemental content of coal. 
In the present work, network modelling was performed using MATLAB with the Levenberg\u2013Marquardt algorithm. Nine-hundred and three sets of data from a diverse range of coals have been used to develop the neural network architecture and topology. Trials were performed using one or two hidden layers with the number of neurons varied from 4 to 30. Validation data has been adopted to evaluate each trial and better model structure is determined to combat the over-fitting problem. As a result, it was found that a 4-12-1 or 4-8-4-1 network could give the most accurate prediction for this particular study. The regression analysis of the model tested gave a 0.937 correlation coefficient and the mean squared error of 0.0087. The average relative error is 5.46%. This has demonstrated that artificial neural networks have good potential for predicting elemental content of coal from frequently available proximate analysis data in power utilities." }, { "instance_id": "R31669xR31396", "comparison_id": "R31669", "paper_id": "R31396", "text": "Trajectory tracking of a batch polymerization reactor based on input\u2013output-linearization of a neural process model Abstract Input\u2013output-linearization via state feedback offers the potential to serve as a practical and systematic design methodology for nonlinear control systems. Nevertheless, its widespread use is delayed due to the fact that developing an accurate plant model based on physical principles is often too costly and time consuming. Data-based modeling of dynamic systems using neural networks offers a cost-effective alternative. This work describes the methodology of input\u2013output-linearization using neural process models and gives an extended simulative case study of its application to trajectory tracking of a batch polymerization reactor." }, { "instance_id": "R31669xR31336", "comparison_id": "R31669", "paper_id": "R31336", "text": "Composition estimations in a middle-vessel batch distillation column using artificial neural networks A virtual sensor that estimates product compositions in a middle-vessel batch distillation column has been developed. The sensor is based on a recurrent artificial neural network, and uses information available from secondary measurements (such as temperatures and flow rates). The criteria adopted for selecting the most suitable training data set and the benefits deriving from pre-processing these data by means of principal component analysis are demonstrated by simulation. The effects of sensor location, model initialization, and noisy temperature measurements on the performance of the soft sensor are also investigated. It is shown that the estimated compositions are in good agreement with the actual values." }, { "instance_id": "R31669xR31326", "comparison_id": "R31669", "paper_id": "R31326", "text": "State estimation and inferential control for a reactive batch distillation column An optimal reflux ratio profile is obtained for a reactive batch distillation system utilizing the capacity factor as the objective function in a nonlinear optimization problem. Then, an Artificial Neural Network (ANN) estimator system, which utilizes the use of several ANN estimators, is designed to predict the product composition values of the distillation column from temperature measurements inferentially. The network used is an Elman network with two hidden layers. 
The designed estimator system is used in the feedback inferential control algorithm, where the estimated compositions and the reflux ratio information are given as inputs to the controller to see the performance of the ANN. In the control law, a scheduling policy is used and the optimal reflux ratio profile is considered as pre-defined set-points. It is found that, it is possible to control the compositions in this dynamically complex system by using the designed ANN estimator system with error refinement whenever necessary." }, { "instance_id": "R31669xR31368", "comparison_id": "R31669", "paper_id": "R31368", "text": "Bioreactor state estimation and control Advanced control methods have been effectively employed for industrial chemical processing for decades. Only recently, however, have model-based strategies been implemented for biological processes. Some notable advances include the enhancement of metabolic flux models to describe the dynamic behavior observed in biochemical reactors. The combination of more than one type of model in a hybrid form was shown to perform well for bioprocess control applications." }, { "instance_id": "R31669xR31515", "comparison_id": "R31669", "paper_id": "R31515", "text": "Fouling detection in a heat exchanger by observer of Takagi\u2013Sugeno type for systems with unknown polynomial inputs This paper proposes a new method for fouling detection in a heat exchanger. It is based on the modeling of the system in a fuzzy Takagi-Sugeno representation derived from a physical model. With this representation, the design of a fuzzy observer with unknown inputs of polynomial types is obtained via a LMI formulation. Main advantages of the proposed method are that neither specific sensor, excepted standard ones, nor special operating conditions such as steady state regime are required. Some realistic simulations show the efficiency of the proposed technique." }, { "instance_id": "R31669xR31315", "comparison_id": "R31669", "paper_id": "R31315", "text": "ANN-based estimator for distillation using Levenberg-Marquardt approach In modern chemical industries the purity of the distillate is the main objective and time to estimate the distillate composition is also the constraint. In the present paper, the Levenberg-Marquardt (LM) approach is proposed for predictive inferential control of distillation process. The developed estimator using LM approach predicts the composition of distillate using column pressure, reboiler duty, and reflux flow along with the temperature profile of the distillation column as inputs. In complex chemical industries where the output depends on many parameters, Steepest Descent Back Propagation (SDBP) algorithm does not work properly for estimating the composition of distillate, which results in saturated outputs and differs from the desired results. To overcome such type of situation, LM approach is used in developed estimator. The estimated results are compared with the simulation results and it is observed that the results obtained from LM approach are significantly improved than the results obtained from SDBP algorithm. To enhance the accuracy of the estimated results, the pressure, reflux flow and heat input with temperature profile of the column are used as input to train the neural network." 
}, { "instance_id": "R31669xR31399", "comparison_id": "R31669", "paper_id": "R31399", "text": "Neural network based estimation of a semi-batch polymerisation reactor Abstract Three different approaches for modelling a semi-batch polymerisation reactor using artificial neural networks (ANN) have been investigated. Based on the characteristics of the semi-batch reactor a multi-stage strategy is recommended. It divides the whole reaction process into semi-batch and batch, and further divides the semi-batch part into two periods, before and after arriving at the first maximum temperature. Different ANN architectures are used to model the three parts separately. The results demonstrate that the multi-stage approach proposed can be used to estimate difficult-to-measure polymer variables with acceptable accuracy for semi-batch processes. Concentrations of monomer and initiator are estimated from reactor temperature, feed temperature and concentration of initiator in feed." }, { "instance_id": "R31669xR31332", "comparison_id": "R31669", "paper_id": "R31332", "text": "Control strategies analysis for a batch distillation column with experimental testing Abstract The dynamic nature and the non-linear behaviour of batch distillation equipment pose challenging control system design when products of constant purity are to be recovered. Several alternative column configurations and operating policies have been studied. However, issues related to the on-line operation of such process have not been properly addressed. The present work describes the investigation with experimental verification of computer based control strategies to batch distillation: a programmable adaptive controller (PAC), a self-tuning regulator (STR) and a non-linear model predictive control (MPC). The developed control systems made the conventional batch distillation column more efficient and easy to operate. Experiments performed on the pilot column confirm the simulation results." }, { "instance_id": "R31669xR31511", "comparison_id": "R31669", "paper_id": "R31511", "text": "On-line expert system for odor complaints in a refinery On-line capabilities of modern expert systems are of special interest due to the possibility of real-time environmental monitoring and detection of possible rules violations in their early phase of development. One of the most difficult environmental problem for some refineries located in fairly populated areas are released refinery odors. In an attempt to solve this problem, an odor expert system is proposed in this paper. It includes a historical data base with all available odor complaints, real-time monitoring of weather data and selected process variables, a detailed map, and rules. The rules are based on the developed methodology for quantitavely correlating process variables with odor complaints. The methodology includes threshold values calculations for selected variables at the Waste Water Treatment Plant and their correlation with process variables in different units inside the refinery. The proposed expert system has been implemented in the GENSYM G2 software environment at StarEnterprise refinery in Delaware City, Delaware, USA." }, { "instance_id": "R31669xR31357", "comparison_id": "R31669", "paper_id": "R31357", "text": "Soft sensors development for on-line bioreactor state estimation Abstract During a fermentation process, variables such as concentrations are determined by off-line laboratory analysis, making this set of variables of limited use for control purposes. 
However, these variables can be on-line estimated using soft sensors. The objective of this study is to present the state of the art of state estimator techniques. Special attention was given to filtering techniques, namely extended Kalman filter, adaptive observers, and artificial neural networks (ANN). It is shown that software based state estimation is a powerful technique that can be successfully used to enhance automatic control performance of biological systems as well as in system monitoring and on-line optimisation." }, { "instance_id": "R31669xR31421", "comparison_id": "R31669", "paper_id": "R31421", "text": "\u2018\u2018Assumed inherent sensor\u2019\u2019 inversion based ANN dynamic soft-sensing method and its application in erythromycin fermentation process Abstract An artificial neural network (ANN) soft-sensing method, based on the \u201cassumed inherent sensor\u201d and its inversion concepts, is proposed and used to estimate some crucial process variables which would be very difficult to be measured directly. For a real biochemical process whose mathematical model is a general nonlinear dynamic system, one may assume that, in its interior, there exists an \u201cinherent sensor\u201d subsystem whose inputs are exactly the process variables to be estimated while whose outputs are the directly measurable ones. To verify this assumption, this paper presents an algorithm to construct the mathematical model of the \u201cassumed inherent sensor\u201d and furthermore presents a global invertibility condition of the \u201cassumed inherent sensor\u201d which guarantees the existence of the inversion of such an \u201cassumed inherent sensor\u201d in theory. The \u201cassumed inherent sensor\u201d inversion consists of a set of nonlinear functions and a series of differentiators and could be treated as the dynamic soft-sensing model because its outputs are capable of reproducing the input variables of the \u201cassumed inherent sensor\u201d, or the process variables to be estimated. To overcome the difficulty in constructing the above \u201cassumed inherent sensor\u201d inversion in an analytic manner, a static ANN is used to approximate the nonlinear function so that the ANN-inversion dynamic soft-sensing model or the desired soft-sensor is finally completed. This makes the proposed ANN-inversion soft-sensor stricter in construction principle and more credible in practical use than most proposed soft-sensors. The soft-sensor consisting of a static ANN and a set of differentiators has been put into use of estimating such crucial biochemical variables as mycelia concentration, sugar concentration and chemical potency in erythromycin fermentation process. The field results show that the soft-sensing values approximately coincide with the offline analyzing ones sampled from the production process." }, { "instance_id": "R31669xR31381", "comparison_id": "R31669", "paper_id": "R31381", "text": "On-line adaptation of neural networks for bioprocess control A recurrent neural network with intra-connections within the output layer is developed to track the dynamics of fed-batch yeast fermentation. The neural network is adapted on-line using only the dissolved oxygen measurement to account for varying operating conditions. The other states of the system, namely the substrate, ethanol and biomass concentrations are not measured but predicted by the adapted network. 
A neural network having a 10-8-4 architecture with output layer feedback and intra-connections between the nodes of the output layer has been studied in detail. A comparative study of its performance with and without online adaptation of weights is presented. Predictions based on online adaptation of weights were found to be superior compared to those without adaptation. The network was implemented as an online state-estimator facilitating the control of a yeast fermentation process. The results demonstrate that with on-line adaptation of weights, it is possible to implement neural networks to control processes in a wide region outside their training domain." }, { "instance_id": "R31669xR31616", "comparison_id": "R31669", "paper_id": "R31616", "text": "Development of a hybrid neural network system for prediction of process parameters in injection moulding Abstract In this paper, the attempts made by the authors to develop an artificial neural network system for prediction of injection moulding process parameters are presented. In this work, attempts have been made to determine the process parameters that could affect the injection moulding process based on governing equations of the filling process. Focus is then directed to parameters that require the use of trial and error methods or other complex software to determine the process parameters. The two parameters that are predicted from the developed network are injection time and injection pressure. In this work, the training data are generated by simulation using C-MOLD flow simulation software. A total of 114 data were collected out of which 94 were used to train the network using MATLAB and the remaining 20 for testing the network. Two algorithms are used during the training phase, namely the error-back-propagation algorithm and the Levenberg\u2013Marquardt approximation algorithm. Results showed that the latter algorithm is more suitable for this application since Levenberg\u2019s algorithm converged rapidly with fewer training cycles when compared to the error-back-propagation algorithm. The accuracy of the developed network has been tested by predicting the injection pressure and injection time for a few engineering components, and it was found that the overall error is 0.93% with a deviation of 3.93%." }, { "instance_id": "R31669xR31621", "comparison_id": "R31669", "paper_id": "R31621", "text": "Simulation of biomass gasification with a hybrid neural network model Gasification of several types of biomass has been conducted in a fluidized bed gasifier at atmospheric pressure with steam as the fluidizing medium. In order to obtain the gasification profiles for each type of biomass, an artificial neural network model has been developed to simulate these gasification processes. Model-predicted gas production rates in these biomass gasification processes were consistent with the experimental data. Therefore, the gasification profiles generated by neural networks are considered to have properly reflected the real gasification process of a biomass. Gasification profiles identified by neural network suggest that gasification behavior of arboreal types of biomass is significantly different from that of herbaceous ones." }, { "instance_id": "R31669xR31460", "comparison_id": "R31669", "paper_id": "R31460", "text": "Using artificial neural network to predict the pressure drop in a rotating packed bed Although rotating beds are good equipment for intensified separations and multiphase reactions, the fundamentals of their hydrodynamics are still unknown.
In the wide range of operating conditions, the pressure drop across an irrigated bed is significantly lower than dry bed. In this regard, an approach based on artificial intelligence, that is, artificial neural network (ANN) has been proposed for prediction of the pressure drop across the rotating packed beds (RPB). The experimental data sets used as input data (280 data points) were divided into training and testing subsets. The training data set has been used to develop the ANN model while the testing data set was used to validate the performance of the trained ANN model. The results of the predicted pressure drop values with the experimental values show a good agreement between the prediction and experimental results regarding to some statistical parameters, for example (AARD% = 4.70, MSE = 2.0 \u00d7 10\u22125 and R2 = 0.9994). The designed ANN model can estimate the pressure drop in the countercurrent flow rotating packed bed with unexpected phenomena for higher pressure drop in dry bed than in wet bed. Also, the designed ANN model has been able to predict the pressure drop in a wet bed with the good accuracy with experimental." }, { "instance_id": "R31669xR31456", "comparison_id": "R31669", "paper_id": "R31456", "text": "Applications of Artificial Neural Network for the Prediction of Flow Boiling Curves An artificial neural network (ANN) was applied successfully to predict flow boiling curves. The databases used in the analysis are from the 1960's, including 1,305 data points which cover these parameter ranges: pressure P=100\u20131,000 kPa, mass flow rate G=40\u2013500 kg/m2-s, inlet subcooling \u0394Tsub =0\u201335\u00b0C, wall superheat \u0394Tw = 10\u2013300\u00b0C and heat flux Q=20\u20138,000kW/m2. The proposed methodology allows us to achieve accurate results, thus it is suitable for the processing of the boiling curve data. The effects of the main parameters on flow boiling curves were analyzed using the ANN. The heat flux increases with increasing inlet subcooling for all heat transfer modes. Mass flow rate has no significant effects on nucleate boiling curves. The transition boiling and film boiling heat fluxes will increase with an increase in the mass flow rate. Pressure plays a predominant role and improves heat transfer in all boiling regions except the film boiling region. There are slight differences between the steady and the transient boiling curves in all boiling regions except the nucleate region. The transient boiling curve lies below the corresponding steady boiling curve." }, { "instance_id": "R31669xR31372", "comparison_id": "R31669", "paper_id": "R31372", "text": "Estimation of oxygen mass transfer coefficient in stirred tank reactors using artificial neural networks The estimation of volumetric mass transfer coefficient, k(L)a, in stirred tank reactors using artificial neural networks has been studied. Several operational conditions (N and V(s)), properties of fluid (\u00b5(a)) and geometrical parameters (D and T) have been taken into account. Learning sets of input-output patterns were obtained by k(L)a experimental data in stirred tank reactors of different volumes. The inclusion of prior knowledge as an approach which improves the neural network prediction has been considered. The hybrid model combining a neural network together with an empirical equation provides a better representation of the estimated parameter values. 
The outputs predicted by the hybrid neural network are compared with experimental data and some correlations previously proposed in the literature for tanks of different sizes." }, { "instance_id": "R31669xR31493", "comparison_id": "R31669", "paper_id": "R31493", "text": "Estimate of process compositions and plantwide control from multiple secondary measurements using artificial neural networks The primary object of this exploration is to infer process compositions from other measurements in place of on-line analyzers under plantwide consideration using the predicted capability of artificial neural networks (ANN) dynamically. The Tennessee Eastman (TE) plant was employed for the investigation. First, transient data of process compositions and secondary measurements, such as temperature, pressure, level, and flow rate were obtained. Then, a composition estimator was developed using these data by the training and testing of recurrent ANN models based on a cross-validation technique. Several plantwide control techniques including classic PI control and supervisory model predictive control (MPC) to regulate process compositions in the TE plant using ANN estimators were investigated. Simulation results have demonstrated that an ANN estimator with laboratory calibration can reliably estimate process compositions on-line from secondary measurements. In addition, plantwide control using ANN estimators, either in the classic PI control or in the neural MPC, can perform reliable composition control." }, { "instance_id": "R31669xR31524", "comparison_id": "R31669", "paper_id": "R31524", "text": "Energy efficiency estimation based on data fusion strategy: Case study of ethylene product industry Data fusion is an emerging technology to fuse data from multiple data or information of the environment through measurement and detection to make a more accurate and reliable estimation or decision. In this Article, energy consumption data are collected from ethylene plants with the high temperature steam cracking process technology. An integrated framework of the energy efficiency estimation is proposed on the basis of data fusion strategy. A Hierarchical Variable Variance Fusion (HVVF) algorithm and a Fuzzy Analytic Hierarchy Process (FAHP) method are proposed to estimate energy efficiencies of ethylene equipments. For different equipment scales with the same process technology, the HVVF algorithm is used to estimate energy efficiency ranks among different equipments. For different technologies based on HVVF results, the FAHP method based on the approximate fuzzy eigenvector is used to get energy efficiency indices (EEI) of total ethylene industries. The comparisons are used to assess energy utilization..." }, { "instance_id": "R31669xR31410", "comparison_id": "R31669", "paper_id": "R31410", "text": "Inferential estimation of polymer quality using bootstrap aggregated neural networks Inferential estimation of polymer quality in a batch polymerisation reactor using bootstrap aggregated neural networks is studied in this paper. Number average molecular weight and weight average molecular weight are estimated from the on-line measurements of reactor temperature, jacket inlet and outlet temperatures, coolant flow rate through the jacket, monomer conversion, and the initial batch conditions. Bootstrap aggregated neural networks are used to enhance the accuracy and robustness of neural network models built from a limited amount of training data. 
The training data set is re-sampled using bootstrap re-sampling with replacement to form several sets of training data. For each set of training data, a neural network model is developed. The individual neural networks are then combined together to form a bootstrap aggregated neural network. Determination of appropriate weights for combining individual networks using principal component regression is proposed in this paper. Confidence bounds for neural network predictions can also be obtained using the bootstrapping technique. The techniques have been successfully applied to the simulation of a batch methyl methacrylate polymerisation reactor." }, { "instance_id": "R31689xR31674", "comparison_id": "R31689", "paper_id": "R31674", "text": "Query learning with large margin classifiers The active selection of instances can significantly improve the generalisation performance of a learning machine. Large margin classifiers such as support vector machines classify data using the most informative instances (the support vectors). This makes them natural candidates for instance selection strategies. In this paper we propose an algorithm for the training of support vector machines using instance selection. We give a theoretical justification for the strategy and experimental results on real and artificial data demonstrating its effectiveness. The technique is most efficient when the data set can be learnt using few support vectors." }, { "instance_id": "R31689xR31672", "comparison_id": "R31689", "paper_id": "R31672", "text": "Support vector machine active learning with applications to text classification Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings." }, { "instance_id": "R31689xR31687", "comparison_id": "R31689", "paper_id": "R31687", "text": "Balancing Exploration and Exploitation: A New Algorithm for Active Machine Learning Active machine learning algorithms are used when large numbers of unlabeled examples are available and getting labels for them is costly (e.g. requiring consulting a human expert). Many conventional active learning algorithms focus on refining the decision boundary, at the expense of exploring new regions that the current hypothesis misclassifies. We propose a new active learning algorithm that balances such exploration with refining of the decision boundary by dynamically adjusting the probability to explore at each step. Our experimental results demonstrate improved performance on data sets that require extensive exploration while remaining competitive on data sets that do not. Our algorithm also shows significant tolerance of noise." 
}, { "instance_id": "R31725xR31717", "comparison_id": "R31725", "paper_id": "R31717", "text": "How Design Patterns Affect Application Performance \u2013 A Case of a Multi-tier J2EE Application Different kinds of patterns, especially design patterns, are popular and useful concepts in software engineering. In some cases, flexibility and reusability of the design comes with the price of decreased efficiency. At the same time, performance is often a key quality attribute of distributed applications. It is therefore beneficial to investigate whether design patterns may influence performance of applications. This paper investigates differences in performance between selected design patterns implemented in an example multi-tier J2EE application. To this end, a series of performance tests in distinctive Enterprise Java Beans containers and deployment configurations were carried out. The comparison of the differences between results for each tested design pattern indicates influence on application quality, especially performance." }, { "instance_id": "R31725xR31705", "comparison_id": "R31725", "paper_id": "R31705", "text": "Impact of the visitor pattern on program comprehension and maintenance In the software engineering literature, many works claim that the use of design patterns improves the comprehensibility of programs and, more generally, their maintainability. Yet, little work attempted to study the impact of design patterns on the developers' tasks of program comprehension and modification. We design and perform an experiment to collect data on the impact of the Visitor pattern on comprehension and modification tasks with class diagrams. We use an eye-tracker to register saccades and fixations, the latter representing the focus of the developers' attention. Collected data show that the Visitor pattern plays a role in maintenance tasks: class diagrams with its canonical representation requires less efforts from developers." }, { "instance_id": "R31725xR31695", "comparison_id": "R31725", "paper_id": "R31695", "text": "Design Patterns in Software Maintenance: An Experiment Replication at Freie Universität Berlin An article published in 2001 reported a controlled experiment that compared maintenance of small programs using design patterns with maintenance of equivalent programs using simplified design solutions. A replication of that experiment was published in 2004. In 2010, a group of researchers from multiple countries picked this experiment as the subject of an attempt to perform a joint replication: Many groups performing an experiment using the same setup, each contributing a few data points to a larger overall data set. This article reports on one of those sub-replications. Only one of the results is statistically significant; it confirms the result of the original experiment stating that the simplified version of the GR program could be extended more quickly than the pattern version which used the Abstract Factory and Composite patterns. The article's main contributions, however, are (a) its description of the peculiarities of this particular subdataset and (b) its (implicit) suggestions for possible evaluation methods." }, { "instance_id": "R31725xR31709", "comparison_id": "R31725", "paper_id": "R31709", "text": "An empirical study on the evolution of design patterns Design patterns are solutions to recurring design problems, conceived to increase benefits in terms of reuse, code quality and, above all, maintainability and resilience to changes. 
This paper presents results from an empirical study aimed at understanding the evolution of design patterns in three open source systems, namely JHotDraw, ArgoUML, and Eclipse-JDT. Specifically, the study analyzes how frequently patterns are modified, what changes they undergo and what classes co-change with the patterns. Results show how patterns more suited to support the application purpose tend to change more frequently, and that different kinds of changes have a different impact on co-changed classes and a different capability of making the system resilient to changes." }, { "instance_id": "R31725xR31721", "comparison_id": "R31725", "paper_id": "R31721", "text": "Defect frequency and design patterns: an empirical study of industrial code Software \"design patterns\" seek to package proven solutions to design problems in a form that makes it possible to find, adapt, and reuse them. A common claim is that a design based on properly applied patterns will have fewer defects than more ad hoc solutions. This case study analyzes the weekly evolution and maintenance of a large commercial product (C++, 500,000 LOC) over three years, comparing defect rates for classes that participated in selected design patterns to the code at large. We found that there are significant differences in defect rates among the patterns, ranging from 63 percent to 154 percent of the average rate. We developed a new set of tools able to extract design pattern information at a rate of 3\u00d710^6 lines of code per hour, with relatively high precision. Based on a qualitative analysis of the code and the nature of the patterns, we conclude that the Observer and Singleton patterns are correlated with larger code structures and, so, can serve as indicators of code that requires special attention. Conversely, code designed with the Factory pattern is more compact and possibly less closely coupled and, consequently, has lower defect numbers. The Template Method pattern was used in both simple and complex situations, leading to no clear tendency." }, { "instance_id": "R31725xR31699", "comparison_id": "R31725", "paper_id": "R31699", "text": "Design Patterns in Software Maintenance: An Experiment Replication at University of Alabama Design patterns are widely used within the software engineering community. Researchers claim that design patterns improve software quality. In this paper, we describe two experiments, using graduate student participants, to study whether design patterns improve the software quality, specifically maintainability and understandability. We replicated a controlled experiment to compare the maintainability of two implementations of an application, one using a design pattern and the other using a simpler alternative. The maintenance tasks in this replication experiment required the participants to answer questions about a Java program and then modify that program. Prior to the replication, we performed a preliminary exercise to investigate whether design patterns improve the understandability of software designs. We gave the participants the graphical design of the systems that would be used in the replication study. The participant received either the version of the design containing the design pattern or the version containing the simpler alternative. We asked the participants a series of questions to see how well they understood the given design.
The results of two experiments revealed that the design patterns did not improve either the maintainability or the understandability of the software. We found that there was no significant correlation between the maintainability and the understandability of the software even though the participants had received the design of the systems before they performed the maintenance tasks." }, { "instance_id": "R31768xR31727", "comparison_id": "R31768", "paper_id": "R31727", "text": "Prevalence of Campylobacter in chicken and chicken by-products retailed in Sapporo area, Hokkaido, Japan The present work was carried out to study the prevalence of Campylobacter in fresh chicken meat and chicken by-products at retail level in Sapporo, Japan. Out of the 170 samples of chicken meat (breasts and thighs) and chicken by-products (wings, livers, gizzards and hearts), 110 (64.7%) were contaminated with Campylobacter. Among the different products, chicken wings showed the highest contamination incidence (77.1%) followed by chicken thighs (70%), while chicken gizzards and hearts showed the lowest contamination incidence (45% and 40%, respectively). Of the 341 Campylobacter isolates, 278 (81.5%) were identified as Campylobacter jejuni and 63 (18.5%) isolates were identified as C. coli. All of the 341 Campylobacter strains identified by the conventional culture methods were further confirmed by polymerase chain reaction (PCR), which indicated that almost all (99.4%) of the tested strains were also positive by PCR. Screening of 195 selected Campylobacter isolates for determining their antimicrobial resistance indicated that most of the tested strains (73.3%) were resistant to three or more of the antimicrobials examined. The study concluded that a high proportion of chicken meat and chicken by-products marketed in Sapporo area are contaminated by Campylobacter, most of which are antimicrobial-resistant strains, with a possible risk from such microorganism especially from consumption of undercooked or post-cooking contaminated chicken products." }, { "instance_id": "R31809xR31807", "comparison_id": "R31809", "paper_id": "R31807", "text": "5-Methylcytosine content and methylation status in six millet DNAs High performance liquid chromatographic analysis of the total nuclear DNAs of 6 millet plant species indicates that the 5-methylcytosine content ranges from 3% in barnyard millet to 9.6% in great millet while the fraction of cytosines methylated varies between 14% in little millet and 31% in pearl millet. Digestion of millet DNAs with MspI/HpaII suggests that CpG methylation is more in great millet DNA while CpC methylation is more in the other 5 millet DNAs. Digestion of millet DNAs with MboI, Sau3AI and DpnI indicates that some of the 5\u2019GATC3\u2019 sequences are methylated at adenine and/or cytosine residues except in little millet where adenine methylation of the 5\u2019GATC3\u2019 sequences is insignificant and there is a predominance of cytosine methylation in these sequences." }, { "instance_id": "R31809xR31773", "comparison_id": "R31809", "paper_id": "R31773", "text": "Characterization of the level, target sites and inheritance of cytosine methylation in tomato nuclear DNA The tomato nuclear genome was determined to have a G+C content of 37% which is among the lowest reported for any plant species.
Non-coding regions have a G+C content even lower (32% average) whereas coding regions are considerably richer in G+C (46%). 5-methyl cytosine was the only modified base detected and on average 23% of the cytosine residues are methylated. Immature tissues and protoplasts have significantly lower levels of cytosine methylation (average 20%) than mature tissues (average 25%). Mature pollen has an intermediate level of methylation (22%). Seeds gave the highest value (27%), suggesting de novo methylation after pollination and during seed development. Based on isoschizomer studies we estimate 55% of the CpG target sites (detected by Msp I/Hpa II) and 85% of the CpNpG target sites (detected by Bst NI/Eco RI) are methylated. Unmethylated target sites (both CpG and CpNpG) are not randomly distributed throughout the genome, but frequently occur in clusters. These clusters resemble CpG islands recently reported in maize and tobacco. The low G+C content and high levels of cytosine methylation in tomato may be due to previous transitions of 5mC\u2192T. This is supported by the fact that G+C levels are lowest in non-coding portions of the genome in which selection is relaxed and thus transitions are more likely to be tolerated. This hypothesis is also supported by the general deficiency of methylation target sites in the tomato genome, especially in non-coding regions. Using methylation isoschizomers and RFLP analysis we have also determined that polymorphism between plants, for cytosine methylation at allelic sites, is common in tomato. Comparing DNA from two tomato species, 20% of the polymorphisms detected by Bst NI/Eco RII could be attributed to differential methylation at the CpNpG target sites. With Msp I/Hpa II, 50% of the polymorphisms were attributable to methylation (CpG and CpNpG sites). Moreover, these polymorphisms were demonstrated to be inherited in a mendelian fashion and to co-segregate with the methylation target site and thus do not represent variation for trans-acting factors that might be involved in methylation of DNA. The potential role of heritable methylation polymorphism in evolution of gene regulation and in RFLP studies is discussed." }, { "instance_id": "R31809xR31776", "comparison_id": "R31809", "paper_id": "R31776", "text": "Genetic and DNA methylation changes in cotton (Gossypium) genotypes and tissues In plants, epigenetic regulation is important in normal development and in modulating some agronomic traits. The potential contribution of DNA methylation mediated gene regulation to phenotypic diversity and development in cotton was investigated between cotton genotypes and various tissues. DNA methylation diversity, genetic diversity, and changes in methylation context were investigated using methylation-sensitive amplified polymorphism (MSAP) assays including a methylation insensitive enzyme (BsiSI), and the total DNA methylation level was measured by high-performance liquid chromatography (HPLC). DNA methylation diversity was greater than the genetic diversity in the selected cotton genotypes and significantly different levels of DNA methylation were identified between tissues, including fibre. The higher DNA methylation diversity (CHG methylation being more diverse than CG methylation) in cotton genotypes suggests epigenetic regulation may be important for cotton, and the change in DNA methylation between fibre and other tissues hints that some genes may be epigenetically regulated for fibre development.
The novel approach using BsiSI allowed direct comparison between genetic and epigenetic diversity, and also measured the CC methylation level, which cannot be detected by conventional MSAP." }, { "instance_id": "R31809xR31785", "comparison_id": "R31809", "paper_id": "R31785", "text": "The DNA of Arabidopsis thaliana Arabidopsis thaliana is a small flowering plant of the mustard family. It has a four- to five-week generation time, can be self- or cross-pollinated and bears as many as 10^4 seeds per plant. Many visible and biochemical mutations exist and have been mapped by recombination to one of the five chromosomes that comprise the haploid karyotype. With the experiments reported here we demonstrate that Arabidopsis has an extraordinarily small haploid genome size (approximately 7\u00d710^7 nucleotide pairs) and a low level of cytosine methylation for an angiosperm. In addition, it appears to have little repetitive DNA in its nuclear DNA, in contrast to other higher plants." }, { "instance_id": "R31928xR31923", "comparison_id": "R31928", "paper_id": "R31923", "text": "Economic analysis for conceptual design of super-critical O2-based PC boiler This report determines the capital and operating costs of two different oxygen-based, pulverized coal-fired (PC) power plants and compares their economics to that of a comparable, air-based PC plant. Rather than combust their coal with air, the oxygen-based plants use oxygen to facilitate capture/removal of the plant CO2 for transport by pipeline to a sequestering site. To provide a consistent comparison of technologies, all three plants analyzed herein operate with the same coal (Illinois No. 6), the same site conditions, and the same supercritical pressure steam turbine (459 MWe). In the first oxygen-based plant, the pulverized coal-fired boiler operates with oxygen supplied by a conventional, cryogenic air separation unit, whereas, in the second oxygen-based plant, the oxygen is supplied by an oxygen ion transport membrane. In both oxygen-based plants a portion of the boiler exhaust gas, which is primarily CO2, is recirculated back to the boiler to control the combustion temperature, and the balance of the flue gas undergoes drying and compression to pipeline pressure; for consistency, both plants operate with similar combustion temperatures and utilize the same CO2 processing technologies. The capital and operating costs of the pulverized coal-fired boilers required by the three different plants were estimated by Foster Wheeler and the balance of plant costs were budget priced using published data together with vendor-supplied quotations. The cost of electricity produced by each of the plants was determined and oxygen-based plant CO2 mitigation costs were calculated and compared to each other as well as to values published for some alternative CO2 capture technologies." }, { "instance_id": "R32025xR32023", "comparison_id": "R32025", "paper_id": "R32023", "text": "Technoeconomic assessment of China\u2019s indirect coal liquefaction projects with different CO2 capture alternatives ICL (Indirect coal liquefaction), an alternative fuel-supplying technology, has drawn much attention and caused considerable debate in China\u2019s energy sector. The hurdles to its development include the high risk of investment in large-scale installations, the high CO2 emissions and water resource consumption. A comprehensive assessment of ICL is urgently needed.
This study provides an economic assessment and a technical analysis based on process simulations. To address the future challenge of curbing CO2 emissions, three absorption methods are compared for capturing the CO2 released from the ICL process: DMC (a novel absorbent), MEA and Rectisol. The comparative results suggest that physical absorbents, represented by Rectisol and DMC, have a remarkable advantage over chemical absorption processes, represented by MEA. The Rectisol process costs the least, while the DMC process is close to the same level. As a novel absorbent, DMC has the potential to be widely used in the future. The economic analysis of ICL predicted a high capital cost of over 35 billion yuan and an overall product cost of approximately 3800 yuan/ton for the baseline. In addition, via a sensitivity analysis, coal price, electricity price and capacity factor were identified as the three most influential factors affecting the overall product cost." }, { "instance_id": "R32025xR31995", "comparison_id": "R32025", "paper_id": "R31995", "text": "Comparative analysis of the production costs and life-cycle GHG emissions of FT liquid fuels from coal and natural Gas Liquid transportation fuels derived from coal and natural gas could help the United States reduce its dependence on petroleum. The fuels could be produced domestically or imported from fossil fuel-rich countries. The goal of this paper is to determine the life-cycle GHG emissions of coal- and natural gas-based Fischer-Tropsch (FT) liquids, as well as to compare production costs. The results show that the use of coal- or natural gas-based FT liquids will likely lead to significant increases in greenhouse gas (GHG) emissions compared to petroleum-based fuels. In a best-case scenario, coal- or natural gas-based FT-liquids have emissions only comparable to petroleum-based fuels. In addition, the economic advantages of gas-to-liquid (GTL) fuels are not obvious: there is a narrow range of petroleum and natural gas prices at which GTL fuels would be competitive with petroleum-based fuels. CTL fuels are generally cheaper than petroleum-based fuels. However, recent reports suggest there is uncertainty about the availability of economically viable coal resources in the United States. If the U.S. has a goal of increasing its energy security, and at the same time significantly reducing its GHG emissions, neither CTL nor GTL consumption seems a reasonable path to follow." }, { "instance_id": "R32061xR32042", "comparison_id": "R32061", "paper_id": "R32042", "text": "Domain adaptation with latent semantic association for named entity recognition Domain adaptation is an important problem in named entity recognition (NER). NER classifiers usually lose accuracy in the domain transfer due to the different data distribution between the source and the target domains. The major reason for performance degradation is that each entity type often has lots of domain-specific term representations in the different domains. The existing approaches usually need an amount of labeled target domain data for tuning the original model. However, it is a labor-intensive and time-consuming task to build an annotated training data set for every target domain. We present a domain adaptation method with latent semantic association (LaSA). This method effectively overcomes the data distribution difference without leveraging any labeled target domain data. The LaSA model is constructed to capture latent semantic association among words from the unlabeled corpus.
It groups words into a set of concepts according to the related context snippets. In the domain transfer, the original term spaces of both domains are projected to a concept space using LaSA model at first, then the original NER model is tuned based on the semantic association features. Experimental results on English and Chinese corpus show that LaSA-based domain adaptation significantly enhances the performance of NER." }, { "instance_id": "R32061xR32050", "comparison_id": "R32061", "paper_id": "R32050", "text": "Hierarchical Bayesian domain adaptation Multi-task learning is the problem of maximizing the performance of a system across a number of related tasks. When applied to multiple domains for the same task, it is similar to domain adaptation, but symmetric, rather than limited to improving performance on a target domain. We present a more principled, better performing model for this problem, based on the use of a hierarchical Bayesian prior. Each domain has its own domain-specific parameter for each feature but, rather than a constant prior over these parameters, the model instead links them via a hierarchical Bayesian global prior. This prior encourages the features to have similar weights across domains, unless there is good evidence to the contrary. We show that the method of (Daume III, 2007), which was presented as a simple \"preprocessing step,\" is actually equivalent, except our representation explicitly separates hyperparameters which were tied in his work. We demonstrate that allowing different values for these hyperparameters significantly improves performance over both a strong baseline and (Daume III, 2007) within both a conditional random field sequence model for named entity recognition and a discriminatively trained dependency parser." }, { "instance_id": "R32061xR32048", "comparison_id": "R32061", "paper_id": "R32048", "text": "Adaptation of maximum entropy capitalizer: Little data can help a lot Abstract A novel technique for maximum \u201ca posteriori\u201d (MAP) adaptation of maximum entropy (MaxEnt) and maximum entropy Markov models (MEMM) is presented. The technique is applied to the problem of automatically capitalizing uniformly cased text. Automatic capitalization is a practically relevant problem: speech recognition output needs to be capitalized; also, modern word processors perform capitalization among other text proofing algorithms such as spelling correction and grammar checking. Capitalization can be also used as a preprocessing step in named entity extraction or machine translation. A \u201cbackground\u201d capitalizer trained on 20 M words of Wall Street Journal (WSJ) text from 1987 is adapted to two Broadcast News (BN) test sets \u2013 one containing ABC Primetime Live text and the other NPR Morning News/CNN Morning Edition text \u2013 from 1996. The \u201cin-domain\u201d performance of the WSJ capitalizer is 45% better relative to the 1-gram baseline, when evaluated on a test set drawn from WSJ 1994. When evaluating on the mismatched \u201cout-of-domain\u201d test data, the 1-gram baseline is outperformed by 60% relative; the improvement brought by the adaptation technique using a very small amount of matched BN data \u2013 25\u201370k words \u2013 is about 20\u201325% relative. Overall, automatic capitalization error rate of 1.4% is achieved on BN data. 
The performance gain obtained by employing our adaptation technique using a tiny amount of out-of-domain training data on top of the background data is striking: as little as 0.14 M words of in-domain data brings more improvement than using 10 times more background training data (from 2 M words to 20 M words)." }, { "instance_id": "R32061xR32046", "comparison_id": "R32061", "paper_id": "R32046", "text": "Topic-bridged PLSA for cross-domain text classification In many Web applications, such as blog classification and newsgroup classification, labeled data are in short supply. It often happens that obtaining labeled data in a new domain is expensive and time-consuming, while there may be plenty of labeled data in a related but different domain. Traditional text classification approaches are not able to cope well with learning across different domains. In this paper, we propose a novel cross-domain text classification algorithm which extends the traditional probabilistic latent semantic analysis (PLSA) algorithm to integrate labeled and unlabeled data, which come from different but related domains, into a unified probabilistic model. We call this new model Topic-bridged PLSA, or TPLSA. By exploiting the common topics between two domains, we transfer knowledge across different domains through a topic-bridge to help the text classification in the target domain. A unique advantage of our method is its ability to maximally mine knowledge that can be transferred between domains, resulting in superior performance when compared to other state-of-the-art text classification approaches. Experimental evaluation on different kinds of datasets shows that our proposed algorithm can improve the performance of cross-domain text classification significantly." }, { "instance_id": "R32189xR32073", "comparison_id": "R32189", "paper_id": "R32073", "text": "A Genetics-based hybrid scheduler for generating static schedules in flexible manufacturing contexts Existing computerized systems that support scheduling decisions for flexible manufacturing systems (FMS's) rely largely on knowledge acquired through rote learning for schedule generation. In a few instances, the systems also possess some ability to learn using deduction or supervised induction. We introduce a novel AI-based system for generating static schedules that makes heavy use of an unsupervised learning module in acquiring significant portions of the requisite problem processing knowledge. This scheduler pursues a hybrid schedule generation strategy wherein it effectively combines knowledge acquired via genetics-based unsupervised induction with rote-learned knowledge in generating high-quality schedules in an efficient manner. Through a series of experiments conducted on a randomly generated problem of practical complexity, we show that the hybrid scheduler strategy is viable, promising, and worthy of more in-depth investigations." }, { "instance_id": "R32189xR32105", "comparison_id": "R32189", "paper_id": "R32105", "text": "Dynamic scheduling of FMS using a real-time genetic algorithm The paper presents a genetic algorithm capable of generating optimised production plans in flexible manufacturing systems. The ability of the system to generate alternative plans following part-flow changes and unforeseen situations is particularly stressed (dynamic scheduling).
Two contrasting objectives, the reduction of machine idle times (thanks to the dynamic scheduling computation) and the reduction of the makespan, are taken into account by the proposed system. The key point is the real-time response obtained by an optimised evolutionary strategy capable of minimising the number of genetic operations needed to reach the optimal schedule in complex manufacturing systems." }, { "instance_id": "R32189xR32142", "comparison_id": "R32189", "paper_id": "R32142", "text": "A Pareto-based multi-objective genetic algorithm for scheduling of FMS Many real-world engineering and scientific problems involve simultaneous optimization of multiple objectives that often are competing. In this work, we have addressed issues relating to scheduling with multiple (and competing) objectives of a flexible manufacturing system (FMS) and have developed a mechanism by employing a Pareto-based GA to generate near-optimal schedules. In the proposed method we have applied Pareto ranking to identify the elite solutions, and their fitness values are derated using a fitness sharing method. The procedure is evaluated with a sample problem environment found in the literature and the results are compared with other available heuristics from the literature. The proposed niched Pareto genetic algorithm (NPGA) exhibits superiority over the other heuristics and scheduling rules." }, { "instance_id": "R32189xR32120", "comparison_id": "R32189", "paper_id": "R32120", "text": "A genetic algorithm to solving the problem of flexible manufacturing system cyclic scheduling We are interested in the determination of the command of Flexible Manufacturing Systems (FMS). We have chosen the Cyclic Behavior to reduce the complexity of the general scheduling problem. The aim is to propose a new approach based on genetic algorithms which can reach the optimal production speed while minimizing the Work in Process (W.I.P.). In fact we want to minimize the W.I.P. to satisfy economical constraints. The use of genetic algorithms is justified by the huge combinatorial complexity of such problems (NP-hard in the general case). Indeed, this kind of algorithm is able to give a solution at any moment. Our approach was validated on a set of five FMS cyclic scheduling test problems." }, { "instance_id": "R32189xR32178", "comparison_id": "R32189", "paper_id": "R32178", "text": "Cyclic scheduling for F.M.S.: Modelling and evolutionary solving approach This paper concerns the domain of flexible manufacturing systems (FMS) and focuses on the scheduling problems encountered in these systems. We have chosen the cyclic behaviour to study this problem, to reduce its complexity. This cyclic scheduling problem, whose complexity is NP-hard in the general case, aims to minimise the work in process (WIP) to satisfy economic constraints. We first recall and discuss the best known cyclic scheduling heuristics. Then, we present a two-step resolution approach. In the first step, a performance analysis is carried out; it is based on the Petri net modelling of the production process. This analysis resolves some indeterminism due to the system's flexibility and allows a lower bound of the WIP to be obtained. In the second step, after a formal model of the scheduling problem has been given, we describe a genetic algorithm approach to find a schedule which can reach the optimal production speed while minimizing the WIP. Finally, our genetic approach is validated and compared with known heuristics on a set of test problems."
}, { "instance_id": "R32189xR32081", "comparison_id": "R32189", "paper_id": "R32081", "text": "Applications of genetic algorithm and simulation to dispatching rule-based FMS scheduling This paper presents a hybrid intelligent approach to a production scheduling problem in FMS. An FMS scheduling system is modelled as a four-level simultaneous decision-making problem. The genetic algorithm and simulation approaches are integrated to seek efficiently the best combination of dispatching rules in order to obtain an appropriate production schedule under specific performance measures." }, { "instance_id": "R32189xR32156", "comparison_id": "R32189", "paper_id": "R32156", "text": "Scheduling flexible manufacturing systems using parallelization of multi-objective evolutionary algorithms Solving multi-objective scientific and engineering problems is, generally, a very difficult goal. In these optimization problems, the objectives often conflict across a high-dimensional problem space and require extensive computational resources. In this paper, a migration model of parallelization is developed for a genetic algorithm (GA) based multi-objective evolutionary algorithm (MOEA). The MOEA generates a near-optimal schedule by simultaneously achieving two contradicting objectives of a flexible manufacturing system (FMS). The parallel implementation of the migration model showed a speedup in computation time and needed less objective function evaluations when compared to a single-population algorithm. So, even for a single-processor computer, implementing the parallel algorithm in a serial manner (pseudo-parallel) delivers better results. Two versions of the migration model are constructed and the performance of two parallel GAs is compared for their effectiveness in bringing genetic diversity and minimizing the total number of functional evaluations." }, { "instance_id": "R32189xR32151", "comparison_id": "R32189", "paper_id": "R32151", "text": "Scheduling optimisation of flexible manufacturing systems using particle swarm optimisation algorithm The increased use of flexible manufacturing systems (FMS) to efficiently provide customers with diversified products has created a significant set of operational challenges. Although extensive research has been conducted on design and operational problems of automated manufacturing systems, many problems remain unsolved. In particular, the scheduling task, the control problem during the operation, is of importance owing to the dynamic nature of the FMS such as flexible parts, tools and automated guided vehicle (AGV) routings. The FMS scheduling problem has been tackled by various traditional optimisation techniques. While these methods can give an optimal solution to small-scale problems, they are often inefficient when applied to larger-scale problems. In this work, different scheduling mechanisms are designed to generate optimum scheduling; these include non-traditional approaches such as genetic algorithm (GA), simulated annealing (SA) algorithm, memetic algorithm (MA) and particle swarm algorithm (PSA) by considering multiple objectives, i.e., minimising the idle time of the machine and minimising the total penalty cost for not meeting the deadline concurrently. The memetic algorithm presented here is essentially a genetic algorithm with an element of simulated annealing. The results of the different optimisation algorithms (memetic algorithm, genetic algorithm, simulated annealing, and particle swarm algorithm) are compared and conclusions are presented ." 
}, { "instance_id": "R32189xR32168", "comparison_id": "R32189", "paper_id": "R32168", "text": "Multi-objective Genetic Algorithm for Multistage-based Job Processing Schedules in FMS Environment In this paper, we propose a multi-objective genetic algorithm for effectively solving multistage-based job processing schedules in FMS environment. The proposed method is random-weight approach to obtaining a variable search direction toward Pareto solution. The objectives are to minimize the makespan and the total flow time, simultaneously. The feasibility and adaptability of the proposed moGA are investigated through experimental results." }, { "instance_id": "R32189xR32094", "comparison_id": "R32189", "paper_id": "R32094", "text": "A genetic algorithm approach to the simultaneous scheduling of machines and automated guided vehicles Abstract This article addresses the problem of simultaneous scheduling of machines and a number of identical automated guided vehicles (AGVs) in a flexible manufacturing system (FMS) so as t minimize the makespan. For solving this problem, a genetic algorithm (GA) is proposed. Here, chromosomes represent both operation sequencing and AGV assignment dimensions of the search space. A third dimension, time, is implicitly given by the ordering of operations of the chromosomes. A special uniform crossover operator is developed which produces one offspring from two parent chromosomes. It transfers any patterns of operation sequences and/or AGV assignments that are present in both parents to the child. Two mutation operators are introduced; a bitwise mutation for AGV assignments and a swap mutation for operations. Any precedence infeasibility resulting from the operation swap mutation is removed by a repair function. The schedule associated with a given chromosome is determined by a simple schedule builder. After a number of problems are solved to evaluate various search strategies and to tune the parameters of the proposed GA, 180 test problems are solved to evaluate various search lower bound is introduced and compared with the results of GA. In 60% of the problems GA reaches the lower bound indicating optimality. The average deviation from the lower bound over all problems is found to be 2.53%. Additional comparison is made with the time window approach suggested for this same problem using 82 test problems from the literature. In 59% of the problems GA outperforms the time window approach where the reverse is true only in 6% of the problems." }, { "instance_id": "R32189xR32067", "comparison_id": "R32189", "paper_id": "R32067", "text": "Intelligent scheduling for flexible manufacturing systems A scheme for the scheduling of flexible manufacturing systems (FMSs) have been developed. It integrates neural networks, parallel Monte-Carlo simulation, genetic algorithms, and machine learning. Modular neural networks are used to generate a small set of attractive plans and schedules from a larger list of such plans and schedules. Parallel Monte-Carlo simulation predicts the impact of each on the future evolution of the manufacturing system. Genetic algorithms are utilized to combine attractive alternatives into a single best decision. Induction mechanisms are used for learning and simplify the decision process for future performance. The development of a modular neural network architecture for candidate rule selection for a FMS cell is investigated. 
A scheduling example illustrates the scheme's capabilities, including speed, adaptability, and computational efficiency." }, { "instance_id": "R32189xR32129", "comparison_id": "R32189", "paper_id": "R32129", "text": "An intelligent hierarchical workstation control model for FMS Abstract Hierarchical planning, scheduling and control in flexible manufacturing systems (FMS) provide a systematic way to effectively allocate resources along different time horizons. This paper describes the design and development of an intelligent hierarchical control model based on a proposed tool management method. The control model consists of four levels: the process plan selection, the master scheduling, the job sequencing and the control level. The model is developed to optimize the machine utilization and balance tool magazine capacity of a flexible machining workstation (FMW) in a tool-sharing environment. Problems are identified and modeled in the levels of process plan selection, master scheduling, and job sequencing. A genetic-based algorithm was developed to solve the problem domains throughout the hierarchical planning and scheduling model. A fuzzy logic technique could also be incorporated into the master production schedule (MPS) level to allow for a more realistic result in the presence of uncertainty and impreciseness in order to fit the realistic nature of actual industrial environments." }, { "instance_id": "R32424xR32391", "comparison_id": "R32424", "paper_id": "R32391", "text": "Antimicrobial and antioxidant activities of Artemisia herba-alba essential oil cultivated in Tunisian arid zone Abstract This study was conceived to examine the antimicrobial and antioxidant activities of four essential oil types extracted by hydrodistillation from the aerial parts of Artemisia herba-alba cultivated in southern Tunisia. The chemical composition was investigated using both capillary GC and GC/MS techniques. \u03b2-thujone, \u03b1-thujone, \u03b1-thujone/\u03b2-thujone and 1,8-cineole/camphor/\u03b1-thujone/\u03b2-thujone were, respectively, the major components of these oil types. The antimicrobial activity of the different oils was tested using the diffusion method and by determining the inhibition zone. The results showed that all examined oil types had great potential antimicrobial activity against the tested strains. In addition, antioxidant capacity was assessed by different in vitro tests and weak activity was found for these A. herba-alba oils." }, { "instance_id": "R32424xR32350", "comparison_id": "R32424", "paper_id": "R32350", "text": "Essential Oil Composition of Artemisia herba-alba from Southern Tunisia The composition of the essential oil hydrodistilled from the aerial parts of 18 individual Artemisia herba-alba Asso. plants collected in southern Tunisia was determined by GC and GC/MS analysis. The oil yield varied between 0.68% v/w and 1.93% v/w. One hundred components were identified, 21 of which are reported for the first time in Artemisia herba-alba oil. The oil contained 10 components with percentages higher than 10%. The main components were cineole, thujones, chrysanthenone, camphor, borneol, chrysanthenyl acetate, sabinyl acetate, davana ethers and davanone. Twelve samples had monoterpenes as major components, three had sesquiterpenes as major components and the last three samples had approximately the same percentage of monoterpenes and sesquiterpenes.
The chemical compositions revealed that ten samples had compositions similar to those of other Artemisia herba-alba essential oils analyzed in other countries. The remaining eight samples had an original chemical composition." }, { "instance_id": "R32424xR32286", "comparison_id": "R32424", "paper_id": "R32286", "text": "Chemical variability of Artemisia herba-alba Asso essential oils from East Morocco Abstract Chemical compositions of 16 Artemisia herba-alba oil samples harvested in eight East Moroccan locations were investigated by GC and GC/MS. Chemical variability of the A. herba-alba oils is also discussed using statistical analysis. Detailed analysis of the essential oils led to the identification of 52 components amounting to 80.5\u201398.6 % of the total oil. The investigated chemical compositions showed significant qualitative and quantitative differences. According to their major components (camphor, chrysanthenone, and \u03b1- and \u03b2-thujone), three main groups of essential oils were found. This study also found regional specificity of the major components." }, { "instance_id": "R32424xR32299", "comparison_id": "R32424", "paper_id": "R32299", "text": "Constitution of the essential oil from an Artemisia herba-alba population of Spain Abstract The composition of the essential oil from a Spanish population of Artemisia herba-alba has been compared with that of Israeli populations of the same plant. The chemotaxonomic affinity of the two populations was not reflected in the compositions of the oils." }, { "instance_id": "R32424xR32361", "comparison_id": "R32424", "paper_id": "R32361", "text": "Influence of drying time and process on Artemisia herba-alba Asso essential oil yield and composition Abstract The essential oil content of Artemisia herba-alba Asso decreased along the drying period from 2.5 % to 1.8 %. Conversely, the composition of the essential oil was not qualitatively affected by the drying process. The same principal components were found in all essential oils analyzed, such as \u03b1-thujone (13.0 \u2013 22.7 %), \u03b2-thujone (18.0 \u2013 25.0 %), camphor (8.6 \u2013 13 %), 1,8-cineole (7.1 \u2013 9.4 %), chrysanthenone (6.7 \u2013 10.9 %), terpinen-4-ol (3.4 \u2013 4.7 %). Quantitatively, during the air-drying process, the content of some components decreased slightly, such as \u03b1-thujone (from 22.7 to 15.9 %) and 1,8-cineole (from 9.4 to 7.1 %), while the amount of other compounds increased, such as chrysanthenone (from 6.7 to 10.9 %), borneol (from 0.8 to 1.5 %), germacrene-D (from 1.0 to 2.4 %) and spathulenol (from 0.8 to 1.5 %). The chemical composition of the oil was more affected by oven-drying the plant material at 35\u00b0C. \u03b1-Thujone and \u03b2-thujone decreased to 13.0 % and 18.0 %, respectively, while the percentage of camphor, germacrene-D and spathulenol increased to 13.0 %, 5.5 % and 3.7 %, respectively." }, { "instance_id": "R32424xR32268", "comparison_id": "R32424", "paper_id": "R32268", "text": "Composition of the Essential Oil from Artemisia herba-alba Grown in Jordan Abstract The composition of the essential oil hydrodistilled from the aerial parts of Artemisia herba-alba Asso. growing in Jordan was determined by GC and GC/MS. The oil yield was 1.3% (v/w) from dried tops (leaves, stems and flowers). Forty components corresponding to 95.3% of the oil were identified, of which oxygenated monoterpenes were the main oil fraction (39.3% of the oil), with \u03b1- and \u03b2-thujones as the principal components (24.7%).
The other major identified components were: santolina alcohol (13.0%), artemisia ketone (12.4%), trans-sabinyl acetate (5.4%), germacrene D (4.6%), \u03b1-eudesmol (4.2%) and caryophyllene acetate (5.7%). The high oil yield and the substantial levels of potentially active components, in particular thujones and santolina alcohol, in the oil of this Jordanian species make the plant and the oil thereof promising candidates as natural herbal constituents of antimicrobial drug combinations." }, { "instance_id": "R32424xR32407", "comparison_id": "R32424", "paper_id": "R32407", "text": "Chemical Variability of Artemisia herba-alba Asso Growing Wild in Semi-arid and Arid Land (Tunisia) Abstract Twenty-six oil samples were isolated by hydrodistillation from aerial parts of Artemisia herba-alba Asso growing wild in Tunisia (semi-arid land) and their chemical composition was determined by GC(RI), GC/MS and 13C-NMR. Various compositions were observed, dominated either by a single component (\u03b1-thujone, camphor, chrysanthenone or trans-sabinyl acetate) or characterized by the occurrence, at appreciable contents, of two or more of these compounds. These results confirmed the tremendous chemical variability of A. herba-alba." }, { "instance_id": "R32541xR32477", "comparison_id": "R32541", "paper_id": "R32477", "text": "The overqualified Canadian graduate: the role of the academic program in the incidence, persistence, and economic returns to overqualification This study investigates the role of the academic program in the incidence, persistence, and economic returns to overqualification among recent Canadian post-secondary graduates. Master\u2019s graduates are far more prone to overqualification than other graduates. Overqualification rates vary considerably by major field of study at the college and bachelor\u2019s levels, but not at the master\u2019s level. Graduates who are overqualified shortly after entering the workforce are far more likely to remain overqualified in the following years. Weak evidence suggests that co-op graduates are less likely to be overqualified at the bachelor\u2019s and master\u2019s levels, but not at the college level (where co-op programs are more popular). There is a strong, negative earnings effect associated with overqualification at the college and bachelor\u2019s levels, most of which dissipates after accounting for unobserved heterogeneity in a longitudinal framework. There is little or no earnings effect at the master\u2019s and doctoral levels." }, { "instance_id": "R32541xR32513", "comparison_id": "R32541", "paper_id": "R32513", "text": "Overeducation and Undereducation: Evidence for Portugal Abstract Using a unique data set of Portuguese workers, we attempt to contribute additional empirical evidence to the debate on whether or not a discrepancy exists between the educational attainment of workers and the skill requirements of jobs, with the related impact on earnings functions and the returns to education. It appears that earnings are not uniquely determined on the basis of the educational attainment of workers. The placement of the worker in a particular job plays a role in wage determination. Hence, estimates of the impact of education on earnings using the standardized human capital earnings model may give misleading results, as estimates of the return to additional education beyond that required to perform the job may be lower than those associated with required education."
}, { "instance_id": "R32541xR32529", "comparison_id": "R32541", "paper_id": "R32529", "text": "Overeducation, Undereducation and the British Labour Market This paper addresses the issue of overeducation and undereducation using for the first time a British dataset which contains explicit information on the level of required education to enter a job across the generality of occupations. Three key issues within the overeducation literature are addressed. First, what determines the existence of over and undereducation and to what extent are over and undereducation substitutes for experience, tenure and training? Second, to what extent are over and undereducation temporary or permanent phenomena? Third, what are the returns to over and undereducation and do certain stylized facts discovered for the US and a number of European countries hold for Britain?" }, { "instance_id": "R32541xR32427", "comparison_id": "R32541", "paper_id": "R32427", "text": "Unemployment Persistency, Over-education and the Employment Chances of the Less Educated The research question addressed in this article concerns whether unemployment persistency can be regarded as a phenomenon that increases employment difficulties for the less educated and, if so, whether their employment chances are reduced by an overly rapid reduction in the number of jobs with low educational requirements. The empirical case is Sweden and the data covers the period 1976-2000. The empirical analyses point towards a negative response to both questions. First, it is shown that jobs with low educational requirements have declined but still constitute a substantial share of all jobs. Secondly, educational attainment has changed at a faster rate than the job structure with increasing over-education in jobs with low educational requirements as a result. This, together with changed selection patterns into the low education group, are the main reasons for the poor employment chances of the less educated in periods with low general demand for labour." }, { "instance_id": "R32541xR32442", "comparison_id": "R32541", "paper_id": "R32442", "text": "Over-education: What influence does the workplace have? The wage and job satisfaction impacts for over-educated workers have been well-documented; yet little attention has been paid to the consequences for firms. In this paper we examine over-education from the perspective of the workplace. Using linked employer-employee data for the United Kingdom, we derive the standard worker-level penalties on wages and job satisfaction. We then show how over-education rates across workplaces adversely influence workplace pay and workplace labor relations. For individual workers who may be at-risk of over-education, we also distinguish between workforce composition effects and workplace labor practices, such as hiring. The effect of over-education on job satisfaction is particularly strong and its effects are evident at the workplace level. Our results suggest that investigations of over-education at the level of the firm are a promising area of inquiry." 
}, { "instance_id": "R32541xR32508", "comparison_id": "R32541", "paper_id": "R32508", "text": "\u00abOvereducation in the Graduate Labor Market: A Quantile Regression Appro- ach\u00bb Abstract This paper exploits the homogeneity of data from a cohort of Northern Ireland graduates to explore the extent to which both the incidence and impacts of overeducation are specific to individuals of particular ability levels as proxied by their position within the graduate wage distribution. It was found that whilst the incidence of overeducation was heavily concentrated within low-ability segments of both the male and female graduate wage distributions, it was by no means exclusive to them. Using quantile regression techniques it was found that, relative to their well-matched counterparts, low-and mid-ability overeducated male graduates earned substantially less. However, the impacts of overeducation were found to be much more pervasive and constant throughout the entirety of the female ability (wage) distribution. The results provide only partial support for the hypothesis linking overeducation with lower levels of ability." }, { "instance_id": "R32541xR32446", "comparison_id": "R32541", "paper_id": "R32446", "text": "\u00abOvereducation, Undereducation, and the Theory of Career Mobility The theory of career mobility (Sicherman and Galor, Journal of Political Economy, 98(1), 169\u201392, 1990) claims that wage penalties for overeducated workers are compensated by better promotion prospects. Sicherman (Journal of Labour Economics, 9(2), 101\u201322, 1991) was able to confirm this theory in an empirical study using panel data. However, the only retest using panel data so far (Robst, Eastern Economic Journal, 21, 539\u201350, 1995) produced rather ambiguous results. In the present paper, random effects models to analyse relative wage growth are estimated using data from the German Socio-Economic Panel. It is found that overeducated workers in Germany have markedly lower relative wage growth rates than adequately educated workers. The results cast serious doubt on whether the career mobility model is able to explain overeducation in Germany. The plausibility of the results is supported by the finding that overeducated workers have less access to formal and informal on-the-job training, which is usually found to be positively correlated with wage growth even when controlling for selectivity effects (Pischke, Journal of Population Economics, 14, 523\u201348, 2001)." }, { "instance_id": "R32541xR32437", "comparison_id": "R32541", "paper_id": "R32437", "text": "\u00abOver and Undereducation in the UK Graduate Labour Market\u00bb ABSTRACT The authors examine the apparent underutilisation of the skills of employed graduates. As in the USA, concern has arisen in Britain over the numbers of graduates working in jobs which might be carried out equally well by those with subdegree qualifications. The authors discuss whether or not overeducation represents a serious problem, outlining theoretical explanations of over- and undereducation. Two measures of over/undereducation are then used to examine the British graduate jobs market. Drawing on Labour Force Survey data, the authors relate over- and undereducation to a range of personal and employment characteristics. They conclude that the significance of the problem of overeducation can be exaggerated, since it may represent a rational response of individuals to labour market conditions. 
They also point out that undereducation\u2014where people hold graduate-level jobs without possessing degrees\u2014is a form of labour market advantage which accrues disproportionately to white males." }, { "instance_id": "R32541xR32524", "comparison_id": "R32541", "paper_id": "R32524", "text": "The Impact of Surplus Schooling on Productivity and Earnings This article examines the impact of surplus schooling on individual productivity and earnings. It proposes a model that divides workers' education into two components: education that is required and thus fully utilized in the job, and education that exceeds the amount required and thus may be underutilized in the job. The model is tested with data from the 1969, 1973, and 1977 Quality of Working Life Surveys (Quinn and Staines 1979). Required schooling for each occupation is derived from estimates by job incumbents and by the Dictionary of Occupational Titles. The results show that surplus or underutilized education is rewarded at a lower rate than required education, with the actual return dependent on the type of job." }, { "instance_id": "R32541xR32494", "comparison_id": "R32541", "paper_id": "R32494", "text": "Overeducation and Skill Mismatch Past research has operationalized the notions of overeducation, overtraining, occupational mismatch, and the like in terms of the deviation of a worker's attained schooling from the estimated mean or required schooling of the worker's occupation." }, { "instance_id": "R32541xR32505", "comparison_id": "R32541", "paper_id": "R32505", "text": "Gender Differences in Overeducation: A Test of the Theory of Differential Overqualification There is little question that substantial labor market differences exist between men and women. Among the most researched differences is the male-female wage gap. Many different theories are used to explain why men earn more than women. One possible reason is based on the limited geographic mobility of married women (Robert Frank, 1978). Family mobility is a joint decision in which the needs of the husband and wife are balanced to maximize family welfare. Job-motivated relocations are generally made to benefit the primary earner in the family. This leads to a constrained job search for the secondary earner, as he or she must search for a job in a limited geographic area. Since the husband is still the primary wage earner in many families, the job search of the wife may suffer. Individuals who are tied to a certain area are labeled \"tied-stayers,\" while secondary earners who move for the benefit of the family are labeled \"tied-movers\" (Jacob Mincer, 1978). The wages of a tied-stayer or tied-mover may not be substantially lower if the family lives in or moves to a large city. If a large labor market has more vacancies, the wife may locate a wage offer near the maximum she would find with a nationwide job search. However, being a tied-stayer or tied-mover can lower the wife's wage if the family lives in or moves to a small community. A small labor market will reduce the likelihood of her finding a job that utilizes her skills. As a result she may accept a job for which she is overqualified and thus earn a lower wage. This hypothesized relationship between the likelihood of being overqualified and SMSA size is termed \"differential overqualification.\" Frank (1978) and Haim Ofek and Yesook Merrill (1994) provide support for the theory of differential overqualification by finding that the male-female wage gap is greater in smaller SMSA's.
While the results are consistent with the existence of differential overqualification, they may also result from other situations. Firms in small labor markets may use their monopsony power to keep wages down. Local demand shocks are found to be a major source of wage variation both across and within local labor markets (Robert Topel, 1986). Since large labor markets are generally more diversified, a demand shock can have a substantial impact on immobile workers in small labor markets. Another reason for examining differential overqualification involves the assumption that there are more vacancies in large labor markets. While there is little doubt that more vacancies exist in large labor markets, there are also likely to be more people searching for jobs in large labor markets. If the greater number of vacancies is offset by the larger number of searchers, it is unclear whether women will be more likely to be overqualified in small labor markets. Instead of relying on wages to determine if differential overqualification exists, we consider an explicit form of overqualification based on education." }, { "instance_id": "R32541xR32535", "comparison_id": "R32541", "paper_id": "R32535", "text": "The Impact of Surplus Schooling on Worker Productivity Human capital theory suggests that education enhances worker productivity and is reflected in higher individual earnings. We use data from the 1969 Survey of Working Conditions and the 1973 and 1977 Quality of Employment Surveys, and a model derived from the industrial psychology literature, to test the proposition that workers' education in excess of what their jobs require can have adverse effects on job satisfaction and other correlates of worker productivity. Our results support earlier studies that have found surplus schooling has a negative effect on job satisfaction. Our findings also indicate that the negative impact of surplus schooling on job satisfaction and turnover is more significant for workers with a higher level of surplus education. Finally, the negative effects of surplus schooling appear to change over time." }, { "instance_id": "R32871xR32731", "comparison_id": "R32871", "paper_id": "R32731", "text": "Rotation Sliding Window of the Hog Feature in Remote Sensing Images for Ship Detection Ship detection plays a vital role in traditional military applications. In remote sensing images, we combine Histograms of Oriented Gradients (HOG) features and a support vector machine for ship detection. However, the HOG feature is not rotation invariant, and a ship can be oriented in any direction. Consequently, in this paper, we propose a continuous-interval rotating sliding-window detection scheme based on the HOG feature. We extract HOG features from positive and negative samples and train the classifier; the continuous-interval rotating sliding window then improves the accuracy of ship detection. The experiments reveal that the detection rate can reach 72.7% for test ships in the vertical direction. This is of practical significance for civil and military applications." }, { "instance_id": "R32871xR32546", "comparison_id": "R32871", "paper_id": "R32546", "text": "Ship detection from Landsat imagery Recent inspection of Landsat CCT printouts revealed that the detection of ships is possible. Experience has shown that MSS band 7, because of low radiance values from water and the resultant high S/N ratio, is the best MSS band for a \"quick look\" inspection of CCT printouts for possible ships.
Following verification of the target on CCT printouts of other MSS bands, the ship's size, orientation, state of motion, and direction of movement can be determined from the total number of pixels occupied by the target for each MSS band, the orientation of these pixels, and the target's maximum and total pixel radiance values. This paper presents the procedures used for detecting ships, and discusses the problems and limitations of the overall technique as related to ship parameters, sea state and turbidity, pixel overlap, relative geometric fidelity between pixels, and solar elevation angle." }, { "instance_id": "R32871xR32726", "comparison_id": "R32871", "paper_id": "R32726", "text": "Texture-based vessel classifier for electro-optical satellite imagery Satellite imagery provides a valuable source of information for maritime surveillance. The vast majority of the research regarding satellite imagery for maritime surveillance focuses on vessel detection and image enhancement, whilst vessel classification remains a largely unexplored research topic. This paper presents a vessel classifier for spaceborne electro-optical imagery based on a feature that is representative across all satellite imagery: texture. Local Binary Patterns were selected to represent vessels for their high distinctiveness and low computational complexity. Considering vessels' characteristic superstructure, the extracted vessel signatures are sub-divided into three sections: bow, middle and stern. A hierarchical decision-level classification is proposed, analysing first each vessel section individually and then combining the results in the second stage. The proposed approach is evaluated with the electro-optical satellite image dataset presented in [1]. Experimental results reveal an accuracy of 85.64% across four vessel categories." }, { "instance_id": "R32871xR32580", "comparison_id": "R32871", "paper_id": "R32580", "text": "Fully automated procedure for ship detection using optical satellite imagery Ship detection from remote sensing imagery is a crucial application for maritime security, which includes, among others, traffic surveillance, protection against illegal fisheries, oil discharge control and sea pollution monitoring. In the framework of the European integrated project GMES-Security/LIMES, we developed an operational ship detection algorithm using high spatial resolution optical imagery to complement existing regulations, in particular the fishing control system. The automatic detection model is based on statistical methods, mathematical morphology and other signal processing techniques such as the wavelet analysis and Radon transform. This paper presents current progress made on the detection model and describes the prototype designed to classify small targets. The prototype was tested on panchromatic SPOT 5 imagery taking into account the environmental and fishing context in French Guiana. In terms of automatic detection of small ship targets, the proposed algorithm performs well. Its advantages are manifold: it is simple and robust, but most of all, it is efficient and fast, which is a crucial point in performance evaluation of advanced ship detection strategies." }, { "instance_id": "R32871xR32838", "comparison_id": "R32871", "paper_id": "R32838", "text": "A ship target automatic detection method for high-resolution remote sensing With the increase in the spatial resolution of remote sensing, ship detection methods designed for low-resolution images are no longer suitable.
In this study, a ship target automatic detection method for high-resolution remote sensing is proposed, which mainly contains steps of Otsu binary segmentation, morphological operation, calculation of target features and target judgment. The results show that almost all of the offshore ships can be detected, and the total detection rates are 94% and 91% with the experimental Google Earth data and GF-1 data respectively. The ship target automatic detection method proposed in this study is more suitable for detecting ship targets offshore rather than anchored along the dock." }, { "instance_id": "R32871xR32643", "comparison_id": "R32871", "paper_id": "R32643", "text": "Detection and classification of man-made offshore objects in TerraSAR-X and RapidEye imagery: Selected results of the DeMarine-DEKO project The project DEKO (Detection of artificial objects in sea areas) is integrated in the German DeMarine-Security project and focuses on the detection and classification of ships and offshore artificial objects relying on TerraSAR-X as well as on RapidEye multispectral optical images. The objectives are 1/ the development of reliable detection algorithms and 2/ the definition of effective, customized service concepts. In addition to an earlier publication, we describe in the following paper some selected results of our work. The algorithms for TerraSAR-X have been extended to a processing chain including all needed steps for ship detection and ship signature analysis, with an emphasis on object segmentation. For Rapid Eye imagery, a ship detection algorithm has been developed. Finally, some applications are described: Ship monitoring in the Strait of Dover based on TerraSAR-X StripMap using AIS information for verification, analyzing TerraSAR-X HighResolution scenes of an industrial harbor and finally an example of surveying a wind farm using change detection." }, { "instance_id": "R32871xR32782", "comparison_id": "R32871", "paper_id": "R32782", "text": "Marine vessel detection comparing GPRS and satellite images for security applications Unauthorized and unregistered sea going fishing vessels are being used for criminal activities in the coastal areas. The issue of piracy against merchant ships using illegal fishing vessels poses a significant threat to world shipping. Unfortunately counter piracy efforts and maritime security of our own and other nation enforcement often affects the innocent fishermen who conduct the trans-border fishing. Hence a proper vessel monitoring system is required to protect the maritime security without tampering the routine fishing activity of the sea going fishermen. This paper discusses about the feasibility of a system for the detection of registered marine fishing vessels comparing the satellite images and GPRS signal information. A review on various algorithms for identifying marine boats from satellite images is also conducted in this paper." }, { "instance_id": "R32871xR32854", "comparison_id": "R32871", "paper_id": "R32854", "text": "Coarse-to-fine ship detection using visual saliency fusion and feature encoding for optical satellite images In order to overcome cloud clutters and varied sizes of objects in high-resolution optical satellite images, a novel coarse-to-fine ship detection framework is proposed. Initially, a modified saliency fusion algorithm is derived to reduce cloud clutters and extract ship candidates. 
Then, in the coarse discrimination stage, candidates are described by introducing a shape feature to eliminate regions which do not conform to ship characteristics. In the fine discrimination stage, candidates are represented by local descriptor-based feature encoding, and then a linear SVM is used for discrimination. Experiments on 60 images (including 467 objects) collected from Microsoft Virtual Earth demonstrate the effectiveness of the proposed framework. Specifically, the fusion of visual saliency achieves 17.07% higher Precision and 7.23% higher Recall compared with those of an individual saliency map. Moreover, using the local descriptor in fine discrimination further improves Precision and F-measure by 7.23% and 1.74%, respectively." }, { "instance_id": "R32871xR32847", "comparison_id": "R32871", "paper_id": "R32847", "text": "Fast ship detection from optical satellite images based on ship distribution probability analysis Automatic ship detection from optical satellite images remains a tough task. In this paper, a novel method of ship detection from optical satellites is proposed by analyzing the ship distribution probability. First, an anomaly detection model is constructed by the sea cluster histogram model; then, the ship distribution based on the ship safety navigational criterion is analyzed to obtain the ship candidates, and obvious non-ship objects are removed from the ship candidates by their area properties; finally, a structural continuity descriptor is designed to remove false alarms from the ship candidates. Experiments are conducted on numerous satellite images from panchromatic sensors and single bands of multispectral sensors. The results verified that the proposed method outperforms existing methods in both effectiveness and efficiency." }, { "instance_id": "R32871xR32553", "comparison_id": "R32871", "paper_id": "R32553", "text": "Automatic ship detection in satellite multispectral imagery In recent years, very little attention in the literature has been given to the task of automatically detecting shipping vessels in optical satellite imagery. A method for achieving this goal is described for both SPOT Multispectral and Landsat Thematic Mapper data. Essentially a task in pattern recognition, the method utilizes masking, filtering, and shape analysis techniques. Results showing a high degree of accuracy have been obtained with test data." }, { "instance_id": "R32871xR32714", "comparison_id": "R32871", "paper_id": "R32714", "text": "Ship detection in high-resolution optical imagery based on anomaly detector and local shape feature Ship detection in high-resolution optical imagery is a challenging task due to the variable appearances of ships and background. This paper aims at further investigating this problem and presents an approach to detect ships in a \u201ccoarse-to-fine\u201d manner. First, to increase the separability between ships and background, we concentrate on the pixels in the vicinities of ships. We rearrange the spatially adjacent pixels into a vector, transforming the panchromatic image into a \u201cfake\u201d hyperspectral form. Through this procedure, each produced vector is endowed with some contextual information, which amplifies the separability between ships and background. Afterward, for the \u201cfake\u201d hyperspectral image, a hyperspectral algorithm is applied to extract ship candidates preliminarily and quickly by regarding ships as anomalies.
Finally, to validate real ships out of ship candidates, an extra feature is provided with histograms of oriented gradients (HOGs) to generate a hypothesis using AdaBoost algorithm. This extra feature focuses on the gray values rather than the gradients of an image and includes some information generated by very near but not closely adjacent pixels, which can reinforce HOG to some degree. Experimental results on real database indicate that the hyperspectral algorithm is robust, even for the ships with low contrast. In addition, in terms of the shape of ships, the extended HOG feature turns out to be better than HOG itself as well as some other features such as local binary pattern." }, { "instance_id": "R32871xR32630", "comparison_id": "R32871", "paper_id": "R32630", "text": "Ship Detection Using Texture Statistics from Optical Satellite Images This paper presents a method for ship detection using texture statistics from optical satellite images. The proposed method focuses on the extraction of ship candidates. First, a structural texture descriptor derived from local multiple patterns is introduced to describe image texture features, and then two statistical histograms are generated by quantizing texture features to describe the texture difference between sea and ships. Second, corresponding confidence maps representing the probabilities of ship candidates are created based on back projection of the statistical histograms, and ship candidates are extracted according to the confidence maps. Finally, the prior knowledge of ship shapes is employed to remove the false ship candidates. As using texture features, the proposed method is insensitive to different waves, illumination changes, ships with different sizes and bright/dark intensities. Experimental results demonstrate the method has good performance in both precision and recall." }, { "instance_id": "R32871xR32729", "comparison_id": "R32871", "paper_id": "R32729", "text": "Inshore ship detection in high-resolution satellite images: approximation of harbors using sea-land segmentation This paper proposes a novel inshore ship detection method that is based on the approximation of harbour area with piecewise linear line segments. The method heavily depends on a very fine sea-land segmentation, which is realized in two steps in this work. First, an initial mask is generated by thresholding the normalized difference water index (NDWI) using the zero-level of available global elevation data. In the second step, border of the segmentation result is further enhanced via graph-cut algorithm since spectral characteristics of sea close to sea-land border may differ from the ones of deep parts of the sea. The resultant borderline is used for finding line segments that are assumed to represent the man-made harbours. After being merged and eliminated properly, these line segments are used to extract harbour area so that the remaining connected components of the binary mask can be tested for being ship according to their shapes. Test results show that the proposed method is capable of detecting different kinds of ships in a variety of sea states." }, { "instance_id": "R32871xR32677", "comparison_id": "R32871", "paper_id": "R32677", "text": "Segmentation and wake removal of seafaring vessels in optical satellite images This paper aims at the segmentation of seafaring vessels in optical satellite images, which allows an accurate length estimation. In maritime situation awareness, vessel length is an important parameter to classify a vessel. 
The proposed segmentation system consists of robust foreground-background separation, wake detection and ship-wake separation, simultaneous position and profile clustering and a special module for small vessel segmentation. We compared our system with a baseline implementation on 53 vessels that were observed with GeoEye-1. The results show that the relative L1 error in the length estimation is reduced from 3.9 to 0.5, which is an improvement of 87%. We learned that the wake removal is an important element for the accurate segmentation and length estimation of ships." }, { "instance_id": "R32871xR32685", "comparison_id": "R32871", "paper_id": "R32685", "text": "Automatic ship detection from commercial multispectral satellite imagery Commercial multispectral satellite sensors spend much of their time over the oceans. NRL has demonstrated an automatic processing system for finding ships at sea using commercially available multispectral data. To distinguish ships from whitecaps and clouds, a water/cloud clutter subspace is estimated and a continuum fusion derived anomaly detection algorithm is applied. This provides a maritime awareness capability with an acceptable detection rate while maintaining a low rate of false alarms. The system also provides a confidence metric, which can be used to further limit the false alarm rate." }, { "instance_id": "R32871xR32819", "comparison_id": "R32871", "paper_id": "R32819", "text": "Ship detection based on surface fitting modeling for large range background of ocean images To address the seawater background interference problem in ship detection for high-resolution remote sensing images, the characteristics of the seawater background are analyzed in depth in this paper, and it is found that the background is locally consistent but varies continuously over large ranges. On the basis of the above analysis, a Gauss variable surface seawater background model for high-resolution remote sensing images is built, and estimates of the mean surface and variance surface are also given. Then, a novel ship detection method based on sea background statistical modeling is proposed for large-range high-resolution remote sensing images. The experimental results show the feasibility of our proposed method in sea background modeling and target detection for different kinds of high-resolution remote sensing images. Compared with other related methods, the proposed method has a higher recall rate and a lower miss rate." }, { "instance_id": "R32871xR32625", "comparison_id": "R32871", "paper_id": "R32625", "text": "Ship detection in MODIS imagery Understanding the capabilities of satellite sensors with spatial and spectral characteristics similar to those of MODIS for Maritime Domain Awareness (MDA) is of importance because of the upcoming NPOES with a 100-minute revisit time carrying the MODIS-like VIIRS multispectral imaging sensor. This paper presents an experimental study of ship detection using MODIS imagery. We study the use of ship signatures such as contaminant plumes in clouds and the spectral contrast between the ship and the sea background for detection. Results show the potential and challenges of such an approach in MDA." }, { "instance_id": "R32871xR32830", "comparison_id": "R32871", "paper_id": "R32830", "text": "Ship Rotated Bounding Box Space for Ship Extraction From High-Resolution Optical Satellite Images With Complex Backgrounds Extracting ships from complex backgrounds is the bottleneck of ship detection in high-resolution optical satellite images.
In this letter, we propose a nearly closed-form ship rotated bounding box space used for ship detection and design a method to generate a small number of highly potential candidates based on this space. We first analyze the possibility of accurately covering all ships by labeling rotated bounding boxes. Moreover, to reduce search space, we construct a nearly closed-form ship rotated bounding box space. Then, by scoring for each latent candidate in the space using a two-cascaded linear model followed by binary linear programming, we select a small number of highly potential candidates. Moreover, we also propose a fast version of our method. Experiments on our data set validate the effectiveness of our method and the efficiency of its fast version, which achieves a close detection rate in near real time." }, { "instance_id": "R32871xR32571", "comparison_id": "R32871", "paper_id": "R32571", "text": "Measuring Overlap-Rate in Hierarchical Cluster Merging for Image Segmentation and Ship Detection In this paper, we present a definition on the degree of overlap between two clusters and develop an algorithm for calculating the overlap rate. Using this theory, we also develop a new hierarchical cluster merging algorithm for image segmentation and apply it to the ship detection in high resolution image. In our experiment, we compare our method with several existing popular methods. Experimental results demonstrate the effectiveness of the overlap rate measuring method and the new ship detection method." }, { "instance_id": "R32871xR32662", "comparison_id": "R32871", "paper_id": "R32662", "text": "An approach for visual attention based on biquaternion and its application for ship detection in multispectral imagery This paper proposes an approach for visual attention based on biquaternion, and investigates its application for ship detection in multispectral imagery. The proposed approach describes high-dimensional data in the form of biquaternion and utilizes the phase spectrum of biquaternion Fourier transform to generate a required saliency map that can be used for salient target detection. In our method, the multidimensional data is processed as a whole, and the features contained in each spectral band can be extracted effectively. Compared with traditional visual attention approaches, our method has very low computational complexity. Experimental results on simulated and real multispectral remote sensing data have shown that the proposed method has excellent performance in ship detection. Furthermore, our method is robust against white noise and almost meets real-time requirements, which has great potentials in engineering applications." }, { "instance_id": "R32871xR32660", "comparison_id": "R32871", "paper_id": "R32660", "text": "A visual search inspired computational model for ship detection in optical satellite images In this letter, we propose a novel computational model for automatic ship detection in optical satellite images. The model first selects salient candidate regions across entire detection scene by using a bottom-up visual attention mechanism. Then, two complementary types of top-down cues are employed to discriminate the selected ship candidates. Specifically, in addition to the detailed appearance analysis of candidates, a neighborhood similarity-based method is further exploited to characterize their local context interactions. 
Furthermore, the framework of our model is designed in a multiscale and hierarchical manner which provides a plausible approximation to a visual search process and reasonably distributes the computational resources. Experiments over panchromatic SPOT5 data prove the effectiveness and computational efficiency of the proposed model." }, { "instance_id": "R32871xR32733", "comparison_id": "R32871", "paper_id": "R32733", "text": "A remote sensing ship recognition method based on co-training model Aiming at detecting sea targets efficiently, an approach using optical remote sensing data based on a co-training model is proposed. Firstly, feature extraction is realized using size, texture, shape, moment invariant features and ratio codes. Secondly, based on rough set theory, the common discernibility degree is used to select valid recognition features automatically. Finally, a co-training model for classification is introduced. First, two diverse reducts are generated, and then the model employs them to train two base classifiers on labeled data, and makes the two base classifiers teach each other on unlabeled data to boost their performance iteratively. Experimental results show the proposed approach can achieve better performance than K-Nearest Neighbor (KNN), Support Vector Machines (SVM), and traditional hierarchical discriminant regression (HDR)." }, { "instance_id": "R32871xR32575", "comparison_id": "R32871", "paper_id": "R32575", "text": "Enhanced ship detection from overhead imagery In the authors' previous work, a sequence of image-processing algorithms was developed that was suitable for detecting and classifying ships from panchromatic Quickbird electro-optical satellite imagery. Presented in this paper are several new algorithms, which improve the performance and enhance the capabilities of the ship detection software, as well as an overview of how land masking is performed. Specifically, this paper describes the new algorithms for enhanced detection, including the reduction of false detections such as glint and clouds. Improved cloud detection and filtering algorithms are described, and several texture classification algorithms are used to characterize the background statistics of the ocean texture. These detection algorithms employ both cloud and glint removal techniques, which we describe. Results comparing ship detection with and without these false-detection reduction algorithms are provided. These are components of a larger effort to develop a low-cost solution for detecting the presence of ships from readily-available overhead commercial imagery and comparing this information against various open-source ship-registry databases to categorize contacts for follow-on analysis." }, { "instance_id": "R32871xR32739", "comparison_id": "R32871", "paper_id": "R32739", "text": "A new method of inshore ship detection in high-resolution optical remote sensing images Ships are important military targets and means of water transportation, and their detection is of great significance. In the military field, the automatic detection of ships can be used to monitor ship activity in enemy harbors and maritime areas, and then to analyze enemy naval power. In the civilian field, the automatic detection of ships can be used to monitor harbor transportation and illegal behaviors such as illegal fishing, smuggling and piracy.
In recent years, research on ship detection has mainly concentrated on three categories: forward-looking infrared images, downward-looking SAR images, and optical remote sensing images with a sea background. Little research has been done on ship detection in optical remote sensing images with a harbor background, as the gray-scale and texture features of ships are similar to those of the coast in high-resolution optical remote sensing images. In this paper, we put forward an effective harbor ship target detection method. First of all, in order to overcome the shortage of the traditional difference method in obtaining the histogram valley as the segmentation threshold, we propose an iterative histogram valley segmentation method which separates the harbor and ships from the water quite well. Secondly, as landing ships in optical remote sensing images usually lead to discontinuous harbor edges, we use the Hough Transform to extract harbor edges. First, lines are detected by the Hough Transform. Then, lines that have similar slopes are connected into a new line, thus yielding continuous harbor edges. By performing a secondary segmentation on the result of the land-and-sea separation, we eventually obtain the ships. At last, we calculate the aspect ratio of the ROIs, thereby removing those targets which are not ships. The experiment results show that our method has good robustness and can tolerate a certain degree of noise and occlusion." }, { "instance_id": "R32871xR32603", "comparison_id": "R32871", "paper_id": "R32603", "text": "Performance of Landsat TM in ship detection in turbid waters Abstract The visible and near infrared bands of Landsat have limitations for detecting ships in turbid water. The potential of TM middle infrared bands for ship detection has so far not been investigated. This study analyzed the performance of the six Landsat TM visible and infrared bands for detecting dredging ships in the turbid waters of the Poyang Lake, China. A colour composite of principal components analysis (PCA) components 3, 2 and 1 of a TM image was used to randomly select 81 dredging ships. The reflectance contrast between ships and adjacent water was calculated for each ship. A z-score and related p-value were used to assess the ship detection performance of the six Landsat TM bands. The reflectance contrast was related to water turbidity to analyze how water turbidity affected the capability of ship identification. The results revealed that the TM middle infrared bands 5 and 7 better discriminated vessels from surrounding waters than the visible and near infrared bands 1–4. A significant relation between reflectance contrast and water turbidity in bands 1–4 could explain the limitations of bands 1–4, while water turbidity has no significant relation to the reflectance contrast of bands 5 and 7. This explains why bands 5 and 7 detect ships better than bands 1–4." }, { "instance_id": "R32871xR32619", "comparison_id": "R32871", "paper_id": "R32619", "text": "A Novel Hierarchical Method of Ship Detection from Spaceborne Optical Image Based on Shape and Texture Features Ship detection from remote sensing imagery is very important, with a wide array of applications in areas such as fishery management, vessel traffic services, and naval warfare. This paper focuses on the issue of ship detection from spaceborne optical images (SDSOI).
Although the advantages of synthetic-aperture radar (SAR) mean that most current ship detection approaches are based on SAR images, disadvantages of SAR still exist, such as the limited number of SAR sensors, the relatively long revisit cycle, and the relatively low resolution. With the increasing number of optical sensors and the resulting improvement in their continuous coverage, SDSOI can partly overcome the shortcomings of SAR-based approaches and should be investigated to help satisfy the requirements of real-time ship monitoring. In SDSOI, several factors such as clouds, ocean waves, and small islands affect the performance of ship detection. This paper proposes a novel hierarchical complete and operational SDSOI approach based on shape and texture features, which is considered a sequential coarse-to-fine elimination process of false alarms. First, simple shape analysis is adopted to eliminate evident false candidates generated by image segmentation with global and local information and to extract ship candidates with missed alarms as low as possible. Second, a novel semisupervised hierarchical classification approach based on various features is presented to distinguish between ships and nonships to remove most false alarms. Besides a complete and operational SDSOI approach, the other contributions of our approach include the following three aspects: 1) it classifies ship candidates by using their class probability distributions rather than the directly extracted features; 2) the relevant classes are automatically built by the samples' appearances and their feature attributes in a semisupervised mode; and 3) besides commonly used shape and texture features, a new texture operator, i.e., local multiple patterns, is introduced to enhance the representation ability of the feature set in feature extraction. Experimental results of SDSOI on a large image set captured by optical sensors from multiple satellites show that our approach is effective in distinguishing between ships and nonships, and obtains a satisfactory ship detection performance." }, { "instance_id": "R32871xR32802", "comparison_id": "R32871", "paper_id": "R32802", "text": "A real-time on-board ship targets detection method for optical remote sensing satellite Optical remote sensing satellites hold great potential for ship detection. However, real-time detection is challenging due to the relatively low resolution and complicated background. We propose a real-time on-board ship detection method based on statistical analysis and shape identification. First, Gaussian and median filters are used to reduce the periodic and pepper noise generated by the camera sensor system. Then, mathematical morphology processing is employed to remove the background interference and thus enhance the ship targets. Next, statistical analysis is performed on inspected and neighboring areas to distinguish suspected ship targets from pure sea, land, islands or strong waves. Finally, features of the suspected targets, such as the length-width ratio and circularity, are used to detect the targets. The proposed method was implemented on a single FPGA and was validated on real high-orbit optical satellite images. For an 8-bit image as large as 1024×1024, the computation time was less than 10 seconds. The detection rate was above 90% while the false alarm rate was under 5%. The experimental results demonstrate the method's ability to support low-power, miniaturized, real-time on-board ship target detection."
}, { "instance_id": "R32871xR32869", "comparison_id": "R32871", "paper_id": "R32869", "text": "Ship Detection From Optical Satellite Images Based on Saliency Segmentation and Structure-LBP Feature Automatic ship detection from optical satellite imagery is a challenging task due to cluttered scenes and variability in ship sizes. This letter proposes a detection algorithm based on saliency segmentation and the local binary pattern (LBP) descriptor combined with ship structure. First, we present a novel saliency segmentation framework with flexible integration of multiple visual cues to extract candidate regions from different sea surfaces. Then, simple shape analysis is adopted to eliminate obviously false targets. Finally, a structure-LBP feature that characterizes the inherent topology structure of ships is applied to discriminate true ship targets. Experimental results on numerous panchromatic satellite images validate that our proposed scheme outperforms other state-of-the-art methods in terms of both detection time and detection accuracy." }, { "instance_id": "R32871xR32867", "comparison_id": "R32871", "paper_id": "R32867", "text": "A Hierarchical Maritime Target Detection Method for Optical Remote Sensing Imagery Maritime target detection from optical remote sensing images plays an important role in related military and civil applications, but its performance is compromised under complex, uncertain conditions. In this paper, a novel hierarchical ship detection method is proposed to overcome this issue. In the ship detection stage, based on entropy information, we construct a combined saliency model with self-adaptive weights to prescreen ship candidates from across the entire maritime domain. To characterize ship targets and further reduce the false alarms, we introduce a novel and practical descriptor based on gradient features, and this descriptor is robust against clutter introduced by heavy clouds, islands, ship wakes as well as variation in target size. Furthermore, the proposed method is effective for not only color images but also gray images. The experimental results obtained using real optical remote sensing images have demonstrated that the locations and the number of ships can be determined accurately and that the false alarm rate is greatly decreased. A comprehensive comparison is performed between the proposed method and the state-of-the-art methods, which shows that the proposed method achieves higher accuracy and outperforms all the competing methods. Furthermore, the proposed method is robust under various backgrounds of maritime images and has great potential for providing more accurate target detection in engineering applications." }, { "instance_id": "R32871xR32651", "comparison_id": "R32871", "paper_id": "R32651", "text": "A Novel Algorithm for Ship Detection Based on Dynamic Fusion Model of Multi-feature and Support Vector Machine Ship detection is one of the most important applications of target recognition based on optical remote sensing images. In this paper, we propose an uncertain ship target extraction algorithm based on a dynamic fusion model of multiple features and the variance feature of optical remote sensing images. We choose several geometrical features, such as length, width, rectangular ratio, tightness ratio and so on, and use SVM to train and predict the uncertain ship targets extracted by our algorithm automatically.
Experiments show that our algorithm is very robust, and the recognition rate of our algorithm can reach or even exceed 95%, while the false alarm rate is kept at 3%." }, { "instance_id": "R32871xR32836", "comparison_id": "R32871", "paper_id": "R32836", "text": "A ship target automatic recognition method for sub-meter remote sensing images With the development of optical remote sensing, spatial resolution is increasingly high, and more and more optical sensors can achieve sub-meter detection capability, which lays the data foundation for automatic recognition of ship targets. However, mature technology for identifying ship models automatically from remote sensing images is lacking. In this study, an automatic recognition method for ship targets is proposed based on the local invariant feature extraction algorithm SIFT (Scale Invariant Feature Transform), which consists of feature extraction and description, feature matching and target recognition. The model of an unknown target is identified against the target library using the matching difference between targets of the same model and of different models. The experiment results show that this automatic recognition flow is effective in identifying the ship targets of interest based on the target library, and the total correct recognition rate is 92%. This method provides a new flow for automatic model recognition of ship targets, and has considerable potential for wide applications." }, { "instance_id": "R32871xR32861", "comparison_id": "R32871", "paper_id": "R32861", "text": "Ship Detection in Spaceborne Optical Image With SVD Networks Automatic ship detection on spaceborne optical images is a challenging task, which has attracted wide attention due to its extensive potential applications in maritime security and traffic control. Although some optical image ship detection methods have been proposed in recent years, there are still three obstacles in this task: 1) the interference of clouds and strong waves; 2) difficulties in detecting both inshore and offshore ships; and 3) high computational expenses. In this paper, we propose a novel ship detection method called SVD Networks (SVDNet), which is fast, robust, and structurally compact. SVDNet is designed based on the recent popular convolutional neural networks and the singular value decomposition algorithm. It provides a simple but efficient way to adaptively learn features from remote sensing images. We evaluate our method on some spaceborne optical images of GaoFen-1 and Venezuelan Remote Sensing Satellites. The experimental results demonstrate that our method achieves high detection robustness and a desirable time performance in response to all of the above three problems." }, { "instance_id": "R32871xR32606", "comparison_id": "R32871", "paper_id": "R32606", "text": "A Hierarchical Salient-Region Based Algorithm for Ship Detection in Remote Sensing Images In this paper, we present a hierarchical salient-region based algorithm and apply it to automatic ship detection in remote sensing images. The novel framework breaks down the complex problem of scene analysis by hierarchical attention, in a computationally efficient manner, such that only the salient-regions which contain potential targets can be analyzed in detail.
Firstly, a parallel method is adopted for crudely selecting saliency tiles from entire scene by using low-level feature extraction mechanisms, and then the Region-of-Interest (ROI) around each saliency object is taken out from the saliency tiles to pass to the further processing. Shape and texture features are extracted from the multiresource ROIs to describe more details for candidate targets respectively. Finally, Support Vector Machine (SVM) is applied for target validation. Experiments show the proposed algorithm achieves high probabilities of recall and correct detection, as well as the false alarms can be greatly diminished, with a reasonable time-consumption." }, { "instance_id": "R32871xR32699", "comparison_id": "R32871", "paper_id": "R32699", "text": "Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding Abstract Automatic detection of geospatial targets in cluttered scenes is a profound challenge in the field of aerial and satellite image analysis. In this paper, we propose a novel practical framework enabling efficient and simultaneous detection of multi-class geospatial targets in remote sensing images (RSI) by the integration of visual saliency modeling and the discriminative learning of sparse coding. At first, a computational saliency prediction model is built via learning a direct mapping from a variety of visual features to a ground truth set of salient objects in geospatial images manually annotated by experts. The output of this model can predict a small set of target candidate areas. Afterwards, in contrast with typical models that are trained independently for each class of targets, we train a multi-class object detector that can simultaneously localize multiple targets from multiple classes by using discriminative sparse coding. The Fisher discrimination criterion is incorporated into the learning of a dictionary, which leads to a set of discriminative sparse coding coefficients having small within-class scatter and big between-class scatter. Multi-class classification can be therefore achieved by the reconstruction error and discriminative coding coefficients. Finally, the trained multi-class object detector is applied to those target candidate areas instead of the entire image in order to classify them into various categories of target, which can significantly reduce the cost of traditional exhaustive search. Comprehensive evaluations on a satellite RSI database and comparisons with a number of state-of-the-art approaches demonstrate the effectiveness and efficiency of the proposed work." }, { "instance_id": "R32914xR32887", "comparison_id": "R32914", "paper_id": "R32887", "text": "The effects of GAAP regulation and bond market interaction on local government disclosure This study examines the effects of disclosure regulation on municipal managers' incentives to disclose financial report information to the bond market. I compare disclosure levels of municipalities in Michigan, which requires GAAP, with those in Pennsylvania, which has unregulated disclosure. In the absence of disclosure regulation, I find that managers disclose information in response to bond market incentives. Controlling for other incentives to disclose, I find that regulation induces additional disclosures for low-debt governments, and is not binding for high-debt governments. My evidence has two implications. 
First, regulated governments with high debt levels are required to disclose GAAP information that they would have voluntarily disclosed in the absence of regulation. Second, mandating GAAP imposes costs on governments with lower bond market interaction." }, { "instance_id": "R32914xR32893", "comparison_id": "R32914", "paper_id": "R32893", "text": "A comparative empirical examination of extent of disclosure by private and public colleges and universities in the United States Abstract This study examines the annual reports of 100 United States (US) institutions of higher education to determine identifiable and measurable factors associated with extent of disclosure. Each disclosure was weighted by its relative importance to users of college and university financial statements. The measurement construct for extent of disclosure was the ratio of an institution's total disclosure score to its total possible disclosure score. Institution size and public/private status were associated with total extent of disclosure but leverage and audit firm size were not significant. Extent of disclosure of non-financial performance information (service efforts and accomplishments) was associated with high tuition rates and low dependence on tuition revenue and with state auditors as opposed to public accounting firm auditors. The findings are consistent with accountability and public interest tenets (Coy, D., Fischer, M., Gordon, T., 2001. Public accountability: a new paradigm for college and university annual reports. Critical Perspectives on Accounting 12 (1), 1–31). Highly visible institutions, those larger in size or audited by the state, disclosed more information. Moreover, some institutions used a corporate-style report to better promote their interests." }, { "instance_id": "R32914xR32875", "comparison_id": "R32914", "paper_id": "R32875", "text": "Determinants of web site information by Spanish city councils Purpose - The purpose of this research is to analyse the web sites of large Spanish city councils with the objective of assessing the extent of information disseminated on the internet and determining what factors are affecting the observed levels of information disclosure. Design/methodology/approach - The study takes as its reference point the existing literature on the examination of the quality of web sites, in particular the provisions of the Web Quality Model (WQM) and the importance of content as a key variable in determining web site quality. In order to quantify the information on city council web sites, a Disclosure Index has been designed which takes into account the content, navigability and presentation of the web sites. In order to test which variables determine the information provided on the web sites, our investigation is based on studies of voluntary disclosure in the public sector, and six linear regression models have been performed. Findings - The empirical evidence obtained reveals low disclosure levels among Spanish city council web sites. In spite of this, almost 50 per cent of the city councils have reached the "approved" level and of these, around a quarter obtained good marks. Our results show that disclosure levels depend on political competition, public media visibility and the access to technology and educational levels of the citizens.
Practical implications - The strategy of communication on the internet by local Spanish authorities is limited in general to an ornamental web presence but one that does not respond efficiently to the requirements of the digital society. During the coming years, local Spanish politicians will have to strive to take advantage of the opportunities that the internet offers to increase both the relational and informational capacity of municipal web sites as well as the digital information transparency of their public management.Originality/value - The internet is a potent channel of communication that is modifying the way in which people access and relate to information and each other. The public sector is not unaware of these changes and is incorporating itself gradually into the new network society. This study systematises the analysis of local administration web sites, showing the lack of digital transparency, and orients politicians in the direction to follow in order to introduce improvements in their electronic relationships with the public." }, { "instance_id": "R32914xR32911", "comparison_id": "R32914", "paper_id": "R32911", "text": "Economic Incentives and the Choice of State Government Accounting Practices Several recent studies have examined possible economic determinants of accounting policy choices of local government entities. For example, Zimmerman [1977] and Maher and Keller [1978] proposed economic reasons for the current (diverse) state of municipal accounting and financial reporting, and Evans and Patton [1983] identified economic incentives leading to participation in the Municipal Finance Officers Association Certificate of Conformance program. A recent survey by the Council on State Governments (CSG) [1980], in its summary of major accounting and reporting practices for individual state governments, characterizes both the general status of state government accounting and the diversity of accounting practices observed across states. Using the data reported by the CSG and some of the economic arguments offered in earlier research, this study provides preliminary evidence on the association between economic factors and cross-sectional variations in accounting practices of state governments. The specific evidence presented is characteristic of states that report quantitatively" }, { "instance_id": "R32914xR32895", "comparison_id": "R32914", "paper_id": "R32895", "text": "Accountability Disclosures by Queensland Local Government Councils: 1997\u20131999 The annual report is promoted and regarded as the primary medium of accountability for government agencies. In Australia, anecdotal evidence suggests the quality of annual reports is variable. However, there is scant empirical evidence on the quality of reports. The aim of this research is to gauge the quality of annual reporting by local governments in Queensland, and to investigate the factors that may contribute to that level of quality. The results of the study indicate that although the quality of reporting by local governments has improved over time, councils generally do not report information on aspects of corporate governance, remuneration of executive staff, personnel, occupational health and safety, equal opportunity policies, and performance information. In addition, the results indicate there is a correlation between the size of the local government and the quality of reporting but the quality of disclosures is not correlated with the timeliness of reports. 
The study will be of interest to the accounting profession, public sector regulators who are responsible for the integrity of the accountability mechanisms and public sector accounting practitioners. It will form the basis for future longitudinal research, which will map changes in the quality of local government annual reporting." }, { "instance_id": "R32914xR32877", "comparison_id": "R32914", "paper_id": "R32877", "text": "Communicating performance: the extent and effectiveness of performance reporting by U.S. colleges and universities Performance measures have long been a topic of interest in higher education although no consensus on the best way to measure performance has been achieved. This paper examines the extent and effectiveness of service efforts and accomplishment reporting by public and not-for-profit U.S. colleges and universities using survey data provided by the National Association of College and University Business Officers. Effectiveness is evaluated using the Government Accounting Standards Board (GASB) suggested criteria. Regression analysis suggests an association between the extent of disclosure and size, leverage, level of education provided, and regional accreditation agency. Private institutions rate themselves as more effective communicators. Effectiveness of communication is also associated with the extent of disclosure, level of education provided and accreditation region." }, { "instance_id": "R32940xR32926", "comparison_id": "R32940", "paper_id": "R32926", "text": "Error in Byline in: Long-term and Perioperative Corticosteroids in Anastomotic Leakage: A Prospective Study of 259 Left-Sided Colorectal Anastomoses prediction model based on the anatomic injury scale. Ann Surg. 2008;247(6): 1041-1048. 13. van Buuren S, Boshuizen HC, Knook DL. Multiple imputation of missing blood pressure covariates in survival analysis. Stat Med. 1999;18(6):681-694. 14. White H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica. 1980;48(4):817-838. 15. Davidson GH, Hamlat CA, Rivara FP, Koepsell TD, Jurkovich GJ, Arbabi S. Longterm survival of adult trauma patients. JAMA. 2011;305(10):1001-1007. 16. Dutton RP, Stansbury LG, Leone S, Kramer E, Hess JR, Scalea TM. Trauma mortality in mature trauma systems: are we doing better? an analysis of trauma mortality patterns, 1997-2008. J Trauma. 2010;69(3):620-626. 17. Nathens AB, Jurkovich GJ, Cummings P, Rivara FP, Maier RV. The effect of organized systems of trauma care on motor vehicle crash mortality. JAMA. 2000; 283(15):1990-1994. 18. MacKenzie EJ, Rivara FP, Jurkovich GJ, et al. A national evaluation of the effect of trauma-center care on mortality. N Engl J Med. 2006;354(4):366-378. 19. Acute Respiratory Distress Syndrome Network. Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome. N Engl J Med. 2000;342(18):1301-1308. 20. Moore FA, Feliciano DV, Andrassy RJ, et al. Early enteral feeding, compared with parenteral, reduces postoperative septic complications: the results of a meta-analysis. Ann Surg. 1992;216(2):172-183. 21. Rotondo MF, Zonies DH. The damage control sequence and underlying logic. Surg Clin North Am. 1997;77(4):761-777. 22. Holcomb JB, Jenkins D, Rhee P, et al. Damage control resuscitation: directly addressing the early coagulopathy of trauma. J Trauma. 2007;62(2):307-310. 23. Nathens AB, Jurkovich GJ, MacKenzie EJ, Rivara FP. 
A resource-based assessment of trauma care in the United States. J Trauma. 2004;56(1):173-178. 24. Hsia RY, Shen YC. Rising closures of hospital trauma centers disproportionately burden vulnerable populations. Health Aff (Millwood). 2011;30(10):1912-1920. 25. Mullins RJ, Mann NC, Hedges JR, et al. Adequacy of hospital discharge status as a measure of outcome among injured patients. JAMA. 1998;279(21):1727-1731. 26. Shafi S, Friese R, Gentilello LM. Moving beyond personnel and process: a case for incorporating outcome measures in the trauma center designation process. Arch Surg. 2008;143(2):115-120. 27. Glance LG, Dick AW, Osler TM, Meredith W, Mukamel DB. The association between cost and quality in trauma: is greater spending associated with higher-quality care? Ann Surg. 2010;252(2):217-222. 28. Eddy DM, Billings J. The quality of medical evidence: implications for quality of care. Health Aff (Millwood). 1988;7(1):19-32. 29. Glance LG, Dick AW, Mukamel DB, Osler TM. Association between trauma quality indicators and outcomes for injured patients. Arch Surg. 2012;147(4):308-315." }, { "instance_id": "R32940xR32934", "comparison_id": "R32940", "paper_id": "R32934", "text": "Biologic treatment or immunomodulation is not associated with postoperative anastomotic complications in abdominal surgery for Crohn's disease Abstract Objectives. There are concerns that biologic treatments or immunomodulation may negatively influence anastomotic healing. This study investigates the relationship between these treatments and anastomotic complications after surgery for Crohn's disease. Patients and methods. Retrospective study on 417 operations for Crohn's disease performed at four Danish hospitals in 2000–2007. Thirty-two patients were preoperatively treated with biologics and 166 were on immunomodulation. In total, 154 were treated with corticosteroids of which 66 had prednisolone 20 mg or more. Results. Anastomotic complications occurred at 13% of the operations. There was no difference in patients on biologic treatment (9% vs. 12% (p = 0.581)) or in patients on immunomodulation (10% vs. 14% (p = 0.263)). Patients on 20 mg prednisolone or more had more anastomotic complications (20% vs. 11% (p = 0.04)). Anastomotic complications were more frequent after a colo-colic anastomosis than after an entero-enteric or entero-colic one (33% vs. 12% (p = 0.013)). Patients with anastomotic complications were older (40 years vs. 35 years (p = 0.014)), had longer disease duration (7.5 years vs. 4 years (p = 0.04)), longer operation time (155 min vs. 115 min (p = 0.018)) and more operative bleeding (200 ml vs. 130 ml (p = 0.029)). Multivariate analysis revealed preoperative treatment with prednisolone 20 mg or more, operation time and a colo-colic anastomosis as negative predictors of anastomotic complications. Conclusions. Preoperative biologic treatment or immunomodulation had no influence on anastomotic complications. The study confirms previous findings of corticosteroids and a colo-colic anastomosis as negative predictors and also that surgical complexity, as expressed by bleeding and operation time, may contribute to anastomotic complications." }, { "instance_id": "R32940xR32930", "comparison_id": "R32940", "paper_id": "R32930", "text": "Effect of systemic steroids on ileal pouch-anal anastomosis in patients with ulcerative colitis BACKGROUND: Long-term steroid therapy predisposes to postsurgical complications, especially in patients with inflammatory bowel disease.
PURPOSE: This study was undertaken to determine the incidence of early septic complications after ileal pouch-anal anastomosis (IPAA) in patients who are undergoing prolonged steroid therapy. METHODS: We reviewed charts of 692 patients undergoing restorative proctocolectomy and IPAA to treat ulcerative colitis. Incidence of early (within 30 days) septic complications and sepsis-related reoperations, in patients who were having high-dose (>20 mg of prednisone per day) and low-dose steroid therapy (<20 mg of prednisone per day) for more than one month before surgery, was compared with patients who were not receiving steroid therapy. Follow-up included an annual questionnaire and physical examination. RESULTS: Patients without steroid dose data recorded were excluded (n=21). Of the 671 remaining patients, 310 received no steroids, 169 received low-dose steroids, and 192 received high-dose steroids. These three groups were similar in gender composition, age at surgery, types of anastomosis (stapled or handsewn), and incidence of diabetes mellitus, peripheral vascular disease, and obesity. Early septic complications were found in 18 (6 percent), 14 (8 percent), and 12 (6 percent) patients without steroid therapy, those having low-dose steroid therapy, and those having high-dose steroid therapy (P=0.57), respectively. Sepsis-related reoperation rate (P=0.73) and number of sepsis-related pouch excisions (P=0.79) did not differ between groups. In patients undergoing IPAA without ileostomy, early septic complications were found in one (3.8 percent), two (20 percent), and five (50 percent) patients without steroid treatment, low-dose steroid therapy, and high-dose steroid therapy (P=0.004), respectively. CONCLUSION: In patients who are undergoing IPAA with diversion for ulcerative colitis, prolonged systemic steroid therapy before surgery is not associated with increased septic complications." }, { "instance_id": "R32940xR32924", "comparison_id": "R32940", "paper_id": "R32924", "text": "Effect of high-dose steroids on anastomotic complications after proctocolectomy with ileal pouch-anal anastomosis This review was designed to determine whether “high-dose” steroid therapy (≥20 mg prednisone/day) increases the likelihood of anastomotic complications after restorative proctocolectomy with ileal pouch-anal anastomosis (IPAA). The hospital records of 100 patients undergoing proctocolectomy with IPAA were reviewed. Patient characteristics were analyzed to determine what factors were associated with higher rates of anastomosis-related complications. Seventy-one of our patients were given diverting ileostomies, whereas the remaining 29 underwent a single-stage procedure. Fifty-four percent of the patients in our review were taking steroids preoperatively, 39 of whom were on high-dose therapy. The overall anastomosis-related complication rate was 14%. There was no significant difference in complication rates with respect to age, steroid use, steroid dose, use of a diverting ileostomy, type of anastomosis, duration of disease, or presence of backwash ileitis. A trend toward higher leakage rates was found in patients undergoing single-stage procedures (10.3% vs. 2.8%, P = 0.14) as well as in patients undergoing single-stage procedures on high-dose steroids (22% vs. 5.0%, P = 0.22). Nevertheless, neither of these trends was found to be statistically significant, which was likely influenced by the small sample size.
Our data suggest that there may be an increase in anastomotic leakage rates in patients on high-dose steroids undergoing a single-stage proctocolectomy with IPAA. Nevertheless, our rate was not as high as the rates seen by other investigators and did not reach statistical significance. During preoperative counseling, patients on high-dose steroids should be informed of this uncertain but real risk of anastomotic leakage." }, { "instance_id": "R33008xR32998", "comparison_id": "R33008", "paper_id": "R32998", "text": "Cytogenetic studies in untreated Hodgkin's disease Abstract Very little data have been published on cytogenetic abnormalities in Hodgkin's disease (HD) and their correlation with clinicopathologic features are scanty. We have performed chromosomal analysis of lymph nodes from 60 previously untreated HD patients and obtained analyzable metaphases in 49 patients (82%). Chromosomal abnormalities were found in 33 patients (55%) but only 31 karyotypes could be, at least partially, described. Twenty-nine cases showed numerical abnormalities that involved all chromosomes with the exception of chromosomes 13 and Y, which were gained less frequently and lost more frequently than other chromosomes. Structural abnormalities were found in 30 cases, involving all chromosomes except Y. Chromosomal regions 12p11\u201313, 13p11\u2013 13, 3q26\u201328, 6q15\u201316, and 7q31\u201335 were rearranged in more than 20% of the analyzable cases. No correlation was found between cytogenetic findings and initial characteristics. When compared with diffuse B-cell lymphomas, defects in regions 2p25 (P less than .01), 12p11\u201313 (P less than .01), 13p11\u201313 (P less than .01), 14p11 (P less than .01), 15p11\u2013 13 (P less than .02), and 20q12\u201313 (P less than .05) were more frequent in HD. When compared with T-cell lymphomas, only defects in regions 12p12\u201313 (P less than .01) and 13p11\u201313 (P less than .01) were more frequent in HD. Failure to obtain analyzable metaphases was correlated with stage IV of the disease (P less than .05) and with a poor survival (P less than .01), but cytogenetic results showed no other correlation with clinical outcome. We conclude that molecular studies in HD should be focused on the short arms of chromosomes 12 and 13. Determination of the clinical significance of cytogenetic findings will require a larger number of patients and a longer follow-up period." }, { "instance_id": "R33008xR32969", "comparison_id": "R33008", "paper_id": "R32969", "text": "Cyto- genetic studies at diagnosis in polycythemia vera: clinical and JAK2V617F allele burden correlates Abstract Background: Previous cytogenetic studies in polycythemia vera (PV) have included a relatively small number of patients (\u201cn\u201d ranging 10\u201364). In the current study (n=137), we describe cytogenetic findings at presentation and examine their relationship to clinical and laboratory features, including bone marrow JAK2V617F allele burden. Methods: The study consisted of a consecutive group of patients with PV who fulfilled the World Health Organization (WHO) diagnostic criteria and in whom bone marrow biopsy and cytogenetic studies were performed at diagnosis. Results I: cytogenetic details At diagnosis: A total of 137 patients (median age, 64 years; 49% females) were studied at diagnosis and had adequate metaphases for interpretation. 
Cytogenetics were normal in 117 patients (85%); the remaining patients displayed either a sole -Y abnormality in 5 patients (7% of the male patients) or other chromosomal abnormalities in 15 (11%). The latter included trisomy 8 in five patients, trisomy 9 in three patients, two patients each with del(13q), del(20q), and abnormalities of chromosome 1, and one patient each with del(3)(p13p21), dup(13)(q12q14), and del(11)(q21). At follow-up: Repeat cytogenetic studies while still in the chronic phase of the disease were performed in 19 patients at a median of 60 months (range, 8–198) from diagnosis. Of these, 4 had acquired new cytogenetic clones including 3 with normal cytogenetics at time of initial PV diagnosis. The new abnormalities included del(20q), del(5q), del(1p), chromosome 1 abnormality, and inv(3)(q21q26.2). At time of disease transformation: Leukemic transformation was documented in 3 patients of whom cytogenetic information at the time was available in 2 patients; both patients had normal results at time of initial PV diagnosis and complex cytogenetic abnormalities at time of leukemic transformation. In contrast, among 6 patients with available cytogenetic information at time of fibrotic transformation, the results were unchanged from those obtained at time of diagnosis in 5 patients. ii) Correlation between cytogenetics at diagnosis and JAK2V617F allele burden: Allele-specific, quantitative PCR analysis for JAK2V617F was performed in 71 patients using genomic DNA from archived bone marrow obtained at the time of the initial cytogenetic studies. JAK2V617F mutation was detected in 64 of the 71 (90%) patients; median mutant allele burden was 16% (range 3–80%) without significant difference among the different cytogenetic groups: normal vs. –Y vs. other cytogenetic abnormalities (p=0.72). iii) Clinical correlates and prognostic relevance of cytogenetic findings at diagnosis: Among several parameters studied for significant correlations with cytogenetic findings at diagnosis, an association was evident only for age (p=0.02); all –Y abnormalities (n=5) as well as 13 of the 15 (87%) other cytogenetic abnormalities occurred in patients ≥ 60 years of age. Stated another way, the incidence of abnormal cytogenetics (other than -Y) was 4% for patients younger than age 60 years and 15% otherwise. The presence of abnormal cytogenetics at diagnosis had no significant impact on either overall or leukemia-free survival. Conclusions: Abnormal cytogenetic findings at diagnosis are infrequent in PV, especially in patients below age 60 years. Furthermore, their clinical relevance is limited and there is no significant correlation with bone marrow JAK2V617F allele burden." }, { "instance_id": "R33008xR33006", "comparison_id": "R33008", "paper_id": "R33006", "text": "In B-cell chronic lymphocytic leukaemia chromosome 17 abnormalities and not trisomy 12 are the single most important cytogenetic abnormalities for the prognosis: a cytogenetic and immunophenotypic study of 480 unselected newly diagnosed patients Of 560 consecutive, newly diagnosed untreated patients with B CLL submitted for chromosome study, G-banded karyotypes could be obtained in 480 cases (86%). Of these, 345 (72%) had normal karyotypes and 135 (28%) had clonal chromosome abnormalities: trisomy 12 (+12) was found in 40 cases, 20 as +12 alone (+12single), 20 as +12 with additional abnormalities (+12complex). Other frequent findings included abnormalities of 14q, chromosome 17, 13q and 6q.
The immunophenotype was typical for CLL in 358 patients (CD5+, Slg(weak), mainly FMC7-) and atypical for CLL in 122 patients (25%) (CD5-, or Slg(strong) or FMC7+). Chromosome abnormalities were found significantly more often in patients with atypical (48%) than in patients with typical CLL phenotype (22%) (P < 0.00005). Also +12complex, 14q+, del6q, and abnormalities of chromosome 17 were significantly more frequent in patients with atypical CLL phenotype, whereas +12single was found equally often in patients with typical and atypical CLL phenotype. The cytomorphology of most of the +12 patients was that of classical CLL irrespective of phenotype. In univariate survival analysis the following cytogenetic findings were significantly correlated to a poor prognosis: chromosome 17 abnormalities, 14q+, an abnormal karyotype, +12complex, more than one cytogenetic event, and the relative number of abnormal mitoses. In multivariate survival analysis chromosome 17 abnormalities were the only cytogenetic findings with independent prognostic value irrespective of immunophenotype. We conclude that in patients with typical CLL immunophenotype, chromosome abnormalities are somewhat less frequent at the time of diagnosis than hitherto believed. +12single is compatible with classical CLL, and has no prognostic influence whereas chromosome 17 abnormalities signify a poor prognosis. In patients with an atypical CLL immunophenotype, chromosome abnormalities including +12complex, 14q+, del 6q and chromosome 17 are found in about 50% of the patients, and in particular chromosome 17 abnormalities suggest a poor prognosis." }, { "instance_id": "R33008xR32964", "comparison_id": "R33008", "paper_id": "R32964", "text": "Conventional cyto- genetics in myelofibrosis: literature review and discussion The clinical phenotype of myelofibrosis (MF) is recognized either de novo (primary) or in the setting of polycythemia vera (post\u2010PV) or essential thrombocythemia (post\u2010ET). Approximately one\u2010third of patients with primary MF (PMF) present with cytogenetic abnormalities; the most frequent are del(20q), del(13q), trisomy 8 and 9, and abnormalities of chromosome 1 including duplication 1q. Other less frequent lesions include \u22127/del(7q), del(5q), del(12p), +21 and der(6)t(1;6)(q21;p21.3). In general, cytogenetic abnormalities are qualitatively similar among PMF, post\u2010ET MF and post\u2010PV MF although their individual frequencies may differ. Based on prognostic effect, cytogenetic findings in MF are classified as either \u2018favorable\u2019 or \u2018unfavorable\u2019. The former include normal karyotype or isolated del(20q) or del(13q) and the latter all other abnormalities. Unfavorable cytogenetic profile in both PMF and post\u2010PV/ET MF confers an independent adverse effect on survival; it is also associated with higher JAK2V617F mutational frequency. In addition to their prognostic value, cytogenetic studies in MF ensure diagnostic exclusion of other myeloid neoplasms that are sometimes associated with bone marrow fibrosis (e.g. BCR\u2010ABL1\u2010positive or PDGFRB\u2010rearranged) and also assist in specific treatment selection (e.g. lenalidomide therapy is active in MF associated with del(5q)." }, { "instance_id": "R33091xR33017", "comparison_id": "R33091", "paper_id": "R33017", "text": "Translocation 1;7 in preleukemic states A translocation t(1;7) interpreted as t(1;7)(p11;p11) was first reported by Scheres et al. in eight patients with various hematologic disorders. 
The karyotype of the abnormal cells was trisomic for 1q and monosomic for 7q. Those investigators reported having found four other cases in the literature. We report herein studies of two patients with the same t(1;7)." }, { "instance_id": "R33091xR33086", "comparison_id": "R33091", "paper_id": "R33086", "text": "Prognostic relevance of cytogenetics determined by fluorescent in situ hybridization in patients having myelofibrosis with myeloid metaplasia In chronic myelofibrosis (MF), distinct recurrent cytogenetic aberrations have been identified but their true prognostic relevance remains uncertain. In this disease, cytogenetic studies as assessed by conventional metaphase karyotyping are limited due to the inherent difficulties in obtaining adequate bone marrow aspirates and the low proliferative capacity of the clonal cells. Interphase fluorescent in situ hybridization (FISH) can partly overcome these limitations and increase the sensitivity of cytogenetic assessment in MF." }, { "instance_id": "R33091xR33037", "comparison_id": "R33091", "paper_id": "R33037", "text": "Partial trisomy 1q in idiopathic myelofibrosis Three cases of idiopathic myelofibrosis with partial trisomy of the long arm of chromosome 1 are described. Partial trisomy 1q was the only karyotypic change detectable in unstimulated peripheral blood cell cultures of one and bone-marrow cultures of two patients at diagnosis. The extra segment from chromosome 1 was located on different karyotype sites, i.e. 1qter, 1p34 and 6p22-23; 1q21-32 was the shortest overlapping region and the only trisomic segment in one of the three patients. These findings suggest that partial trisomy 1q is a primary chromosome aberration in myelofibrosis relevant in the pathogenesis of this hematologic disorder." }, { "instance_id": "R33091xR33013", "comparison_id": "R33091", "paper_id": "R33013", "text": "An identical translocation between chromosome 1 and 7 in three patients with myelofibrosis and myeloid metaplasia Summary. An identical chromosome abnormality was observed in three unrelated patients with myelofibrosis and myeloid metaplasia, two of the patients showing a history of polycythaemia vera (PV) before development of the myelofibrosis. Unstimulated peripheral blood cultures showed a translocation between chromosomes 1 and 7 replacing a homologue of pair 7. It was identified by G\u2010 and C\u2010banding as t(1;7)(7pter\u21927p11::1p1?\u21921qter)." }, { "instance_id": "R33091xR33064", "comparison_id": "R33091", "paper_id": "R33064", "text": "Acute myeloid leukemia and myelodysplastic syndromes following essential thrombocythemia treated with hydroxyurea: high proportion of cases with 17p deletion Treatment with alkylating agents or radiophosphorus (32P) has been shown to carry a certain leukemogenic risk in myeloproliferative disorders (MPDs), including essential thrombocythemia (ET). The leukemogenic risk associated with treatment with hydroxyurea in ET, on the other hand, is generally considered to be relatively low. Between 1970 and 1991, we diagnosed ET in 357 patients, who were monitored until 1996. One or several therapeutic agents had been administered to 326 patients, including hydroxyurea (HU) in 251 (as only treatment in 201), pipobroman in 43, busulfan in 41, and 32P in 40. With a median follow-up duration of 98 months, 17 patients (4.5%) had progressed to acute myeloid leukemia (AML; six cases) or myelodysplastic syndrome (MDS; 11 cases).
Fourteen of these patients had received HU, as sole treatment in seven cases, or preceded or followed by other treatment in seven cases, mainly pipobroman (five cases). The remaining three leukemic progressions occurred in patients treated with 32P (two cases) and busulfan (one case). The incidence of AML and MDS after treatment, using 32P alone and 32P with other agents, busulfan alone and with other agents, HU alone and with other agents, and pipobroman alone and with other agents was 7% and 9%, 3% and 17%, 3.5% and 14%, and 0% and 16%, respectively. Thirteen of 17 patients who progressed to AML or MDS had successful cytogenetic analysis. Seven of them had rearrangements of chromosome 17 (unbalanced translocation, partial or complete deletion, isochromosome 17q) that resulted in 17p deletion. They also had a typical form of dysgranulopoiesis combining pseudo Pelger-Huët hypolobulation and vacuoles in neutrophils, and p53 mutation, as previously described in AML and MDS with 17p deletion. Those seven patients had all received HU, as the only therapeutic agent in three, and followed by pipobroman in three. The three patients who had received no HU and progressed to AML or MDS had no 17p deletion. A review of the literature found cytogenetic analysis in 35 cases of AML and MDS occurring after ET, 11 of whom had been treated with HU alone. Five of 35 patients had rearrangements that resulted in 17p deletion. Four of them had been treated with HU alone. These results show that treatment with HU alone is associated with a leukemic risk of approximately 3.5%. A high proportion of AML and MDS occurring in ET treated with HU (alone or possibly followed by pipobroman) have morphologic, cytogenetic, and molecular characteristics of the 17p\u2212 syndrome. These findings suggest that widespread and prolonged use of HU in ET may have to be reconsidered in some situations, such as asymptomatic ET." }, { "instance_id": "R33091xR33026", "comparison_id": "R33091", "paper_id": "R33026", "text": "Cytogenetic studies in twelve patients with primary myelofibrosis and myeloid metaplasia Chromosome studies on bone marrow and/or peripheral blood cells without phytohemagglutinin were performed on 12 patients with primary myelofibrosis with myeloid metaplasia (PMMM) between 1980 and 1984. Abnormal clones were found in six patients (50%). In five cases the abnormal clone involved the long arm of chromosome #7, two of which also had partial trisomy of chromosome #1 and trisomy of 9. Additional abnormalities involving chromosomes #3, #5, #11, #13, #15, and #21 were each found once. Review of the literature showed few studies on the cytogenetics of PMMM. No specific chromosomal pattern can be established; however, abnormalities described are nonrandom." }, { "instance_id": "R33581xR33280", "comparison_id": "R33581", "paper_id": "R33280", "text": "Identifying the factors influencing the performance of reverse supply chains (RSC) This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structural equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model. We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC.
Next, we use both questionnaire and web\u2010based methods to survey five companies which have RSC operation experience in China and Korea. Using the 168 responses, we applied measurement model testing and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research." }, { "instance_id": "R33581xR33358", "comparison_id": "R33581", "paper_id": "R33358", "text": "Requirements for forming an \u2018e-supply chain\u2019 In today's digital economy, web-based integration of the enterprises to form an e-supply chain is a critical weapon for orchestrating the whole supply chain towards competitiveness. This paper intends to discuss the requirements for forming an e-supply chain from different perspectives, such as integration with the legacy systems, timing and prior presence of ERP (enterprise resources planning) systems, BPR (business process re-engineering) needs of internal and external business processes and business intelligence/decision support needs. A look at technical knowledge and structure to construct an e-supply chain is provided. Challenges involved in forming an e-supply chain are also briefly mentioned as a separate section in this paper. During the study, requirements are gathered by making a review of recent literature." }, { "instance_id": "R33581xR33461", "comparison_id": "R33581", "paper_id": "R33461", "text": "Supply chain management: success factors from the Malaysian manufacturer's perspective The purpose of this paper is to shed light on the critical success factors that lead to high supply chain performance outcomes in a Malaysian manufacturing company. The critical success factors consist of relationship with customer and supplier, information and communication technology (ICT), material flow management, corporate culture and performance measurement. A questionnaire was the main instrument for the study and it was distributed to 84 staff from departments of purchasing, planning, logistics and operation. Data analysis was conducted by employing descriptive analysis (mean and standard deviation), reliability analysis, Pearson correlation analysis and multiple regression. The findings show that relationships exist between relationship with customer and supplier, ICT, material flow management, performance measurement and supply chain management (SCM) performance, but not for corporate culture. Forming a good customer and supplier relationship is the main predictor of SCM performance, followed by performance measurement, material flow management and ICT. It is recommended that future studies determine additional success factors that are pertinent to firms\u2019 current SCM strategies and directions, competitive advantages and missions. Logic suggests that further studies include more geographical data coverage, other natures of business and additional research instruments. Key words: Supply chain management, critical success factor."
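As a quick illustration of the correlation-plus-regression analysis described in the Malaysian manufacturer entry above, the following Python sketch regresses a supply chain performance score on a few success-factor ratings. The factor names, sample size and all data are synthetic stand-ins invented for the example; nothing here is taken from the surveyed papers.

import numpy as np

rng = np.random.default_rng(0)
n = 84  # illustrative sample size only
factors = ["customer_supplier_relationship", "ict", "material_flow", "performance_measurement"]
X = rng.normal(3.5, 0.8, size=(n, len(factors)))      # hypothetical Likert-style factor scores
true_w = np.array([0.5, 0.2, 0.25, 0.3])              # invented effect sizes for the simulation
y = 1.0 + X @ true_w + rng.normal(0.0, 0.5, n)        # synthetic SCM performance score

# Pearson correlation of each factor with performance
for name, col in zip(factors, X.T):
    r = np.corrcoef(col, y)[0, 1]
    print(f"r({name}, performance) = {r:.2f}")

# Multiple regression: ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", round(coef[0], 3))
print({name: round(c, 3) for name, c in zip(factors, coef[1:])})

Ordinary least squares is enough at this scale; a full SEM analysis, as used in the reverse supply chain entry, would additionally model latent constructs and measurement error.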
}, { "instance_id": "R33581xR33153", "comparison_id": "R33581", "paper_id": "R33153", "text": "Strategic Alliance Success Factors SUMMARY There is recognition that competition is shifting from a \u201cfirm versus firm perspective\u201d to a \u201csupply chain versus supply chain perspective.\u201d In response to this shift, firms seeking competitive advantage are participating in cooperative supply chain arrangements, such as strategic alliances, which combine their individual strengths and unique resources. Buyer-supplier sourcing relationships are a primary focus of alliance improvement efforts. While interest in such arrangements remains strong, it is well accepted that creating, developing, and maintaining a successful alliance is a very daunting task. This research addresses several critical issues regarding that challenge. First, what factors contribute most to long-term alliance success? Second, what conditions define the presence of those success factors? Third, do buyers and suppliers in an alliance agree on those success factors and defining conditions? The research results demonstrate a remarkably consistent perspective among alliance partners regarding key success factors, despite the acknowledgment that the resultant success is based on a relatively even, but not equal, exchange of benefits and resources. Additionally, within an alliance's intended \u201cwin-win\u201d foundation, suppliers must recognize their innate dependence on customers. Finally, significant opportunities for improvement exist with respect to alliance goal clarification, communication, and performance evaluation." }, { "instance_id": "R33581xR33118", "comparison_id": "R33581", "paper_id": "R33118", "text": "The elements of a successful logistics partnership Describes the elements of a successful logistics partnership. Looks at what can cause failure and questions whether the benefits of a logistics partnership are worth the effort required. Concludes that strategic alliances are increasingly becoming a matter of survival, not merely a matter of competitive advantage. Refers to the example of the long\u2010term relationship between Kimberly\u2010Clark Corporation and Interamerican group\u2019s Tricor Warehousing, Inc." }, { "instance_id": "R33581xR33223", "comparison_id": "R33581", "paper_id": "R33223", "text": "Successful use of e\u2010procurement in supply chains Purpose \u2013 Electronic support of internal supply chains for direct or production goods has been a major element during the implementation of enterprise resource planning (ERP) systems that has taken place since the late 1980s. However, supply chains to indirect material suppliers were not usually included due to low transaction volumes, low product values and low strategic importance of these goods. Dedicated information systems for streamlining indirect goods supply chains have emerged since the late 1990s and subsequently have faced a broad diffusion in practice. The concept of these e\u2010procurement solutions has also been described broadly in the literature. However, studies on how companies use these e\u2010procurement solutions and what factors are critical to their implementation are only emerging. This research aims to explore the introduction of e\u2010procurement systems and their contribution to the management of indirect goods supply chain.Design/methodology/approach \u2013 Chooses a two\u2010part qualitative approac..." 
}, { "instance_id": "R33581xR33395", "comparison_id": "R33581", "paper_id": "R33395", "text": "An analysis of the Cyclone Larry emergency relief chain: Some key success factors The emergency relief chain and overall relief effort of Cyclone Larry in Australia is generally agreed to be one of the more effective in the history of emergency cyclone response in northern Australia. This paper identifies and analyses some key success factors in the emergency relief chain and the overall emergency relief effort of this cyclone disaster. The findings in this paper are based on document analysis and semi-structured discussions with disaster managers relating to the emergency relief chain and cyclone relief management processes undertaken by disaster management agencies. The paper contributes to a growing area of knowledge about emergency relief chains and the management of cyclone response in the context of a developed, Western Asia-Pacific country by improving managerial understanding of the critical ingredients in designing and executing an effective emergency relief chain and overall cyclone response operation." }, { "instance_id": "R33581xR33213", "comparison_id": "R33581", "paper_id": "R33213", "text": "Virtual supply-chain management In global business competition, companies believe greater transparency in supply-chain operations and collaboration is very important for success. Transparency brings accountability and responsibility. This openness in the supply-chain allows companies to see how their suppliers are performing, from their sourcing of raw materials to their delivery to the retail outlet. Achieving greater transparency in the supply chain requires the development of comprehensive e-Logistics tools, which provide all players with open communication and shared information in every stage of the order-to-delivery process. Supply-chain transparency in ordering, inventory and transportation is a prerequisite for optimization and is critical for making business decisions. In this paper, the experiences of a virtual supply-chain (VSC) company are discussed with reference to the strategies, methods and technologies of its supply-chain. The supply-chain aims for improved customer satisfaction and hence for overall competitiveness in a global market. This discussion will be useful for other companies intending to emulate some of the critical success factors in VSC management." }, { "instance_id": "R33581xR33109", "comparison_id": "R33581", "paper_id": "R33109", "text": "Benchmarking logistics performance with an application of the analytic hierarchy process In the increasingly turbulent environment, logistics strategic management has become a necessity for achieving competitive advantage. The use of benchmarking is widening as a technique for supporting logistics strategic management. Benchmarking can be described as the search for the best practices leading to a superior performance of a company. In this paper, we demonstrate how the analytic hierarchy process (AHP) can be used for supporting a generic logistics benchmarking process. First, the customers of a company are interviewed in order to define the logistic critical success factors and to determine their importance. The performance levels of the companies to be benchmarked are then evaluated with regard to each success factor. Second, the factors enabling the companies to achieve superior logistics performance are determined and prioritized with respect to each success factor. 
Third, the strengths, weaknesses, and problems of the company conducting the benchmarking process are analyzed and prioritized with respect to each enabler. Then the potential developmental actions for achieving superior logistics performance are defined and prioritized. In addition to supporting the three steps mentioned above, the AHP-based approach forms the basic framework for a continuous logistics benchmarking process." }, { "instance_id": "R33581xR33447", "comparison_id": "R33581", "paper_id": "R33447", "text": "Linking Success Factors to Financial Performance Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found that the success factors \u201crelationship with 3PLs and skilled logistics professionals\u201d would substantially improve the financial performance metric of profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India." }, { "instance_id": "R33581xR33534", "comparison_id": "R33581", "paper_id": "R33534", "text": "Application of critical success factors in supply chain management This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study is to review the CSFs in SCM and to uncover the major CSFs that are apparent in the SCM literature. This study applies literature survey techniques to published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields is obtained from major databases. The search uses keywords such as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contributes to existing SCM knowledge and further appraises the concept of CSFs." }, { "instance_id": "R33581xR33579", "comparison_id": "R33581", "paper_id": "R33579", "text": "Critical success factors of customer involvement in greening the supply chain: an empirical study The role of customers and their involvement in green supply chain management (GSCM) has been recognised as an important research area. This paper is an attempt to explore factors influencing involvement of customers towards greening the supply chain (SC). Twenty-five critical success factors (CSFs) of customer involvement in GSCM have been identified from literature review and through extensive discussions with senior and middle level SC professionals. Interviews and a questionnaire-based survey have been used to indicate the significance of these CSFs. A total of 478 valid responses were received to rate these CSFs on a five-point Likert scale (ranging from unimportant to most important).
Statistical analysis has been carried out to establish the reliability and to test the validity of the questionnaires. Subsequent factor analysis has identified seven major components covering 79.24% of total variance. This paper may help to establish the importance of customer role in promoting green concept in SCs and to develop an understanding of factors influencing customer involvement \u2013 key input towards creating \u2018greening pull system\u2019 (GPSYS). This understanding may further help in framing the policies and strategies to green the SC." }, { "instance_id": "R33581xR33321", "comparison_id": "R33581", "paper_id": "R33321", "text": "Drivers and impacts of ICT adoption on transport and logistics services Summary The availability of high\u2010quality transport and logistics services (TLS) is of paramount importance for the growth and competitiveness of an economy. The objective of this paper is to describe how European companies in this industry use information and communication technology (ICT) for conducting business and to assess the impact of this development for firms and the industry as a whole. A comparison with some important Asia Pacific economies is also presented, indicating that some of these countries (Singapore, Hong Kong, Japan, Taiwan, and Korea) boast very good transport infrastructure compared with the most developed European economies. Using the structure\u2010conduct\u2010performance (SCP) model and the bi\u2010directional relationships of its elements, the paper identifies the links between ICT adoption and market structure, innovation dynamics, and firm performance. A set of recommendations on how to further improve the actual scenario of e\u2010business in the TLS industry is also presented. The model could also be implemented in Asian countries." }, { "instance_id": "R33581xR33316", "comparison_id": "R33581", "paper_id": "R33316", "text": "Responsive supply chain: A competitive strategy in a networked economy\u2606 Supply chain management (SCM) has been considered as the most popular operations strategy for improving organizational competitiveness in the twenty-first century. In the early 1990s, agile manufacturing (AM) gained momentum and received due attention from both researchers and practitioners. In the mid-1990s, SCM began to attract interest. Both AM and SCM appear to differ in philosophical emphasis, but each complements the other in objectives for improving organizational competitiveness. For example, AM relies more on strategic alliances/partnerships (virtual enterprise environment) to achieve speed and flexibility. But the issues of cost and the integration of suppliers and customers have not been given due consideration in AM. By contrast, cost is given a great deal of attention in SCM, which focuses on the integration of suppliers and customers to achieve an integrated value chain with the help of information technologies and systems. Considering the significance of both AM and SCM for firms to improve their performance, an attempt has been made in this paper to analyze both AM and SCM with the objective of developing a framework for responsive supply chain (RSC). We compare their characteristics and objectives, review the selected literature, and analyze some case experiences on AM and SCM, and develop an integrated framework for a RSC. The proposed framework can be employed as a competitive strategy in a networked economy in which customized products/services are produced with virtual organizations and exchanged using e-commerce." 
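The green-supply-chain entry above reports a factor analysis that condensed 25 critical success factors rated by 478 respondents into seven components explaining 79.24% of the total variance. A minimal sketch of that extraction step, on purely synthetic Likert-style data and using a plain principal-component decomposition rather than the study's exact factor-analysis settings, could look as follows.

import numpy as np

rng = np.random.default_rng(1)
n_resp, n_items = 478, 25
# Synthetic responses with shared structure so that a handful of components dominate
latent = rng.normal(size=(n_resp, 5))
loadings = rng.normal(size=(5, n_items))
responses = latent @ loadings + rng.normal(scale=1.0, size=(n_resp, n_items))

corr = np.corrcoef(responses, rowvar=False)          # 25 x 25 item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]    # component variances, largest first
explained = eigvals / eigvals.sum()

n_keep = int((eigvals > 1.0).sum())                  # Kaiser criterion: keep eigenvalues > 1
print("components with eigenvalue > 1:", n_keep)
print("cumulative variance explained by them:",
      round(float(explained[:n_keep].sum()) * 100, 2), "%")

The retained-component count and explained variance depend entirely on the simulated data; the point is only to show where figures like "seven components, 79.24% of variance" come from in such analyses.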
}, { "instance_id": "R33581xR33245", "comparison_id": "R33581", "paper_id": "R33245", "text": "Assessing supply chain management success factors: a case study Purpose \u2013 The purpose of this study is to examine important operational issues related to strategic success factors that are necessary when implementing SCM plans in an organization.Design/methodology/approach \u2013 A questionnaire was distributed to top and middle management within a large manufacturing firm, specializing in producing consumer and building products, to examine the importance and the extent to which the selected manufacturing company practiced the strategies based on these identified operational issues.Findings \u2013 Reducing cost of operations, improving inventory, lead times and customer satisfaction, increasing flexibility and cross\u2010functional communication, and remaining competitive appear to be the most important objectives to implement SCM strategies. The responses by the survey respondents indicate that not enough resources were allocated to implement and support SCM initiatives in their divisions. In addition, they perceived that resource allocation could be improved in the areas of bette..." }, { "instance_id": "R33581xR33163", "comparison_id": "R33581", "paper_id": "R33163", "text": "Critical success factors in agile supply chain management \u2010 An empirical study This paper analyses results from a survey of 962 Australian manufacturing companies in order to identify some of the factors critical for successful agile organizations in managing their supply chains. Analysis of the survey results provided some interesting insights into factors differentiating \u201cmore agile\u201d organizations from \u201cless agile\u201d organizations. \u201cMore agile\u201d companies from this study can be characterized as more customer focused, and applying a combination of \u201csoft\u201d and \u201chard\u201d methodologies in order to meet changing customer requirements. They also see the involvement of suppliers in this process as being crucial to their ability to attain high levels of customer satisfaction. The \u201cless agile\u201d group, on the other hand, can be characterized as more internally focused with a bias toward internal operational outcomes. They saw no link between any of the independent variables and innovation, and appear to see technology as more closely linked to the promotion of these operational outcomes than to customer satisfaction. The role of suppliers for this group is to support productivity and process improvement rather than to promote customer satisfaction." }, { "instance_id": "R33581xR33205", "comparison_id": "R33581", "paper_id": "R33205", "text": "An Exploratory Study of the Success Factors for Extranet Adoption in E-Supply Chain Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted." }, { "instance_id": "R33783xR33729", "comparison_id": "R33783", "paper_id": "R33729", "text": "Power harmonic identification and compensation with an artificial neural network method This paper introduces a new neural method for harmonic identification and compensation. Based on Adaline networks, the proposed method is called the diphase currents method. 
The architecture and the learning are formulated based on an original decomposition of the disturbed currents. These currents are converted into the alpha-beta or DQ spaces to separate each harmonic component in a linear expression. In this harmonic compensation method, the harmonic components may be individually selected and the reactive power may be compensated. The proposed method is robust and has been efficiently compared to other conventional and neural harmonic compensation methods. In order to validate the performance of the diphase currents method, simulation studies are carried out in the presence of plant variations. Experiments are also presented to show the performance of the proposed neural method under many practical industrial conditions." }, { "instance_id": "R33783xR33651", "comparison_id": "R33783", "paper_id": "R33651", "text": "Neural Network-Based Approach for Identification of the Harmonic Content of a Nonlinear Load in a Single-Phase System In this paper an alternative method based on artificial neural networks is presented to determine harmonic components in the load current of a single-phase electric power system with nonlinear loads, whose parameters can vary as much because of the loads' characteristic behaviors as because of human intervention. The first six components in the load current are determined using the information contained in the time-varying waveforms. The effectiveness of this method is verified by using it in a single-phase active power filter with selective compensation of the current drained by an AC controller. The proposed method is compared with the fast Fourier transform." }, { "instance_id": "R33783xR33697", "comparison_id": "R33783", "paper_id": "R33697", "text": "An Artificial Neural Network Based Method for Harmonic Detection in power system A novel advanced harmonic detection method based on neural network (NN) is proposed in this paper. It is an adaptive harmonic detection method with variable step-size based on adaptive linear NN and self-adaptive noise countervailing principle. The proposed method adopts a sliding integrator to extract the real tracing error and then uses a fuzzy adjuster with self-adjustable factor to modify the step-size. So the novel harmonic detection method can obtain fast convergence speed and high steady-state precision at the same time. Comparisons are made between conventional harmonic detection methods based on NN and the advanced method based on NN proposed in this paper. Finally detailed simulation and experimental results verify the validity and superiority of the advanced methods." }, { "instance_id": "R33783xR33721", "comparison_id": "R33783", "paper_id": "R33721", "text": "Compensation Current of Active Power Filter Generated by Artificial Neural Network Approach Semiconductor switches are present in many applications and they can be considered the main source of harmonic distortion present in the electrical power system. The use of filters - active or passive - has played an important role in order to minimize the harmonic effects injected in the power system. The proposal of this work is to present an alternative approach to estimate the harmonic content of a single-phase system with non-linear loads. It uses artificial neural networks to determine the compensation current. The system is composed of AC single-phase controllers and parallel active power filter.
Simulation results are presented to validate the proposed approach" }, { "instance_id": "R33783xR33725", "comparison_id": "R33783", "paper_id": "R33725", "text": "Neural Network Hysteresis Control of three-phase Switched Capacitor Active Power Filter The switched capacitor active filter is different from the traditional active filter. It removes the requirement for a large current or voltage source, which leads to the reduction in cost and in physical size. A control method that combines the neural network technology with the hysteresis band technology is presented. Through training the neural network can learn the control rules by itself and can replace the real hysteresis comparator in power converter control. The computer simulation results show this filter's advantage." }, { "instance_id": "R33783xR33675", "comparison_id": "R33783", "paper_id": "R33675", "text": "Predictive and Adaptive ANN (Adaline) Based Harmonic Compensation for Shunt Active Power Filter Estimation of the current-reference to compensate for the harmonic and reactive component of the load current is important in a shunt type active power filter. This paper applies ANN based predictive and adaptive reference generation technique. Predictive scheme extracts the information of the fundamental component through an ANN that replaces a low pass filter. This ANN based low pass-filter is trained offline with large number of training set to predict the fundamental magnitude of load current. These predictive reference generation techniques work well for load change pattern closer to the trained data and for clean source voltage. However, the performance deteriorates in case of distortion in source voltage and also if training data drifts quite significantly from test data. To overcome this, an Adaline based ANN is applied after the operation of the predictive algorithm. It has been shown that the combined predictive adaptive approach offers better performance. Extensive results from simulation have confirmed the usefulness of the proposed technique." }, { "instance_id": "R33783xR33705", "comparison_id": "R33783", "paper_id": "R33705", "text": "Single-Phase Shunt Hybrid Active Power Filter Based on ANN In this paper, a single-phase shunt hybrid active power filter (APF) is presented to compensate reactive power and eliminate harmonics in power system. The hybrid active filter consists of one active filter and one passive filter connected in series. By controlling the equivalent output voltage of active filter, the harmonic currents generated by the nonlinear load are blocked and flowed into the passive filter. Sensing load current, dc bus voltage, dc bus reference voltage and source voltage compute reference voltage of APF through modified adaptive artificial neural network (ANN). And a voltage controller is used to generate the firing pulses of the voltage source inverter. The proposed system is implemented using Digital Signal Processor (DSP). Simulating results are presented to confirm the validity of the scheme." }, { "instance_id": "R33783xR33671", "comparison_id": "R33783", "paper_id": "R33671", "text": "Current Harmonic Compensation by a Single-Phase Shunt Active Power Filter Controlled by Adaptive Neural Filtering This paper presents a single-phase shunt active power filter (APF) for current harmonic compensation based on neural filtering. 
The shunt active filter, realized by a current-controlled inverter, has been used to compensate a nonlinear current load by receiving its reference from a neural adaptive notch filter. This is a recursive notch filter for the fundamental grid frequency (50 Hz) and is based on the use of a linear adaptive neuron (ADALINE). The filter's parameters are made adaptive with respect to the grid frequency fluctuations. A phase-locked loop system is used to extract the fundamental component from the coupling point voltage and to estimate the actual grid frequency. The current control of the inverter has been performed by a multiresonant controller. The estimated grid frequency is fed to the neural adaptive filter and to the multiresonant controller. In this way, the inverter creates a current equal in amplitude and opposite in sign to the load harmonic current, thus producing an almost sinusoidal grid current. An automatic tuning of the multiresonant controller is implemented, which recognizes the largest three harmonics of the load current to be compensated by the APF. The stability analysis of the proposed control system is shown. The methodology has been applied in numerical simulations and experimentally to a properly devised test setup, also in comparison with the classic sinusoidal current control based on the P-Q theory." }, { "instance_id": "R33783xR33715", "comparison_id": "R33783", "paper_id": "R33715", "text": "Neural-Network-Based Inverse Control Method for Active Power Filter System A new type of active power filter (APF) is described. A multi-layered neural network based inverse control method for this APF system is proposed. The functioning of the APF system is based on a switching network whose characteristics are nonlinear. The characteristic of the switching on-off time of the switching network and the output current of the APF was demonstrated. The switching on-off time can be instantaneously calculated by using the proposed neural-network-based inverse control algorithm. The neural network was designed. An all-digital control scheme may be realized by using the algorithm. The validation of the inverse control results for the APF is presented as well." }, { "instance_id": "R33783xR33657", "comparison_id": "R33783", "paper_id": "R33657", "text": "Study on Improved Neural Network PID Control of APF DC Voltage According to the active power balance principle, the paper analyzed the approximate mathematical model of the APF. In order to optimize the control of the dc bus voltage in the APF, a PID control method based on an improved BP neural network is adopted for closed-loop control of the system. The two strategies, the additional momentum method and adaptive learning-rate adjustment, are combined to improve the BP network, which not only effectively prevents the network from settling into a local minimum but also shortens the learning time and further improves the stability of the network. The improved BP network adjusts the parameters such as KP and KI of the PID controller according to the operating state of the system and realizes optimum PID control. The experimental studies show that, when the load power and harmonic content change, the APF system controlled by the PID method based on the improved BP network keeps the harmonic distortion within an allowed range and stabilizes the dc-side voltage in a short time."
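Several of the entries above (the diphase currents method, the Adaline low-pass and notch filters, and the adaptive reference generators) share one core mechanism: an ADALINE trained by an LMS rule tracks the fundamental component of a distorted load current so that the residual can serve as the compensation reference for a shunt APF. The sketch below shows only that shared idea on a synthetic waveform; the sampling rate, step size and harmonic mix are illustrative assumptions and do not reproduce any specific paper's controller.

import numpy as np

fs, f1 = 10_000, 50.0                    # sampling rate [Hz] and grid fundamental [Hz] (assumed)
t = np.arange(0.0, 0.2, 1.0 / fs)
# Synthetic nonlinear-load current: fundamental plus 5th and 7th harmonics
i_load = (10.0 * np.sin(2 * np.pi * f1 * t)
          + 2.0 * np.sin(2 * np.pi * 5 * f1 * t)
          + 1.0 * np.sin(2 * np.pi * 7 * f1 * t))

# ADALINE inputs: in-phase and quadrature references at the fundamental frequency
X = np.column_stack([np.sin(2 * np.pi * f1 * t), np.cos(2 * np.pi * f1 * t)])
w = np.zeros(2)                          # adaptive weights = estimated fundamental phasor
mu = 0.01                                # LMS step size (assumed)
i_fund = np.zeros_like(i_load)

for k in range(len(t)):
    i_fund[k] = X[k] @ w                 # ADALINE output: estimated fundamental sample
    e = i_load[k] - i_fund[k]            # error contains the harmonic content
    w += 2 * mu * e * X[k]               # LMS weight update

i_ref = i_load - i_fund                  # harmonic compensation reference handed to the inverter control
print("estimated fundamental amplitude:", round(float(np.hypot(w[0], w[1])), 2))

Once the weights settle, i_ref carries mainly the 5th and 7th harmonic content, which is what a shunt APF would inject with opposite sign to keep the grid current nearly sinusoidal.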
}, { "instance_id": "R33783xR33709", "comparison_id": "R33783", "paper_id": "R33709", "text": "Design of Single-phase Shunt Active Power Filter Based on ANN In this paper, a single-phase shunt active power filter (APF) is presented to compensate reactive power and eliminate harmonics in power system. Sensing load current, dc bus voltage, reference dc bus voltage and source voltage compute reference current of APF through modified adaptive artificial neural network (ANN). A modified hysteretic current controller is used to generate the firing pulses of the voltage source inverter which generate reactive and harmonic current to compensate the nonlinear loads. The proposed system is implemented using digital signal processor (DSP). Simulating and experimental results are presented to confirm the validity of the scheme." }, { "instance_id": "R33783xR33717", "comparison_id": "R33783", "paper_id": "R33717", "text": "Adaptive Filtering for Unstable Power System Harmonics using Artificial Network Conventional approaches for harmonics filtering usually employ either passive, active filtering techniques or hybrid filters. This paper proposes an adaptive harmonic filtering approach using a modified discrete Hopfield network model. The advantage of the scheme is that it can extract the fundamental component of the distorted current and provide a suitable compensation current as the power harmonics may vary in amplitude and frequency from time to time. Therefore, the time-variant harmonic environments in real-time machine systems can be adapted successfully. Real-time performance experiments verify that the proposed scheme is feasible in term of real-time tracking, adaptive low frequency harmonics filtering, fast training and convergence speed." }, { "instance_id": "R33783xR33699", "comparison_id": "R33783", "paper_id": "R33699", "text": "A Single-Phase DG Generation Unit With Shunt Active Power Filter Capability by Adaptive Neural Filtering This paper deals with a single-phase distributed generation (DG) system with active power filtering (APF) capability, devised for utility current harmonic compensation. The idea is to integrate the DG unit functions with the shunt APF capabilities, since the DG is connected in parallel to the grid. With the proposed approach, the control of the DG unit is performed by injecting into the grid a current with the same phase and frequency of the grid voltage and with amplitude depending on the power available from the renewable sources. On the other hand, the load harmonic current compensation is performed by injecting into the AC system harmonic currents as those of the load but with opposite phase, thus keeping the line current almost sinusoidal. Both the phase detection of the grid voltage and the computation of the load harmonic compensation current have been performed by two neural adaptive filters with the same structure, one in configuration \"notch\" and the other complementary in configuration \"band\". The methodology has been tested successfully both in numerical simulation and experimentally on a suitably devised test setup" }, { "instance_id": "R33783xR33663", "comparison_id": "R33783", "paper_id": "R33663", "text": "FPGA Implementation of Harmonic Detection methods using Neural Network This work presents a Neural Networks based intelligent Active Power Filter control unit for harmonics current elimination. 
The paper is centered on an Improved Three-Monophase (ITM) method and a novel one called Two-Phase Flow (TPF) method for detecting harmonics from disturbed currents due to non linear loads in electric power systems. The TPF method introduces new currents decomposition in the DQ-space that results in separating AC from DC components. The DC terms are estimated by an Adaline Neural Network because of its learning capabilities with respect to real-time applications. Then the resulting AC terms from this approach are transformed to obtain the reference currents. After analyzing those two methods with respect to their performance and hardware resources consumption by means of Altera Dsp Builder\u00ae, a comparative study with the direct method is reported and some FPGA implementation results are also shown. From there, one can notice that the direct method is the simplest and offers the best performance whereas the TPF is the fastest and consumes less material resources." }, { "instance_id": "R33783xR33647", "comparison_id": "R33783", "paper_id": "R33647", "text": "Improved shunt APF based on using adaptive RBF neural network and modified hysteresis current control In this paper, a new combination is proposed to control shunt active power filters (APF). The recommended system has better specifications in comparison with other control methods. In the proposed combination, an RBF neural network is employed to extract compensation reference currents for a variable non-linear load. In order to make the employed model much simpler and tighter, an adaptive learning algorithm for RBF network is proposed. In addition, a modified hysteresis current control technique based on defining a variable hysteresis band is employed to avoid any power system resonance. In this method the hysteresis band is expressed as a function of source voltage, rate of reference current variations and voltage of DC link capacitor in such a way that the switching frequency of the inverter switches remains almost constant. In summary, extraction of compensation reference current is done with lower amount of computations. Beside, the threat of resonance occurrence is cancelled. The simulation results which are done by MATLAB/Simulink illustrate the validity and effectiveness of the proposed combination." }, { "instance_id": "R33851xR33849", "comparison_id": "R33851", "paper_id": "R33849", "text": "Software comparison for evaluating genomic copy number variation for Affymetrix 6.0 SNP array platform BackgroundCopy number data are routinely being extracted from genome-wide association study chips using a variety of software. We empirically evaluated and compared four freely-available software packages designed for Affymetrix SNP chips to estimate copy number: Affymetrix Power Tools (APT), Aroma.Affymetrix, PennCNV and CRLMM. Our evaluation used 1,418 GENOA samples that were genotyped on the Affymetrix Genome-Wide Human SNP Array 6.0. We compared bias and variance in the locus-level copy number data, the concordance amongst regions of copy number gains/deletions and the false-positive rate amongst deleted segments.ResultsAPT had median locus-level copy numbers closest to a value of two, whereas PennCNV and Aroma.Affymetrix had the smallest variability associated with the median copy number. Of those evaluated, only PennCNV provides copy number specific quality-control metrics and identified 136 poor CNV samples. 
Regions of copy number variation (CNV) were detected using the hidden Markov models provided within PennCNV and CRLMM/VanillaIce. PennCNV detected more CNVs than CRLMM/VanillaIce; the median number of CNVs detected per sample was 39 and 30, respectively. PennCNV detected most of the regions that CRLMM/VanillaIce did as well as additional CNV regions. The median concordance between PennCNV and CRLMM/VanillaIce was 47.9% for duplications and 51.5% for deletions. The estimated false-positive rate associated with deletions was similar for PennCNV and CRLMM/VanillaIce.ConclusionsIf the objective is to perform statistical tests on the locus-level copy number data, our empirical results suggest that PennCNV or Aroma.Affymetrix is optimal. If the objective is to perform statistical tests on the summarized segmented data then PennCNV would be preferred over CRLMM/VanillaIce. Specifically, PennCNV allows the analyst to estimate locus-level copy number, perform segmentation and evaluate CNV-specific quality-control metrics within a single software package. PennCNV has relatively small bias, small variability and detects more regions while maintaining a similar estimated false-positive rate as CRLMM/VanillaIce. More generally, we advocate that software developers need to provide guidance with respect to evaluating and choosing optimal settings in order to obtain optimal results for an individual dataset. Until such guidance exists, we recommend trying multiple algorithms, evaluating concordance/discordance and subsequently consider the union of regions for downstream association tests." }, { "instance_id": "R33851xR33795", "comparison_id": "R33851", "paper_id": "R33795", "text": "Comparative analysis of algorithms for identifying amplifications and deletions in array CGH data MOTIVATION Array Comparative Genomic Hybridization (CGH) can reveal chromosomal aberrations in the genomic DNA. These amplifications and deletions at the DNA level are important in the pathogenesis of cancer and other diseases. While a large number of approaches have been proposed for analyzing the large array CGH datasets, the relative merits of these methods in practice are not clear. RESULTS We compare 11 different algorithms for analyzing array CGH data. These include both segment detection methods and smoothing methods, based on diverse techniques such as mixture models, Hidden Markov Models, maximum likelihood, regression, wavelets and genetic algorithms. We compute the Receiver Operating Characteristic (ROC) curves using simulated data to quantify sensitivity and specificity for various levels of signal-to-noise ratio and different sizes of abnormalities. We also characterize their performance on chromosomal regions of interest in a real dataset obtained from patients with Glioblastoma Multiforme. While comparisons of this type are difficult due to possibly sub-optimal choice of parameters in the methods, they nevertheless reveal general characteristics that are helpful to the biological investigator." }, { "instance_id": "R33851xR33827", "comparison_id": "R33851", "paper_id": "R33827", "text": "Assessment of copy number variation using the Illumina Infinium 1M SNP-array: A comparison of methodological approaches in the Spanish Bladder Cancer/EPICURO study High\u2010throughput single nucleotide polymorphism (SNP)\u2010array technologies allow to investigate copy number variants (CNVs) in genome\u2010wide scans and specific calling algorithms have been developed to determine CNV location and copy number. 
We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation\u2010dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19\u20130.28). In contrast, specificity was very high (0.97\u20130.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome\u2010wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases.Hum Mutat 32:1\u201310, 2011. \u00a9 2011 Wiley\u2010Liss, Inc." }, { "instance_id": "R33953xR33924", "comparison_id": "R33953", "paper_id": "R33924", "text": "Structured Testing Using Ant Colony Optimization Structural testing is one of the most widely used testing paradigms to test software. The aim of this paper is to present a simple and efficient algorithm that can automatically generate all possible paths in a Control Flow Graph for structural testing. Pheromone releasing behavior of ants is used in this algorithm for extracting optimal paths. This algorithm generates paths equal to the cyclomatic complexity." }, { "instance_id": "R33953xR33858", "comparison_id": "R33953", "paper_id": "R33858", "text": "Extracting Test Sequences from a Markov Software Usage Model by ACO The aim of the paper is to investigate methods for deriving a suitable set of test paths for a software system. The design and the possible uses of the software system are modelled by a Markov Usage Model which reflects the operational distribution of the software system and is enriched by estimates of failure probabilities, losses in case of failure and testing costs. Exploiting this information, we consider the tradeoff between coverage and testing costs and try to find an optimal compromise between both. For that purpose, we use a heuristic optimization procedure inspired by nature, Ant Colony Optimization, which seems to fit very well to the problem structure under consideration. A real world software system is studied to demonstrate the applicability of our approach and to obtain first experimental results." }, { "instance_id": "R33953xR33863", "comparison_id": "R33953", "paper_id": "R33863", "text": "The State Problem for Evolutionary Testing This paper shows how the presence of states in test objects can hinder or render impossible the search for test data using evolutionary testing. Additional guidance is required to find sequences of inputs that put the test object into some necessary state for certain test goals to become feasible. It is shown that data dependency analysis can be used to identify program statements responsible for state transitions, and then argued that an additional search is needed to find required transition sequences. 
In order to be able to deal with complex examples, the use of ant colony optimization is proposed. The results of a simple initial experiment are reported." }, { "instance_id": "R33953xR33943", "comparison_id": "R33953", "paper_id": "R33943", "text": "A New Software Data-Flow Testing Approach via Ant Colony Algorithms Search-based optimization techniques (e.g., hill climbing, simulated annealing, and genetic algorithms) have been applied to a wide variety of software engineering activities including cost estimation, next release problem, and test generation. Several search-based test generation techniques have been developed. These techniques have focused on finding suites of test data to satisfy a number of control-flow or data-flow testing criteria. Genetic algorithms have been the most widely employed search-based optimization technique in software testing issues. Recently, many novel search-based optimization techniques have been developed, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Immune System (AIS), and Bees Colony Optimization. ACO and AIS have been employed only in the area of control-flow testing of programs. This paper aims at employing the ACO algorithms in the issue of software data-flow testing. The paper presents an ant colony optimization based approach for generating a set of optimal paths to cover all definition-use associations (du-pairs) in the program under test. Then, this approach uses the ant colony optimization to generate a suite of test data for satisfying the generated set of paths. In addition, the paper introduces a case study to illustrate our approach. Keywords: data-flow testing; path-cover generation; test-data generation; ant colony optimization algorithms" }, { "instance_id": "R33953xR33918", "comparison_id": "R33953", "paper_id": "R33918", "text": "Automated Software Testing Using Metaheuristic Technique Based on an Ant Colony Optimization Software testing is an important and valuable part of the software development life cycle. Due to time, cost and other circumstances, exhaustive testing is not feasible; that is why there is a need to automate the testing process. Testing effectiveness can be achieved by State Transition Testing (STT), which is commonly used in real-time, embedded and web-based kinds of software systems. The tester\u2019s main job is to test all the possible transitions in the system. This paper proposes an Ant Colony Optimization (ACO) technique for the automated and full coverage of all state transitions in the system. The presented approach generates test sequences in order to obtain complete software coverage. This paper also discusses the comparison between two metaheuristic techniques (Genetic Algorithm and Ant Colony Optimization) for transition-based testing." }, { "instance_id": "R33953xR34961", "comparison_id": "R33953", "paper_id": "R34961", "text": "Introduction: A Survey of the Evolutionary Computation Techniques for Software Engineering This chapter aims to present a part of the computer science literature in which the evolutionary computation techniques, optimization techniques and other bio-inspired techniques are used to solve different search and optimization problems in the area of software engineering."
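The ACO-based testing entries above all rely on the same loop: ants traverse a control flow graph or state machine, edge choices are biased by pheromone, and pheromone is reinforced on traversals that improve coverage. The following toy sketch illustrates that loop on an invented control flow graph; the graph, evaporation rate and reward scheme are assumptions for demonstration and are not taken from any of the cited approaches.

import random

CFG = {                      # toy control flow graph: node -> successor nodes (invented example)
    "entry": ["a"],
    "a": ["b", "c"],         # branch
    "b": ["d"],
    "c": ["d", "exit"],      # one branch can leave early
    "d": ["a", "exit"],      # loop back or terminate
    "exit": [],
}
EDGES = [(u, v) for u, succ in CFG.items() for v in succ]
pheromone = {e: 1.0 for e in EDGES}
covered, selected_paths = set(), []
random.seed(0)

def ant_walk(max_len=12):
    """One ant builds an entry-to-exit path, picking successors in proportion to pheromone."""
    node, path = "entry", []
    while node != "exit" and len(path) < max_len:
        successors = CFG[node]
        weights = [pheromone[(node, s)] for s in successors]
        nxt = random.choices(successors, weights=weights)[0]
        path.append((node, nxt))
        node = nxt
    return path if node == "exit" else None

for _ in range(50):                          # colony of 50 ants
    path = ant_walk()
    if path is None:
        continue
    newly_covered = set(path) - covered
    for e in pheromone:                      # pheromone evaporation on every edge
        pheromone[e] *= 0.9
    for e in path:                           # reinforcement, stronger when coverage improved
        pheromone[e] += 1.0 + 2.0 * len(newly_covered)
    if newly_covered:
        covered |= newly_covered
        selected_paths.append([u for u, _ in path] + ["exit"])

print(f"edge coverage: {len(covered)}/{len(EDGES)}")
for p in selected_paths:
    print(" -> ".join(p))

The retained paths form a small coverage-oriented path suite; generating concrete test data that drives execution along each path, as the data-flow testing paper does, would be a separate search step.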
}, { "instance_id": "R34099xR33976", "comparison_id": "R34099", "paper_id": "R33976", "text": "Reconstructing habitat use of Coilia mystus and Coilia ectenes of the Yangtze River estuary, and of Coilia ectenes of Taihu Lake, based on otolith strontium and calcium The habitat use and migratory patterns of Osbeck\u2019s grenadier anchovy Coilia mystus in the Yangtze estuary and the estuarine tapertail anchovy Coilia ectenes from the Yangtze estuary and Taihu Lake, China, were studied by examining the environmental signatures of strontium and calcium in their otoliths using electron probe microanalysis. The results indicated that Taihu C. ectenes utilizes only freshwater habitats, whereas the habitat use patterns of Yangtze C. ectenes and C. mystus were much more flexible, apparently varying among fresh, brackish and marine areas. The present study suggests that the spawning populations of Yangtze C. ectenes and C. mystus in the Yangtze estuary consist of individuals with different migration histories, and individuals of these two Yangtze Coilia species seem to use a variety of different habitats during the non-spawning seasons." }, { "instance_id": "R34099xR33985", "comparison_id": "R34099", "paper_id": "R33985", "text": "Up-estuary dispersal of young-of-the-year bay anchovy Anchoa mitchilli in the Chesapeake Bay: inferences from microprobe analysis of strontium in otoliths Young-of-the-year (YOY) bay anchovy Anchoa mitchilli occur in higher proportion rel- ative to larvae in the upper Chesapeake Bay. This has led to the hypothesis that up-bay dispersal favors recruitment. Here we test whether recruitment of bay anchovy to different parts of the Chesa- peake Bay results from differential dispersal rates. Electron microprobe analysis of otolith strontium was used to hind-cast patterns and rates of movement across salinity zones. Individual chronologies of strontium were constructed for 55 bay anchovy aged 43 to 103 d collected at 5 Chesapeake Bay mainstem sites representing upper, middle, and lower regions of the bay during September 1998. Most YOY anchovy were estimated to have originated in the lower bay. Those collected at 5 and 11 psu sites exhibited the highest past dispersal rates, all in an up-estuary direction. No significant net dispersal up- or down-estuary occurred for recruits captured at the polyhaline (\u266218 psu) site. Ini- tiation of ingress to lower salinity waters (<15 psu) was estimated to occur near metamorphosis, dur- ing the early juvenile stage, at sizes \u2662 25 mm standard length (SL) and ages \u2662 50 d after hatch. Esti- mated maximum upstream dispersal rate (over-the-ground speed) during the first 50 to 100 d of life exceeded 50 mm s -1 ." }, { "instance_id": "R34099xR34072", "comparison_id": "R34099", "paper_id": "R34072", "text": "Migratory environmental history of the grey mullet Mugil cephalus as revealed by otolith Sr:Ca ratios We used an electron probe microanalyzer (EPMA) to determine the migratory environ- mental history of the catadromous grey mullet Mugil cephalus from the Sr:Ca ratios in otoliths of 10 newly recruited juveniles collected from estuaries and 30 adults collected from estuaries, nearshore (coastal waters and bay) and offshore, in the adjacent waters off Taiwan. 
Mean (\u00b1SD) Sr:Ca ratios at the edges of adult otoliths increased significantly from 6.5 \u00b1 0.9 \u00d7 10^-3 in estuaries and nearshore waters to 8.9 \u00b1 1.4 \u00d7 10^-3 in offshore waters (p < 0.01), corresponding to increasing ambient salinity from estuaries and nearshore to offshore waters. The mean Sr:Ca ratios decreased significantly from the core (11.2 \u00b1 1.2 \u00d7 10^-3) to the otolith edge (6.2 \u00b1 1.4 \u00d7 10^-3) in juvenile otoliths (p < 0.001). The mullet generally spawned offshore and recruited to the estuary at the juvenile stage; therefore, these data support the use of Sr:Ca ratios in otoliths to reconstruct the past salinity history of the mullet. A life-history scan of the otolith Sr:Ca ratios indicated that the migratory environmental history of the mullet beyond the juvenile stage consists of 2 types. In Type 1 mullet, Sr:Ca ratios range between 4.0 \u00d7 10^-3 and 13.9 \u00d7 10^-3, indicating that they migrated between estuary and offshore waters but rarely entered the freshwater habitat. In Type 2 mullet, the Sr:Ca ratios decreased to a minimum value of 0.4 \u00d7 10^-3, indicating that the mullet migrated to a freshwater habitat. Most mullet beyond the juvenile stage migrated from estuary to offshore waters, but a few mullet less than 2 yr old may have migrated into a freshwater habitat. Most mullet collected nearshore and offshore were of Type 1, while those collected from the estuaries were a mixture of Types 1 and 2. The mullet spawning stock consisted mainly of Type 1 fish. The growth rates of the mullet were similar for Types 1 and 2. The migratory patterns of the mullet were more divergent than indicated by previous reports of their catadromous behavior." }, { "instance_id": "R34099xR34003", "comparison_id": "R34099", "paper_id": "R34003", "text": "Effects of salinity and ontogenetic movements on strontium:calcium ratios in the otoliths of the Japanese eel, Anguilla japonica Temminck and Schlegel To study the mechanism of Sr incorporation into otoliths of the Japanese eel, Anguilla japonica, a total of 100 elvers collected from an estuary were reared in the laboratory at salinities of 0, 10, 25 and 35\u2030 for approximately seven months. The elvers grew from 56 mm TL to 100\u2013300 mm TL. Twenty elvers were randomly selected and the Ca and Sr concentrations of their otoliths were analyzed from the primordium to the edge, using an electron microprobe equipped with a four-channel wavelength-dispersive spectrometer. Sr:Ca ratios in the otoliths of eels reared in various salinities were much lower than the ratio of 15 \u00d7 10^-3 observed in elvers about one month before they arrived at the estuary. The irreversibility of Sr:Ca ratios at 35\u2030 salinity in this experiment indicated that the drastic change of the Sr:Ca ratios in otoliths of elvers was not due to the reduction of salinity in the coastal waters, but more likely to the development from leptocephalus to glass eel. The mean Sr:Ca ratios in the new increments of the otoliths of eels during the rearing period were highly correlated with salinity (S): [Sr:Ca] \u00d7 10^3 = 3.797 + 0.14S (n = 20, r = 0.77), which can be used to predict elver movements and habitat utilization rates."
}, { "instance_id": "R34099xR34014", "comparison_id": "R34099", "paper_id": "R34014", "text": "Dynamics of white perch Morone americana population contingents in the Patuxent River estuary, Maryland, USA Alternative migratory pathways in the life histories of fishes can be difficult to assess but may have great importance to the dynamics of spatially structured populations. We used Sr/Ca in otoliths as a tracer of time spent in freshwater and brackish habitats to study the ontogenetic mov- ments of white perch Morone americana in the Patuxent River estuary. We observed that, soon after the larvae metamorphose, juveniles either move to brackish habitats (brackish contingent) or take up residency in tidal fresh water (freshwater contingent) for the first year of life. In one intensively stud- ied cohort of juveniles, the mean age at which individuals moved into brackish environments was 45 d (post-hatch), corresponding to the metamorphosis of lavae to juveniles and settlement in littoral habitats. Back-calculated growth rates of the freshwater contingent at this same age (median = 0.6 mm d -1 ) were significantly higher than the brackish contingent (median = 0.5 mm d -1 ). Strong year-class variability (>100-fold) was evident from juvenile surveys and from the age composition of adults sampled during spawning. Adult samples were dominated by the brackish contingent (93% of n = 363), which exhibited a significantly higher growth rate (von Bertalanffy, k = 0.67 yr -1 ) than the freshwater contingent (k = 0.39 yr -1 ). Combined with evidence that the relative frequency of the brackish contingent has increased in year-classes with high juvenile recruitment, these results impli- cate brackish environments as being important for maintaining abundance and productivity of the population. By comparison, disproportionately greater recruitment to the adult population by the freshwater contingent during years of low juvenile abundance suggested that freshwater habitats sustain a small but crucial reproductive segment of the population. Thus, both contingents appeared to have unique and complementary roles in the population dynamics of white perch." }, { "instance_id": "R34099xR34011", "comparison_id": "R34099", "paper_id": "R34011", "text": "Dispersive behaviors of black drum and red drum: Is otolith Sr:Ca a reliable indicator of salinity history? We tested the hypothesis that strontium:calcium (Sr:Ca) in otoliths are reflective of environmental salinity experienced by two estuarine fishes during early life. Laboratory and field experiments were performed to examine the effects of salinity and temperature on Sr:Ca in otoliths of black drum (Pogonias cromis) and red drum (Sciaenops ocellatus). Otolith Sr:Ca of juveniles reared at four salinities (5\u2030, 15\u2030, 25\u2030, 35\u2030) differed significantly forP. cromis while no salinity effect was observed forS. ocellatus. Otolith Sr:Ca of both species were not affected by temperature (23\u00b0C and 30\u00b0C), suggesting that partitioning of Sr in otoliths of these taxa is constant over the temperature range examined. A field verification trial was conducted forP. cromis and a positive relationship between otolith Sr:Ca and ambient salinity was observed, even though the percent variability explained was modest. A series of Sr:Ca point measurements were taken from the core to the edge of the otoliths of wildP. cromis andS. ocellatus, and otolith Sr:Ca chronologies of both species showed conspicuous declines during the first few months of life. 
While Sr:Ca chronologies of both species suggest that ingress is associated with a reduction in otolith Sr:Ca, inconsistencies in laboratory and field experiments intimate that Sr uptake in the otolith may be insensitive to salinity and regulated by other factors (aqueous chemistry, ontogenetic shifts in habitat, or physiology). Results from early life history transects of otolith Sr:Ca conform to expected patterns of estuarine ingress-egress during early life and indicate that the approach may be useful for detecting large-scale habitat transitions (marine to estuarine habitats)." }, { "instance_id": "R34099xR33999", "comparison_id": "R34099", "paper_id": "R33999", "text": "Migratory behaviour and habitat use by American eels Anguilla rostrata as revealed by otolith microchemistry The environmental history of American eels Anguilla rostrata from the East River, Nova Scotia, was investigated by electron microprobe analysis of the Sr:Ca ratio along transects of the eel otolith. The mean (\u00b1SD) Sr:Ca ratio in the otoliths of juvenile American eels was 5.42 \u00d7 10^-3 \u00b1 1.22 \u00d7 10^-3 at the elver check and decreased to 2.38 \u00d7 10^-3 \u00b1 0.99 \u00d7 10^-3 at the first annulus for eels that migrated directly into the river, but increased to 7.28 \u00d7 10^-3 \u00b1 1.09 \u00d7 10^-3 for eels that had remained in the estuary for 1 yr or more before entering the river. At the otolith edge, Sr:Ca ratios of 4.0 \u00d7 10^-3 or less indicated freshwater residence and ratios of 5.0 \u00d7 10^-3 or more indicated estuarine residence. Four distinct but interrelated behavioural groups were identified by the temporal changes in Sr:Ca ratios in their otoliths: (1) entrance into freshwater as an elver, (2) coastal or estuarine residence for 1 yr or more before entering freshwater, and, after entering freshwater, (3) continuous freshwater residence until the silver eel stage and (4) freshwater residence for 1 yr or more before engaging in periodic, seasonal movements between estuary and freshwater until the silver eel stage. Small (< 70 mm total length), highly pigmented elvers that arrived early in the elver run were confirmed as slow-growing age-1 juvenile eels. Juvenile eels that remained 1 yr or more in the estuary before entering the river contributed to the production of silver eels to a relatively greater extent than did elvers that entered the river during the year of continental arrival." }, { "instance_id": "R34126xR34120", "comparison_id": "R34126", "paper_id": "R34120", "text": "Optic neuritis: findings on MRI, CSF examination and HLA class II typing in 60 patients and results of a short-term follow-up Optic neuritis (ON) is a common first manifestation of multiple sclerosis (MS), and examination of patients with ON provides opportunities to study the early clinical stages of MS. This prospective study compares results of brain magnetic resonance imaging (MRI), cerebrospinal fluid (CSF) examinations and HLA-Dw2 phenotyping in 60 consecutive patients with ON. At a median of 17 days after the onset of ON, 69% had oligoclonal IgG bands, and at a median of 79 days after onset, 53% had multiple (\u2265 3) white matter lesions on MRI. Subgroup analyses revealed that MRI abnormalities and oligoclonal IgG bands were equally common in patients examined early or late after the onset of ON. Strong correlations were found between the presence of MRI abnormalities and oligoclonal IgG bands.
The HLA-Dw2 phenotype was significantly increased in ON patients compared with controls, but also significantly different from a group of MS patients from the same geographical area. A significant relation was found between Dw2 phenotype and oligoclonal IgG bands. During a mean follow-up time of about 2 years, the diagnosis in 17 of the patients changed to clinically definite MS. Initially, 16 of them had oligoclonal IgG bands and 12 had three or more MRI lesions. Both MRI and CSF studies are important diagnostic tools in the work-up of ON patients." }, { "instance_id": "R34126xR34111", "comparison_id": "R34126", "paper_id": "R34111", "text": "A long-term prospective study of optic neuritis: evaluation of risk factors Eighty\u2010six patients with monosymptomatic optic neuritis of unknown cause were followed prospectively for a median period of 12.9 years. At onset, cerebrospinal fluid (CSF) pleocytosis was present in 46 patients (53%) but oligoclonal immunoglobulin in only 40 (47%) of the patients. The human leukocyte antigen (HLA)\u2010DR2 was present in 45 (52%). Clinically definite multiple sclerosis (MS) was established in 33 patients. Actuarial analysis showed that the cumulative probability of developing MS within 15 years was 45%. Three risk factors were identified: low age and abnormal CSF at onset, and early recurrence of optic neuritis. Female gender, onset in the winter season, and the presence of HLA\u2010DR2 antigen increased the risk for MS, but not significantly. Magnetic resonance imaging detected bilateral discrete white matter lesions, similar to those in MS, in 11 of 25 patients, 7 to 18 years after the isolated attack of optic neuritis. Nine were among the 13 with abnormal CSF and only 2 belonged to the group of 12 with normal CSF (p = 0.01). Normal CSF at the onset of optic neuritis conferred better prognosis but did not preclude the development of MS." }, { "instance_id": "R34126xR34108", "comparison_id": "R34126", "paper_id": "R34108", "text": "Optic neuritis: Prognosis for multiple sclerosis from MRI, CSF, and HLA findings We investigated the paraclinical profile of monosymptomatic optic neuritis(ON) and its prognosis for multiple sclerosis (MS). The correct identification of patients with very early MS carrying a high risk for conversion to clinically definite MS is important when new treatments are emerging that hopefully will prevent or at least delay future MS. We conducted a prospective single observer and population-based study of 147 consecutive patients (118 women, 80%) with acute monosymptomatic ON referred from a catchment area of 1.6 million inhabitants between January 1, 1990 and December 31, 1995. Of 116 patients examined with brain MRI, 64 (55%) had three or more high signal lesions, 11 (9%) had one to two high signal lesions, and 41 (35%) had a normal brain MRI. Among 143 patients examined, oligoclonal IgG (OB) bands in CSF only were demonstrated in 103 patients (72%). Of 146 patients analyzed, 68 (47%) carried the DR15,DQ6,Dw2 haplotype. During the study period, 53 patients (36%) developed clinically definite MS. The presence of three or more MS-like MRI lesions as well as the presence of OB were strongly associated with the development of MS (p < 0.001). Also, Dw2 phenotype was related to the development of MS (p = 0.046). MRI and CSF studies in patients with ON give clinically important information regarding the risk for future MS." 
}, { "instance_id": "R34183xR34161", "comparison_id": "R34183", "paper_id": "R34161", "text": "Bt Cotton in India: Field Trial Results and Economic Projections Abstract The performance of Bt cotton in India is analyzed on the basis of field trial data from 2001. The amounts of pesticides applied during the trials were reduced to one-third of what was used in conventional cotton, while\u2013\u2013under severe pest pressure\u2013\u2013yield gains were 80%. Productivity effects are modeled econometrically with a damage-control specification. The first approval for the commercial cultivation of Bt hybrids was given in 2002. By 2005, the technology is expected to cover one-quarter of total Indian cotton area. Medium-term projections show sizeable welfare gains for the overall economy, with farmers being the main beneficiaries." }, { "instance_id": "R34183xR34153", "comparison_id": "R34183", "paper_id": "R34153", "text": "Five years of Bt cotton in China - the benefits continue Bt cotton is spreading very rapidly in China, in response to demand from farmers for technology that will reduce both the cost of pesticide applications and exposure to pesticides, and will free up time for other tasks. Based on surveys of hundreds of farmers in the Yellow River cotton-growing region in northern China in 1999, 2000 and 2001, over 4 million smallholders have been able to increase yield per hectare, and reduce pesticide costs, time spent spraying dangerous pesticides, and illnesses due to pesticide poisoning. The expansion of this cost-saving technology is increasing the supply of cotton and pushing down the price, but prices are still sufficiently high for adopters of Bt cotton to make substantial gains in net income." }, { "instance_id": "R34183xR34151", "comparison_id": "R34183", "paper_id": "R34151", "text": "Impact of Bt Cotton in China A sample of 283 cotton farmers in Northern China was surveyed in December 1999. Farmers that used cotton engineered to produce the Bacillus thuringiensis (Bt) toxin substantially reduced the use of pesticide without reducing the output/ha or quality of cotton. This resulted in substantial economic benefits for small farmers. Consumers did not benefit directly. Farmers obtained the major share of benefits and because of weak intellectual property rights very little went back to government research institutes or foreign firms that developed these varieties. Farmers using Bt cotton reported fewer pesticide poisonings than those using conventional cotton." }, { "instance_id": "R34183xR34181", "comparison_id": "R34183", "paper_id": "R34181", "text": "Genetic improvements in major US crops: the size and distribution of benefits The distribution of welfare gains of genetic improvements in major US crops is estimated using a world agricultural trade model. Multi-market welfare estimates were 75% larger than estimates based on the price-exogenous \u2018change in revenue\u2019 method frequently used by plant breeders. Annual benefits of these genetic improvements range from US$ 400\u2013600 million depending on the supply shift specification. Of this, 44\u201360% accrues to the US, 24\u201334% accrues to other developed countries. Developing and transitional economies capture 16\u201322% of the welfare gain. The global benefits of a one-time permanent increase in US yields are US$ 8.1 billion (discounted at 10%) and US$ 15.4 billion (discounted at 5%). 
Gains to consumers in developing and transitional economies range from US$ 6.1 billion (10% discount rate) to US$ 11.6 billion (5% discount rate)." }, { "instance_id": "R34183xR34179", "comparison_id": "R34183", "paper_id": "R34179", "text": "Potential Benefits of Agricultural Biotechnology: An Example from the Mexican Potato Sector The study analyzes ex ante the socioeconomic effects of transgenic virus resistance technology for potatoes in Mexico. All groups of potato growers could significantly gain from the transgenic varieties to be introduced, and the technology could even improve income distribution. Nonetheless, public support is needed to fully harness this potential. Different policy alternatives are tested within scenario calculations in order to supply information on how to optimize the technological outcome, both from an efficiency and an equity perspective. Transgenic disease resistance is a promising technology for developing countries. Providing these countries with better access to biotechnology should be given higher political priority." }, { "instance_id": "R34183xR34159", "comparison_id": "R34183", "paper_id": "R34159", "text": "Can GM-Technologies Help the Poor? The Impact of Bt Cotton in Makhathini Flats, KwaZulu-Natal Abstract The results of a two-year survey of smallholders in Makhathini Flats, KwaZulu-Natal show that farmers who adopted Bt cotton in 1999\u20132000 benefited according to all the measures used. Higher yields and lower chemical costs outweighed higher seed costs, giving higher gross margins. These measures showed negative benefits in 1998\u201399, which conflicts with continued adoption, but stochastic efficiency frontier estimation, which takes account of the labor saved, showed that adopters averaged 88% efficiency, as compared with 66% for the nonadopters. In 1999\u20132000, when late rains lowered yields, the gap widened to 74% for adopters and 48% for nonadopters." }, { "instance_id": "R34251xR34196", "comparison_id": "R34251", "paper_id": "R34196", "text": "On the adequacy of monetary arrangements in Sub-Saharan Africa We examine the economic rationale for monetary union(s) in Sub-Saharan Africa through the use of cluster analysis on a sample of 17 countries. The variables used stem from the theory of optimum currency areas and from the fear-of-floating literature. It is found that the existing CFA franc zone cannot be viewed as an optimum currency area: CEMAC and UEMOA countries do not belong to the same clusters, and a 'core' of the UEMOA can be defined on economic grounds. The results support the inclusion of the Gambia, Ghana and Sierra Leone in an extended UEMOA arrangement, or the creation of a separate monetary union with the 'core' of the UEMOA and the Gambia, rather than the creation of a monetary union around Nigeria. Finally, the creation of the West African Monetary Zone (WAMZ) around Nigeria is not supported by the data. Copyright Blackwell Publishing Ltd 2005." }, { "instance_id": "R34251xR34234", "comparison_id": "R34251", "paper_id": "R34234", "text": "A Short-Run Schumpeterian Trip to Embryonic African Monetary Zones With the spectre of the Euro crisis looming substantially large and scaring potential monetary unions, this study is a short-run trip to embryonic African monetary zones to assess the Schumpeterian thesis for positive spillovers of financial services on growth. 
Causality analysis is performed with seven financial development and three growth indicators in the proposed West African Monetary Zone (WAMZ) and East African Monetary Zone (EAMZ). The journey is promising for the EAMZ and lamentable for the WAMZ. Results of the EAMZ are broadly consistent with the traditional discretionary monetary policy arrangements while those of the WAMZ are in line with the non-traditional strand of regimes in which, policy instruments in the short-run cannot be used to offset adverse shocks to output. Policy implications are discussed." }, { "instance_id": "R34251xR34210", "comparison_id": "R34251", "paper_id": "R34210", "text": "Currency Unions in Africa: Is the Trade Effect Substantial Enough to Justify their Formation? Using estimates that currency unions double trade, we quantify the welfare effects of forming currency unions for the African regional economic communities and for the African Union as a whole. The potential increase in trade is shown to be small, and much less than that resulting from the adoption of the euro. Allowing for increased African trade does not overturn the negative assessment of African currency unions, due to asymmetries in countries' terms-of-trade shocks and their degree of fiscal discipline. Copyright 2007 The Author." }, { "instance_id": "R34251xR34187", "comparison_id": "R34251", "paper_id": "R34187", "text": "An evaluation of the viability of a single monetary zone in ECOWAS Currency convertibility and monetary integration activities of the Economic Community of West African States (ECOWAS) are directed at addressing the problems of multiple currencies and exchange rate changes that are perceived as stumbling blocks to regional integration. A real exchange rate (RER) variability model shows that ECOWAS is closer to a monetary union now than before. As expected, the implementation of structural adjustment programmes (SAPs) by various governments in the subregion has brought about a reasonable level of convergence. However, wide differences still exist between RER shocks facing CFA zone and non-CFA zone West African countries. Further convergence in economic policy and alternatives to dependence on revenues from taxes on international transactions are required for a stable region-wide monetary union in West Africa." }, { "instance_id": "R34251xR34249", "comparison_id": "R34251", "paper_id": "R34249", "text": "Is West African Monetary Zone (WAMZ) a common currency area? In this paper, we test whether the West African Monetary Zone (WAMZ) is a common currency area by using a structural vector autoregressive model to study the variance decomposition, impulse responses of key economic variables and linear dependence of the underlying structural shocks of the countries in the zone. The variance decomposition shows that the zone as a whole does not have common sources of shock, which is expected because of the diverse economic structures of these countries. The correlation of the structural shocks also shows that these countries respond asymmetrically to common supply, demand and monetary shocks and will therefore respond differently to a common monetary policy. It is therefore not in the interest of the individual countries to go into a monetary union now or in the near future unless the economies of these countries converge further." }, { "instance_id": "R34251xR34190", "comparison_id": "R34251", "paper_id": "R34190", "text": "Monetary union in West Africa: who might gain, who might lose, and why? 
We develop a model in which governments' financing needs exceed the socially optimal level because public resources are diverted to serve the narrow interests of the group in power. From a social welfare perspective, this results in undue pressure on the central bank to extract seigniorage. Monetary policy also suffers from an expansive bias, owing to the authorities' inability to precommit to price stability. Such a conjecture about the fiscal-monetary policy mix appears quite relevant in Africa, with deep implications for the incentives of fiscally heterogeneous countries to form a currency union. We calibrate the model to data for West Africa and use it to assess proposed ECOWAS monetary unions. Fiscal heterogeneity indeed appears critical in shaping regional currency blocs that would be mutually beneficial for all their members. In particular, Nigeria's membership in the configurations currently envisaged would not be in the interests of other ECOWAS countries unless it were accompanied by effective containment on Nigeria's financing needs." }, { "instance_id": "R34282xR34270", "comparison_id": "R34282", "paper_id": "R34270", "text": "Design and implementation of a common currency area in the East African community The East African Community (EAC) has fast-tracked its plans to create a single currency for the five countries making up the region, and hopes to conclude negotiations on a monetary union protocol by the end of 2012. While the benefits of lower transactions costs from a common currency may be significant, countries will also lose the ability to use monetary policy to respond to different shocks. Evidence presented shows that the countries differ in a number of respects, facing asymmetric shocks and different production structures. Countries have had difficulty meeting convergence criteria, most seriously as concerns fiscal deficits. Preparation for monetary union will require effective institutions for macroeconomic surveillance and enforcing fiscal discipline, and euro zone experience indicates that these institutions will be difficult to design and take a considerable time to become effective. This suggests that a timetable for monetary union in the EAC should allow for a substantial initial period of institution building. In order to have some visible evidence of the commitment to monetary union, in the meantime the EAC may want to consider introducing a common basket currency in the form of notes and coin, to circulate in parallel with national currencies." }, { "instance_id": "R34282xR34264", "comparison_id": "R34282", "paper_id": "R34264", "text": "A Fast-Track East African Community Monetary Union? Convergence Evidence from a Cointegration Analysis. There is a proposal for a fast-tracked approach to the African Community (EAC) monetary union. This paper uses cointegration techniques to determine whether the member countries would form a successful monetary union based on the long-run behavior of nominal and real exchange rates and monetary base. The three variables are each analyzed for co-movements among the five countries. The empirical results indicate only partial convergence for the variables considered, suggesting there could be substantial costs for the member countries from a fast-tracked process. This implies the EAC countries need significant adjustments to align their monetary policies and to allow a period of monetary policy coordination to foster convergence that will improve the chances of a sustainable currency union." 
}, { "instance_id": "R34316xR34303", "comparison_id": "R34316", "paper_id": "R34303", "text": "World Development Indicators 2015 The 1998 edition of world development indicators initiated a series of annual reports on progress toward the International development goals. In the foreword then, World Bank President James D. Wolfensohn recognized that 'by reporting regularly and systematically on progress toward the targets the international community has set for itself, the author will focus attention on the task ahead and make those responsible for advancing the development agenda accountable for results.' The same vision inspired world leaders to commit themselves to the millennium development goals. On this, the 10th anniversary of the millennium declaration, world development indicators 2010 focuses on progress toward the millennium development goals and the challenges of meeting them." }, { "instance_id": "R34316xR34299", "comparison_id": "R34316", "paper_id": "R34299", "text": "The Process of Monetary Integration in the SADC Region* The African Union has agreed, in principle, to implement monetary union and a single currency in Africa by 2021. This would be based upon the prior formation of regional monetary unions, including one in the SADC region. This article considers the economic prerequisites and implications for a monetary union and, in the light of this, whether a SADC monetary union is feasible. After reviewing the existing monetary union within SADC (the rand-based Common Monetary Area) and current SADC macroeconomic convergence initiatives, the article examines the extent to which key economic and monetary variables \u2013 inflation, interest rates and exchange rates \u2013 are converging within SADC. It concludes that there is a core \u2018convergence\u2019 group comprising the CMA countries \u2013 South Africa, Lesotho, Namibia and Swaziland \u2013 plus Botswana, Mauritius, Mozambique and Tanzania whose macroeconomic performance satisfies some of the criteria for monetary union. The remaining SADC countries \u2013 Angola, DRC, Malawi, Zambia and Zimbabwe \u2013 make up a \u2018non-converging\u2019 group that cannot yet be considered potential candidates for monetary union. However, even within the convergence group, countries remain far from satisfying the other prerequisites for monetary union, including significant intra-regional trade, and full capital and labour mobility. There are also major political constraints, making the AU monetary union proposals and timetable highly ambitious." }, { "instance_id": "R34411xR34394", "comparison_id": "R34411", "paper_id": "R34394", "text": "Fulminant small bowel enteritis: a rare complication of Clostridium difficile-associated disease To the Editor: A 54-year-old male was admitted to a community hospital with a 3-month history of diarrhea up to 8 times a day associated with bloody bowel motions and weight loss of 6 kg. He had no past medical history or family history of note. A clinical diagnosis of colitis was made and the patient underwent a limited colonoscopy which demonstrated continuous mucosal inflammation and ulceration that was most marked in the rectum. The clinical and endoscopic findings were suggestive of acute ulcerative colitis (UC), which was subsequently supported by histopathology. The patient was managed with bowel rest and intravenous steroids. However, he developed toxic megacolon on day 4 of his admission and underwent a total colectomy with end ileostomy. 
On the third postoperative day the patient developed a pyrexia of 39\u00b0C, a septic screen was performed, and the central venous line (CVP) was changed with the tip culturing methicillin-resistant Staphylococcus aureus (MRSA). Intravenous gentamycin was commenced and discontinued after 5 days, with the patient remaining afebrile and stable. On the tenth postoperative day the patient became tachycardic (pulse 110/min), diaphoretic (temperature of 39.4\u00b0C), hypotensive (diastolic of 60 mm Hg), and with a high volume nasogastric aspirates noted (2000 mL). A diagnosis of septic shock was considered although the etiology was unclear. The patient was resuscitated with intravenous fluids and transferred to the regional surgical unit for Intensive Care Unit monitoring and management. A computed tomography (CT) of the abdomen showed a marked inflammatory process with bowel wall thickening along the entire small bowel with possible intramural air, raising the suggestion of ischemic bowel (Fig. 1). However, on clinical assessment the patient elicited no signs of peritonism, his vitals were stable, he was not acidotic (pH 7.40), urine output was adequate, and his blood pressure was being maintained without inotropic support. Furthermore, his ileostomy appeared healthy and well perfused, although a high volume (2500 mL in the previous 18 hours), malodorous output was noted. A sample of the stoma output was sent for microbiological analysis. Given that the patient was not exhibiting evidence of peritonitis with normal vital signs, a conservative policy of fluid resuscitation was pursued with plans for exploratory laparotomy if he disimproved. Ileostomy output sent for microbiology assessment was positive for Clostridium difficile toxin A and B utilizing culture and enzyme immunoassays (EIA). Intravenous vancomycin, metronidazole, and rifampicin via a nasogastric tube were commenced in conjunction with bowel rest and total parenteral nutrition. The ileostomy output reduced markedly within 2 days and the patient\u2019s clinical condition improved. Follow-up culture of the ileostomy output was negative for C. difficile toxins. The patient was discharged in good health on full oral diet 12 days following transfer. Review of histopathology relating to the resected colon and subsequent endoscopic assessment of the retained rectum confirmed the initial diagnosis of UC, rather than a primary diagnosis of pseudomembranous colitis. Clostridium difficile is the leading cause of nosocomial diarrhea associated with antibiotic therapy and is almost always limited to the colonic mucosa.1 Small bowel enteritis secondary to C. difficile is exceedingly rare, with only 21 previous cases cited in the literature.2,3 Of this cohort, 18 patients had a surgical procedure at some timepoint prior to the development of C. difficile enteritis, while the remaining 3 patients had no surgical procedure prior to the infection. The time span between surgery and the development of enteritis ranged from 4 days to 31 years. Antibiotic therapy predisposed to the development of C. difficile enteritis in 20 of the cases. A majority of the patients (n 11) had a history of inflammatory bowel disease (IBD), with 8 having UC similar to our patient and the remaining 3 patients having a history of Crohn\u2019s disease. The etiology of small bowel enteritis remains unclear. C. 
difficile has been successfully isolated from the small bowel in both autopsy specimens and from jejunal aspirate of patients with chronic diarrhea, suggesting that the small bowel may act as a reservoir for C. difficile [4]. This would suggest that C. difficile could become pathogenic in the small bowel following a disruption in the small bowel flora in the setting of antibiotic therapy. This would be supported by the observation that the majority of cases reported occurred within 90 days of surgery with attendant disruption of bowel function. The prevalence of C. difficile-associated disease (CDAD) in patients with IBD is increasing. Issa et al [5] examined the impact of CDAD in a cohort of patients with IBD. They found that more than half of the patients with a positive culture for C. difficile were admitted and 20% required a colectomy. They reported that maintenance immunomodulator use and colonic involvement were independent risk factors for C. difficile infection in patients with IBD. The rising incidence of C. difficile in patients with IBD coupled with the use of increasingly potent immunomodulatory therapies means that clinicians must have a high index of suspicion." }, { "instance_id": "R34411xR34403", "comparison_id": "R34411", "paper_id": "R34403", "text": "Thinking beyond the colon-small bowel involvement in Clostridium difficile infection Small intestinal Clostridium difficile infection seems to be increasing in incidence. The spectrum of Clostridium difficile infection (CDI) has definitely expanded with small bowel involvement. These cases are more frequently reported in patients with inflammatory bowel disease (IBD) who have undergone total colectomy or patients with ileal anal pouch anastomosis. The most common presentation is increased ileostomy output with associated dehydration. High clinical suspicion, early recognition and appropriate treatment are the keys to successful resolution. The increase in the number of these patients may actually reflect the rising incidence of CDI in general or increasing virulence of the organism. Heightened public awareness and initiation of prompt preventive measures are the keystones to control of this infection. This disease is no longer limited to the colon and physicians should be educated to think beyond the colon in patients with CDI." }, { "instance_id": "R34411xR34384", "comparison_id": "R34411", "paper_id": "R34384", "text": "Fatal Clostridium difficile infection of the small bowel after complex colorectal surgery Pseudomembranous colitis is a well recognized complication of antibiotic use [1] and is due to disturbances of the normal colonic bacterial flora, resulting in overgrowth of Clostridium difficile. For recurrent or severe cases, oral vancomycin or metronidazole is the treatment of choice. Progression to acute fulminant colitis with systemic toxic effects occasionally occurs, especially in the elderly and in the immunosuppressed. Some of these patients may need surgical intervention for complications such as perforation [2]. Clostridium difficile is commonly regarded as a colonic pathogen and there are few reports of C. difficile enteritis with involvement of the small bowel (Table 1). Pseudomembrane formation caused by C.
difficile is generally restricted to the colon, with abrupt termination at the ileocaecal valve [1,3,5,8,9]. We report a case of fulminant and fatal C. difficile infection with pseudomembranes throughout the entire small bowel and colon in a patient following complex colorectal surgery. The relevant literature is reviewed." }, { "instance_id": "R34411xR34322", "comparison_id": "R34411", "paper_id": "R34322", "text": "Clostridium Difficile Infection\u2014An Unusual Cause of Refractory Pouchitis: Report of a Case PURPOSE: Ileal pouch-anal anastomosis is the surgical procedure of choice for selected patients with severe ulcerative colitis. Pouchitis is a common complication of this procedure, with most cases responding to treatment with metronidazole, possibly with the addition of 5-aminosalicylic acid drugs and steroids. Clostridium difficile can frequently colonize the colon after treatment with broad-spectrum antibiotics, giving rise to diarrhea or colitis. The aim of this report was to describe the first case of Clostridium difficile\u2013associated diarrhea manifest as pouchitis. METHODS: The management of refractory pouchitis in a 35-year-old female with Clostridium difficile toxin in the stool is described, followed by a literature review of small-intestinal Clostridium difficile infection. RESULTS: Assays for Clostridium difficile toxin on stool sent during an episode considered to be caused by idiopathic chronic pouchitis were positive, and treatment with oral vancomycin was initiated. The patient responded with a reduction in bowel frequency to twice daily, a successful discontinuation of her antidiarrheal medication, and a rapid increase in weight. A subsequent stool assay was negative for the toxin. CONCLUSIONS: Clostridium difficile infection can complicate pouchitis in patients with an ileal pouch-anal anastomosis and should be considered in patients who fail to respond to standard treatment, including metronidazole. In cases of refractory pouchitis, superadded infection with Clostridium difficile should be excluded before initiation of potent anti-inflammatory drugs." }, { "instance_id": "R34411xR34360", "comparison_id": "R34411", "paper_id": "R34360", "text": "Pseudomembranous enteritis after proctocolectomy: report of a case Intestinal pseudomembrane formation, sometimes a manifestation of antibiotic-associated diarrheal illnesses, is typically limited to the colon but rarely may affect the small bowel. A 56-year-old female taking antibiotics, who had undergone proctocolectomy for idiopathic inflammatory bowel disease, presented with septic shock and hypotension. A partial small-bowel resection revealed extensive mucosal pseudomembranes, which were cultured positive for Clostridium difficile. Intestinal drainage contents from an ileostomy were enzyme immunoassay positive for C. difficile toxin A. Gross and histopathologic features of the small-bowel resection specimen were similar to those characteristic of pseudomembranous colitis. The patient was treated successfully with metronidazole. These findings suggest a reservoir for C. difficile also exists in the small intestine and that conditions for enhanced mucosal susceptibility to C. difficile overgrowth may occur in the small-bowel environment of antibiotic-treated patients after colectomy. Pseudomembranous enteritis should be a consideration in those patients who present with purulent ostomy drainage, abdominal pain, fever, leukocytosis, or symptoms of septic shock."
}, { "instance_id": "R34411xR34372", "comparison_id": "R34411", "paper_id": "R34372", "text": "Isolation of Clostridium difficile from human jejunum: identification of a reservoir for disease? The possibility that the small intestine may represent a reservoir for Clostridium difficile was studied, using segments of human jejunum collected at necropsy. Our results (three of 100 specimens positive for C difficile culture) support the hypothesis that C difficile can be found in human jejunum and that it adheres to the normal mucosa as a resident bacterium. These findings suggest that gastrointestinal disease caused by C difficile has an endogenous origin." }, { "instance_id": "R34411xR34400", "comparison_id": "R34411", "paper_id": "R34400", "text": "Fulminant Clostridium difficile enteritis after proctocolectomy and ileal pouch-anal anastamosis Clostridium difficile ( C. difficile ) infection of the small bowel is very rare. The disease course is more severe than that of C. difficile colitis, and the mortality is high. We present a case of C. difficile enteritis in a patient with with ileal pouch-anal anastamosis (IPAA), and review previous case reports in order to better characterize this unusual condition." }, { "instance_id": "R34454xR34440", "comparison_id": "R34454", "paper_id": "R34440", "text": "Jian-Cheng Huang \u201cA New Facial Expression Recognition Method Based on Local Gabor Filter Bank and PCA plus LDA\u201d ses a facial expression recognition system based on Gabor feature using a novel r bank. Traditionally, a global Gabor filter bank with 5 frequencies and 8 ten used to extract the Gabor feature. A lot of time will be involved to extract mensions of such Gabor feature vector are prohibitively high. A novel local Gabor art of frequency and orientation parameters is proposed. In order to evaluate the he local Gabor filter bank, we first employed a two-stage feature compression LDA to select and compress the Gabor feature, then adopted minimum distance nize facial expression. Experimental results show that the method is effective for eduction and good recognition performance in comparison with traditional entire . The best average recognition rate achieves 97.33% for JAFFE facial expression abor filter bank, feature extraction, PCA, LDA, facial expression recognition. deliver rich information about human emotion and play an essential role in human In order to facilitate a more intelligent and natural human machine interface of new cts, automatic facial expression recognition [1][18][20] had been studied world en years, which has become a very active research area in computer vision and n. There are many approaches have been proposed for facial expression analysis ages and image sequences [12][18] in the literature. we focus on the recognition of facial expression from single digital images with feature extraction. A number of approaches have been developed for extracting by: Motorola Labs Research Foundation (No.303D804372), NSFC (No.60275005), GDNSF 105938)." }, { "instance_id": "R34454xR34438", "comparison_id": "R34454", "paper_id": "R34438", "text": "A Region Based Methodology for Facial Expression Recognition This work investigates the use of a point distribution model to detect prominent features in a face (eyes, brows, mouth, etc) and the subsequent facial feature extraction and facial expression classification into seven categories (anger, fear, surprise, happiness, disgust, neutral and sadness). 
A multi-scale and multi-orientation Gabor filter bank, designed in such a way so as to avoid redundant information, is used to extract facial features at selected locations of the prominent features of a face (fiducial points). A region-based approach is employed at the location of the fiducial points using different region sizes to allow some degree of flexibility and avoid artefacts due to incorrect automatic discovery of these points. A feed-forward back-propagation Artificial Neural Network is employed to classify the extracted feature vectors. The methodology is evaluated by forming 7 different regions, and the feature vector is extracted at the location of 20 fiducial points." }, { "instance_id": "R34454xR34442", "comparison_id": "R34454", "paper_id": "R34442", "text": "Analysis of Facial Expressions from Video Images using PCA\u201d Face recognition and expression analysis is one of the most challenging research areas in the field of computer vision. Though the face exhibits different facial expressions, which can be instantly recognized by human eyes, it is very difficult for a computer to extract and use the information content from these expressions. In this paper we present a method to analyze facial expressions from video images by focusing on regions such as the eyes and mouth, whose geometries are most affected by variation in facial expressions. Face regions are extracted from video images. Skin color detection is used for identifying the skin region, which is recognized using the Principal Component Analysis (PCA) method. Face images are projected onto a feature space and the weight vectors are compared to get minimum variation. The geometric coordinates of the areas that most strongly reflect expression are extracted for analyzing facial expressions. Our method works reliably even with faces that carry heavy expressions. This method exhibits a good performance ratio." }, { "instance_id": "R34605xR34567", "comparison_id": "R34605", "paper_id": "R34567", "text": "The link prediction problem for social networks Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures of the \"proximity\" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures." }, { "instance_id": "R34605xR34603", "comparison_id": "R34605", "paper_id": "R34603", "text": "Anonimos: an LP based approach for anonymizing weighted social network graphs The increasing popularity of social networks has initiated a fertile research area in information extraction and data mining. Anonymization of these social graphs is important to facilitate publishing these data sets for analysis by external entities. Prior work has concentrated mostly on node identity anonymization and structural anonymization. But with the growing interest in analyzing social networks as a weighted network, edge weight anonymization is also gaining importance. We present Ano\u0301nimos, a Linear Programming-based technique for anonymization of edge weights that preserves linear properties of graphs.
Such properties form the foundation of many important graph-theoretic algorithms such as the shortest paths problem, k-nearest neighbors, minimum cost spanning tree, and maximizing information spread. As a proof of concept, we apply Ano\u0301nimos to the shortest paths problem and its extensions, prove the correctness, analyze complexity, and experimentally evaluate it using real social network data sets. Our experiments demonstrate that Ano\u0301nimos anonymizes the weights, improves k-anonymity of the weights, and also scrambles the relative ordering of the edges sorted by weights, thereby providing robust and effective anonymization of the sensitive edge-weights. We also demonstrate the composability of different models generated using Ano\u0301nimos, a property that allows a single anonymized graph to preserve multiple linear properties." }, { "instance_id": "R34605xR34515", "comparison_id": "R34605", "paper_id": "R34515", "text": "Resisting structural re-identification in anonymized social networks We identify privacy risks associated with releasing network data sets and provide an algorithm that mitigates those risks. A network consists of entities connected by links representing relations such as friendship, communication, or shared activity. Maintaining privacy when publishing networked data is uniquely challenging because an individual's network context can be used to identify them even if other identifying information is removed. In this paper, we quantify the privacy risks associated with three classes of attacks on the privacy of individuals in networks, based on the knowledge used by the adversary. We show that the risks of these attacks vary greatly based on network structure and size. We propose a novel approach to anonymizing network data that models aggregate network structure and then allows samples to be drawn from that model. The approach guarantees anonymity for network entities while preserving the ability to estimate a wide variety of network measures with relatively little bias." }, { "instance_id": "R34605xR34593", "comparison_id": "R34605", "paper_id": "R34593", "text": "Extended k-anonymity models against sensitive attribute disclosure The p-sensitive k-anonymity model has been recently defined as a sophistication of k-anonymity. This new property requires that there be at least p distinct values for each sensitive attribute within the records sharing a set of quasi-identifier attributes. In this paper, we identify the situations when the p-sensitive k-anonymity property is not enough for sensitive attribute protection. To overcome the shortcoming of the p-sensitive k-anonymity principle, we propose two new enhanced privacy requirements, namely the p^+-sensitive k-anonymity and (p,\u03b1)-sensitive k-anonymity properties. These two newly introduced models target different perspectives. Instead of focusing on the specific values of sensitive attributes, the p^+-sensitive k-anonymity model concerns more about the categories that the values belong to. Although the (p,\u03b1)-sensitive k-anonymity model still puts the focus on the specific values, it includes an ordinal metric system to measure how much the specific sensitive attribute values contribute to each QI-group. We make a thorough theoretical analysis of the hardness in computing a data set that satisfies either p^+-sensitive k-anonymity or (p,\u03b1)-sensitive k-anonymity. We devise a set of algorithms using the idea of top-down specification, which is clearly illustrated in the paper.
We implement our algorithms on two real-world data sets and show in comprehensive experimental evaluations that the two newly introduced models are superior to the previous method in terms of effectiveness and efficiency." }, { "instance_id": "R34605xR34503", "comparison_id": "R34605", "paper_id": "R34503", "text": "The union-split algorithm and cluster-based anonymization of social networks Knowledge discovery on social network data can uncover latent social trends and produce valuable findings that benefit the welfare of the general public. A growing amount of research finds that social networks play a surprisingly powerful role in people's behaviors. Before the social network data can be released for research purposes, the data needs to be anonymized to prevent potential re-identification attacks. Most of the existing anonymization approaches were developed for relational data, and cannot be used to handle social network data directly. In this paper, we model social networks as undirected graphs and formally define privacy models and attack models for the anonymization problem, in particular an i-hop degree-based anonymization problem, i.e., the adversary's prior knowledge includes the target's degree and the degrees of neighbors within i hops from the target. We present two new and efficient clustering methods for undirected graphs: bounded t-means clustering and union-split clustering algorithms that group similar graph nodes into clusters with a minimum size constraint. These clustering algorithms are contributions beyond the specific social network problems studied and can be used to cluster general data types besides graph vertices. We also develop a simple-yet-effective inter-cluster matching method for anonymizing social networks by strategically adding and removing edges based on nodes' social roles. We carry out a series of experiments to evaluate the graph utilities of the anonymized social networks produced by our algorithms." }, { "instance_id": "R34605xR34540", "comparison_id": "R34605", "paper_id": "R34540", "text": "On identity disclosure in weighted graphs As an integral part of data security, identity disclosure is a major privacy breach, which reveals the identification of entities with certain background knowledge known by an adversary. Most recent studies on this problem focus on the protection of relational data or simple graph data (i.e., undirected, unweighted and acyclic). However, a weighted graph can introduce much more unique information than its simple version, which makes the disclosure easier. As more real-world graphs or social networks are released publicly, there is growing concern about privacy breaching for the entities involved. In this paper, we first formalize a general anonymizing model to deal with weight-related attacks, and discuss an efficient metric to quantify information loss incurred in the perturbation. Then we consider a very practical attack based on the sum of adjacent weights for each vertex, which is known as volume in graph theory. We also propose a complete solution for the weight anonymization problem to prevent a graph from a volume attack. Our approaches are efficient and practical, and have been validated by extensive experiments on both synthetic and real-world datasets."
}, { "instance_id": "R34605xR34498", "comparison_id": "R34605", "paper_id": "R34498", "text": "Anonymizing bipartite graph data using safe groupings Private data often comes in the form of associations between entities, such as customers and products bought from a pharmacy, which are naturally represented in the form of a large, sparse bipartite graph. As with tabular data, it is desirable to be able to publish anonymized versions of such data, to allow others to perform ad hoc analysis of aggregate graph properties. However, existing tabular anonymization techniques do not give useful or meaningful results when applied to graphs: small changes or masking of the edge structure can radically change aggregate graph properties. We introduce a new family of anonymizations, for bipartite graph data, called (k, l)-groupings. These groupings preserve the underlying graph structure perfectly, and instead anonymize the mapping from entities to nodes of the graph. We identify a class of \"safe\" (k, l)-groupings that have provable guarantees to resist a variety of attacks, and show how to find such safe groupings. We perform experiments on real bipartite graph data to study the utility of the anonymized version, and the impact of publishing alternate groupings of the same graph data. Our experiments demonstrate that (k, l)-groupings offer strong tradeoffs between privacy and utility." }, { "instance_id": "R34605xR34522", "comparison_id": "R34605", "paper_id": "R34522", "text": "Towards identity anonymization on graphs The proliferation of network data in various application domains has raised privacy concerns for the individuals involved. Recent studies show that simply removing the identities of the nodes before publishing the graph/social network data does not guarantee privacy. The structure of the graph itself, and in its basic form the degree of the nodes, can be revealing the identities of individuals. To address this issue, we study a specific graph-anonymization problem. We call a graph k-degree anonymous if for every node v, there exist at least k-1 other nodes in the graph with the same degree as v. This definition of anonymity prevents the re-identification of individuals by adversaries with a priori knowledge of the degree of certain nodes. We formally define the graph-anonymization problem that, given a graph G, asks for the k-degree anonymous graph that stems from G with the minimum number of graph-modification operations. We devise simple and efficient algorithms for solving this problem. Our algorithms are based on principles related to the realizability of degree sequences. We apply our methods to a large spectrum of synthetic and real datasets and demonstrate their efficiency and practical utility." }, { "instance_id": "R34663xR34660", "comparison_id": "R34663", "paper_id": "R34660", "text": "A framework for performance and data quality assessment of Radio Frequency IDentification (RFID) systems in health care settings OBJECTIVE RFID offers great opportunities to health care. Nevertheless, prior experiences also show that RFID systems have not been designed and tested in response to the particular needs of health care settings and might introduce new risks. The aim of this study is to present a framework that can be used to assess the performance of RFID systems particularly in health care settings. 
METHODS We developed a framework describing a systematic approach that can be used for assessing the feasibility of using an RFID technology in a particular healthcare setting; more specifically, for testing the impact of environmental factors on the quality of RFID generated data and vice versa. This framework is based on our own experiences with an RFID pilot implementation in an academic hospital in The Netherlands and a literature review concerning RFID test methods and current insights of RFID implementations in healthcare. The implementation of an RFID system within the blood transfusion chain inside a hospital setting was used as a showcase to explain the different phases of the framework. RESULTS The framework consists of nine phases, including an implementation development plan, RFID and medical equipment interference tests, and data accuracy and data completeness tests to be run in laboratory, simulated field and real field settings. CONCLUSIONS The potential risks that RFID technologies may bring to the healthcare setting should be thoroughly evaluated before they are introduced into a vital environment. The RFID performance assessment framework that we present can act as a reference model to start an RFID development, engineering, implementation and testing plan and, more specifically, to assess the potential risks of interference and to test the quality of the RFID generated data potentially influenced by physical objects in specific health care environments." }, { "instance_id": "R34663xR34649", "comparison_id": "R34663", "paper_id": "R34649", "text": "An evaluation of data quality in Canada\u2019s Continuing Care Reporting System (CCRS): secondary analyses of Ontario data submitted between 1996 and 2011 Background Evidence-informed decision making in health policy development and clinical practice depends on the availability of valid and reliable data. The introduction of interRAI assessment systems in many countries has provided valuable new information that can be used to support case mix based payment systems, quality monitoring, outcome measurement and care planning. The Continuing Care Reporting System (CCRS) managed by the Canadian Institute for Health Information has served as a data repository supporting national implementation of the Resident Assessment Instrument (RAI 2.0) in Canada for more than 15 years. The present paper aims to evaluate data quality for the CCRS using an approach that may be generalizable to comparable data holdings internationally. Methods Data from the RAI 2.0 implementation in Complex Continuing Care (CCC) hospitals/units and Long Term Care (LTC) homes in Ontario were analyzed using various statistical techniques that provide evidence for trends in validity, reliability, and population attributes. Time series comparisons included evaluations of scale reliability, patterns of associations between items and scales that provide evidence about convergent validity, and measures of changes in population characteristics over time. Results Data quality with respect to reliability, validity, completeness and freedom from logical coding errors was consistently high for the CCRS in both CCC and LTC settings. The addition of logic checks further improved data quality in both settings.
The only notable change of concern was a substantial inflation in the percentage of long term care home residents qualifying for the Special Rehabilitation level of the Resource Utilization Groups (RUG-III) case mix system after the adoption of that system as part of the payment system for LTC.ConclusionsThe CCRS provides a robust, high quality data source that may be used to inform policy, clinical practice and service delivery in Ontario. Only one area of concern was noted, and the statistical techniques employed here may be readily used to target organizations with data quality problems in that (or any other) area. There was also evidence that data quality was good in both CCC and LTC settings from the outset of implementation, meaning data may be used from the entire time series. The methods employed here may continue to be used to monitor data quality in this province over time and they provide a benchmark for comparisons with other jurisdictions implementing the RAI 2.0 in similar populations." }, { "instance_id": "R34663xR34639", "comparison_id": "R34663", "paper_id": "R34639", "text": "Structured electronic operative reporting: Comparison with dictation in kidney cancer surgery PURPOSE The purpose of this study was to evaluate the functionality of eKidney as a structured reporting tool in operative note generation. To do this, we compared completeness and timeliness of eKidney template-generated nephrectomy OR notes with standard narrative dictation. METHODS A group of academic uro-oncologists and medical informaticians at the University Health Network designed and adopted an electronic online, point-of-care clinical documentation tool, eCancerCare(Kidney) (eKidney) for kidney cancer patient care. The optimal components of clinic and operative note templates, including those for nephrectomy, were agreed upon by expert consensus of the uro-oncologists. Clinician nephrectomy OR reports were analyzed for completeness, comparing those generated in eKidney with conventionally dictated notes. Patterns of missing information from both dictated and eKidney-generated reports were analyzed. The procedure, note completion and transcription dates were recorded which generated time intervals between these events. The records of 189 procedures were included in the analysis. RESULTS Comparison of clinicians who used both note generation modalities, revealed a mean completion rate of 92% for eKidney/structured notes and 68% for dictated notes (p<0.0001). There was no significant difference in completion rates between attending staff and trainees (residents and fellows) (p=0.131). Most notes were dictated/entered on the day of surgery. Dictated notes were transcribed to EPR a median of 2 days after dictation, however roughly 30% of dictated notes took 5 days or more to get transcribed. All notes generated using eKidney were uploaded to the EPR immediately. LIMITATIONS Our study has three significant limitations. Firstly, our study was not randomized: physicians could elect to dictate or use eKidney. Secondly, we did not identify data from dictated notes that were not captured by eKidney. Third, we did not compare the time it took physicians to complete the fields in eKidney with the time it takes to dictate a note. CONCLUSIONS We have demonstrated that the use of structured reporting improves the completeness and timeliness of documentation in kidney cancer surgery. eKidney is an example of the power of templates in ensuring that important details of a procedure are recorded. 
Future studies looking at user satisfaction, and research and educational potential of eKidney would be valuable." }, { "instance_id": "R34663xR34611", "comparison_id": "R34663", "paper_id": "R34611", "text": "Structured data quality reports to improve EHR data quality OBJECTIVE To examine whether a structured data quality report (SDQR) and feedback sessions with practice principals and managers improve the quality of routinely collected data in EHRs. METHODS The intervention was conducted in four general practices participating in the Fairfield neighborhood electronic Practice Based Research Network (ePBRN). Data were extracted from their clinical information systems and summarised as a SDQR to guide feedback to practice principals and managers at 0, 4, 8 and 12 months. Data quality (DQ) metrics included completeness, correctness, consistency and duplication of patient records. Information on data recording practices, data quality improvement, and utility of SDQRs was collected at the feedback sessions at the practices. The main outcome measure was change in the recording of clinical information and level of meeting Royal Australian College of General Practice (RACGP) targets. RESULTS Birth date was 100% and gender 99% complete at baseline and maintained. DQ of all variables measured improved significantly (p<0.01) over 12 months, but was not sufficient to comply with RACGP standards. Improvement was greatest with allergies. There was no significant change in duplicate records. CONCLUSIONS SDQRs and feedback sessions support general practitioners and practice managers to focus on improving the recording of patient information. However, improved practice DQ, was not sufficient to meet RACGP targets. Randomised controlled studies are required to evaluate strategies to improve data quality and any associated improved safety and quality of care." }, { "instance_id": "R34663xR34631", "comparison_id": "R34663", "paper_id": "R34631", "text": "Does single-source create an added value? Evaluating the impact of introducing x4T into the clinical routine on workflow modifications, data quality and cost\u2013benefit OBJECTIVES The first objective of this study is to evaluate the impact of integrating a single-source system into the routine patient care documentation workflow with respect to process modifications, data quality and execution times in patient care as well as research documentation. The second one is to evaluate whether it is cost-efficient using a single-source system in terms of achieved savings in documentation expenditures. METHODS We analyzed the documentation workflow of routine patient care and research documentation in the medical field of pruritus to identify redundant and error-prone process steps. Based on this, we established a novel documentation workflow including the x4T (exchange for Trials) system to connect hospital information systems with electronic data capture systems for the exchange of study data. To evaluate the workflow modifications, we performed a before/after analysis as well as a time-motion study. Data quality was assessed by measuring completeness, correctness and concordance of previously and newly collected data. A cost-benefit analysis was conducted to estimate the savings using x4T per collected data element and the additional costs for introducing x4T. RESULTS The documentation workflow of patient care as well as clinical research was modified due to the introduction of the x4T system. 
After x4T implementation and workflow modifications, half of the redundant and error-prone process steps were eliminated. The generic x4T system allows direct transfer of routinely collected health care data into the x4T research database and avoids manual transcription steps. Since x4T was introduced in March 2012, the number of included patients has increased by about 1000 per year. The average entire documentation time per patient visit has been significantly decreased by 70.1% (from 1116\u00b1185 to 334\u00b183 s). After the introduction of the x4T system and associated workflow changes, the completeness of mandatory data elements rose from 82.2% to 100%. In the case of the pruritus research study, the additional costs for introducing the x4T system are \u20ac434.01 and the savings are 0.48ct per collected data element. So, with the assumption of a 5-year runtime and 82 collected data elements per patient, the number of documented patients has to be higher than 1102 to create a benefit. CONCLUSION Introduction of the x4T system into the clinical and research documentation workflow can optimize the data collection workflow in both areas. Redundant and cumbersome process steps can be eliminated in the research documentation, with the result of reduced documentation times as well as increased data quality. The usage of the x4T system is especially worthwhile in a study with a large amount of collected data or a high number of included patients." }, { "instance_id": "R34663xR34644", "comparison_id": "R34663", "paper_id": "R34644", "text": "HIS-based Kaplan-Meier plots - a single source approach for documenting and reusing routine survival information Background Survival or outcome information is important for clinical routine as well as for clinical research and should be collected completely, timely and precisely. This information is relevant for multiple usages including quality control, clinical trials, observational studies and epidemiological registries. However, the local hospital information system (HIS) does not support this documentation and therefore this data has to be generated by paper-based or spreadsheet methods, which can result in redundantly documented data. Therefore we investigated whether integrating the follow-up documentation of different departments in the HIS and reusing it for survival analysis can enable the physician to obtain survival curves in a timely manner and to avoid redundant documentation. Methods We analysed the current follow-up process of oncological patients in two departments (urology, haematology) with respect to different documentation forms. We developed a concept for comprehensive survival documentation based on a generic data model and implemented a follow-up form within the HIS of the University Hospital Muenster which is suitable for a secondary use of these data. We designed a query to extract the relevant data from the HIS and implemented Kaplan-Meier plots based on these data. To re-use this data, sufficient data quality is needed. We measured completeness of forms with respect to all tumour cases in the clinic and completeness of documented items per form, as incomplete information can bias results of the survival analysis. Results Based on the form analysis we discovered differences and concordances between both departments. We identified 52 attributes from which 13 were common (e.g. procedures and diagnosis dates) and were used for the generic data model. The electronic follow-up form was integrated in the clinical workflow.
Survival data was also retrospectively entered in order to perform survival and quality analyses on a comprehensive data set. Physicians are now able to generate timely Kaplan-Meier plots on current data. We analysed 1029 follow-up forms of 965 patients with survival information between 1992 and 2010. Completeness of forms was 60.2%, completeness of items ranges between 94.3% and 98.5%. Median overall survival time was 16.4 years; median event-free survival time was 7.7 years.ConclusionIt is feasible to integrate survival information into routine HIS documentation such that Kaplan-Meier plots can be generated directly and in a timely manner." }, { "instance_id": "R34663xR34625", "comparison_id": "R34663", "paper_id": "R34625", "text": "An importance-performance analysis of hospital information system attributes: A nurses' perspective PURPOSE Health workers have numerous concerns about hospital IS (HIS) usage. Addressing these concerns requires understanding the system attributes most important to their satisfaction and productivity. Following a recent HIS implementation, our objective was to identify priorities for managerial intervention based on user evaluations of the performance of the HIS attributes as well as the relative importance of these attributes to user satisfaction and productivity outcomes. PROCEDURES We collected data along a set of attributes representing system quality, data quality, information quality, and service quality from 154 nurse users. Their quantitative responses were analysed using the partial least squares approach followed by an importance-performance analysis. Qualitative responses were analysed using thematic analysis to triangulate and supplement the quantitative findings. MAIN FINDINGS Two system quality attributes (responsiveness and ease of learning), one information quality attribute (detail), one service quality attribute (sufficient support), and three data quality attributes (records complete, accurate and never missing) were identified as high priorities for intervention. CONCLUSIONS Our application of importance-performance analysis is unique in HIS evaluation and we have illustrated its utility for identifying those system attributes for which underperformance is not acceptable to users and therefore should be high priorities for intervention." }, { "instance_id": "R34663xR34622", "comparison_id": "R34663", "paper_id": "R34622", "text": "Optimizing Medical Data Quality Based on Multiagent Web Service Framework One of the most important issues in e-healthcare information systems is to optimize the medical data quality extracted from distributed and heterogeneous environments, which can extremely improve diagnostic and treatment decision making. This paper proposes a multiagent web service framework based on service-oriented architecture for the optimization of medical data quality in the e-healthcare information system. Based on the design of the multiagent web service framework, an evolutionary algorithm (EA) for the dynamic optimization of the medical data quality is proposed. The framework consists of two main components; first, an EA will be used to dynamically optimize the composition of medical processes into optimal task sequence according to specific quality attributes. Second, a multiagent framework will be proposed to discover, monitor, and report any inconstancy between the optimized task sequence and the actual medical records. To demonstrate the proposed framework, experimental results for a breast cancer case study are provided. 
Furthermore, to show the unique performance of our algorithm, a comparison with other works in the literature review will be presented." }, { "instance_id": "R34706xR34704", "comparison_id": "R34706", "paper_id": "R34704", "text": "A Min-Min Max-Min selective algorithm for grid task scheduling Today, the high cost of supercomputers on the one hand and the need for large-scale computational resources on the other hand have led to the use of networks of computational resources known as Grids. Numerous research groups in universities, research labs, and industries around the world are now working on a type of Grid called Computational Grids that enable aggregation of distributed resources for solving large-scale data-intensive problems in science, engineering, and commerce. Several institutions and universities have started research and teaching programs on Grid computing as part of their parallel and distributed computing curriculum. To better use the tremendous capabilities of this distributed system, effective and efficient scheduling algorithms are needed. In this paper, we introduce a new scheduling algorithm based on two conventional scheduling algorithms, Min-Min and Max-Min, to use their pros and, at the same time, cover their cons. It selects between the two algorithms based on the standard deviation of the expected completion time of tasks on resources. We evaluate our scheduling heuristic, the Selective algorithm, within a grid simulator called GridSim. We also compared our approach to its two basic heuristics. The experimental results show that the new heuristic can lead to significant performance gains for a variety of scenarios." }, { "instance_id": "R34706xR34690", "comparison_id": "R34706", "paper_id": "R34690", "text": "Honey bee behavior inspired load balancing of tasks in cloud computing environments Scheduling of tasks in cloud computing is an NP-hard optimization problem. Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing (HBB-LB), which aims to achieve a well-balanced load across virtual machines for maximizing the throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the amount of waiting time of the tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective when compared with existing algorithms. Our approach illustrates that there is a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue." }, { "instance_id": "R34706xR34674", "comparison_id": "R34706", "paper_id": "R34674", "text": "Join-Idle-Queue: A novel load balancing algorithm for dynamically scalable web services The prevalence of dynamic-content web services, exemplified by search and online social networking, has motivated an increasingly wide web-facing front end. Horizontal scaling in the Cloud is favored for its elasticity, and distributed design of load balancers is highly desirable. Existing algorithms with a centralized design, such as Join-the-Shortest-Queue (JSQ), incur high communication overhead for distributed dispatchers.
We propose a novel class of algorithms called Join-Idle-Queue (JIQ) for distributed load balancing in large systems. Unlike algorithms such as Power-of-Two, the JIQ algorithm incurs no communication overhead between the dispatchers and processors at job arrivals. We analyze the JIQ algorithm in the large system limit and find that it effectively results in a reduced system load, which produces 30-fold reduction in queueing overhead compared to Power-of-Two at medium to high load. An extension of the basic JIQ algorithm deals with very high loads using only local information of server load." }, { "instance_id": "R34706xR34682", "comparison_id": "R34706", "paper_id": "R34682", "text": "An Improved Max-Min Task-Scheduling Algorithm for Elastic Cloud In cloud computing, load balancing aids in minimizing resource consumption and avoids bottlenecks. Although many load balancing schemes have been presented, there is no scheme providing the elasticity in cloud computing. A Max-Min task-scheduling algorithm for load balance in the elastic cloud is proposed in this paper. To realize the load balancing, the proposed algorithm maintains a task status table to estimate the real-time load of virtual machines and the expected completion time of tasks, which can allocate the workload among nodes and realize the load balance. The extensive experiments demonstrate that the proposed Max-Min task-scheduling algorithm can improve the resource utilization as well as reduce the response time of tasks." }, { "instance_id": "R34845xR34843", "comparison_id": "R34845", "paper_id": "R34843", "text": "Variation in the Incidence of Hatching Failure in the Cedar Waxwing and Other Species Hatching failure due to embryonic death or sterility is an important aspect of the breeding biology of songbirds, yet it has received little attention. Many nesting studies present the overall success rate of eggs but lump the failures due to predation and other factors with those due to sterility and embryonic death. Two recent exceptions are studies by Ricklefs (1969) and Jehl (1971). During 1968 and 1969, I conducted studies (Rothstein 1971a) which provided data relevant to egg hatchability in several species. The Cedar Waxwing (Bombycilla cedrorum) yielded especially significant data because eggs in nests near farms suffered a greater incidence of sterility or embryonic death than eggs in nests from other areas. Field work was conducted" }, { "instance_id": "R34845xR34835", "comparison_id": "R34845", "paper_id": "R34835", "text": "Water Loss, Conductance, and Structure of Eggs of Pied Flycatchers during Egg Laying and Incubation Eggs of Pied Flycatchers (Ficedula hypoleuca) lose water at a slow, constant rate ($$\\dot{M}_{H_{2}O}$$) during egg laying but at a much higher, linear rate during incubation. Prepipping losses average 20% of the egg's mass when freshly laid. The watervapor conductance ($$G_{H_{2}O}$$) of the eggshell increases (linearly) fourfold between the time the egg is laid and the end of incubation. Linear increases in $$\\dot{M}_{H_{2}O}$$ and $$G_{H_{2}O}$$ have previously been associated with the eggs of large, precocial birds rather than those of altricial songbirds. At least 88% of the increase in $$\\dot{M}_{H_{2}O}$$ during the incubation period can be accounted for by changes in the egg's $$G_{H_{2}O}$$. 
The high water loss that occurs during the second half of incubation appears to result from shell erosion, which reduces pore length and shell thickness, and perhaps increases pore size, in the equatorial and sharp regions of the egg. Increases in $$\\dot{M}_{H_{2}O}$$ and $$G_{H_{2}O}$$ are not due to losses of cuticle from the egg's surface or to increases in pore number, the number of open pores in the shell, or egg temperature during incubation. Flycatcher eggshells have four types of pores. They are usually open, frequently bent, and evenly distributed over the egg's surface." }, { "instance_id": "R34845xR34824", "comparison_id": "R34845", "paper_id": "R34824", "text": "California Condors and DDE: a re-evaluation Eggshells of wild California Condors Gymnogyps californianus were much thinner in the 1960s, when DDT was used heavily, than during earlier pre-DDT and later reduced-DDT periods. However, eggshell thickness was more strongly linked to egg size (mass) than to measured levels of p,p\u2032DDE (the primary metabolite of DDT). Egg size was consistent within individual females and yielded correlation coefficients with shell thickness ranging from 0.49 to 0.97, depending on the period and the analysis assumptions used. Measured DDE levels, although often substantial, provided only a weak correlation (r = \u22120.33) with shell thickness. In part, the absence of a strong DDE/thickness correlation may have been an artefact of losses of DDE from fragment membranes over time. Nevertheless, the extreme (28\u201329%) shell thinning of the 1960s was not linked with clearly increased egg-breakage or nest-failure rates, and one female of the 1980s with 25.6% shell thinning was the most productive female of her era. Some eggs with over 30% shell thinning hatched successfully, and broken eggs closely resembled hatched eggs in shell thickness, strongly suggesting that shell thinning was not an important cause of breakage. The apparent absence of harmful effects from the extreme shell thinning of the 1960s may have resulted from (1) the fact that historic pre-DDT condor eggs were on average 16.7% thicker shelled for their mass than predicted by the overall egg mass/shell thickness curve for birds, and (2) a possible egg-size decline or sampling bias toward small-egged females in the 1960s. That DDE was an important cause of the Condor's decline appears unlikely from overall available data." }, { "instance_id": "R34845xR34803", "comparison_id": "R34845", "paper_id": "R34803", "text": "Incubation Water Loss in King Penguin Egg. I. Change in Egg and Brood Pouch Parameters Water loss and thermal relations of the king penguin egg and its microenvironment (brood pouch) were studied under natural conditions. Despite the low ambient humidity (Pamb = 6.8 Torr, Tamb = 6.9\u00b0 C), diffusive water loss during the prolonged 53-d incubation was within the range for other bird eggs, being 13% of the mean 302-g initial egg mass. Daily water loss increased throughout incubation to 1.54 times the initial value, while the low initial water vapor conductance of the shell, Gsb = 28.1 mg \u00b7 (d \u00b7 Torr)^\u22121, increased by 16%. The increase in Gsb was correlated with partial loss of the organic shell cover by contact with the moist brood pouch. Brood patch temperature remained constant at 38.
2\u00b0 C, while egg core temperature increased throughout incubation by 3\u00b0C The accompanying decrease in the vertical gradient of temperature inside the egg was explained mainly by a warming from 28\u00b0 to 35\u00b0C of the foot region in contact with the shell. The influence of the embryo on egg temperature increase appears secondary, as the fertile eggs at day 50 had a core temperature only 0.7\u00b0C higher than unfertile eggs." }, { "instance_id": "R36153xR36114", "comparison_id": "R36153", "paper_id": "R36114", "text": "Estimation of the epidemic properties of the 2019 novel coronavirus: A mathematical modeling study Abstract Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10 th January \u2013 8 th February. We analyzed the data for the period before the closure of Wuhan city (10 th January \u2013 23 rd January) and the post-closure period (23 rd January \u2013 8 th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63 \u2013 5.13), dropping to 3.41 (95% CI: 3.16 \u2013 3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09 \u2013 3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus. Funding National Natural Science Foundation of China, China Medical Board, National Science and Technology Major Project of China" }, { "instance_id": "R36153xR36151", "comparison_id": "R36153", "paper_id": "R36151", "text": "Effects of voluntary event cancellation and school closure as countermeasures against COVID-19 outbreak in Japan Abstract Background To control the COVID-19 outbreak in Japan, sports and entertainment events were canceled and schools were closed throughout Japan from February 26 through March 19. That policy has been designated as voluntary event cancellation and school closure (VECSC). Object This study assesses VECSC effectiveness based on predicted outcomes. Method: A simple susceptible\u2013infected\u2013recovery model was applied to data of patients with symptoms in Japan during January 14 through March 25. The respective reproduction numbers were estimated before VECSC (R), during VECSC (R e ), and after VECSC (R a ). Results Results suggest R before VECSC as 1.987 [1.908, 2.055], R e during VECSC as 1.122 [0.980, 1.260], and R a after VECSC as 3.086 [2.529, 3.739]. 
Discussion and Conclusion Results demonstrated that VECSC can reduce COVID-19 infectiousness considerably, but the value of R rose to exceed 2.5 after VECSC." }, { "instance_id": "R36153xR36146", "comparison_id": "R36153", "paper_id": "R36146", "text": "COVID-19 outbreak in Algeria: A mathematical model to predict the incidence Abstract Introduction Since December 29, 2019, a pandemic of novel coronavirus-infected pneumonia named COVID-19 has spread from Wuhan, China, and has led to 254 996 confirmed cases until midday March 20, 2020. Sporadic cases have been imported worldwide; in Algeria, the first case, reported on February 25, 2020, was imported from Italy, and then the epidemic has spread to other parts of the country very quickly, with 139 confirmed cases until March 21, 2020. Methods It is crucial to estimate the growth in case numbers in the early stages of the outbreak; to this end, we have implemented the Alg-COVID-19 Model, which allows us to predict the incidence and the reproduction number R0 in the coming months in order to help decision makers. The initial equation of the Alg-COVID-19 Model (equation 1) estimates the cumulative cases at prediction time t using two parameters: the reproduction number R0 and the serial interval SI. Results We found R0 = 2.55 based on the actual incidence over the first 25 days, using the serial interval SI = 4.4 and the prediction time t = 26. The estimated herd immunity is HI = 61%. Also, the COVID-19 incidence predicted with the Alg-COVID-19 Model closely fits the actual incidence during the first 26 days of the epidemic in Algeria (Fig. 1.A), which allows us to use it. According to the Alg-COVID-19 Model, the number of cases will exceed 5000 on the 42nd day (April 7th) and it will double to 10000 on the 46th day of the epidemic (April 11th); thus, the exponential phase will begin (Table 1; Fig. 1.B) and the incidence will increase continuously until reaching a herd immunity of 61% unless serious preventive measures are considered. Discussion This model is valid only when the majority of the population is vulnerable to COVID-19 infection; however, it can be updated to fit new parameter values." }, { "instance_id": "R36153xR36149", "comparison_id": "R36153", "paper_id": "R36149", "text": "Analysis of the epidemic growth of the early 2019-nCoV outbreak using internationally confirmed cases Abstract Background On January 23, 2020, a quarantine was imposed on travel in and out of Wuhan, where the 2019 novel coronavirus (2019-nCoV) outbreak originated from. Previous analyses estimated the basic epidemiological parameters using symptom onset dates of the confirmed cases in Wuhan and outside China. Methods We obtained information on the 46 coronavirus cases who traveled from Wuhan before January 23 and have been subsequently confirmed in Hong Kong, Japan, Korea, Macau, Singapore, and Taiwan as of February 5, 2020. Most cases have detailed travel history and disease progress. Compared to previous analyses, an important distinction is that we used this data to informatively simulate the infection time of each case using the symptom onset time, previously reported incubation interval, and travel history. We then fitted a simple exponential growth model with adjustment for the January 23 travel ban to the distribution of the simulated infection time. We used a Bayesian analysis with diffuse priors to quantify the uncertainty of the estimated epidemiological parameters. We performed sensitivity analysis to different choices of incubation interval and the hyperparameters in the prior specification.
Results We found that our model provides good fit to the distribution of the infection time. Assuming the travel rate to the selected countries and regions is constant over the study period, we found that the epidemic was doubling in size every 2.9 days (95% credible interval [CrI], 2 days\u20144.1 days). Using previously reported serial interval for 2019-nCoV, the estimated basic reproduction number is 5.7 (95% CrI, 3.4\u20149.2). The estimates did not change substantially if we assumed the travel rate doubled in the last 3 days before January 23, when we used previously reported incubation interval for severe acute respiratory syndrome (SARS), or when we changed the hyperparameters in our prior specification. Conclusions Our estimated epidemiological parameters are higher than an earlier report using confirmed cases in Wuhan. This indicates the 2019-nCoV could have been spreading faster than previous estimates." }, { "instance_id": "R38484xR23499", "comparison_id": "R38484", "paper_id": "R23499", "text": "Climate change projections using the IPSL-CM5 Earth System Model: from CMIP3 to CMIP5 We present the global general circulation model IPSL-CM5 developed to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5). This model includes an interactive carbon cycle, a representation of tropospheric and stratospheric chemistry, and a comprehensive representation of aerosols. As it represents the principal dynamical, physical, and bio-geochemical processes relevant to the climate system, it may be referred to as an Earth System Model. However, the IPSL-CM5 model may be used in a multitude of configurations associated with different boundary conditions and with a range of complexities in terms of processes and interactions. This paper presents an overview of the different model components and explains how they were coupled and used to simulate historical climate changes over the past 150 years and different scenarios of future climate change. A single version of the IPSL-CM5 model (IPSL-CM5A-LR) was used to provide climate projections associated with different socio-economic scenarios, including the different Representative Concentration Pathways considered by CMIP5 and several scenarios from the Special Report on Emission Scenarios considered by CMIP3. Results suggest that the magnitude of global warming projections primarily depends on the socio-economic scenario considered, that there is potential for an aggressive mitigation policy to limit global warming to about two degrees, and that the behavior of some components of the climate system such as the Arctic sea ice and the Atlantic Meridional Overturning Circulation may change drastically by the end of the twenty-first century in the case of a no climate policy scenario. Although the magnitude of regional temperature and precipitation changes depends fairly linearly on the magnitude of the projected global warming (and thus on the scenario considered), the geographical pattern of these changes is strikingly similar for the different scenarios. The representation of atmospheric physical processes in the model is shown to strongly influence the simulated climate variability and both the magnitude and pattern of the projected climate changes." 
}, { "instance_id": "R38484xR23485", "comparison_id": "R38484", "paper_id": "R23485", "text": "The simulation of SST, sea ice extents and ocean heat transports in a version of the Hadley Centre coupled model without flux adjustments Abstract Results are presented from a new version of the Hadley Centre coupled model (HadCM3) that does not require flux adjustments to prevent large climate drifts in the simulation. The model has both an improved atmosphere and ocean component. In particular, the ocean has a 1.25\u00b0 \u00d7 1.25\u00b0 degree horizontal resolution and leads to a considerably improved simulation of ocean heat transports compared to earlier versions with a coarser resolution ocean component. The model does not have any spin up procedure prior to coupling and the simulation has been run for over 400 years starting from observed initial conditions. The sea surface temperature (SST) and sea ice simulation are shown to be stable and realistic. The trend in global mean SST is less than 0.009 \u00b0C per century. In part, the improved simulation is a consequence of a greater compatibility of the atmosphere and ocean model heat budgets. The atmospheric model surface heat and momentum budget are evaluated by comparing with climatological ship-based estimates. Similarly the ocean model simulation of poleward heat transports is compared with direct ship-based observations for a number of sections across the globe. Despite the limitations of the observed datasets, it is shown that the coupled model is able to reproduce many aspects of the observed heat budget." }, { "instance_id": "R38484xR23408", "comparison_id": "R38484", "paper_id": "R23408", "text": "Simulating present-day climate with the INMCM4.0 coupled model of the atmospheric and oceanic general circulations The INMCM3.0 climate model has formed the basis for the development of a new climate-model version: the INMCM4.0. It differs from the previous version in that there is an increase in its spatial resolution and some changes in the formulation of coupled atmosphere-ocean general circulation models. A numerical experiment was conducted on the basis of this new version to simulate the present-day climate. The model data were compared with observational data and the INMCM3.0 model data. It is shown that the new model adequately reproduces the most significant features of the observed atmospheric and oceanic climate. This new model is ready to participate in the Coupled Model Intercomparison Project Phase 5 (CMIP5), the results of which are to be used in preparing the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC)." }, { "instance_id": "R41148xR41136", "comparison_id": "R41148", "paper_id": "R41136", "text": "Toward cost-effective manufacturing of silicon solar cells: electrodeposition of high-quality Si films in a CaCl 2 -based molten salt Electrodeposition of Si films from a Si-containing electrolyte is a cost-effective approach for the manufacturing of solar cells. Proposals relying on fluoride-based molten salts have suffered from low product quality due to difficulties in impurity control. Here we demonstrate the successful electrodeposition of high-quality Si films from a CaCl2 -based molten salt. Soluble SiIV -O anions generated from solid SiO2 are electrodeposited onto a graphite substrate to form a dense film of crystalline Si. Impurities in the deposited Si film are controlled at low concentrations (both B and P are less than 1 ppm). 
In the photoelectrochemical measurements, the film shows p-type semiconductor character and a large photocurrent. A p-n junction fabricated from the deposited Si film exhibits clear photovoltaic effects. This study represents the first step to the ultimate goal of developing a cost-effective manufacturing process for Si solar cells based on electrodeposition." }, { "instance_id": "R41148xR41122", "comparison_id": "R41148", "paper_id": "R41122", "text": "Direct Electrolytic Reduction of Solid Silicon Dioxide in molten LiCl-KCl-CaCl2 at 773 K We investigated electrolytic reduction of solid SiO2 by a contacting electrode method in molten LiCl-KCl-CaCl2 at 773 K. The results of cyclic voltammetry indicated that reduction of SiO2 occurs at potentials more negative than 0.85 V (vs Ca2+, Li+/Ca-Li). Samples were prepared by potentiostatic electrolysis for 2 h at 0.25, 0.50, 0.70, and 1.00 V. Energy dispersive X-ray analysis and Raman spectra clarified that the reduction products at 0.50 and 0.70 V are composed of amorphous Si and microcrystalline Si. Scanning electron microscope (SEM) observations revealed that the morphology of the produced Si is spongelike with a particle size smaller than 50 nm. The mechanism of Si formation was discussed by comparing the SEM observations and the Raman spectra of the Si samples prepared at 773 and 1123 K. The reduction mechanism of the direct electrolytic reduction of SiO2 at lower temperature was also discussed." }, { "instance_id": "R41148xR41128", "comparison_id": "R41148", "paper_id": "R41128", "text": "Verification and implications of the dissolution-electrodeposition process during the electro-reduction of solid silica in molten CaCl2 With the verification of the existence of the dissolution-electrodeposition mechanism during the electro-reduction of solid silica in molten CaCl2, the present study not only provides direct scientific support for the controllable electrolytic extraction of nanostructured silicon in molten salts but it also opens an avenue to a continuous silicon extraction process via the electro-deposition of dissolved silicates in molten CaCl2. In addition, the present study increases the general understanding of the versatile material extraction route via the electro-deoxidization process of solid oxides in molten salts, which also provokes reconsiderations on the electrochemistry of insulating compounds." }, { "instance_id": "R41148xR41132", "comparison_id": "R41148", "paper_id": "R41132", "text": "The use of silicon wafer barriers in the electrochemical reduction of solid silica to form silicon in molten salts Nowadays, silicon is the most critical element in solar cells and/or solar chips. Silicon with 98 to 99% Si, i.e. metallurgical grade, requires further refinement/purification processes such as zone refining [1,2] and/or the Siemens process [3] to upgrade it for solar applications. A promising method, based on straightforward electrochemical reduction of oxides by the FFC Cambridge Process [4], was adopted to form silicon from porous SiO2 pellets in molten CaCl2 and a CaCl2-NaCl salt mixture [5]. It was reported that the silicon powder was contaminated by iron and nickel emanating from the stainless steel cathode, which consequently disqualified the product from solar applications. SiO2 pellets sintered at 1300\u00b0C for 4 hours were placed in between pure silicon wafer plates to overcome the contamination problem. Encouraging results indicated a reliable alternative method of direct solar-grade silicon production for the expanding solar energy field."
}, { "instance_id": "R41148xR41138", "comparison_id": "R41148", "paper_id": "R41138", "text": "Electrodeposition of crystalline and photoactive silicon directly from silicon dioxide nanoparticles in molten CaCl2 Silicon is a widely used semiconductor for electronic and photovoltaic devices because of its earth-abundance, chemical stability, and tunable electrical properties by doping. Therefore, the production of pure silicon films by simple and inexpensive methods has been the subject of many investigations. The desire for lower-cost silicon-based solar photovoltaic devices has encouraged the quest for solar-grade silicon production through processes alternative to the currently used Czochralski process or other processes. Electrodeposition is one of the least expensive methods for fabricating films of metals and semiconductors. Electrodeposition of silicon has been studied for over 30 years, in various solution media such as molten salts (LiF-KF-K2SiF6 at 745 \u00b0C and BaO-SiO2-BaF2 at 1465 \u00b0C), organic solvents (acetonitrile, tetrahydrofuran), and room-temperature ionic liquids. Recently, the direct electrochemical reduction of bulk solid silicon dioxide in a CaCl2 melt was reported. [7] A key factor for silicon electrodeposition is the purity of the silicon deposit because Si for use in photovoltaic devices is solar-grade silicon (> 99.9999% or 6N) and its grade is even higher in electronic devices (electronic-grade silicon or 11N). In most cases, the electrodeposited silicon does not meet these requirements without further purification and, to our knowledge, none have been shown to exhibit a photoresponse. In fact, silicon electrodeposition is not as straightforward as metal deposition, since the deposited semiconductor layer is resistive at room temperature, which complicates electron transfer through the deposit. In many cases, for example in room-temperature aprotic solvents, the deposited silicon acts as an insulating layer and prevents a continuous deposition reaction. In some cases, the silicon deposit contains a high level of impurities (> 2%). Moreover, the nucleation and growth of silicon requires a large amount of energy. The deposition is made even more challenging if the Si precursor is SiO2, which is a very resistive material. We previously reported the electrochemical formation of silicon on molybdenum from a CaCl2 molten salt (850 \u00b0C) containing a SiO2 nanoparticle (NP with a diameter of 5\u201315 nm) suspension by applying a constant reduction current. However, this Si film did not show photoactivity. Here we show the electrodeposition of photoactive crystalline silicon directly from SiO2 NPs in a CaCl2 molten salt on a silver electrode that shows a clear photoresponse. To the best of our knowledge, this is the first report of the direct electrodeposition of photoactive silicon. The electrochemical reduction and the cyclic voltammetry (CV) of SiO2 were investigated as described previously. [8] In this study, we found that the replacement of the Mo substrate by silver leads to a dramatic change in the properties of the silicon deposit. The silver substrate exhibited essentially the same electrochemical and CV behavior as other metal substrates, that is, a high reduction current for SiO2 at negative potentials of \u22121.0 V with the development of a new redox couple near \u22120.65 V vs. a graphite quasireference electrode (QRE) (Figure 1a).
Figure 1b shows a change in the reduction current as a function of the reduction potential, and the optical images of silver electrodes before and after the electrolysis, which display a dark gray-colored deposit after the reduction. Figure 2 shows SEM images of silicon deposits grown potentiostatically (\u22121.25 V vs. graphite QRE) on silver. The amount of silicon deposit increased with the deposition time, and the deposit finally covered the whole silver surface (Figure 2). High-magnification images show that the silicon deposit is not a film but rather platelets or clusters of silicon crystals of domain sizes in the range of tens of micrometers. The average height of the platelets was around 25 \u00b5m after a 10000 s deposition (Figure 2b) and 45 \u00b5m after a 20000 s deposition (Figure 2c), respectively. The edges of the silicon crystals were clearly observed. Contrary to other substrates, silver enhanced the crystallization of silicon produced from silicon dioxide reduction, and it is known that silver induces the crystallization of amorphous silicon. Energy-dispersive spectrometry (EDS) elemental mapping (images shown in the bottom row of Figure 2) revealed that small silver islands exist on top of the silicon deposits, which we think is closely related to the growth mechanism of silicon on silver. The EDS spectrum of the silicon deposit (Figure 3a) suggested that the deposited silicon was quite pure and the amounts of other elements such as C, Ca, and Cl were below the detection limit (about 0.1 atom%). Since the oxygen signal was probably from the native oxide formed on exposure of the deposit to air and silicon does not form an alloy with silver, the purity of silicon was estimated to be at least 99.9 atom%. The successful reduction of Si(4+) in silicon dioxide to elemental silicon (Si) was confirmed by X-ray photoelectron spectroscopy (XPS) of the silicon deposit." }, { "instance_id": "R41466xR41016", "comparison_id": "R41466", "paper_id": "R41016", "text": "Unique epidemiological and clinical features of the emerging 2019 novel coronavirus pneumonia (COVID-19) implicate special control measures By 27 February 2020, the outbreak of coronavirus disease 2019 (COVID\u201019) had caused 82 623 confirmed cases and 2858 deaths globally, more than severe acute respiratory syndrome (SARS) (8273 cases, 775 deaths) and Middle East respiratory syndrome (MERS) (1139 cases, 431 deaths) caused in 2003 and 2013, respectively. COVID\u201019 has spread to 46 countries internationally. The total fatality rate of COVID\u201019 is estimated at 3.46% so far, based on published data from the Chinese Center for Disease Control and Prevention (China CDC). The average incubation period of COVID\u201019 is around 6.4 days, ranging from 0 to 24 days. The basic reproductive number (R0) of COVID\u201019 ranges from 2 to 3.5 at the early phase regardless of different prediction models, which is higher than SARS and MERS. A study from China CDC showed that the majority of patients (80.9%) were considered to have asymptomatic or mild pneumonia but released large amounts of viruses at the early phase of infection, which posed enormous challenges for containing the spread of COVID\u201019. Nosocomial transmission was another severe problem.
A total of 3019 health workers were infected by 12 February 2020, which accounted for 3.83% of total number of infections, and extremely burdened the health system, especially in Wuhan. Limited epidemiological and clinical data suggest that the disease spectrum of COVID\u201019 may differ from SARS or MERS. We summarize latest literatures on genetic, epidemiological, and clinical features of COVID\u201019 in comparison to SARS and MERS and emphasize special measures on diagnosis and potential interventions. This review will improve our understanding of the unique features of COVID\u201019 and enhance our control measures in the future." }, { "instance_id": "R41466xR41008", "comparison_id": "R41466", "paper_id": "R41008", "text": "Communicating the Risk of Death from Novel Coronavirus Disease (COVID-19) To understand the severity of infection for a given disease, it is common epidemiological practice to estimate the case fatality risk, defined as the risk of death among cases. However, there are three technical obstacles that should be addressed to appropriately measure this risk. First, division of the cumulative number of deaths by that of cases tends to underestimate the actual risk because deaths that will occur have not yet observed, and so the delay in time from illness onset to death must be addressed. Second, the observed dataset of reported cases represents only a proportion of all infected individuals and there can be a substantial number of asymptomatic and mildly infected individuals who are never diagnosed. Third, ascertainment bias and risk of death among all those infected would be smaller when estimated using shorter virus detection windows and less sensitive diagnostic laboratory tests. In the ongoing COVID-19 epidemic, health authorities must cope with the uncertainty in the risk of death from COVID-19, and high-risk individuals should be identified using approaches that can address the abovementioned three problems. Although COVID-19 involves mostly mild infections among the majority of the general population, the risk of death among young adults is higher than that of seasonal influenza, and elderly with underlying comorbidities require additional care." }, { "instance_id": "R44930xR44819", "comparison_id": "R44930", "paper_id": "R44819", "text": "Report 3: Transmissibility of 2019-nCoV. 2020. WHO Collaborating Centre for Infectious Disease Modelling, MRC Centre for Global Infectious Disease Analysis Self-sustaining human-to-human transmission of the novel coronavirus (2019-nCov) is the only plausible explanation of the scale of the outbreak in Wuhan. We estimate that, on average, each case infected 2.6 (uncertainty range: 1.5-3.5) other people up to 18 January 2020, based on an analysis combining our past estimates of the size of the outbreak in Wuhan with computational modelling of potential epidemic trajectories. This implies that control measures need to block well over 60% of transmission to be effective in controlling the outbreak. It is likely, based on the experience of SARS and MERS-CoV, that the number of secondary cases caused by a case of 2019-nCoV is highly variable \u2013 with many cases causing no secondary infections, and a few causing many. Whether transmission is continuing at the same rate currently depends on the effectiveness of current control measures implemented in China and the extent to which the populations of affected areas have adopted risk-reducing behaviours. 
In the absence of antiviral drugs or vaccines, control relies upon the prompt detection and isolation of symptomatic cases. It is unclear at the current time whether this outbreak can be contained within China; uncertainties include the severity spectrum of the disease caused by this virus and whether cases with relatively mild symptoms are able to transmit the virus efficiently. Identification and testing of potential cases need to be as extensive as is permitted by healthcare and diagnostic testing capacity \u2013 including the identification, testing and isolation of suspected cases with only mild to moderate disease (e.g. influenza-like illness), when logistically feasible." }, { "instance_id": "R44930xR44879", "comparison_id": "R44930", "paper_id": "R44879", "text": "Estimation of the reproductive number of novel coronavirus (COVID-19) and the probable outbreak size on the Diamond Princess cruise ship: A data-driven analysis Abstract Backgrounds Up to February 16, 2020, 355 cases have been confirmed as having COVID-19 infection on the Diamond Princess cruise ship. It is of crucial importance to estimate the reproductive number (R0) of the novel virus in the early stage of outbreak and make a prediction of daily new cases on the ship. Method We fitted the reported serial interval (mean and standard deviation) with a gamma distribution and applied \u201cearlyR\u201d package in R to estimate the R0 in the early stage of COVID-19 outbreak. We applied \u201cprojections\u201d package in R to simulate the plausible cumulative epidemic trajectories and future daily incidence by fitting the data of existing daily incidence, a serial interval distribution, and the estimated R0 into a model based on the assumption that daily incidence obeys approximately Poisson distribution determined by daily infectiousness. Results The Maximum-Likelihood (ML) value of R0 was 2.28 for COVID-19 outbreak at the early stage on the ship. The median with 95% confidence interval (CI) of R0 values was 2.28 (2.06\u20132.52) estimated by the bootstrap resampling method. The probable number of new cases for the next ten days would gradually increase, and the estimated cumulative cases would reach 1514 (1384\u20131656) at the tenth day in the future. However, if R0 value was reduced by 25% and 50%, the estimated total number of cumulative cases would be reduced to 1081 (981\u20131177) and 758 (697\u2013817), respectively. Conclusion The median with 95% CI of R0 of COVID-19 was about 2.28 (2.06\u20132.52) during the early stage experienced on the Diamond Princess cruise ship. The future daily incidence and probable outbreak size is largely dependent on the change of R0. Unless strict infection management and control are taken, our findings indicate the potential of COVID-19 to cause greater outbreak on the ship." }, { "instance_id": "R44930xR44743", "comparison_id": "R44930", "paper_id": "R44743", "text": "Estimation of the epidemic properties of the 2019 novel coronavirus: A mathematical modeling study Abstract Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. 
Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10 th January \u2013 8 th February. We analyzed the data for the period before the closure of Wuhan city (10 th January \u2013 23 rd January) and the post-closure period (23 rd January \u2013 8 th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63 \u2013 5.13), dropping to 3.41 (95% CI: 3.16 \u2013 3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09 \u2013 3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus. Funding National Natural Science Foundation of China, China Medical Board, National Science and Technology Major Project of China" }, { "instance_id": "R44930xR44793", "comparison_id": "R44930", "paper_id": "R44793", "text": "Effects of voluntary event cancellation and school closure as countermeasures against COVID\u221219 outbreak in Japan Abstract Background To control the COVID-19 outbreak in Japan, sports and entertainment events were canceled and schools were closed throughout Japan from February 26 through March 19. That policy has been designated as voluntary event cancellation and school closure (VECSC). Object This study assesses VECSC effectiveness based on predicted outcomes. Method: A simple susceptible\u2013infected\u2013recovery model was applied to data of patients with symptoms in Japan during January 14 through March 25. The respective reproduction numbers were estimated before VECSC (R), during VECSC (R e ), and after VECSC (R a ). Results Results suggest R before VECSC as 1.987 [1.908, 2.055], R e during VECSC as 1.122 [0.980, 1.260], and R a after VECSC as 3.086 [2.529, 3.739]. Discussion and Conclusion Results demonstrated that VECSC can reduce COVID-19 infectiousness considerably, but the value of R rose to exceed 2.5 after VECSC." }, { "instance_id": "R44930xR44799", "comparison_id": "R44930", "paper_id": "R44799", "text": "Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus\u2013Infected Pneumonia Abstract Background The initial cases of novel coronavirus (2019-nCoV)\u2013infected pneumonia (NCIP) occurred in Wuhan, Hubei Province, China, in December 2019 and January 2020. We analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP. Methods We collected information on demographic characteristics, exposure history, and illness timelines of laboratory-confirmed cases of NCIP that had been reported by January 22, 2020. We described characteristics of the cases and estimated the key epidemiologic time-delay distributions. 
In the early period of exponential growth, we estimated the epidemic doubling time and the basic reproductive number. Results Among the first 425 patients with confirmed NCIP, the median age was 59 years and 56% were male. The majority of cases (55%) with onset before January 1, 2020, were linked to the Huanan Seafood Wholesale Market, as compared with 8.6% of the subsequent cases. The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days. In its early stages, the epidemic doubled in size every 7.4 days. With a mean serial interval of 7.5 days (95% CI, 5.3 to 19), the basic reproductive number was estimated to be 2.2 (95% CI, 1.4 to 3.9). Conclusions On the basis of this information, there is evidence that human-to-human transmission has occurred among close contacts since the middle of December 2019. Considerable efforts to reduce transmission will be required to control outbreaks if similar dynamics apply elsewhere. Measures to prevent or reduce transmission should be implemented in populations at risk. (Funded by the Ministry of Science and Technology of China and others.)" }, { "instance_id": "R44930xR44842", "comparison_id": "R44930", "paper_id": "R44842", "text": "Early Transmissibility Assessment of a Novel Coronavirus in Wuhan, China Between December 1, 2019 and January 26, 2020, nearly 3000 cases of respiratory illness caused by a novel coronavirus originating in Wuhan, China have been reported. In this short analysis, we combine publicly available cumulative case data from the ongoing outbreak with phenomenological modeling methods to conduct an early transmissibility assessment. Our model suggests that the basic reproduction number associated with the outbreak (at time of writing) may range from 2.0 to 3.1. Though these estimates are preliminary and subject to change, they are consistent with previous findings regarding the transmissibility of the related SARS-Coronavirus and indicate the possibility of epidemic potential." }, { "instance_id": "R44978xR44704", "comparison_id": "R44978", "paper_id": "R44704", "text": "Randomised controlled trial of problem solving treatment, antidepressant medication, and combined treatment for major depression in primary care Abstract Objectives: To determine whether problem solving treatment combined with antidepressant medication is more effective than either treatment alone in the management of major depression in primary care. To assess the effectiveness of problem solving treatment when given by practice nurses compared with general practitioners when both have been trained in the technique. Design: Randomised controlled trial with four treatment groups. Setting: Primary care in Oxfordshire. Participants: Patients aged 18-65 years with major depression on the research diagnostic criteria\u2014a score of 13 or more on the 17 item Hamilton rating scale for depression and a minimum duration of illness of four weeks. Interventions: Problem solving treatment by research general practitioner or research practice nurse or antidepressant medication or a combination of problem solving treatment and antidepressant medication. Main outcome measures: Hamilton rating scale for depression, Beck depression inventory, clinical interview schedule (revised), and the modified social adjustment schedule assessed at 6, 12, and 52 weeks. Results: Patients in all groups showed a clear improvement over 12 weeks. 
The combination of problem solving treatment and antidepressant medication was no more effective than either treatment alone. There was no difference in outcome irrespective of who delivered the problem solving treatment. Conclusions: Problem solving treatment is an effective treatment for depressive disorders in primary care. The treatment can be delivered by suitably trained practice nurses or general practitioners. The combination of this treatment with antidepressant medication is no more effective than either treatment alone. Key messages Problem solving treatment is an effective treatment for depressive disorders in primary care Problem solving treatment can be delivered by suitably trained practice nurses as effectively as by general practitioners The combination of problem solving treatment and antidepressant medication is no more effective than either treatment alone Problem solving treatment is most likely to benefit patients who have a depressive disorder of moderate severity and who wish to participate in an active psychological treatment" }, { "instance_id": "R44978xR44693", "comparison_id": "R44978", "paper_id": "R44693", "text": "A randomised controlled trial of cognitive behaviour therapy vs treatment as usual in the treatment of mild to moderate late life depression This study provides an empirical evaluation of Cognitive Behaviour Therapy (CBT) alone vs Treatment as usual (TAU) alone (generally pharmacotherapy) for late life depression in a UK primary care setting." }, { "instance_id": "R44978xR44700", "comparison_id": "R44978", "paper_id": "R44700", "text": "Telephone-based treatment for family practice patients with mild depression The need for treating milder forms of depression has recently been of increased interest. This was a randomized, controlled study to evaluate the effects of telephone-based problem-solving therapy for mild depression. Comparison groups were a treatment-as-usual group and another group receiving stress-management training by telephone. From 1,742 family practice patients screened for depression, 54 with mild depression entered the study. Treatment was provided by experienced family practice nurses, trained and supervised in the treatments. The Hamilton Rating Scale for Depression was administered before and after the intervention period, and the Beck Depression Inventory and Duke Health Profile were administered at the end of the intervention period. Of the 36 subjects assigned to the problem-solving and stress-management groups, half dropped out early in the study. Five from the treatment-as-usual group were lost to follow-up. In the remaining subjects, there was a significant decrease in depression scores. There were no significant differences in the amount of decrease between the groups on any scores. The small sample and high dropout rate limit the interpretation of the findings. However, since all subjects tended to improve, regardless of treatment received, mild levels of depression may generally remit even without focal intervention, and watchful waiting may be a reasonable alternative for management." }, { "instance_id": "R44978xR44685", "comparison_id": "R44978", "paper_id": "R44685", "text": "Treatment of dysthymia and minor depression in primary care: a randomized trial in patients aged 18 to 59 years OBJECTIVE The researchers evaluated the effectiveness of paroxetine and Problem-Solving Treatment for Primary Care (PST-PC) for patients with minor depression or dysthymia. 
STUDY DESIGN This was an 11-week randomized placebo-controlled trial conducted in primary care practices in 2 communities (Lebanon, NH, and Seattle, Wash). Paroxetine (n=80) or placebo (n=81) therapy was started at 10 mg per day and increased to a maximum 40 mg per day, or PST-PC was provided (n=80). There were 6 scheduled visits for all treatment conditions. POPULATION A total of 241 primary care patients with minor depression (n=114) or dysthymia (n=127) were included. Of these, 191 patients (79.3%) completed all treatment visits. OUTCOMES Depressive symptoms were measured using the 20-item Hopkins Depression Scale (HSCL-D-20). Remission was scored on the Hamilton Depression Rating Scale (HDRS) as less than or equal to 6 at 11 weeks. Functional status was measured with the physical health component (PHC) and mental health component (MHC) of the 36-item Medical Outcomes Study Short Form. RESULTS All treatment conditions showed a significant decline in depressive symptoms over the 11-week period. There were no significant differences between the interventions or by diagnosis. For dysthymia the remission rate for paroxetine (80%) and PST-PC (57%) was significantly higher than for placebo (44%, P=.008). The remission rate was high for minor depression (64%) and similar for each treatment group. For the MHC there were significant outcome differences related to baseline level for paroxetine compared with placebo. For the PHC there were no significant differences between the treatment groups. CONCLUSIONS For dysthymia, paroxetine and PST-PC improved remission compared with placebo plus nonspecific clinical management. Results varied for the other outcomes measured. For minor depression, the 3 interventions were equally effective; general clinical management (watchful waiting) is an appropriate treatment option." }, { "instance_id": "R44978xR44719", "comparison_id": "R44978", "paper_id": "R44719", "text": "Treatment of dysthymia and minor depression in primary care: A randomized controlled trial in older adults CONTEXT Insufficient evidence exists for recommendation of specific effective treatments for older primary care patients with minor depression or dysthymia. OBJECTIVE To compare the effectiveness of pharmacotherapy and psychotherapy in primary care settings among older persons with minor depression or dysthymia. DESIGN Randomized, placebo-controlled trial (November 1995-August 1998). SETTING Four geographically and clinically diverse primary care practices. PARTICIPANTS A total of 415 primary care patients (mean age, 71 years) with minor depression (n = 204) or dysthymia (n = 211) and a Hamilton Depression Rating Scale (HDRS) score of at least 10 were randomized; 311 (74.9%) completed all study visits. INTERVENTIONS Patients were randomly assigned to receive paroxetine (n = 137) or placebo (n = 140), starting at 10 mg/d and titrated to a maximum of 40 mg/d, or problem-solving treatment-primary care (PST-PC; n = 138). For the paroxetine and placebo groups, the 6 visits over 11 weeks included general support and symptom and adverse effects monitoring; for the PST-PC group, visits were for psychotherapy. MAIN OUTCOME MEASURES Depressive symptoms, by the 20-item Hopkins Symptom Checklist Depression Scale (HSCL-D-20) and the HDRS; and functional status, by the Medical Outcomes Study Short-Form 36 (SF-36) physical and mental components. RESULTS Paroxetine patients showed greater (difference in mean [SE] 11-week change in HSCL-D-20 scores, 0.21 [0. 
07]; P =.004) symptom resolution than placebo patients. Patients treated with PST-PC did not show more improvement than placebo (difference in mean [SE] change in HSCL-D-20 scores, 0.11 [0.13]; P =.13), but their symptoms improved more rapidly than those of placebo patients during the latter treatment weeks (P =.01). For dysthymia, paroxetine improved mental health functioning vs placebo among patients whose baseline functioning was high (difference in mean [SE] change in SF-36 mental component scores, 5.8 [2.02]; P =. 01) or intermediate (difference in mean [SE] change in SF-36 mental component scores, 4.4 [1.74]; P =.03). Mental health functioning in dysthymia patients was not significantly improved by PST-PC compared with placebo (P>/=.12 for low-, intermediate-, and high-functioning groups). For minor depression, both paroxetine and PST-PC improved mental health functioning in patients in the lowest tertile of baseline functioning (difference vs placebo in mean [SE] change in SF-36 mental component scores, 4.7 [2.03] for those taking paroxetine; 4.7 [1.96] for the PST-PC treatment; P =.02 vs placebo). CONCLUSIONS Paroxetine showed moderate benefit for depressive symptoms and mental health function in elderly patients with dysthymia and more severely impaired elderly patients with minor depression. The benefits of PST-PC were smaller, had slower onset, and were more subject to site differences than those of paroxetine." }, { "instance_id": "R46295xR45114", "comparison_id": "R46295", "paper_id": "R45114", "text": "Effect of pH on absorption spectra of photogenerated holes in nanocrystalline TiO2 films Abstract We have measured transient absorption spectra of TiO 2 under several pH conditions. We successfully extracted the spectral contribution of two different trapped holes from the transient absorption spectra. Based on the results, we discuss the origin of absorption spectra of holes in TiO 2 ." }, { "instance_id": "R46295xR45102", "comparison_id": "R46295", "paper_id": "R45102", "text": "Picosecond flash spectroscopy of titania colloids with adsorbed dyes Spectra for electrons trapped in TiO{sub 2} have been reported. In this study, kinetic analysis of processes taking place when TiO{sub 2} colloids are flashed in the presence of three dyes leads to assignment of the spectrum of a trapped hole. Within the duration of a 20-ps pulse at 350 nm, a transient is formed in TiO{sub 2} which decays with a second-order rate constant of 2.4 {times} 10{sup {minus}10} n{sub e} s{sup {minus}1}, where n{sub e} is the number of electrons. The absorbance is probably attributable to electrons in the conduction band (a term that must be used cautiously for these very amorphous systems), and the rate constant measures the rate of hole-electron recombination. Upon addition of a dye that may scavenge carriers, a new transient grown with a rate constant of 5 {times} 10{sup 8} s{sup {minus}1}. This feature, with an absorption maximum at 630 nm, is attributed to a trapped hole. The mechanism proposed for the results of this intense pulse experiment involves two photons and excitation of both dye and colloid. The evidence includes observation of spectra of reduced dyes and quantitative consistency not achieved with any other model." 
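The picosecond flash spectroscopy abstract above reports a transient that "decays with a second-order rate constant", i.e. a rate constant extracted by fitting the integrated second-order law to the transient-absorption decay. The sketch below is a minimal, assumption-laden illustration of such a fit; the delay times, absorbance values and the recovered constant are synthetic and are not taken from any of the cited papers.

```python
import numpy as np
from scipy.optimize import curve_fit

def second_order(t, a0, k):
    """Integrated second-order decay of a transient absorbance A(t) = A0 / (1 + k*A0*t)."""
    return a0 / (1.0 + k * a0 * t)

# Hypothetical pump-probe delay times (ps) and transient absorbance values.
t = np.array([0, 20, 50, 100, 200, 500, 1000, 2000], dtype=float)
a = np.array([0.100, 0.083, 0.067, 0.050, 0.033, 0.017, 0.009, 0.005])

popt, pcov = curve_fit(second_order, t, a, p0=(0.1, 0.05))
a0_fit, k_fit = popt
print(f"A0 = {a0_fit:.3f}, second-order rate constant k = {k_fit:.3f} (per ps, per absorbance unit)")
```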
}, { "instance_id": "R46295xR45126", "comparison_id": "R46295", "paper_id": "R45126", "text": "Femtosecond Diffuse-Reflectance Spectroscopy of Various Commercially Available TiO2 Powders ABSTRACT The transient absorption properties of several commercially available TiO2 photocatalysts were investigated by femtosecond diffuse-reflectance spectroscopy. Using femtosecond diffuse-reflectance spectroscopy, the quantities and rates of the initial trapping processes of holes and electrons generated by the photoexcitation of TiO2 photocatalysts were investigated. It was found that the total amounts of trapped electrons for the pure-anatase and pure-rutile TiO2 became smaller with increasing particle size, but increased again when the particles\u2019 diameters were larger than 50 nm. The anatase\u2013rutile mixed TiO2 photocatalysts were found to have smaller amounts of trapped electrons compared with pure-anatase and pure-rutile TiO2 photocatalysts. The lifetimes of trapped holes of various TiO2 photocatalysts were also investigated, and it was found that the lifetimes were proportional to the anatase\u2013rutile mixed ratios." }, { "instance_id": "R46295xR45124", "comparison_id": "R46295", "paper_id": "R45124", "text": "Photocatalytic Oxidation Reactivity of Holes in the Sulfur- and Carbon-Doped TiO2 Powders Studied by Time-Resolved Diffuse Reflectance Spectroscopy The photocatalytic oxidation reactivities of the photogenerated holes (h+) during ultraviolet or visible laser flash photolysis of pure anatase and sulfur- and carbon-doped TiO2 powders were investigated using time-resolved diffuse reflectance (TDR) spectroscopy. The one-electron oxidation processes of substrates such as methanol and 4-(methylthio)phenyl methanol (MTPM) by h+ at the TiO2 surface were examined. The TDR spectra and time traces observed for charge carriers and the MTPM radical cation (MTPM\u2022+) revealed that the oxidation reactions of substrates by h+ generated during the 355-nm laser photolysis of TiO2 powders increased in the order of pure TiO2 > S-doped TiO2 > C-doped TiO2. On the other hand, no one-electron oxidation reactions of the substrates were observed during the 430-nm laser photolysis of the S- and C-doped TiO2 powders, although the charge carriers were sufficiently generated upon excitation. The effects of the trapping and detrapping processes of h+ at the doping sites on the oxid..." }, { "instance_id": "R46295xR45108", "comparison_id": "R46295", "paper_id": "R45108", "text": "Identification of Reactive Species in Photoexcited Nanocrystalline TiO2 Films by Wide-Wavelength-Range (400\u22122500 nm) Transient Absorption Spectroscopy Reactive species, holes, and electrons in photoexcited nanocrystalline TiO2 films were studied by transient absorption spectroscopy in the wavelength range from 400 to 2500 nm. The electron spectrum was obtained through a hole-scavenging reaction under steady-state light irradiation. The spectrum can be analyzed by a superposition of the free-electron and trapped-electron spectra. By subtracting the electron spectrum from the transient absorption spectrum, the spectrum of trapped holes was obtained. As a result, three reactive speciestrapped holes and free and trapped electronswere identified in the transient absorption spectrum. The reactivity of these species was evaluated through transient absorption spectroscopy in the presence of hole- and electron-scavenger molecules. 
The spectra indicate that trapped holes and electrons are localized at the surface of the particles and free electrons are distributed in the bulk." }, { "instance_id": "R46296xR46115", "comparison_id": "R46296", "paper_id": "R46115", "text": "Hydrogenated TiO2 Nanocrystals: A Novel Microwave Absorbing Material Here, we report, for the first time, hydrogenated TiO2 nanocrystals as a novel and exciting microwave absorbing material, based on an innovative collective-movement-of-interfacial-dipole mechanism which causes collective-interfacial-polarization-amplified microwave absorption at the crystalline/disordered and anatase/rutile interfaces. This mechanism is intriguing and upon further exploration may trigger other new concepts and applications." }, { "instance_id": "R46296xR46119", "comparison_id": "R46296", "paper_id": "R46119", "text": "Enhancing Visible Light Photo-oxidation of Water with TiO2 Nanowire Arrays via Cotreatment with H2 and NH3: Synergistic Effects between Ti3+ and N We report a synergistic effect involving hydrogenation and nitridation cotreatment of TiO(2) nanowire (NW) arrays that improves the water photo-oxidation performance under visible light illumination. The visible light (>420 nm) photocurrent of the cotreated TiO(2) is 0.16 mA/cm(2) and accounts for 41% of the total photocurrent under simulated AM 1.5 G illumination. Electron paramagnetic resonance (EPR) spectroscopy reveals that the concentration of Ti(3+) species in the bulk of the TiO(2) following hydrogenation and nitridation cotreatment is significantly higher than that of the sample treated solely with ammonia. It is believed that the interaction between the N-dopant and Ti(3+) is the key to the extension of the active spectrum and the superior visible light water photo-oxidation activity of the hydrogenation and nitridation cotreated TiO(2) NW arrays." }, { "instance_id": "R46296xR46068", "comparison_id": "R46296", "paper_id": "R46068", "text": "N-Doped TiO2 Nanoparticle Based Visible Light Photocatalyst by Modified Peroxide Sol\u2212Gel Method The peroxide gel route is employed to synthesize N-doped TiO2 nanoparticles (NP) at low temperature using titanium tetraisopropoxide, ethylmethylamine, and hydrogen peroxide as precursors. Structural studies show anatase phase in the undoped titania NPs as well as at 5 at. % N-doped titania NPs, although with a degree of matrix disorder in the latter case. The annealing of N-doped titania NPs at different temperatures shows that above 400 \u00b0C nitrogen escapes the O\u2212Ti\u2212O matrix and at 500 \u00b0C the sample becomes crystalline. Transmission electron microscopy reveals that the particle size is in the range of 20\u221230 nm for the undoped TiO2 but only 5\u221210 nm for N-doped TiO2. At higher nitrogen concentration (10 at. %) bubble-like agglomerates form. FTIR and photoluminescence quenching also confirm the incorporation of nitrogen in anatase TiO2. Optical properties reveal an extended tailing of the absorption edge toward the visible region upon nitrogen doping. X-ray photoelectron spectroscopy is used to examine the ..." 
}, { "instance_id": "R46296xR46076", "comparison_id": "R46296", "paper_id": "R46076", "text": "One-step hydrothermal synthesis of N-doped TiO 2/C nanocomposites with high visible light photocatalytic activity N-doped TiO(2) nanoparticles modified with carbon (denoted N-TiO(2)/C) were successfully prepared by a facile one-pot hydrothermal treatment in the presence of L-lysine, which acts as a ligand to control the nanocrystal growth and as a source of nitrogen and carbon. As-prepared nanocomposites were characterized by thermogravimetric analysis (TGA), X-ray diffraction (XRD), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, ultraviolet-visible (UV-vis) diffuse reflectance spectroscopy, X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FTIR), electron paramagnetic resonance (EPR) spectra, and N(2) adsorption-desorption analysis. The photocatalytic activities of the as-prepared photocatalysts were measured by the degradation of methyl orange (MO) under visible light irradiation at \u03bb\u2265 400 nm. The results show that N-TiO(2)/C nanocomposites increase absorption in the visible light region and exhibit a higher photocatalytic activity than pure TiO(2), commercial P25 and previously reported N-doped TiO(2) photocatalysts. We have demonstrated that the nitrogen was doped into the lattice and the carbon species were modified on the surface of the photocatalysts. N-doping narrows the band gap and C-modification enhances the visible light harvesting and accelerates the separation of the photo-generated electrons and holes. As a consequence, the photocatalytic activity is significantly improved. The molar ratio of L-lysine/TiCl(4) and the pH of the hydrothermal reaction solution are important factors affecting the photocatalytic activity of the N-TiO(2)/C; the optimum molar ratio of L-lysine/TiCl(4) is 8 and the optimum pH is ca. 4, at which the catalyst exhibits the highest reactivity. Our findings demonstrate that the as-obtained N-TiO(2)/C photocatalyst is a better and more promising candidate than well studied N-doped TiO(2) alternatives as visible light photocatalysts for potential applications in environmental purification." }, { "instance_id": "R46296xR46125", "comparison_id": "R46296", "paper_id": "R46125", "text": "A Facile Method to Improve the Photocatalytic and Lithium\u2010Ion Rechargeable Battery Performance of TiO2 Nanocrystals Author(s): Xia, T; Zhang, W; Murowchick, JB; Liu, G; Chen, X | Abstract: TiO2 has been well studied as an ultraviolet (UV) photocatalyst and electrode material for lithium-ion rechargeable batteries. Recent studies have shown that hydrogenated TiO2 displayed better photocatalytic and lithium ion battery performances. Here it is demonstrated that the photocatalytic and battery performances of TiO2 nanocrystals can be successfully improved with a facile low-temperature vacuum process. These TiO2 nanocrystals extend their optical absorption far into the visible-light region, display nanometer-scale surface atomic rearrangement, possess superoxide ion characteristics at room temperature without light irradiation, show a 4-fold improvement in photocatalytic activity, and has 30% better performance in capacity and charge/discharge rates for lithium ion battery. This facile method could provide an alternative and effective approach to improve the performance of TiO2 and other materials towards their practical applications. \u00a9 2013 WILEY-VCH Verlag GmbH a Co. 
KGaA, Weinheim." }, { "instance_id": "R46296xR46101", "comparison_id": "R46296", "paper_id": "R46101", "text": "Evaluating the potential of a new titania precursor for the synthesis of mesoporous Fe-doped titania with enhanced photocatalytic activity Abstract Mesoporous Fe (III) doped TiO 2 nanoparticles with an anatase phase were prepared by using a stable precursor potassium hexafluorotitanate as Ti source for the first time and its physical as well as photocatalytic properties were compared with that of Fe doped titania prepared from the most common Ti source titanium isopropoxide. FeSO 4 \u00b77H 2 O and Fe (NO) 3 \u00b79H 2 O were used for doping titania with Fe (III). Physicochemical properties of the samples were characterized by XRD, XPS, FTIR, Raman spectroscopy, N 2 adsorption\u2013desorption isotherms, UV\u2013vis diffuse reflectance spectroscopy. EDX confirms the presence of Fe. DRS and TEM reveals that doping has taken place. It was found that Fe-doped nanostructured titania prepared from potassium hexafluorotitanate was much more effective in the photocatalytic decomposition of bromocresol green than undoped nanostructured titania as well as commercial titania." }, { "instance_id": "R46296xR46078", "comparison_id": "R46296", "paper_id": "R46078", "text": "Self-doped Ti3+-enhanced TiO2 nanoparticles with a high-performance photocatalysis Abstract A series of TiO 2 catalysts self-doped with Ti 3+ were successfully synthesized by a simple one-step solvothermal method with low-cost NaBH 4 added as a reductant. During NaBH 4 reduction, large amounts of NaBO 2 and carbonaceous impurities covered the surface of the TiO 2 self-doped with Ti 3+ , resulting in an inhibition of visible-light absorption and photocatalytic activity. In the preparation, HCl solution was used to wash off the by-product NaBO 2 and carbonaceous impurities coated on the surface of the catalysts. The samples were characterized by XRD, UV-DRS, ESR, XPS, and FT-IR analyses. After HCl washing, the photocatalytic activity of the Ti 3+ -doped TiO 2 increased markedly, while the Ti 3+ content remained the same. Furthermore, it was found that the visible-light photocatalytic activity of Ti 3+ -doped TiO 2 depended on the amount of Ti 3+ added, while there was no significant impact of Ti 3+ doping on its UV-light photocatalytic activity." }, { "instance_id": "R46296xR46111", "comparison_id": "R46296", "paper_id": "R46111", "text": "Photocatalytic Performance of N-Doped TiO2 Adsorbed with Fe3+ Ions under Visible Light by a Redox Treatment A simple method to prepare the N\u2212TiO2 adsorbed with Fe3+ ions only on the surface of catalysts and modify the catalysts by a redox treatment (NaBH4 reduction and air oxidation treatment) was proposed. The samples were characterized by X-ray diffraction (XRD), UV\u2212vis diffuse reflectance spectroscopy, FTIR, X-ray photoelectron spectroscopy (XPS), and high-resolution transmission electron micrograph (HRTEM). The photocatalytic activities of the samples were evaluated for degradation of methylene blue (MB) in aqueous solutions under visible light (\u03bb > 420 nm). The results of XRD, FTIR, XPS, and HRTEM analysis indicated that the structure of Fe compounds changed from Fe2O3 to \u03b3-FeOOH after redox treatment. Compared to N\u2212TiO2 with Fe3+ ions, the catalysts after redox treatment showed higher photoactivity under visible light, and the formation of \u03b3-FeOOH was responsible for the improvement of photocatalytic activity. 
Furthermore, to the catalysts after redox treatment, the mechanism for degradation of MB under v..." }, { "instance_id": "R46296xR46082", "comparison_id": "R46296", "paper_id": "R46082", "text": "Formation of New Structures and Their Synergistic Effects in Boron and Nitrogen Codoped TiO2 for Enhancement of Photocatalytic Performance A novel double hydrothermal method to prepare the boron and nitrogen codoped TiO2 is developed. Two different ways have been used for the synthesis of the catalysts, one through the addition of boron followed by nitrogen, and the other through the addition of nitrogen first and then by boron. The X-ray photoelectron spectroscopy analysis indicates the synergistic effect of boron and nitrogen with the formation of Ti\u2212B\u2212N\u2212Ti and Ti\u2212N\u2212B\u2212O compounds on the surface of catalysts when nitrogen is introduced to the materials first. When the boron is added first, only Ti\u2212N\u2212B\u2212O species occurs on the surface of catalysts. The above two compounds are all thought to enhance the photocatalytic activities of codoped TiO2. Density functional theory simulations are also performed to investigate the B\u2212N synergistic effect. For the (101) surface, the formation of Ti\u2212B\u2212N\u2212Ti structures gives rise to the localized states within the TiO2 band gap." }, { "instance_id": "R46297xR46150", "comparison_id": "R46297", "paper_id": "R46150", "text": "Insights into the role of Cu in promoting photocatalytic hydrogen production over ultrathin HNb3O8 nanosheets Cu was loaded on ultrathin HNb3O8 nanosheets via a facile photodeposition method. The oxidation state of the Cu was further verified by XPS. TEM and STEM-EDX mapping demonstrated that the Cu cluster was highly dispersed on the nanosheets. The photocatalytic H2 evolution activity of (0.5%) Cu/HNb3O8 was about 23.6 times higher than that of the bare HNb3O8 nanosheets under simulated solar light irradiation. The role of Cu in promoting photocatalytic hydrogen evolution activity over HNb3O8 nanosheets was attributed to the reduction of hydrogen evolution potential and the improvement of separation of photogenerated carriers. These results were confirmed by a series of electrochemical characterizations, such as CV, LSV, I-t, and EIS. Finally, photocatalytic hydrogen evolution processes over HNb3O8 nanosheets with and without Cu modification were proposed, which might provide insights into the photocatalytic hydrogen evolution mechanism over niobate-based metal oxides." }, { "instance_id": "R46297xR46168", "comparison_id": "R46297", "paper_id": "R46168", "text": "Rapid fabrication of KTa0.75Nb0.25/g-C3N4 composite via microwave heating for efficient photocatalytic H2 evolution Abstract A novel KTa0.75Nb0.25O3 (KTN)/g-C3N4 composite photocatalyst was fabricated through microwave heating for realizing the efficient photocatalytic H2 evolution. The energy-efficient preparation method allowed g-C3N4 to be formed in-situ on KTN surface in thirty five minutes. The binary constitution of the KTN/g-C3N4 composite was verified by X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS) experiments. UV\u2013visible diffuse reflection spectroscopy (DRS) experiments suggested that the photoabsorption performance was increased after the introduction of KTN. N2-adsorption analysis indicated that the addition of KTN slightly increased the surface area of g-C3N4.
Photoluminescence (PL) spectroscopy, electrochemical impedance spectroscopy (EIS) and transient photocurrent response (PC) analyses confirmed that the KTN/g-C3N4 composite displayed a longer lifetime of photoexcited charge carriers than g-C3N4, owing to the suitable band potentials and the close contact of KTN and g-C3N4. This property was believed to be the key characteristic of the composite, which led to its excellent photocatalytic performance. Under simulated sunlight irradiation, the optimal KTN/g-C3N4 catalyst presented a photocatalytic H2-generation rate of 1673 \u03bcmol\u00b7g\u22121\u00b7h\u22121, 2.5 and 2.4 times higher than that of KTN and pure g-C3N4, respectively. Under visible light irradiation, the value was determined to be 86.2 \u03bcmol\u00b7g\u22121\u00b7h\u22121, which achieved 9.3 times that of g-C3N4." }, { "instance_id": "R46297xR46146", "comparison_id": "R46297", "paper_id": "R46146", "text": "A hybrid of CdS/HCa2Nb3O10 ultrathin nanosheets for promoting photocatalytic hydrogen evolution

A hybrid of CdS/HCa2Nb3O10 ultrathin nanosheets with a tough heterointerface was successfully fabricated. Efficient interfacial charge transfer from CdS to HCa2Nb3O10 nanosheets was achieved to realize the enhanced photocatalytic H2 evolution activity.

" }, { "instance_id": "R46299xR46231", "comparison_id": "R46299", "paper_id": "R46231", "text": "Polymeric g-C3N4 coupled with NaNbO3 nanowires toward enhanced photocatalytic reduction of CO2 into renewable fuel Visible-light-responsive g-C3N4/NaNbO3 nanowires photocatalysts were fabricated by introducing polymeric g-C3N4 on NaNbO3 nanowires. The microscopic mechanisms of interface interaction, charge transfer and separation, as well as the influence on the photocatalytic activity of g-C3N4/NaNbO3 composite were systematic investigated. The high-resolution transmission electron microscopy (HR-TEM) revealed that an intimate interface between C3N4 and NaNbO3 nanowires formed in the g-C3N4/NaNbO3 heterojunctions. The photocatalytic performance of photocatalysts was evaluated for CO2 reduction under visible-light illumination. Significantly, the activity of g-C3N4/NaNbO3 composite photocatalyst for photoreduction of CO2 was higher than that of either single-phase g-C3N4 or NaNbO3. Such a remarkable enhancement of photocatalytic activity was mainly ascribed to the improved separation and transfer of photogenerated electron\u2013hole pairs at the intimate interface of g-C3N4/NaNbO3 heterojunctions, which originated from the..." }, { "instance_id": "R46299xR46223", "comparison_id": "R46299", "paper_id": "R46223", "text": "Constructing cubic-orthorhombic surface-phasejunctionsofNaNbO3 towardssigni\ufb01cantenhancementofCO2 photoreduction

NaNbO3 with cubic\u2013orthorhombic surface-phase junctions were synthesized via a polymerized complex method. Improved charge separation meant that the mixed-phase NaNbO3 exhibited much higher activity in CO2 photoreduction than cubic and orthorhombic NaNbO3.

" }, { "instance_id": "R46299xR46235", "comparison_id": "R46299", "paper_id": "R46235", "text": "Photocatalytic CO2 Reduction by Re(I) Polypyridyl Complexes Immobilized on Niobates Nanoscrolls Immobilization of Re(I) CO2 reduction photocatalysts on metal oxide surfaces is an interesting approach to improve their stability and recyclability. In this work, we describe the photocatalytic activity of two Re(I) complexes (fac-[Re(NN)(CO)3(Cl)], NN = 4,4'-dicarboxylic acid-2,2'-bipyridine, 1, or 5,6-dione-1,10-phenantroline, 2) on the surface of hexaniobate nanoscrolls. After adsorption, the turnover number for CO production (TONCO) in DMF/TEOA of 1 was increased from 9 to 58, which is 20% higher than that observed on TiO2, being among the highest reported values for a Re(I)-based photocatalyst under visible light irradiation without any sensitizer. The complex 2 is inactive in solution under visible-light irradiation, but it has a TONCO of 35 when immobilized on hexaniobate nanoscrolls. Transient absorption spectroscopy studies reveal that the slow back-electron transfer and the higher reducing power of the hexaniobate conduction-band electrons play a major role for the photocatalytic process. The r..." }, { "instance_id": "R48103xR46662", "comparison_id": "R48103", "paper_id": "R46662", "text": "A hybrid named entity recognizer for Turkish Highlights? First hybrid named entity recognizer for Turkish addressing the porting problem. ? The recognizer achieves considerably better results compared to its rule based predecessor. ? The proposed recognizer is successfully applied to video texts for automatic video indexing. Named entity recognition is an important subfield of the broader research area of information extraction from textual data. Yet, named entity recognition research conducted on Turkish texts is still rare as compared to related research carried out on other languages such as English, Spanish, Chinese, and Japanese. In this study, we present a hybrid named entity recognizer for Turkish, which is based on a manually engineered rule based recognizer that we have proposed. Since rule based systems for specific domains require their knowledge sources to be manually revised when ported to other domains, we enrich our rule based recognizer and turn it into a hybrid recognizer so that it learns from annotated data when available and improves its knowledge sources accordingly. The hybrid recognizer is originally engineered for generic news texts, but with its learning capability, it is improved to be applicable to that of financial news texts, historical texts, and child stories as well, without human intervention. Both the hybrid recognizer and its rule based predecessor are evaluated on the same corpora and the hybrid recognizer achieves better results as compared to its predecessor. The proposed hybrid named entity recognizer is significant since it is the first hybrid recognizer proposal for Turkish addressing the above porting problem considering that Turkish possesses different structural properties compared to widely studied languages such as English and there is very limited information extraction research conducted on Turkish texts. Moreover, the employment of the proposed hybrid recognizer for semantic video indexing is shown as a case study on Turkish news videos. The genuine textual and video corpora utilized throughout the paper are compiled and annotated by the authors due to the lack of publicly available annotated corpora for information extraction research on Turkish texts." 
}, { "instance_id": "R48103xR46664", "comparison_id": "R48103", "paper_id": "R46664", "text": "Multiobjective optimization for biomedical named entity recognition and classification Abstract Named Entity Recognition and Classi\ufb01cation (NERC) is one of the most fundamental and important tasks in biomedical informa\u2013tion extraction. Biomedical named entities (NEs) include mentions of proteins, genes, DNA, RNA etc. which, in general, have complex structures and are dif\ufb01cult to recognize. We have developed a large number of features for identifying NEs from biomed\u2013ical texts. Two robust diverse classi\ufb01cation methods like Conditional Random Field (CRF) and Support Vector Machine (SVM) are used to build a number of models depending upon the various representations of the set of features and/or feature templates. Finally the outputs of these different classi\ufb01ers are combined using multiobjective weighted voted approach. We hypothesize that the reliability of predictions of each classi\ufb01er differs among the various output classes. Thus, in an ensemble system, it is neces\u2013sary to determine the appropriate weight of vote for each output class in each classi\ufb01er. Here, a multiobjective genetic algorithm is utilized for determining appropriate weights of votes for combining the outputs of classi\ufb01ers. The developed technique is evaluated with the benchmark dataset of JNLPBA 2004 that yields the overall recall, precision and F-measure values of 74.10%, 77.58% and 75.80%, respectively." }, { "instance_id": "R48103xR46666", "comparison_id": "R48103", "paper_id": "R46666", "text": "Two-stage NER for tweets with clustering One main challenge of Named Entities Recognition (NER) for tweets is the insufficient information in a single tweet, owing to the noisy and short nature of tweets. We propose a novel system to tackle this challenge, which leverages redundancy in tweets by conducting two-stage NER for multiple similar tweets. Particularly, it first pre-labels each tweet using a sequential labeler based on the linear Conditional Random Fields (CRFs) model. Then it clusters tweets to put tweets with similar content into the same group. Finally, for each cluster it refines the labels of each tweet using an enhanced CRF model that incorporates the cluster level information, i.e., the labels of the current word and its neighboring words across all tweets in the cluster. We evaluate our method on a manually annotated dataset, and show that our method boosts the F1 of the baseline without collectively labeling from 75.4% to 82.5%." }, { "instance_id": "R48392xR48350", "comparison_id": "R48392", "paper_id": "R48350", "text": "Trends and acceleration in global and regional sea levels since 1807 Abstract We use 1277 tide gauge records since 1807 to provide an improved global sea level reconstruction and analyse the evolution of sea level trend and acceleration. In particular we use new data from the polar regions and remote islands to improve data coverage and extend the reconstruction to 2009. There is a good agreement between the rate of sea level rise (3.2 \u00b1 0.4 mm\u00b7yr \u2212 1 ) calculated from satellite altimetry and the rate of 3.1 \u00b1 0.6 mm\u00b7yr \u2212 1 from tide gauge based reconstruction for the overlapping time period (1993\u20132009). The new reconstruction suggests a linear trend of 1.9 \u00b1 0.3 mm\u00b7yr \u2212 1 during the 20th century, with 1.8 \u00b1 0.5 mm\u00b7yr \u2212 1 since 1970. 
Regional linear trends for 14 ocean basins since 1970 show the fastest sea level rise for Antarctica (4.1 \u00b1 0.8 mm\u00b7yr\u22121) and the Arctic (3.6 \u00b1 0.3 mm\u00b7yr\u22121). Choice of GIA correction is critical in the trends for the local and regional sea levels, introducing up to 8 mm\u00b7yr\u22121 uncertainties for individual tide gauge records, up to 2 mm\u00b7yr\u22121 for regional curves and up to 0.3\u20130.6 mm\u00b7yr\u22121 in global sea level reconstruction. We calculate an acceleration of 0.02 \u00b1 0.01 mm\u00b7yr\u22122 in global sea level (1807\u20132009). In comparison, the steric component of sea level shows an acceleration of 0.006 mm\u00b7yr\u22122 and mass loss of glaciers accelerates at 0.003 mm\u00b7yr\u22122 over the 200-year-long time series." }, { "instance_id": "R48401xR48257", "comparison_id": "R48401", "paper_id": "R48257", "text": "Projecting twenty-first century regional sea-level changes We present regional sea-level projections and associated uncertainty estimates for the end of the 21st century. We show regional projections of sea-level change resulting from changing ocean circulation, increased heat uptake and atmospheric pressure in CMIP5 climate models. These are combined with model- and observation-based regional contributions of land ice, groundwater depletion and glacial isostatic adjustment, including gravitational effects due to mass redistribution. A moderate and a warmer climate change scenario are considered, yielding a global mean sea-level rise of 0.54 \u00b10.19 m and 0.71 \u00b10.28 m respectively (mean \u00b11\u03c3). Regionally however, changes reach up to 30 % higher in coastal regions along the North Atlantic Ocean and along the Antarctic Circumpolar Current, and up to 20 % higher in the subtropical and equatorial regions, confirming patterns found in previous studies. Only 50 % of the global mean value is projected for the subpolar North Atlantic Ocean, the Arctic Ocean and off the western Antarctic coast. Uncertainty estimates for each component demonstrate that the land ice contribution dominates the total uncertainty." }, { "instance_id": "R52143xR52086", "comparison_id": "R52143", "paper_id": "R52086", "text": "Role of species identity in plant invasions: experimental test using Imperata cylindrica The role of species richness, functional diversity and species identity of native Florida sandhill understory species was tested with Imperata cylindrica, an exotic rhizomatous grass, in mesocosms. I. cylindrica was introduced 1 year after the following treatments were established: a control with no native species, five monocultures, a grass mix treatment, a forb mix treatment, and a 3-species treatment and a 5-species treatment. Monthly cover, final biomass, root length, root length density (RLD) and specific root length (SRL) of all species were determined for one full growing season. There was a significant negative linear relationship between the cover of native species and I. cylindrica (r2 = 0.59, P = 0.01) and a negative logarithmic relationship between the biomass of native species and I. cylindrica (r2 = 0.70, P = 0.003). There was no diversity\u2013invasibility relationship. Grasses proved to be the most resistant functional group providing resistance alone and in mixed functional communities. Repeated measures analysis demonstrated that treatments including Andropogon virginicus were the most resistant to invasion over time (P < 0.001).
Significantly greater root length (P = 0.002), RLD (P = 0.011) and SRL (P < 0.001) than all of the native species and I. cylindrica in monocultures and in mixed communities made A. virginicus successful. The root morphology characteristics allowed it to be a great competitor belowground where I. cylindrica was most aggressive. The results suggest that species identity could be more important than species or functional richness in determining community resistance to invasion." }, { "instance_id": "R52143xR52071", "comparison_id": "R52143", "paper_id": "R52071", "text": "Identifying Native Vegetation for Reducing Exotic Species during the Restoration of Desert Ecosystems There is currently much interest in restoration ecology in identifying native vegetation that can decrease the invasibility by exotic species of environments undergoing restoration. However, uncertainty remains about restoration's ability to limit exotic species, particularly in deserts where facilitative interactions between plants are prevalent. Using candidate native species for restoration in the Mojave Desert of the southwestern U.S.A., we experimentally assembled a range of plant communities from early successional forbs to late-successional shrubs and assessed which vegetation types reduced the establishment of the priority invasive annuals Bromus rubens (red brome) and Schismus spp. (Mediterranean grass) in control and N-enriched soils. Compared to early successional grass and shrub and late-successional shrub communities, an early forb community best resisted invasion, reducing exotic species biomass by 88% (N added) and 97% (no N added) relative to controls (no native plants). In native species monocultures, Sphaeralcea ambigua (desert globemallow), an early successional forb, was the least invasible, reducing exotic biomass by 91%. However, the least-invaded vegetation types did not reduce soil N or P relative to other vegetation types nor was native plant cover linked to invasibility, suggesting that other traits influenced native-exotic species interactions. This study provides experimental field evidence that native vegetation types exist that may reduce exotic grass establishment in the Mojave Desert, and that these candidates for restoration are not necessarily late-successional communities. More generally, results indicate the importance of careful native species selection when exotic species invasions must be constrained for restoration to be successful." }, { "instance_id": "R52143xR52126", "comparison_id": "R52143", "paper_id": "R52126", "text": "Physiological and morphological traits of exotic- invasive exotic- and native plant species in tallgrass prairie We compared 13 traits of invasive exotic, noninvasive exotic, and ecologically similar native species to determine if there are generalizable differences among these groups that relate to persistence and spread of exotic species in tallgrass prairie plant communities. When species were grouped as invasive (two species), noninvasive (five species), and native (six species), no differences were found for the suite of traits examined, likely because of the high variability within and between groups. However, when exotic species, regardless of invasiveness, were compared with the native species, specific leaf area was ca. 40% higher for the exotic species, a result that is consistent with that of other studies. This pattern was also observed for five of seven pairwise comparisons of exotic and native species with similar life history traits. 
In contrast, total end\u2010of\u2010season biomass was as much as three times higher for the native species in five of seven of the native\u2010exotic species pairs. For other traits, differences between exotic and native species were species\u2010specific and were generally more numerous for noninvasive than for invasive exotic species pair\u2010wise comparisons. Thus, contrary to predictions, exotic species capable of successfully invading tallgrass prairie did not differ considerably from native species in most traits related to resource utilization and carbon gain. Moreover, invasive exotic species, those capable of displacing native species and dominating a community, were not distinct for the observed traits from their native counterparts. These results indicate that other traits, such as the ability to respond to resource pulses or herbivory, may explain more effectively why certain invasive species are able to invade these communities aggressively." }, { "instance_id": "R52143xR52140", "comparison_id": "R52143", "paper_id": "R52140", "text": "Functionally Similar Species Confer Greater Resistance to Invasion: Implications for Grassland Restoration Plant community functional composition can be manipulated in restored ecosystems to reduce the establishment potential of invading species. This study was designed to compare invasion resistance among communities with species functionally similar or dissimilar to yellow starthistle (Centaurea solstitialis), a late-season annual. A field experiment was conducted in the Central Valley of Cali fornia with six experimental plant communities that included (1) six early-season native annual forbs (AF); (2) five late-season native perennials and one summer annual forb (NP); (3) a combination of three early-season native annual forbs and three late-season native perennials (FP); (4) six early-season non-native annual grasses (AG); (5) monoculture of the late-season native perennial grass Elymus glaucus (EG); and (6) monoculture of the late-season native perennial Grindelia camporum (GC). Following establishment, C. solstitialis seed was added to half of the plots, and a monoculture of C. solstitialis (CS) was established as a control. Over a 5-year period, the AF and AG communities were ineffective at preventing C. solstitialis invasion. Centaurea solstitialis cover remained less than 10% in the FP and NP communities, except in year 1. By the fourth year, E. glaucus cover was greater than 50% in NP and FP communities and had spread to all other communities (e.g., 27% cover in CS in year 5). Communities containing E. glaucus, which is functionally similar to C. solstitialis, better resisted invasion than communities lacking a functional analog. In contrast, G. camporum, which is also functionally similar to C. solstitialis, failed to survive. Consequently, species selection for restored communities must consider not only functional similarity to the invader but also establishment success, competitiveness, and survivorship." }, { "instance_id": "R52143xR52073", "comparison_id": "R52143", "paper_id": "R52073", "text": "Using ecological restoration to constrain biological invasion Summary 1 Biological invasion can permanently alter ecosystem structure and function. Invasive species are difficult to eradicate, so methods for constraining invasions would be ecologically valuable. We examined the potential of ecological restoration to constrain invasion of an old field by Agropyron cristatum, an introduced C3 grass. 
2 A field experiment was conducted in the northern Great Plains of North America. One-hundred and forty restored plots were planted in 1994\u201396 with a mixture of C3 and C4 native grass seed, while 100 unrestored plots were not. Vegetation on the plots was measured periodically between 1994 and 2002. 3 Agropyron cristatum invaded the old field between 1994 and 2002, occurring in 5% of plots in 1994 and 66% of plots in 2002, and increasing in mean cover from 0\u00b72% in 1994 to 17\u00b71% in 2002. However, A. cristatum invaded one-third fewer restored than unrestored plots between 1997 and 2002, suggesting that restoration constrained invasion. Further, A. cristatum cover in restored plots decreased with increasing planted grass cover. Stepwise regression indicated that A. cristatum cover was more strongly correlated with planted grass cover than with distance from the A. cristatum source, species richness, percentage bare ground or percentage litter. 4 The strength of the negative relationship between A. cristatum and planted native grasses varied among functional groups: the correlation was stronger with species with phenology and physiology similar to A. cristatum (i.e. C3 grasses) than with dissimilar species (C4 grasses). 5 Richness and cover of naturally establishing native species decreased with increasing A. cristatum cover. In contrast, restoration had little effect on the establishment and colonization of naturally establishing native species. Thus, A. cristatum hindered colonization by native species while planted native grasses did not. 6 Synthesis and applications. To our knowledge, this study provides the first indication that restoration can act as a filter, constraining invasive species while allowing colonization by native species. These results suggest that resistance to invasion depends on the identity of species in the community and that restoration seed mixes might be tailored to constrain selected invaders. Restoring areas before invasive species become established can reduce the magnitude of biological invasion." }, { "instance_id": "R52143xR52116", "comparison_id": "R52143", "paper_id": "R52116", "text": "Are competitive effects of native species on an invader mediated by water availability? Question Climate change processes could influence the dynamics of biotic interactions such as plant competition, especially in response to disturbance phenomena such as invasional processes. Are competitive effects of native species on an invader mediated by water availability? Location Glasshouse facility, New South Wales, Australia. Methods We constructed competitive hierarchies for a representative suite of species from coastal dune communities that have been invaded by the Asteraceae shrub, bitou (Chrysanthemoides monilifera subsp. rotundata). We used a comparative phytometer approach, where the invader species was grown with or without a suite of native species in glasshouse trials. This was used to construct competition hierarchies under two water stress conditions: non-droughted and droughted. The treatments were designed to simulate current and potential future water availability respectively. Results We found that the invader experienced fewer competitive effects from some native species under water stress, particularly with regard to below-ground biomass effects. Native species were often poor competitors with the invader, despite their adaptation to periodic water stress in native coastal environments. 
Of the native species with significant competitive effects on the invader, functionally similar shrub species were the most effective competitors, as expressed in below-ground biomass. The relative position of species in the hierarchy was consistent across water treatments based on below-ground bitou biomass, but was contingent on water treatment when based on above-ground bitou biomass. Conclusions The competitive effects of native species on an invader are affected by water stress. While the direction of response to water stress is species-specific, many species have small competitive effects on the invader under droughted conditions. This could allow an increase in invader dominance with climate change." }, { "instance_id": "R52143xR52079", "comparison_id": "R52143", "paper_id": "R52079", "text": "Patterns of trait convergence and divergence among native and exotic species in herbaceous plant communities are not modified by nitrogen enrichment Summary 1. Community assembly theories predict that the success of invading species into a new community should be predictable by functional traits. Environmental filters could constrain the number of successful ecological strategies in a habitat, resulting in similar suites of traits between native and successfully invading species (convergence). Conversely, concepts of limiting similarity and competitive exclusion predict native species will prevent invasion by functionally similar exotic species, resulting in trait divergence between the two species pools. Nutrient availability may further alter the strength of convergent or divergent forces in community assembly, by relaxing environmental constraints and \u2044 or influencing competitive interactions. 2. To investigate how nutrient availability influences forces of divergence and convergence during the invasion of exotic species into native communities, we conducted multivariate analyses of community composition and functional traits from naturally assembled plant communities in long-term nitrogen (N) addition experiments across North America. 3. Relative abundances of key functional traits differed between the native and exotic plant communities, consistent with limiting similarity or a trait bias in the exotic species pool. Environmental context also played an important role in invasion because sites varied in the identity of the traits that predicted dissimilarity between native and exotic communities. Nitrogen enrichment did not alter these patterns. 4. Nitrogen enrichment tended to increase exotic abundance, but this result was driven by a dramatic increase in exotics in only a few experiments. When similarity between native and exotic communities was included in the statistical model, N enrichment no longer predicted an increase in exotic relative abundance. Instead, sites with the highest abundance of exotic species were the ones where native and exotic communities had the highest trait similarity. 5. Synthesis. Our analysis of natural patterns of invasion across herbaceous communities in North America found evidence of both divergent and convergent forces on community assembly with exotic species. Together, these results suggest that while functionally dissimilar exotic species may be more likely to invade, they are unlikely to become abundant unless they have traits pre-adapting them to environmental conditions in their invaded range. Contrary to prior studies, invasion was not consistently promoted by N enrichment." 
}, { "instance_id": "R52143xR52106", "comparison_id": "R52143", "paper_id": "R52106", "text": "Functional composition controls invasion success in a California serpentine grassland Summary 1. Recent debates about the role of biotic resistance in controlling invasion success have focused on effects of species richness. However, functional composition could be a stronger control: species already in the community with similar functional traits to those of the invaders should have the greatest competitive effect on invaders. Still, experiments assessing effects of functional similarity have found contradictory results. 2. We used experimental communities in a serpentine grassland in California, USA, to assess the extent to which functional composition and functional diversity influenced success of two different types of invading plants: early season annuals (E) and late-season annuals (L) that have been previously shown to differ in patterns of resource acquisition. 3. We seeded known quantities of seed of six different species (three in each functional group) into experimental plots containing established communities differing in functional composition and functional diversity. The experimental communities contained different combinations of E, L, perennial bunchgrass (P) and nitrogen-fixer (N) functional groups, with functional diversity ranging from 0 to 4 groups. Each invading species was seeded into a separate quadrat within each plot to minimize competitive effects of invaders on each other. We measured both seedling and adult success of invaders for two full growing seasons to further understand mechanisms underlying biotic resistance. 4. More functionally diverse communities were less invaded overall, as measured by the average success of individual invaders. However, assessment of invaders by functional groups was more informative: Es in the extant community suppressed E invaders the most, and Ls in the extant community suppressed L invaders the most. 5. We observed a variety of interactions among extant functional groups in reducing invader success, including synergism, complementarity and \u2018basement\u2019 effects, where two or more groups negatively affected invaders, but combinations of groups were no more suppressive than single groups. The extant community influenced invaders more strongly through suppression of adult plant growth than through effects on seedling establishment. 6. Synthesis. Contrary to predictions from neutral theory, these results indicate that niche overlap was an important component of biotic resistance in these experimental plant communities and summed up to significant effects of species richness." }, { "instance_id": "R52143xR52090", "comparison_id": "R52143", "paper_id": "R52090", "text": "Variation in resource acquisition and utilization traits between native and invasive perennial forbs Understanding the functional traits that allow invasives to outperform natives is a necessary first step in improving our ability to predict and manage the spread of invaders. In nutrient-limited systems, plant competitive ability is expected to be closely tied to the ability of a plant to exploit nutrient-rich microsites and use these captured nutrients efficiently. The broad objective of this work was to compare the ability of native and invasive perennial forbs to acquire and use nutrients from nutrient-rich microsites. 
We evaluated morphological and physiological responses among four native and four invasive species exposed to heterogeneous (patch) or homogeneous (control) nutrient distribution. Invasives, on average, allocated more biomass to roots and allocated proportionately more root length to nutrient-rich microsites than did natives. Invasives also had higher leaf N, photosynthetic rates, and photosynthetic nitrogen use efficiency than natives, regardless of treatment. While these results suggest multiple traits may contribute to the success of invasive forbs in low-nutrient environments, we also observed large variation in these traits among native forbs. These observations support the idea that functional trait variation in the plant community may be a better predictor of invasion resistance than the functional group composition of the plant community." }, { "instance_id": "R53407xR53261", "comparison_id": "R53407", "paper_id": "R53261", "text": "A phylogenetic approach towards understanding the drivers of plant invasiveness on Robben Island- South Africa Invasive plant species are a considerable threat to ecosystems globally and on islands in particular where species diversity can be relatively low. In this study, we examined the phylogenetic basis of invasion success on Robben Island in South Africa. The flora of the island was sampled extensively and the phylogeny of the local community was reconstructed using the two core DNA barcode regions, rbcLa and matK. By analysing the phylogenetic patterns of native and invasive floras at two different scales, we found that invasive alien species are more distantly related to native species, a confirmation of Darwin's naturalization hypothesis. However, this pattern also holds even for randomly generated communities, therefore discounting the explanatory power of Darwin's naturalization hypothesis as the unique driver of invasion success on the island. These findings suggest that the drivers of invasion success on the island may be linked to species traits rather than their evolutionary history alone, or to the combination thereof. This result also has implications for the invasion management programmes currently being implemented to rehabilitate the native diversity on Robben Island. \u00a9 2013 The Linnean Society of London, Botanical Journal of the Linnean Society, 2013, 172, 142\u2013152." }, { "instance_id": "R53407xR53271", "comparison_id": "R53407", "paper_id": "R53271", "text": "Darwin's naturalization hypothesis revisited In The Origin of Species, Darwin (1859) drew attention to observations by Alphonse de Candolle (1855) that floras gain by naturalization far more species belonging to new genera than species belonging to native genera. Darwin (1859, p. 86) goes on to give a specific example: \u201cIn the last edition of Dr. Asa Gray\u2019s \u2018Manual of the Flora of the United States\u2019 ... out of the 162 naturalised genera, no less than 100 genera are not there indigenous.\u201d Darwin used these data to support his theory of intense competition between congeners, described only a few pages earlier: \u201cAs the species of the same genus usually have, though by no means invariably, much similarity in habits and constitution, and always in structure, the struggle will generally be more severe between them\u201d (1859, p. 60). Darwin\u2019s intriguing observations have recently attracted renewed interest, as comprehensive lists of naturalized plants have become available for various regions of the world. 
Two studies (Mack 1996; Rejmanek 1996, 1998) have concluded that naturalized floras provide some support for Darwin\u2019s hypothesis, but only one of these studies used statistical tests. Analyses of additional floras are needed to test the generality of Darwin\u2019s naturalization hypothesis. Mack (1996) tabulated data from six regional floras within the United States and noted that naturalized species more often belong to alien genera than native genera, with the curious exception of one region (New York). In addition to the possibility of strong competition between native and introduced congeners, Mack (1996) proposed that specialist native herbivores, or pathogens, may be" }, { "instance_id": "R53407xR53400", "comparison_id": "R53407", "paper_id": "R53400", "text": "The Roles of Climate- Phylogenetic Relatedness- Introduction Effort- and Reproductive Traits in the Establishment of Non-Native Reptiles and Amphibians We developed a method to predict the potential of non-native reptiles and amphibians (herpetofauna) to establish populations. This method may inform efforts to prevent the introduction of invasive non-native species. We used boosted regression trees to determine whether nine variables influence establishment success of introduced herpetofauna in California and Florida. We used an independent data set to assess model performance. Propagule pressure was the variable most strongly associated with establishment success. Species with short juvenile periods and species with phylogenetically more distant relatives in regional biotas were more likely to establish than species that start breeding later and those that have close relatives. Average climate match (the similarity of climate between native and non-native range) and life form were also important. Frogs and lizards were the taxonomic groups most likely to establish, whereas a much lower proportion of snakes and turtles established. We used results from our best model to compile a spreadsheet-based model for easy use and interpretation. Probability scores obtained from the spreadsheet model were strongly correlated with establishment success as were probabilities predicted for independent data by the boosted regression tree model. However, the error rate for predictions made with independent data was much higher than with cross validation using training data. This difference in predictive power does not preclude use of the model to assess the probability of establishment of herpetofauna because (1) the independent data had no information for two variables (meaning the full predictive capacity of the model could not be realized) and (2) the model structure is consistent with the recent literature on the primary determinants of establishment success for herpetofauna. It may still be difficult to predict the establishment probability of poorly studied taxa, but it is clear that non-native species (especially lizards and frogs) that mature early and come from environments similar to that of the introduction region have the highest probability of establishment." }, { "instance_id": "R53407xR53301", "comparison_id": "R53407", "paper_id": "R53301", "text": "Evidence that phylogenetically novel non-indigenous plants experience less herbivory The degree to which biotic interactions influence invasion by non-indigenous species may be partly explained by the evolutionary relationship of these invaders with natives. 
Darwin\u2019s naturalization hypothesis controversially proposes that non-native plants are more likely to invade if they lack close relatives in their new range. A possible mechanism for this pattern is that exotics that are more closely related to natives are more likely to share their herbivores, and thus will suffer more damage than phylogenetically isolated species. We tested this prediction using exotic plants in Ontario, Canada. We measured herbivore damage to 32 species of exotic plants in a common garden experiment, and 52 in natural populations. We estimated their phylogenetic distances from locally occurring natives in three ways: as mean distance (age) to all native plants, mean distance to native members of the same family, and distance to the closest native species. In the common garden, the proportion of leaves damaged and the average proportion of leaf area damaged declined with mean phylogenetic distance to native family relatives by late summer. Distance to native confamilials was a better predictor of damage than distance to the closest native species, while mean distance to the entire native plant community failed to predict damage. No significant patterns were detected for plants in natural populations, likely because uncontrolled site-to-site variation concealed these phylogenetic trends. To the extent that herbivory has negative demographic impacts, these results suggest that exotics that are more phylogenetically isolated from native confamilials should be more invasive; conversely, native communities should be more resistant to invasion if they harbor close familial relatives of potential invaders. However, the large scatter in this relationship suggests that these often are likely to be weak effects; as a result, these effects often may be difficult to detect in uncontrolled surveys of natural populations." }, { "instance_id": "R53407xR53350", "comparison_id": "R53407", "paper_id": "R53350", "text": "Phylogenetic isolation increases plant success despite increasing susceptibility to generalist herbivores Aim Theory suggests that introduced species that are phylogenetically distant from their recipient communities should be more successful than closely related introduced species because they can exploit open niches and escape enemies in their new range, i.e. Darwin\u2019s Naturalization Hypothesis. Alternatively, it has also been hypothesized that closely related invaders might be more successful than novel invaders because they are pre-adapted to conditions in their new range; a paradox coined Darwin\u2019s Naturalization Conundrum. To date, these hypotheses have been tested primarily at the regional scale, not within local plant communities where introduced species colonize, compete and encounter herbivores. Location Global. Methods and Results We used community phylogenetics to analyse data from 49 published experiments to examine the importance of phylogenetic relatedness and generalist herbivory on native and exotic plant success at the community level. Plants that were categorized as \u2018invasive\u2019 were indeed less related to the recipient community than \u2018non-pest\u2019 exotic plants. Distantly related exotic plants were also more abundant than closely related species. Phylogenetic relatedness predicted herbivore impact, but in a way that was opposite to predictions, as herbivores had stronger, not lesser, impacts on distantly related plants. 
Importantly, these same patterns generally held for native plants, as distantly related native plants were more abundant and more susceptible to herbivores than closely related species, ultimately resulting in herbivores suppressing community-level phylogenetic diversity. Main conclusions Distantly related plants were more locally successful despite experiencing stronger control by generalist herbivores, a finding that was robust across native and exotic species. To our knowledge, this is the first evidence that phylogenetic matching influences the local success of both native and exotic species and that herbivores can influence community phylodiversity. Phylogenetic relatedness explained a relatively small portion of the variance in the data even after taking herbivory into account, however, suggesting that phylogenetic matching works in combination with other factors to influence community assembly." }, { "instance_id": "R53407xR53287", "comparison_id": "R53407", "paper_id": "R53287", "text": "Ecology: Darwin's naturalization hypothesis challenged Naturalized plants can have a significant ecological and economic impact, yet they comprise only a fraction of the plant species introduced into new areas by humans. Darwin proposed that introduced plant species will be less likely to establish a self-sustaining wild population in places with congeneric native species because the introduced plants have to compete with their close native relatives, or are more likely to be attacked by native herbivores or pathogens, a theory known as Darwin's naturalization hypothesis. Here we analyse a complete list of seed-plant species that have been introduced to New Zealand and find that those with congeneric relatives are significantly more, not less, likely to naturalize \u2014 perhaps because they share with their native relatives traits that pre-adapt them to their new environment." }, { "instance_id": "R53407xR53390", "comparison_id": "R53407", "paper_id": "R53390", "text": "Associations between a highly invasive species and native macrophytes differ across spatial scales The association between invasive and native species varies across spatial scales and is affected by phylogenetic relatedness, but these issues have rarely been addressed in aquatic ecosystems. In this study, we used a non-native, highly invasive species of Poaceae (tropical signalgrass) to test the hypotheses that (i) tropical signalgrass success correlates negatively with success of most native species of macrophytes at fine spatial scales, but its success correlates positively or at random with natives at coarse spatial scales, and that (ii) tropical signalgrass is less associated with native species belonging to the family Poaceae than with species belonging to other families (Darwin\u2019s naturalization hypothesis). We used a dataset obtained at fine (0.25 m2) and coarse (ca. 1,000 m2) scales. The presence/absence of all species was recorded at both scales, and their biomass was also measured at the fine scale. We tested the association between tropical signalgrass biomass and individual native species with logistic regressions at the fine scale, and using the T-score index between tropical signalgrass and each native species at both scales. The likelihood of the occurrence of six species (submersed and free-floating) was negatively affected by tropical signalgrass biomass at the fine scale. 
T-scores showed that three species were less associated with tropical signalgrass than expected by chance, but 22 species co-occurred more than expected by chance at the coarse scale. Associations between species of Poaceae and tropical signalgrass were null at the fine scale, but were positive or null at the coarse scale. In addition to showing that spatial scale affects the patterns of association among the non-native and individual native species, our results indicate that phylogeny did not explain associations between the invasive and native macrophytes, at both scales." }, { "instance_id": "R53407xR53258", "comparison_id": "R53407", "paper_id": "R53258", "text": "Patterns of phylogenetic diversity are linked to invasion impacts- not invasion resistance- in a native grassland Question: There are often more invasive species in communities that are less phylogenetically diverse or distantly related to the invaders. This is thought to indicate reduced biotic resistance, but recent theory predicts that phylogenetic relationships have more influence on competitive outcomes when interactions are more pair-wise than diffuse. Therefore, phylogenetic relationships should change when the invader becomes dominant and interactions are more pairwise, rather than alter biotic resistance, which is the outcome of diffuse interactions with the resident community; however both processes can produce similar phylogenetic structures within communities. We ask whether phylogenetic structure is more associated with biotic resistance or invasion impacts following Bromus inermis (brome) invasion and identify the mechanisms behind changes to phylogenetic structure. Location: Native grassland in Alberta, Canada. Methods: We tested whether phylogenetic structure affected biotic resistance by transplanting brome seedlings into intact vegetation and quantified invasion impacts on community structure by surveying across multiple invasion edges. Additionally, we tested whether relatedness, rarity, average patch size, evolutionary distinctiveness or environmental tolerances determined species\u2019 response to brome invasion. Results: Neither phylogenetic diversity, nor relatedness to brome, influenced the strength of biotic resistance; resource availability was the strongest determinant of resistance. However, communities did become less diverse and phylogenetically over-dispersed following brome invasion, but not because of the loss of related species. Brome invasion was associated with declines in common species from common lineages and increases in shade-tolerant species and rare species from species-poor lineages. Conclusions: Our results suggest that invasion is more likely to affect the phylogenetic structure of the community than the phylogenetic structure of the community will affect invasion. However, they also suggest that the degree of relatedness between the invader and the resident community is unlikely to drive these effects on phylogenetic community structure. Consistent with previous studies, invasion effects were stronger for common species as they have reduced shade tolerance and cannot persist in a subordinate role. This suggests that invasion effects on phylogenetic community structure will depend on which species exhibit traits that enable persistence with the invader and how these traits are distributed across the phylogeny." 
}, { "instance_id": "R53407xR53328", "comparison_id": "R53407", "paper_id": "R53328", "text": "Congener diversity- topographic heterogeneity and human-assisted dispersal predict spread rates of alien herpetofauna at a global scale Understanding the factors that determine rates of range expansion is not only crucial for developing risk assessment schemes and management strategies for invasive species, but also provides important insight into the ability of species to disperse in response to climate change. However, there is little knowledge on why some invasions spread faster than others at large spatiotemporal scales. Here, we examine the effects of human activities, species traits and characteristics of the invaded range on spread rates using a global sample of alien reptile and amphibian introductions. We show that spread rates vary remarkably among invaded locations within a species, and differ across biogeographical realms. Spread rates are positively related to the richness of native congeneric species and human-assisted dispersal in the invaded range but are negatively correlated with topographic heterogeneity. Our findings highlight the importance of environmental characteristics and human-assisted dispersal in developing robust frameworks for predicting species' range shifts." }, { "instance_id": "R53407xR53369", "comparison_id": "R53407", "paper_id": "R53369", "text": "Does Darwin's naturalization hypothesis explain fish invasions? Darwin\u2019s naturalization hypothesis predicts that introduced species tend not to invade areas containing congeneric native species, because they would otherwise compete with their close relatives and would likely encounter predators and pathogens that can attack them. An opposing view is that introduced species should succeed in areas where native congeners are present because they are more likely to share traits that pre-adapt them to their new environment. A test of both these hypotheses using data on fish introductions from several independent regions fails to support either viewpoints. In contrast to studies of nonindigenous plants, our results suggest that taxonomic affiliation is not an important general predictor of fish invasion success." }, { "instance_id": "R53407xR53366", "comparison_id": "R53407", "paper_id": "R53366", "text": "Distinctiveness magnifies the impact of biological invaders in aquatic ecosystems There exist few empirical rules for the effects of introduced species, reflecting the context-dependent nature of biological invasions. A promising approach toward developing generalizations is to explore hypotheses that incorporate characteristics of both the invader and the recipient system. We present the first general test of the hypothesis that an invader\u2019s impact is determined by the system\u2019s evolutionary experience with similar species. Through a meta-analysis, we compared the taxonomic distinctiveness of high- and low-impact invaders in several aquatic systems. We find that high-impact invaders (i.e. those that displace native species) are more likely to belong to genera not already present in the system." }, { "instance_id": "R54244xR54146", "comparison_id": "R54244", "paper_id": "R54146", "text": "Alien plant species favoured over congeneric natives under experimental climate warming in temperate Belgian climate Climate warming and biological invasions by alien species are two key factors threatening the world\u2019s biodiversity. 
To date, their impact has largely been studied independently, and knowledge on whether climate warming will promote invasions relies strongly on bioclimatic models. We therefore set up a study to experimentally compare responses to warming in native and alien plant species. Ten congeneric species pairs were exposed to ambient and elevated temperature (+3\u00b0C) in sunlit, climate-controlled chambers, under optimal water and nutrient supply to avoid interaction with other factors. All species pairs combined, total plant biomass reacted differently to warming in alien versus native species, which could be traced to significantly different root responses. On average, native species became less productive in the warmer climate, whereas their alien counterparts showed no response. The three alien species with the strongest warming response (Lathyrus latifolius, Cerastium tomentosum and Artemisia verlotiorum) are currently non-invasive but all originate from regions with a warmer climate. Still, other alien species that also originate from warmer regions became less or remained equally productive. Structural or ecophysiological acclimation to warming was largely absent, both in native and alien species, apart from light-saturated photosynthetic rate, where warming tended to restrain the native but not the alien species. A difference in the capacity to acclimate photosynthetic rates to the new climate may therefore have caused the contrasting biomass response. Future experiments are needed to ascertain whether climate warming can effectively tip the balance between native and alien competitors." }, { "instance_id": "R54244xR54136", "comparison_id": "R54244", "paper_id": "R54136", "text": "Invasion strategies in clonal aquatic plants: are phenotypic differences caused by phenotypic plasticity or local adaptation? BACKGROUND AND AIMS The successful spread of invasive plants in new environments is often linked to multiple introductions and a diverse gene pool that facilitates local adaptation to variable environmental conditions. For clonal plants, however, phenotypic plasticity may be equally important. Here the primary adaptive strategy in three non-native, clonally reproducing macrophytes (Egeria densa, Elodea canadensis and Lagarosiphon major) in New Zealand freshwaters were examined and an attempt was made to link observed differences in plant morphology to local variation in habitat conditions. METHODS Field populations with a large phenotypic variety were sampled in a range of lakes and streams with different chemical and physical properties. The phenotypic plasticity of the species before and after cultivation was studied in a common garden growth experiment, and the genetic diversity of these same populations was also quantified. KEY RESULTS For all three species, greater variation in plant characteristics was found before they were grown in standardized conditions. Moreover, field populations displayed remarkably little genetic variation and there was little interaction between habitat conditions and plant morphological characteristics. CONCLUSIONS The results indicate that at the current stage of spread into New Zealand, the primary adaptive strategy of these three invasive macrophytes is phenotypic plasticity. However, while limited, the possibility that genetic diversity between populations may facilitate ecotypic differentiation in the future cannot be excluded. These results thus indicate that invasive clonal aquatic plants adapt to new introduced areas by phenotypic plasticity. 
Inorganic carbon, nitrogen and phosphorus were important in controlling plant size of E. canadensis and L. major, but no other relationships between plant characteristics and habitat conditions were apparent. This implies that within-species differences in plant size can be explained by local nutrient conditions. Altogether, this strongly suggests that invasive clonal aquatic plants adapt to a wide range of habitats in introduced areas by phenotypic plasticity rather than local adaptation." }, { "instance_id": "R54244xR54212", "comparison_id": "R54244", "paper_id": "R54212", "text": "Phenotypic plasticity of native vs. invasive purple loosestrife: A two-state multivariate approach The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. The significant genotype x environment (G x E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years." }, { "instance_id": "R54244xR54084", "comparison_id": "R54244", "paper_id": "R54084", "text": "Higher plasticity in ecophysiological traits enhances the performance and invasion success of Taraxacum officinale (dandelion) in alpine environments Phenotypic plasticity has long been suggested to facilitate biological invasions in changing environments, allowing a species to maintain a good ecophysiological performance. High-mountain habitats have been particularly useful for evaluation of the relative importance of environmental conditions in the colonization and invasion process, because they have heterogeneous and stressful climatic conditions, inducing photoinhibition. Light intensity is one of the most changing conditions along altitudinal gradients, showing more variability in higher altitudes. In this study, we analyzed the plasticity in photoprotective strategies and performance of the invasive Taraxacum officinale. Additionally, we tested whether higher plasticity enhances competitive ability in an alpine environment. We conducted an experiment to evaluate plasticity with a second generation (F2) of T. officinale individuals from 1,600 to 3,600 m, in a greenhouse with variation in light intensity. Treatments consisted of transferring 120 individuals from each altitude to two conditions of light intensity. 
We then recorded concentrations of photoprotection pigment, de-epoxidation state of the xanthophyll cycle, foliar angles, photochemical efficiency by fluorescence of photosystem II, total dry biomass and flower production. Additionally, we compared plasticity in both photoprotective and performance traits between T. officinale and the co-occurring native species Hypochaeris thrincioides. Finally, we performed a manipulative experiment under two light regimes in order to assess the competitive outcome between the invasive T. officinale and the native H. thrincioides. Individuals from higher altitude showed significantly greater plasticity than individuals from lower altitude. Similarly, individuals under high light intensity showed higher levels of photoprotective pigments, biomass and flower production. On the other hand, the invasive plant species showed significantly greater plasticity than the co-occurring native species, and a strong negative impact on the biomass of the native plant. Phenotypic plasticity seems to be a successful strategy in T. officinale to compete with native species and may be positively associated with the success of invasions, being greater in individuals from more heterogeneous and stressful environments." }, { "instance_id": "R54244xR54022", "comparison_id": "R54244", "paper_id": "R54022", "text": "Wood anatomical traits as a measure of plant responses to water availability: invasive Acacia mearnsii De Wild. compared with native tree species in fynbos riparian ecotones, South Africa Riparian ecotones in the fynbos biome of South Africa are heavily invaded by woody invasive alien species, which are known to reduce water supply to downstream environments. To explore whether variation in species-specific functional traits pertaining to drought-tolerance exist, we investigated wood anatomical traits of key native riparian species and the invasive Acacia mearnsii across different water availability proxies. Wood density, vessel resistance against implosion, vessel lumen diameter and vessel wall thickness were measured. Wood density varied significantly between species, with A. mearnsii having denser wood at sites in rivers with high discharge. As higher wood density is indicative of increased drought tolerance and typical of drier sites, this counter-intuitive finding suggests that increased wood density was more closely related to midday water stress, than streamflow quantity per se. Wood density was positively correlated with vessel resistance against implosion. Higher wood density may also be evidence that A. mearnsii is more resistant against drought-induced cavitation than the studied native species. The observed plastic response of A. mearnsii anatomical traits to variable water availability indicates the ability of this species to persist under various environmental conditions. A possible non-causal relationship between wood anatomy and drought tolerance in these riparian systems is discussed." }, { "instance_id": "R54244xR54102", "comparison_id": "R54244", "paper_id": "R54102", "text": "Lantana camara L.: a weed with great light-acclimation capacity Plant invasions may be limited by low radiation levels in ecosystems such as forests. Lantana camara has been classified among the world\u2019s 10 worst weeds since it is invading many different habitats all around the planet. Morphological and physiological responses to different light fluxes were analyzed. L. camara was able to acclimate to moderately shaded environments, showing a high phenotypic plasticity. 
Morphological acclimation to low light fluxes was typified by increasing leaf size, leaf biomass, leaf area index and plant height and by reduced stomatal density and leaf thickness. Plants in full sunlight produced many more inflorescences than in shaded conditions. Physiological acclimation to low radiation levels was shown to be higher stomatal conductance, higher net photosynthetic rates and higher efficiency of photosystem II (PSII). L. camara behaves as a facultative shade-tolerant plant, being able to grow in moderately sheltered environments, however its invasion could be limited in very shady habitats. Control efforts in patchy environments should be mainly directed against individuals in open areas since that is where the production of seeds would be higher and the progress of the invasion would be faster." }, { "instance_id": "R54244xR54070", "comparison_id": "R54244", "paper_id": "R54070", "text": "Phenotypic variation of an alien species in a new environment: the body size and diet of American mink over time and at local and continental scales Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. \u00a9 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681\u2013693." 
}, { "instance_id": "R54244xR54162", "comparison_id": "R54244", "paper_id": "R54162", "text": "Trade-off between morphological convergence and opportunistic diet behavior in fish hybrid zone Abstract Background The invasive Chondrostoma nasus nasus has colonized part of the distribution area of the protected endemic species Chondrostoma toxostoma toxostoma . This hybrid zone is a complex system where multiple effects such as inter-species competition, bi-directional introgression, strong environmental pressure and so on are combined. Why do sympatric Chondrostoma fish present a unidirectional change in body shape? Is this the result of inter-species interactions and/or a response to environmental effects or the result of trade-offs? Studies focusing on the understanding of a trade-off between multiple parameters are still rare. Although this has previously been done for Cichlid species flock and for Darwin finches, where mouth or beak morphology were coupled to diet and genetic identification, no similar studies have been done for a fish hybrid zone in a river. We tested the correlation between morphology (body and mouth morphology), diet (stable carbon and nitrogen isotopes) and genomic combinations in different allopatric and sympatric populations for a global data set of 1330 specimens. To separate the species interaction effect from the environmental effect in sympatry, we distinguished two data sets: the first one was obtained from a highly regulated part of the river and the second was obtained from specimens coming from the less regulated part. Results The distribution of the hybrid combinations was different in the two part of the sympatric zone, whereas all the specimens presented similar overall changes in body shape and in mouth morphology. Sympatric specimens were also characterized by a larger diet behavior variance than reference populations, characteristic of an opportunistic diet. No correlation was established between the body shape (or mouth deformation) and the stable isotope signature. Conclusion The Durance River is an untamed Mediterranean river despite the presence of numerous dams that split the river from upstream to downstream. The sympatric effect on morphology and the large diet behavior range can be explained by a tendency toward an opportunistic behavior of the sympatric specimens. Indeed, the similar response of the two species and their hybrids implied an adaptation that could be defined as an alternative trade-off that underline the importance of epigenetics mechanisms for potential success in a novel environment." }, { "instance_id": "R54244xR54094", "comparison_id": "R54244", "paper_id": "R54094", "text": "Multispecies comparison reveals that invasive and native plants differ in their traits but not in their plasticity Summary 1. Plastic responses to spatiotemporal environmental variation strongly influence species distribution, with widespread species expected to have high phenotypic plasticity. Theoretically, high phenotypic plasticity has been linked to plant invasiveness because it facilitates colonization and rapid spreading over large and environmentally heterogeneous new areas. 2. To determine the importance of phenotypic plasticity for plant invasiveness, we compare well-known exotic invasive species with widespread native congeners. 
First, we characterized the phenotype of 20 invasive\u2013native ecologically and phylogenetically related pairs from the Mediterranean region by measuring 20 different traits involved in resource acquisition, plant competition ability and stress tolerance. Second, we estimated their plasticity across nutrient and light gradients. 3. On average, invasive species had greater capacity for carbon gain and enhanced performance over a range of limiting to saturating resource availabilities than natives. However, both groups responded to environmental variations with high albeit similar levels of trait plasticity. Therefore, contrary to the theory, the extent of phenotypic plasticity was not significantly higher for invasive plants. 4. We argue that the combination of studying mean values of a trait with its plasticity can render insightful conclusions on functional comparisons of species such as those exploring the performance of species coexisting in heterogeneous and changing environments." }, { "instance_id": "R54244xR54040", "comparison_id": "R54244", "paper_id": "R54040", "text": "Architectural strategies of Rhamnus cathartica (Rhamnaceae) in relation to canopy openness While phenotypic plasticity is considered the major means that allows plant to cope with environmental heterogeneity, scant information is available on phenotypic plasticity of the whole-plant architecture in relation to ontogenic processes. We performed an architectural analysis to gain an understanding of the structural and ontogenic properties of common buckthorn ( Rhamnus cathartica L., Rhamnaceae) growing in the understory and under an open canopy. We found that ontogenic effects on growth need to be calibrated if a full description of phenotypic plasticity is to be obtained. Our analysis pointed to three levels of organization (or nested structural units) in R. cathartica. Their modulation in relation to light conditions leads to the expression of two architectural strategies that involve sets of traits known to confer competitive advantage in their respective environments. In the understory, the plant develops a tree-like form. Its strategy here is based on restricting investment in exploitation structures while promoting major vertical exploration and is probably key to species survival in the understory. Under an open canopy, the second strategy leads the plant to adopt a shrub-like shape. It develops densely branched exploitation structures and flowers abundantly and rapidly. This strategy perfectly matches its aggressive behaviour observed in full sunlight. We propose, as hypotheses, that these two light-related strategies are implicated in the ability of R. cathartica to outcompete the surrounding vegetation in a range of environmental conditions." }, { "instance_id": "R54244xR54096", "comparison_id": "R54244", "paper_id": "R54096", "text": "Nitrogen acquisition by annual and perennial grass seedlings: testing the roles of performance and plasticity to explain plant invasion Differences in resource acquisition between native and exotic plants is one hypothesis to explain invasive plant success. Mechanisms include greater resource acquisition rates and greater plasticity in resource acquisition by invasive exotic species compared to non-invasive natives. We assess the support for these mechanisms by comparing nitrate acquisition and growth of invasive annual and perennial grass seedlings in western North America. 
Two invasive exotic grasses (Bromus tectorum and Taeniatherum caput-medusae) and three perennial native and exotic grasses (Pseudoroegneria spicata, Elymus elymoides, and Agropyron cristatum) were grown at various temperatures typical of autumn and springtime when resource are abundant and dominance is determined by rapid growth and acquisition of resources. Bromus tectorum and perennial grasses had similar rates of nitrate acquisition at low temperature, but acquisition by B. tectorum significantly exceeded perennial grasses at higher temperature. Consequently, B. tectorum had the highest acquisition plasticity, showcasing its ability to take advantage of transient warm periods in autumn and spring. Nitrate acquisition by perennial grasses was limited either by root production or rate of acquisition per unit root mass, suggesting a trade-off between nutrient acquisition and allocation of growth to structural tissues. Our results indicate the importance of plasticity in resource acquisition when temperatures are warm such as following autumn emergence by B. tectorum. Highly flexible and opportunistic nitrate acquisition appears to be a mechanism whereby invasive annual grasses exploit soil nitrogen that perennials cannot use." }, { "instance_id": "R54244xR54214", "comparison_id": "R54244", "paper_id": "R54214", "text": "Phenotypic plasticity, precipitation, and invasiveness in the fire-promoting grass Pennisetum setaceum (poaceae) Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA). In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass." }, { "instance_id": "R54244xR54026", "comparison_id": "R54244", "paper_id": "R54026", "text": "Complex interactions between spatial pattern of resident species and invasiveness of newly arriving species affect invasibility Understanding the factors that affect establishment success of new species in established communities requires the study of both the ability of new species to establish and community resistance. Spatial pattern of species within a community can affect plant performance by changing the outcome of inter-specific competition, and consequently community invasibility. 
We studied the effects of spatial pattern of resident plant communities on fitness of genotypes from the native and introduced ranges of two worldwide invasive species, Centaurea stoebe and Senecio inaequidens, during their establishment stage. We experimentally established artificial plant mixtures with 4 or 8 resident species in intra-specifically aggregated or random spatial patterns, and added seedlings of genotypes from the native and introduced ranges of the two target species. Early growth of both S. inaequidens and C. stoebe was higher in aggregated than randomly assembled mixtures. However, a species-specific interaction between invasiveness and invasibility highlighted more complex patterns. Genotypes from native and introduced ranges of S. inaequidens showed the same responses to spatial pattern. By contrast, genotypes from the introduced range of C. stoebe did not respond to spatial pattern whereas native ones did. Based on phenotypic plasticity, we argue that the two target species adopted different strategies to deal with the spatial pattern of the resident plant community. We show that effects of spatial pattern of the resident community on the fitness of establishing species may depend on the diversity of the recipient community. Our results highlight the need to consider the interaction between invasiveness and invasibility in order to increase our understanding of invasion success." }, { "instance_id": "R54244xR54076", "comparison_id": "R54244", "paper_id": "R54076", "text": "Intraspecies differences in phenotypic plasticity: Invasive versus non-invasive populations of Ceratophyllum demersum Abstract High phenotypic plasticity has been hypothesized to affect the invasiveness of plants, as high plasticity may enlarge the breath of environments in which the plants can survive and reproduce. Here we compare the phenotypic plasticity of invasive and non-invasive populations of the same species in response to growth temperature. Populations of the submerged macrophyte Ceratophyllum demersum from New Zealand, where the species is introduced and invasive, and from Denmark, where the species is native and non-invasive, were grown in a common garden setup at temperatures of 12, 18, 25 and 35 \u00b0C. We hypothesized that the phenotypic plasticity in fitness-related traits like growth and photosynthesis were higher in the invasive than in the non-invasive population. The invasive population acclimated to elevated temperatures through increased rates of photosynthesis (range: P amb : 8\u2013452 \u03bcmol O 2 g \u22121 DM h \u22121 ) and relative growth rates (range: 0.01\u20130.05 d \u22121 ) and associated regulations in the photosynthetic machinery. The non-invasive population had a lower acclimation potential (range: P amb : 43\u2013173 \u03bcmol O 2 g \u22121 DM h \u22121 ; RGR: 0.01\u20130.03 d \u22121 ), but was better at acclimating to cooler conditions by regulation of the light-harvesting complex. Hence, the invasive population of C. demersum from New Zealand had higher phenotypic plasticity in response to temperature than the non-invasive Danish population. This might be the result of genetic evolution since its introduction to New Zealand five decades ago, but further studies are needed to test this hypothesis. The study also indicate, that the global increase in temperature may exacerbate the problems experienced with the invasive C. demersum in New Zealand, as the performance and fitness of this population appear to be favoured at elevated temperatures." 
}, { "instance_id": "R54244xR54134", "comparison_id": "R54244", "paper_id": "R54134", "text": "Thermal variability alters climatic stress resistance and plastic responses in a globally invasive pest, the Mediterranean fruit fly (Ceratitis capitata) Climatic means with different degrees of variability (\u03b4) may change in the future and could significantly impact ectotherm species fitness. Thus, there is an increased interest in understanding the effects of changes in means and variances of temperature on traits of climatic stress resistance. Here, we examined short\u2010term (within\u2010generation) variation in mean temperature (23, 25, and 27 \u00b0C) at three levels of diel thermal fluctuations (\u03b4 = 1, 3, or 5 \u00b0C) on an invasive pest insect, the Mediterranean fruit fly, Ceratitis capitata (Wiedemann) (Diptera: Tephritidae). Using the adult flies, we address the hypothesis that temperature variability may affect the climatic stress resistance over and above changes in mean temperature at constant variability levels. We scored the traits of high\u2010 and low\u2010thermal tolerance, high\u2010 and low\u2010temperature acute hardening ability, water balance, and egg production under benign conditions after exposure to each of the nine experimental scenarios. Most importantly, results showed that temperature variance may have significant effects in addition to the changes in mean temperature for most traits scored. Although typical acclimation responses were detected for most of the traits under low variance conditions, high variance scenarios dramatically altered the outcomes, with poorer climatic stress resistance detected in some, but not all, traits. These results suggest that large temperature fluctuations might limit plastic responses which in turn could reduce the insect fitness. Increased mean temperatures in conjunction with increased temperature variability may therefore have stronger negative effects on this agricultural pest than elevated temperatures alone. The results of this study therefore have significant implications for understanding insect responses to climate change and suggest that analyses or simulations of only mean temperature variation may be inappropriate for predicting population\u2010level responses under future climate change scenarios despite their widespread use." }, { "instance_id": "R54244xR54194", "comparison_id": "R54244", "paper_id": "R54194", "text": "Major morphological changes in a Lake Victoria cichlid fish within two decades During the upsurge of the introduced predatory Nile perch in Lake Victoria in the 1980s, the zooplanktivorous Haplochromis (Yssichromis) pyrrhocephalus nearly vanished. The species recovered coincident with the intense fishing of Nile perch in the 1990s, when water clarity and dissolved oxygen levels had decreased dramatically due to increased eutrophication. In response to the hypoxic conditions, total gill surface in resurgent H. pyrrhocephalus increased by 64%. Remarkably, head length, eye length, and head volume decreased in size, whereas cheek depth increased. Reductions in eye size and depth of the rostral part of the musculus sternohyoideus, and reallocation of space between the opercular and suspensorial compartments of the head may have permitted accommodation of larger gills in a smaller head. By contrast, the musculus levator posterior, located dorsal to the gills, increased in depth. This probably reflects an adaptive response to the larger and tougher prey types in the diet of resurgent H. 
pyrrhocephalus. These striking morphological changes over a time span of only two decades could be the combined result of phenotypic plasticity and genetic change and may have fostered recovery of this species." }, { "instance_id": "R54244xR54030", "comparison_id": "R54244", "paper_id": "R54030", "text": "Seedling traits, plasticity and local differentiation as strategies of invasive species of Impatiens in central Europe Background and Aims Invasiveness of some alien plants is associated with their traits, plastic responses to environmental conditions and interpopulation differentiation. To obtain insights into the role of these processes in contributing to variation in performance, we compared congeneric species of Impatiens (Balsaminaceae) with different origin and invasion status that occur in central Europe. Methods Native I. noli-tangere and three alien species (highly invasive I. glandulifera, less invasive I. parviflora and potentially invasive I. capensis) were studied and their responses to simulated canopy shading and different nutrient and moisture levels were determined in terms of survival and seedling traits. Key Results and Conclusions Impatiens glandulifera produced high biomass in all the treatments and the control, exhibiting the ‘Jack-and-master’ strategy that makes it a strong competitor from germination onwards. The results suggest that plasticity and differentiation occurred in all the species tested and that along the continuum from plasticity to differentiation, the species at the plasticity end is the better invader. The most invasive species I. glandulifera appears to be highly plastic, whereas the other two less invasive species, I. parviflora and I. capensis, exhibited lower plasticity but rather strong population differentiation. The invasive Impatiens species were taller and exhibited higher plasticity and differentiation than native I. noli-tangere. This suggests that even within one genus, the relative importance of the phenomena contributing to invasiveness appears to be species-specific." }, { "instance_id": "R54244xR54024", "comparison_id": "R54244", "paper_id": "R54024", "text": "Nonnative African jewelfish are more fit but not bolder at the invasion front: a trait comparison across an Everglades range expansion Invasive species present a global threat to natural ecosystems and native biodiversity. Previous studies have shown that invasive range expansion is often related to the invader’s life histories and dispersal behavior. Among behavioral traits, boldness is a key trait that may aid species in performing well in novel environments. Thus, along a species’ invaded range, individuals from the invasion front should be bolder, better dispersers, and have life histories that maximize population growth relative to established populations. We tested these hypotheses with the invasion of the African jewelfish Hemichromis letourneuxi in Everglades National Park (ENP). Jewelfish entered ENP in 2000, and since then they have expanded their range rapidly but traceably. Our study examined variation in reproductive investment, body condition, gut fullness, boldness, and dispersal behavior across six wild-caught populations of African jewelfish. Boldness and dispersal were tested using an emergence-activity test and an emergence-dispersal test in large, outdoor experimental setups. We dissected fish from the six populations to assess life histories.
Populations from the invasion front (western ENP) had higher reproductive investment, higher gut fullness, and better body condition, but they were not relatively bolder nor better dispersers than inner populations (eastern ENP). As the invasion progressed, lower intraspecific density at the invasion front may have relaxed competition and allowed for higher fitness and reproductive investment. Understanding underlying behavioral and life-history mechanisms of an invasion is key for the development of management strategies that aim to contain current invaders and prevent the spread of future ones." }, { "instance_id": "R54244xR54018", "comparison_id": "R54244", "paper_id": "R54018", "text": "The acclimation potential of Acacia longifolia to water stress: Implications for invasiveness The ability of an invasive species to establish and spread to new areas may depend on its ability to tolerate a broad range of environmental conditions. Due to climate change, increasing occurrences of extreme events such as droughts are expected in the Mediterranean region and invasive species may expand if they cope with water stress. Limited information is available on the responses of Acacia longifolia, one of the most aggressive plant species in Portuguese coastal sand dune ecosystems, to prolonged water stress. In this study, we exposed A. longifolia plants from two distinct populations, one from the wet (northern) and another from the dry (southern) climate regions of Portugal, to drought conditions, and monitored morphological, physiological and biochemical responses. One-month-old seedlings were submitted to three different water treatments which involved watering twice a week, every 7 days and every 10 days, respectively, for three months, under controlled conditions. Overall, the progressive drought stress significantly affected most of the growth parameters considered, except the root:shoot ratio. Water stress also increased the uptake of ions (Ca\u00b2\u207a, Mg\u00b2\u207a, K\u207a and Na\u207a) and N concentration. On the contrary, the C/N ratio decreased under water stress conditions. Isotopic analysis did not reveal significant differences in \u03b4\u00b9\u00b3C with water treatments but the same pattern was not observed in \u03b4\u00b9\u2075N values. Compared with the wet climate population, the dry climate population showed somewhat differing responses to water stress, indicating a genetic difference between populations. These results provide insights into limitations and opportunities for establishment of A. longifolia in a drought-prone scenario." }, { "instance_id": "R54244xR54140", "comparison_id": "R54244", "paper_id": "R54140", "text": "Phenotypic plasticity of thermal tolerance contributes to the invasion potential of Mediterranean fruit flies (Ceratitis capitata) 1. The invasion success of Ceratitis capitata probably stems from physiological, morphological, and behavioural adaptations that enable them to survive in different habitats. However, it is generally poorly understood if variation in acute thermal tolerance and its phenotypic plasticity might be important in facilitating survival of C. capitata upon introduction to novel environments." 
}, { "instance_id": "R54244xR54128", "comparison_id": "R54244", "paper_id": "R54128", "text": "Functional differences in response to drought in the invasive Taraxacum officinale from native and introduced alpine habitat ranges Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages. Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. officinale in high mountain environments of the Andes." }, { "instance_id": "R54244xR54108", "comparison_id": "R54244", "paper_id": "R54108", "text": "Forests are not immune to plant invasions: phenotypic plasticity and local adaptation allow Prunella vulgaris to colonize a temperate evergreen rainforest In the South American temperate evergreen rainforest (Valdivian forest), invasive plants are mainly restricted to open sites, being rare in the shaded understory. This is consistent with the notion of closed-canopy forests as communities relatively resistant to plant invasions. However, alien plants able to develop shade tolerance could be a threat to this unique forest. Phenotypic plasticity and local adaptation are two mechanisms enhancing invasiveness. Phenotypic plasticity can promote local adaptation by facilitating the establishment and persistence of invasive species in novel environments. We investigated the role of these processes in the recent colonization of Valdivian forest understory by the perennial alien herb Prunella vulgaris from nearby populations in open sites. Using reciprocal transplants, we found local adaptation between populations. Field data showed that the shade environment selected for taller plants and greater specific leaf areas. We found population differentiation and within-population genetic variation in both mean values and reaction norms to light variation of several ecophysiological traits in common gardens from seeds collected in sun and shade populations. The colonization of the forest resulted in a reduction of plastic responses to light variation, which is consistent with the occurrence of genetic assimilation and suggests that P. vulgaris individuals adapted to the shade have reduced probabilities to return to open sites. All results taken together confirm the potential for rapid evolution of shade tolerance in P. 
vulgaris and suggest that this alien species may pose a threat to the native understory flora of Valdivian forest." }, { "instance_id": "R54244xR54060", "comparison_id": "R54244", "paper_id": "R54060", "text": "Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research52, 252\u2013259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment." }, { "instance_id": "R54244xR54132", "comparison_id": "R54244", "paper_id": "R54132", "text": "Elevational distribution limits of non-native species: combining observational and experimental evidence Background: In temperate mountains, most non-native plant species reach their distributional limit somewhere along the elevational gradient. However, it is unclear if growth limitations can explain upper range limits and whether phenotypic plasticity or genetic changes allow species to occupy a broad elevational gradient. Aims: We investigated how non-native plant individuals from different elevations responded to growing season temperatures, which represented conditions at the core and margin of the elevational distributions of the species. Methods: We recorded the occurrence of nine non-native species in the Swiss Alps and subsequently conducted a climate chamber experiment to assess growth rates of plants from different elevations under different temperature treatments. Results: The elevational limit observed in the field was not related to the species' temperature response in the climate chamber experiment. Almost all species showed a similar level of reduction in growth rates under lower temperatures independent of the upper elevational limit of the species' distribution. For two species we found indications for genetic differentiation among plants from different elevations. 
Conclusions: We conclude that factors other than growing season temperatures, such as extreme events or winter mortality, might shape the elevational limit of non-native species, and that ecological filtering might select for genotypes that are phenotypically plastic." }, { "instance_id": "R54244xR54062", "comparison_id": "R54244", "paper_id": "R54062", "text": "Shell morphology and relative growth variability of the invasive pearl oyster Pinctada radiata in coastal Tunisia The variability of shell morphology and relative growth of the invasive pearl oyster Pinctada radiata was studied within and among ten populations from coastal Tunisia using discriminant tests. Therefore, 12 morphological characters were examined and 34 metric and weight ratios were defined. In addition to the classic morphological characters, populations were compared by the thickness of the nacreous layer. Results of Duncan's multiple comparison test showed that the most discriminative ratios were the width of nacreous layer of right valve to the inflation of shell, the hinge line length to the maximum width of shell and the nacre thickness to the maximum width of shell. The analysis of variance revealed an important inter-population morphological variability. Both multidimensional scaling analysis and the squared Mahalanobis distances (D²) of metric ratios divided Tunisian P. radiata populations into four biogeographical groupings: the north coast (La Marsa); harbours (Hammamet, Monastir and Zarzis); the Gulf of Gabès (Sfax, Kerkennah Island, Maharès, Skhira and Djerba) and the intertidal area (Ajim). However, the Kerkennah Island population was discriminated by the squared Mahalanobis distances (D²) of weight ratios in an isolated group suggesting particular trophic conditions in this area. The allometric study revealed high linear correlation between shell morphological characters and differences in allometric growth among P. radiata populations. Unlike the morphological discrimination, allometric differentiation shows no clear geographical distinction. This study revealed that the pearl oyster P. radiata exhibited considerable phenotypic plasticity related to differences of environmental and/or ecological conditions along Tunisian coasts and highlighted the discriminative character of the nacreous layer thickness parameter." }, { "instance_id": "R54244xR54176", "comparison_id": "R54244", "paper_id": "R54176", "text": "Inducible defences as key adaptations for the successful invasion of Daphnia lumholtzi in North America? The mechanisms underlying successful biological invasions often remain unclear. In the case of the tropical water flea Daphnia lumholtzi, which invaded North America, it has been suggested that this species possesses a high thermal tolerance, which in the course of global climate change promotes its establishment and rapid spread. However, D. lumholtzi has an additional remarkable feature: it is the only water flea that forms rigid head spines in response to chemicals released in the presence of fishes. These morphologically (phenotypically) plastic traits serve as an inducible defence against these predators. Here, we show in controlled mesocosm experiments that the native North American species Daphnia pulicaria is competitively superior to D. lumholtzi in the absence of predators. However, in the presence of fish predation the invasive species formed its defences and became dominant.
This observation of a predator-mediated switch in dominance suggests that the inducible defence against fish predation may represent a key adaptation for the invasion success of D. lumholtzi ." }, { "instance_id": "R54244xR54106", "comparison_id": "R54244", "paper_id": "R54106", "text": "High temperature tolerance and thermal plasticity in emerald ash borer Agrilus planipennis 1 The emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) (EAB), an invasive wood\u2010boring beetle, has recently caused significant losses of native ash (Fraxinus spp.) trees in North America. Movement of wood products has facilitated EAB spread, and heat sanitation of wooden materials according to International Standards for Phytosanitary Measures No. 15 (ISPM 15) is used to prevent this. 2 In the present study, we assessed the thermal conditions experienced during a typical heat\u2010treatment at a facility using protocols for pallet wood treatment under policy PI\u201007, as implemented in Canada. The basal high temperature tolerance of EAB larvae and pupae was determined, and the observed heating rates were used to investigate whether the heat shock response and expression of heat shock proteins occurred in fourth\u2010instar larvae. 3 The temperature regime during heat treatment greatly exceeded the ISPM 15 requirements of 56 \u00b0C for 30 min. Emerald ash borer larvae were highly tolerant of elevated temperatures, with some instars surviving exposure to 53 \u00b0C without any heat pre\u2010treatments. High temperature survival was increased by either slow warming or pre\u2010exposure to elevated temperatures and a recovery regime that was accompanied by up\u2010regulated hsp70 expression under some of these conditions. 4 Because EAB is highly heat tolerant and exhibits a fully functional heat shock response, we conclude that greater survival than measured in vitro is possible under industry treatment conditions (with the larvae still embedded in the wood). We propose that the phenotypic plasticity of EAB may lead to high temperature tolerance very close to conditions experienced in an ISPM 15 standard treatment." }, { "instance_id": "R54244xR54052", "comparison_id": "R54244", "paper_id": "R54052", "text": "Light Response of Native and Introduced Miscanthus sinensis Seedlings The Asian grass Miscanthus sinensis (Poaceae) is being considered for use as a bioenergy crop in the U.S. Corn Belt. Originally introduced to the United States for ornamental plantings, it escaped, forming invasive populations. The concern is that naturalized M. sinensis populations have evolved shade tolerance. We tested the hypothesis that seedlings from within the invasive U.S. range of M. sinensis would display traits associated with shade tolerance, namely increased area for light capture and phenotypic plasticity, compared with seedlings from the native Japanese populations. In a common garden experiment, seedlings of 80 half-sib maternal lines were grown from the native range (Japan) and 60 half-sib maternal lines from the invasive range (U.S.) under four light levels. Seedling leaf area, leaf size, growth, and biomass allocation were measured on the resulting seedlings after 12 wk. Seedlings from both regions responded strongly to the light gradient. High light conditions resulted in seedlings with greater leaf area, larger leaves, and a shift to greater belowground biomass investment, compared with shaded seedlings. Japanese seedlings produced more biomass and total leaf area than U.S. 
seedlings across all light levels. Generally, U.S. and Japanese seedlings allocated a similar amount of biomass to foliage and equal leaf area per leaf mass. Subtle differences in light response by region were observed for total leaf area, mass, growth, and leaf size. U.S. seedlings had slightly higher plasticity for total mass and leaf area but lower plasticity for measures of biomass allocation and leaf traits compared with Japanese seedlings. Our results do not provide general support for the hypothesis of increased M. sinensis shade tolerance within its introduced U.S. range compared with native Japanese populations. Nomenclature: Eulaliagrass; Miscanthus sinensis Anderss. Management Implications: Eulaliagrass (Miscanthus sinensis), an Asian species under consideration for biomass production in the Midwest, has escaped ornamental plantings in the United States to form naturalized populations. Evidence suggests that U.S. populations are able to tolerate relatively shady conditions, but it is unclear whether U.S. populations have greater shade tolerance than the relatively shade-intolerant populations within the species' native range in Asia. Increased shade tolerance could result in a broader range of invaded light environments within the introduced range of M. sinensis. However, results from our common garden experiment do not support the hypothesis of increased shade tolerance in introduced U.S. populations compared with seedlings from native Asian populations. Our results do demonstrate that for both U.S. and Japanese populations under low light conditions, M. sinensis seeds germinate and seedlings gain mass and leaf area; therefore, land managers should carefully monitor or eradicate M. sinensis within these habitats." }, { "instance_id": "R54244xR54098", "comparison_id": "R54244", "paper_id": "R54098", "text": "Heritable pollution tolerance in a marine invader The global spread of fouling invasive species is continuing despite the use of antifouling biocides. Furthermore, previous evidence suggests that non-indigenous species introduced via hull fouling may be capable of adapting to metal-polluted environments. Using a laboratory based toxicity assay, we investigated tolerance to copper in the non-indigenous bryozoan Watersipora subtorquata from four source populations. Individual colonies were collected from four sites within Port Hacking (Sydney, Australia) and their offspring exposed to a range of copper concentrations. This approach, using a full-sib, split-family design, tests for a genotype by environment (G×E) interaction. Settlement and complete metamorphosis (recruitment) were measured as ecologically relevant endpoints. Larval sizes were also measured for each colony. Successful recruitment was significantly reduced by the highest copper concentration of 80 μg L⁻¹. While there was no difference in pollution tolerance between sites, there was a significant G×E interaction, with large variation in the response of colony offspring within sites. Larval size differed significantly both between sites and between colonies and was positively correlated with tolerance. The high level of variation in copper tolerance between colonies suggests that there is considerable potential within populations to adapt to elevated copper levels, as tolerance is a heritable trait. Also, colonies that produce large larvae are more tolerant to copper, suggesting that tolerance may be a direct consequence of larger size."
}, { "instance_id": "R54244xR54068", "comparison_id": "R54244", "paper_id": "R54068", "text": "Phenotypic plasticity of Chenopodium murale across contrasting habitat conditions in peri-urban areas in Indian dry tropics: Is it indicative of its invasiveness? Phenotypic plasticity is an important plant trait associated with invasiveness of alien plants that reflects its ability to occupy a wide range of environments. We investigated the phenotypic response of Chenopodium murale to resource variability and ontogeny. Its plant-level and leaf-level traits were studied at high-resource (HR) and low-resource (LR) sites in peri-urban areas in Indian dry tropics. Plants at LR had significantly higher root length, root/shoot biomass ratio, stem mass and root mass fractions. Plants at HR had higher shoot length, basal diameter, leaf mass fraction and leaf area ratio. Leaf-level traits like leaf area and chlorophyll a were also higher here. Mean plasticity indices for plant- and leaf-level traits were higher at HR. With increasing total plant biomass, there was significant increase in the biomass of leaf, stem, root, and reproductive parts, and root and shoot lengths, whereas root/shoot length ratio, their biomass ratio, and leaf and root mass fractions declined significantly. Allocation to roots and leaves significantly decreased with increasing plant size at both sites. But, at any size, allocation to roots was greater at LR, indicative of optimization of capture of soil nutrients, whereas leaf allocation was higher at HR. Consistently increasing stem allocation equaled leaf allocation at comparatively higher shoot lengths at HR. Reproductive biomass comprised 10\u201312% of the plant\u2019s total biomass. In conclusion, the success of alien weed C. murale across environmentally diverse habitat conditions in Indian dry tropics can be attributed to its high phenotypic plasticity, resource utilization capability in low-resource habitats and higher reproductive potential. These characteristics suggest that it will continue to be an aggressive invader." }, { "instance_id": "R54244xR54142", "comparison_id": "R54244", "paper_id": "R54142", "text": "Microhabitat analysis of the invasive exotic liana Lonicera japonica Thunb. Abstract We documented microhabitat occurrence and growth of Lonicera japonica to identify factors related to its invasion into a southern Illinois shale barren. The barren was surveyed for L. japonica in June 2003, and the microhabitats of established L. japonica plants were compared to random points that sampled the range of available microhabitats in the barren. Vine and leaf characters were used as measurements of plant growth. Lonicera japonica occurred preferentially in areas of high litter cover and species richness, comparatively small trees, low PAR, low soil moisture and temperature, steep slopes, and shallow soils. Plant growth varied among these microhabitats. Among plots where L. japonica occurred, growth was related to soil and light conditions, and aspects of surrounding cover. Overhead canopy cover was a common variable associated with nearly all measured growth traits. Plasticity of traits to improve invader success can only affect the likelihood of invasion once constraints to establishment and persistence have been surmounted. Therefore, understanding where L. japonica invasion occurs, and microhabitat interactions with plant growth are important for estimating invasion success." 
}, { "instance_id": "R54244xR54126", "comparison_id": "R54244", "paper_id": "R54126", "text": "Differential patterns of plasticity to water availability along native and naturalized latitudinal gradients Questions: Does plasticity to water availability differ between native and naturalized and laboratory plant accessions? Is there a relationship between morphological plasticity and a fitness measure? Can we account for latitudinal patterns of plasticity with rainfall data from the seed source location? Organism: We examined an array of 23 native, 14 naturalized, and 5 laboratory accessions of Arabidopsis thaliana. Methods: We employed a split-plot experimental design in the greenhouse with two water treatments. We measured morphological and fitness-related traits at various developmental stages. We utilized a published dataset representing 30-year average precipitation trends for each accession origin. Results: We detected evidence of differential patterns of plasticity between native, naturalized, and laboratory populations for several morphological traits. Native, laboratory, and naturalized populations also differed in which traits were positively associated with fitness, and did not follow the Jack-of-all-trades or Master-of-some scenarios. Significant negative relationships were detected for plasticity in morphological traits with latitude. We found modest evidence that rainfall may play a role in this latitudinal trend." }, { "instance_id": "R54244xR54156", "comparison_id": "R54244", "paper_id": "R54156", "text": "Ecophysiology of the invader Pennisetum setaceum and three native grasses in the Canary Islands Pennisetum setaceum (fountain grass) is an aggressive invader in the arid and semi-arid habitats of the tropics and subtropics. In the last twenty years the spread of fountain grass in the Canary Islands has been very rapid. We compared its ecophysiological, architectural and reproductive traits with those of three native grasses (Hyparrhenia hirta, Cenchrus ciliaris and Aristida adscensionis) in two habitats of Tenerife Island which differ in rainfall. The detection of traits that differ between native and invader grasses may provide information for the improved control and eradication of the latter contributing to protect the native plant diversity. P. setaceum and the native grasses differed in all measured traits and in their response to water availability which is more restricted in the southern site. Specific leaf area was lower in P. setaceum than in the native grasses. Although this reduces carbon assimilation per unit area, it also reduces transpiration, increasing water use efficiency and contributes to the maintenance of high relative water content. Leaf N in P. setaceum was lower than in the native grasses indicating higher nitrogen use efficiency. The activity of photosystem II was higher and lasted longer in P. setaceum than in the native grasses. The ecophysiological traits of P. setaceum support its large size, extensive canopy and shorter leaf senescence period. They confer considerable competitive advantage to the invader and partially explain its success in the Canary Islands. The differences between the invader and the native grasses were maintained in both sites revealing a good adaptation of P. setaceum to the low resource local habitats in the Canary Islands and confirms its large plasticity. The large invasive potential of P. 
setaceum, in concert with the projected global changes, forecast eventual risks for the conservation of the endemic flora and remaining native communities in the Canary Islands." }, { "instance_id": "R54244xR54144", "comparison_id": "R54244", "paper_id": "R54144", "text": "Evolution of dispersal traits along an invasion route in the wind-dispersed Senecio inaequidens (Asteraceae) In introduced organisms, dispersal propensity is expected to increase during range expansion. This prediction is based on the assumption that phenotypic plasticity is low compared to genetic diversity, and an increase in dispersal can be counteracted by the Allee effect. Empirical evidence in support of these hypotheses is however lacking. The present study tested for evidence of differentiation in dispersal-related traits and the Allee effect in the wind-dispersed invasive Senecio inaequidens (Asteraceae). We collected capitula from individuals in ten field populations, along an invasion route including the original introduction site in southern France. In addition, we conducted a common garden experiment from field-collected seeds and obtained capitula from individuals representing the same ten field populations. We analysed phenotypic variation in dispersal traits between field and common garden environments as a function of the distance between populations and the introduction site. Our results revealed low levels of phenotypic differentiation among populations. However, significant clinal variation in dispersal traits was demonstrated in common garden plants representing the invasion route. In field populations, similar trends in dispersal-related traits and evidence of an Allee effect were not detected. In part, our results supported expectations of increased dispersal capacity with range expansion, and emphasized the contribution of phenotypic plasticity under natural conditions." }, { "instance_id": "R54867xR54689", "comparison_id": "R54867", "paper_id": "R54689", "text": "Effect of disturbance and nutrient addition on native and introduced annuals in plant communities in the Western Australian wheatbelt To investigate factors affecting the ability of introduced species to invade natural communities in the Western Australian wheatbelt, five communities were examined within a nature reserve near Kellerberrin. Transect studies indicated that introduced annuals were more abundant in woodland than in shrub communities, despite an input of introduced seed into all communities. The response of native and introduced annuals to soil disturbance and fertilizer addition was examined. Small areas were disturbed and/or provided with fertilizer prior to addition of seed of introduced annuals. In most communities, the introduced species used (Avena fatua and Ursinia anthemoides) established well only where the soil had been disturbed, but their growth was increased greatly when fertilizer was also added. Establishment and growth of other introduced species also increased where nutrient addition and soil disturbance were combined. Growth of several native annuals increased greatly with fertilizer addition, but showed little response to disturbance. Fertilizer addition also significantly increased the number of native species present in most communities. This indicates that growth of both native and introduced species is limited by nutrient availability in these communities, but also that introduced species respond more to a combination of nutrient addition and soil disturbance." 
}, { "instance_id": "R54867xR54841", "comparison_id": "R54867", "paper_id": "R54841", "text": "Lack of native species recovery following severe exotic disturbance in southern Californian shrublands Summary 1. Urban and agricultural activities are not part of natural disturbance regimes and may bear little resemblance to them. Such disturbances are common in densely populated semi-arid shrub communities of the south-western US, yet successional studies in these regions have been limited primarily to natural successional change and the impact of human-induced changes on natural disturbance regimes. Although these communities are resilient to recurrent and large-scale disturbance by fire, they are not necessarily well-adapted to recover from exotic disturbances. 2. This study investigated the effects of severe exotic disturbance (construction, heavy-vehicle activity, landfill operations, soil excavation and tillage) on shrub communities in southern California. These disturbances led to the conversion of indigenous shrublands to exotic annual communities with low native species richness. 3. Nearly 60% of the cover on disturbed sites consisted of exotic annual species, while undisturbed sites were primarily covered by native shrub species (68%). Annual species dominant on disturbed sites included Erodium botrys, Hypochaeris glabra, Bromus spp., Vulpia myuros and Avena spp. 4. The cover of native species remained low on disturbed sites even 71 years after initial exotic disturbance ceased. Native shrub seedlings were also very infrequent on disturbed sites, despite the presence of nearby seed sources. Only two native shrubs, Eriogonum fasciculatum and Baccharis sarothroides, colonized some disturbed sites in large numbers. 5. Although some disturbed sites had lower total soil nitrogen and percentage organic matter and higher pH than undisturbed sites, soil variables measured in this study were not sufficient to explain variations in species abundances on these sites. 6. Non-native annual communities observed in this study did not recover to a predisturbed state within typical successional time (< 25 years), supporting the hypothesis that altered stable states can occur if a community is pushed beyond its threshold of resilience." }, { "instance_id": "R54867xR54704", "comparison_id": "R54867", "paper_id": "R54704", "text": "Prescribed fire effects on dalmation toadflax Prescribed fires are important for rangeland restoration and affect plant community composition and species interactions. Many rangeland plant communities have been, or are under the threat of noxious weed invasion, however there is little information on how fire effects weeds. Our objective was to determine the effects of prescribed rangeland fire on dalmatian toadflax [Linaria dalmatica (L.) Miller] density, cover, biomass, and seed production. These plant characteristics, as well as density, cover, and biomass of perennial grasses and forbs were measured within burned and adjacent not-burned areas on 3 Artemisia tridentata/Agropyron spicatum habitat types in Montana. Areas were burned in the spring and measured in the fall 1999. Comparisons of plant characteristics between the burned and not-burned sites were made using t-tests and non-parametric Wilcoxon Rank Sum tests. After 1 growing season, fire did not affect density or cover of dalmatian toadflax. Burning increased dalmatian toadflax bio- mass per square meter at 2 sites, and per plant biomass at all 3 sites. 
Seed production of dalmatian toadflax was increased by fire at all 3 sites. Fire reduced forb cover at 1 site and increased grass biomass at 2 sites. The increases in dalmatian toadflax biomass and seed production suggest that fire used to restore healthy plant communities may increase dalmatian toadflax dominance. We recommend weed management procedures, such as herbicide control and seeding desirable species, be integrated with prescribed fire where dalmatian toadflax is present in the plant community." }, { "instance_id": "R54867xR54808", "comparison_id": "R54867", "paper_id": "R54808", "text": "Resource availability and invasibility in an intertidal macroalgal assemblage The invasibility of a low intertidal macroalgal assemblage was experimentally tested from March 2003 to April 2004 at 1 locality in northern Spain. It was hypothesised that a community becomes more susceptible to invasion when there is an increase in the amount of key resources. A bifactorial (‘nutrient supply’ and ‘macroalgal biomass removed’) orthogonal experiment was designed with 3 levels in each factor (high, medium and control). Fertile plants of Sargassum muticum (Yendo) Fensholt were transplanted to each plot to simulate the arrival of an invader. The invasibility of the assemblage was quantified in the pre- (density of recruits) and post-settlement (percentage cover, size and density of S. muticum at the end of the experiment) phases of S. muticum's life cycle. Results supported the initial hypothesis. Both space availability and nutrient enrichment facilitated the establishment and spread of S. muticum in the experimental plots. Established S. muticum plants grew faster in enriched plots than in controls. Furthermore, different successional assemblages played different roles in resisting invasion as S. muticum's life cycle progressed. In the initial stage of the invasion, the Bifurcaria bifurcata canopy inhibited recruitment by S. muticum, whereas understory species did not have a significant effect on invasion success. In contrast, an increased survivorship of S. muticum beneath the canopy of B. bifurcata was observed in those plots where S. muticum had successfully recruited. This study shows that the invasibility of this low intertidal assemblage is mediated by a complex interaction of several resources acting at different stages during S. muticum's invasion." }, { "instance_id": "R54867xR54595", "comparison_id": "R54867", "paper_id": "R54595", "text": "Contingency of grassland restoration on year, site, and competition from introduced grasses Semiarid ecosystems such as grasslands are characterized by high temporal variability in abiotic factors, which has led to suggestions that management actions may be more effective in some years than others. Here we examine this hypothesis in the context of grassland restoration, which faces two major obstacles: the contingency of native grass establishment on unpredictable precipitation, and competition from introduced species. We established replicated restoration experiments over three years at two sites in the northern Great Plains in order to examine the extent to which the success of several restoration strategies varied between sites and among years. We worked in 50-yr-old stands of crested wheatgrass (Agropyron cristatum), an introduced perennial grass that has been planted on >10 × 10⁶ ha in western North America. Establishment of native grasses was highly contingent on local conditions, varying fourfold among years and threefold between sites.
Survivorship also varied greatly and increased signi..." }, { "instance_id": "R54867xR54590", "comparison_id": "R54867", "paper_id": "R54590", "text": "Establishment of the invasive perennial Vincetoxicum rossicum across a disturbance gradient in New York State, USA Vincetoxicum rossicum (pale swallow-wort) is a non-native, perennial, herbaceous vine in the Apocynaceae. The species’ abundance is steadily increasing in the northeastern United States and southeastern Canada. Little is known about Vincetoxicum species recruitment and growth. Therefore, we conducted a field experiment in New York State to address this knowledge gap. We determined the establishment, survival, and growth of V. rossicum during the first 2 years after sowing in two old fields subjected to four disturbance regimens. We hypothesized that establishment and survival would be higher in treatments with greater disturbance. At the better-drained location, overall establishment was 15 ± 1% [mean ± standard error] and did not differ among treatments. At the poorly drained location, establishment varied by treatment; mowed and control plots had greater establishment [10 ± 2%] than herbicide + tillage and herbicide-only plots [1.6 ± 0.5%]. Of those seedlings that emerged, overall survival was high at both locations (70–84%). Similarly, total (above + belowground) biomass was greater in herbicide + tillage and herbicide-only plots than in mowed and control plots at both locations. Thus, V. rossicum was successful in establishing and surviving across a range of disturbance regimens, particularly relative to other old field species, but growth was greater in more disturbed treatments. The relatively high establishment rates in old field habitats help explain the invasiveness of this Vincetoxicum species in the northeastern U.S. and southeastern Canada." }, { "instance_id": "R54867xR54774", "comparison_id": "R54867", "paper_id": "R54774", "text": "Effects of Microstegium Vimineum (Trin.) A. Camus on native woody species density and diversity in a productive mixed-hardwood forest in Tennessee Abstract We investigated the impacts of Microstegium vimineum (Trin.) A. Camus on the density and diversity of native woody species regeneration following canopy disturbance in a productive mixed-hardwood forest in southwest Tennessee. Field observations of M. vimineum in the forest understory pre- and post-canopy disturbance led us to believe the species might have an impact on post-disturbance regeneration. Specifically, we noticed what appeared to be a dramatic increase in post-disturbance M. vimineum which we hypothesized would compete with native woody species regeneration, negatively impacting species diversity and seedling density. Total native woody species stems per hectare declined with increasing M. vimineum cover (r² = 0.80). Simple species richness of native woody species and Shannon's and Simpson's diversity indices also decreased with increasing M. vimineum percent cover (P = 0.0023, r² = 0.47; P = 0.002, r² = 0.47; and P = 0.02, r² = 0.31, respectively). Our results indicate that M. vimineum may have a negative impact on native woody species regeneration in southern forests."
}, { "instance_id": "R54867xR54757", "comparison_id": "R54867", "paper_id": "R54757", "text": "Exotic vascular plant invasiveness and forest invasibility in urban boreal forest types The riverine forests of the northern city of Edmonton, Alberta, Canada display strong resilience to disturbance and are similar in species composition to southern boreal mixedwood forest types. This study addressed questions such as, how easily do exotic species become established in urban boreal forests (species invasiveness) and do urban boreal forest structural characteristics such as, native species richness, abundance, and vertical vegetation layers, confer resistance to exotic species establishment and spread (community invasibility)? Eighty-four forest stands were sampled and species composition and mean percent cover analyzed using ordination methods. Results showed that exotic tree/shrub types were of the most concern for invasion to urban boreal forests and that exotic species type, native habitat and propagule supply may be good indicators of invasive potential. Native forest structure appeared to confer a level of resistance to exotic species and medium to high disturbance intensity was associated with exotic species growth and spread without a corresponding loss in native species richness. Results provided large-scale evidence that diverse communities are less vulnerable to exotic species invasion, and that intermediate disturbance intensity supports species coexistence. From a management perspective, the retention of native species and native forest structure in urban forests is favored to minimize the impact of exotic species introductions, protect natural succession patterns, and minimize the spread of exotic species." }, { "instance_id": "R54867xR54731", "comparison_id": "R54867", "paper_id": "R54731", "text": "Multiple disturbances accelerate invasion of reed canary grass (Phalaris arundinacea L.) in a mesocosm study Disturbances that intensify with agriculture and/or urban development are thought to promote the spread of invasive plants, such as the clonal perennial reed canary grass ( Phalaris arundinacea L). To test this relationship and interactions among disturbances, we subjected wet prairie assemblages within 1.1 m2 mesocosms to invasion by Phalaris and addition of nutrients, sediments, and flooding. Species richness decreased with the application of sediments and/or flooding of 4 consecutive weeks or longer. Losses of up to six dominant and subdominant species in these treatments increased light transmission through the plant canopy by as much as 400% over the control. Light availability in July and September was a strong predictor of end-of-season aboveground biomass of Phalaris. Phalaris was also 35% and 195% more productive when nutrients were added at low and high levels, respectively. Multiple factors in combination were usually additive in their effects on invasion, but sediments and nutrients interacted with flood regime to synergistically increase invasion in some cases. A separate experiment likewise revealed a synergistic interaction between added nutrients and simulated grazing. We suggest that multiple factors be mitigated simultaneously to reduce invasion of Phalaris." 
}, { "instance_id": "R54867xR54744", "comparison_id": "R54867", "paper_id": "R54744", "text": "Biogenic disturbance determines invasion success in a subtidal soft-sediment system Theoretically, disturbance and diversity can influence the success of invasive colonists if (1) resource limitation is a prime determinant of invasion success and (2) disturbance and diversity affect the availability of required resources. However, resource limitation is not of overriding importance in all systems, as exemplified by marine soft sediments, one of Earth's most widespread habitat types. Here, we tested the disturbance-invasion hypothesis in a marine soft-sediment system by altering rates of biogenic disturbance and tracking the natural colonization of plots by invasive species. Levels of sediment disturbance were controlled by manipulating densities of burrowing spatangoid urchins, the dominant biogenic sediment mixers in the system. Colonization success by two invasive species (a gobiid fish and a semelid bivalve) was greatest in plots with sediment disturbance rates < 500 cm(3) x m(-2) x d(-1), at the low end of the experimental disturbance gradient (0 to > 9000 cm(3) x m(-2) x d(-1)). Invasive colonization declined with increasing levels of sediment disturbance, counter to the disturbance-invasion hypothesis. Increased sediment disturbance by the urchins also reduced the richness and diversity of native macrofauna (particularly small, sedentary, surface feeders), though there was no evidence of increased availability of resources with increased disturbance that would have facilitated invasive colonization: sediment food resources (chlorophyll a and organic matter content) did not increase, and space and access to overlying water were not limited (low invertebrate abundance). Thus, our study revealed the importance of biogenic disturbance in promoting invasion resistance in a marine soft-sediment community, providing further evidence of the valuable role of bioturbation in soft-sediment systems (bioturbation also affects carbon processing, nutrient recycling, oxygen dynamics, benthic community structure, and so on.). Bioturbation rates are influenced by the presence and abundance of large burrowing species (like spatangoid urchins). Therefore, mass mortalities of large bioturbators could inflate invasion risk and alter other aspects of ecosystem performance in marine soft-sediment habitats." }, { "instance_id": "R54867xR54571", "comparison_id": "R54867", "paper_id": "R54571", "text": "Exotic and native vegetation establishment following channelization of a western Iberian river Channelization is often a major cause of human impacts on river systems. It affects both hydrogeomorphic features and habitat characteristics and potentially impacts riverine flora and fauna. Human-disturbed fluvial ecosystems also appear to be particularly vulnerable to exotic plant establishment. Following a 12-year recovery period, the distribution, composition and cover of both exotic and native plant species were studied along a Portuguese lowland river segment, which had been subjected to resectioning, straightening and two-stage bank reinforcement, and were compared with those of a nearby, less impacted segment. The species distribution was also related to environmental data. Species richness and floristic composition in the channelized river segment were found to be similar to those at the more \u2018natural\u2019 river sites. 
Floral differences were primarily consistent with the dominance of cover by certain species. However, there were significant differences in exotic and native species richness and cover between the ‘natural’ corridor and the channelized segment, which was more susceptible to invasion by exotic perennial taxa, such as Eryngium pandanifolium, Paspalum paspalodes, Tradescantia fluminensis and Acacia dealbata. Factorial and canonical correspondence analyses revealed considerable patchiness in the distribution of species assemblages. The latter were associated with small differences in substrate composition and their own relative position across the banks and along the river segments in question. Data were also subjected to an unweighted pair-group arithmetic average clustering, and the Indicator Value methodology was applied to selected cluster noda in order to obtain significant indicator species." }, { "instance_id": "R54867xR54684", "comparison_id": "R54867", "paper_id": "R54684", "text": "Influence of fire and soil nutrients on native and non-native annuals at remnant vegetation edges in the Western Australian wheatbelt. The effect of fire on annual plants was examined in two vegetation types at remnant vegetation edges in the Western Australian wheatbelt. Density and cover of non-native species were consistently greatest at the reserve edges, decreasing rapidly with increasing distance from reserve edge. Numbers of native species showed little effect of distance from reserve edge. Fire had no apparent effect on abundance of non-natives in Allocasuarina shrubland but abundance of native plants increased. Density of both non-native and native plants in Acacia acuminata-Eucalyptus loxophleba woodland decreased after fire. Fewer non-native species were found in the shrubland than in the woodland in both unburnt and burnt areas, this difference being smallest between burnt areas. Levels of soil phosphorus and nitrate were higher in burnt areas of both communities and ammonium also increased in the shrubland. Levels of soil phosphorus and nitrate were higher at the reserve edge in the unburnt shrubland, but not in the woodland. There was a strong correlation between soil phosphorus levels and abundance of non-native species in the unburnt shrubland, but not after fire or in the woodland. Removal of non-native plants in the burnt shrubland had a strong positive effect on total abundance of native plants, apparently due to increases in growth of smaller, suppressed native plants in response to decreased competition. Two native species showed increased seed production in plots where non-native plants had been removed. There was a general indication that, in the short term, fire does not necessarily increase invasion of these communities by non-native species and could, therefore, be a useful management tool in remnant vegetation, providing other disturbances are minimised." }, { "instance_id": "R54867xR54620", "comparison_id": "R54867", "paper_id": "R54620", "text": "A comparison of the urban flora of different phytoclimatic regions in Italy This study is a comparison of the spontaneous vascular flora of five Italian cities: Milan, Ancona, Rome, Cagliari and Palermo. The aims of the study are to test the hypothesis that urbanization results in uniformity of urban floras, and to evaluate the role of alien species in the flora of settlements located in different phytoclimatic regions.
To obtain comparable data, ten plots of 1 ha, each representing typical urban habitats, were analysed in each city. The results indicate a low floristic similarity between the cities, while the strongest similarity appears within each city and between each city and the seminatural vegetation of the surrounding region. In the Mediterranean settlements, even the most urbanized plots reflect the characters of the surrounding landscape and are rich in native species, while aliens are relatively few. These results differ from the reported uniformity and the high proportion of aliens which generally characterize urban floras elsewhere. To explain this trend the importance of apophytes (indigenous plants expanding into man-made habitats) is highlighted; several Mediterranean species adapted to disturbance (i.e. grazing, trampling, and human activities) are pre-adapted to the urban environment. In addition, consideration is given to the minor role played by the \u2018urban heat island\u2019 in the Mediterranean basin, and to the structure and history of several Italian settlements, where ancient walls, ruins and archaeological sites in the periphery as well as in the historical centres act as conservative habitats and provide connection with seed-sources on the outskirts." }, { "instance_id": "R54867xR54819", "comparison_id": "R54867", "paper_id": "R54819", "text": "Distribution of an alien aquatic snail in relation to flow variability, human activities and water quality 1. Disturbance and anthropogenic land use changes are usually considered to be key factors facilitating biological invasions. However, specific comparisons of invasion success between sites affected to different degrees by these factors are rare. 2. In this study we related the large-scale distribution of the invading New Zealand mud snail ( Potamopyrgus antipodarum ) in southern Victorian streams, Australia, to anthropogenic land use, flow variability, water quality and distance from the site to the sea along the stream channel. 3. The presence of P. antipodarum was positively related to an index of flow-driven disturbance, the coefficient of variability of mean daily flows for the year prior to the study. 4. Furthermore, we found that the invader was more likely to occur at sites with multiple land uses in the catchment, in the forms of grazing, forestry and anthropogenic developments (e.g. towns and dams), compared with sites with low-impact activities in the catchment. However, this relationship was confounded by a higher likelihood of finding this snail in lowland sites close to the sea. 5. We conclude that P. antipodarum could potentially be found worldwide at sites with similar ecological characteristics. We hypothesise that its success as an invader may be related to an ability to quickly re-colonise denuded areas and that population abundances may respond to increased food resources. Disturbances could facilitate this invader by creating spaces for colonisation (e.g. a possible consequence of floods) or changing resource levels (e.g. increased nutrient levels in streams with intense human land use in their catchments)." 
}, { "instance_id": "R54867xR54759", "comparison_id": "R54867", "paper_id": "R54759", "text": "Competitive interactions between native and invasive exotic plant species are altered under elevated carbon dioxide We hypothesized that the greater competitive ability of invasive exotic plants relative to native plants would increase under elevated CO2 because they typically have traits that confer the ability for fast growth when resources are not limiting and thus are likely to be more responsive to elevated CO2. A series of competition experiments under ambient and elevated CO2 glasshouse conditions were conducted to determine an index of relative competition intensity for 14 native-invasive exotic species-pairs. Traits including specific leaf area, leaf mass ratio, leaf area ratio, relative growth rate, net assimilation rate and root weight ratio were measured. Competitive rankings within species-pairs were not affected by CO2 concentration: invasive exotic species were more competitive in 9 of the 14 species-pairs and native species were more competitive in the remaining 5 species-pairs, regardless of CO2 concentration. However, there was a significant interaction between plant type and CO2 treatment due to reduced competitive response of native species under elevated compared with ambient CO2 conditions. Native species had significantly lower specific leaf area and leaf area ratio under elevated compared with ambient CO2. We also compared traits of more-competitive with less-competitive species, regardless of plant type, under both CO2 treatments. More-competitive species had smaller leaf weight ratio and leaf area ratio, and larger relative growth rate and net assimilation rate under both ambient and elevated CO2 conditions. These results suggest that growth and allocation traits can be useful predictors of the outcome of competitive interactions under both ambient and elevated CO2 conditions. Under predicted future atmospheric CO2 conditions, competitive rankings among species may not change substantially, but the relative success of invasive exotic species may be increased. Thus, under future atmospheric CO2 conditions, the ecological and economic impact of some invasive exotic plants may be even greater than under current conditions." }, { "instance_id": "R54867xR54694", "comparison_id": "R54867", "paper_id": "R54694", "text": "Removal of nonnative vines and post-hurricane recruitment in tropical hardwood forests of Florida Abstract In hardwood subtropical forests of southern Florida, nonnative vines have been hypothesized to be detrimental, as many species form dense \u201cvine blankets\u201d that shroud the forest. To investigate the effects of nonnative vines in post-hurricane regeneration, we set up four large (two pairs of 30 \u00d7 60 m) study areas in each of three study sites. One of each pair was unmanaged and the other was managed by removal of nonnative plants, predominantly vines. Within these areas, we sampled vegetation in 5 \u00d7 5 m plots for stems 2 cm DBH (diameter at breast height) or greater and in 2 \u00d7 0.5 m plots for stems of all sizes. For five years, at annual censuses, we tagged and measured stems of vines, trees, shrubs and herbs in these plots. For each 5 \u00d7 5 m plot, we estimated percent coverage by individual vine species, using native and nonnative vines as classes. 
We investigated the hypotheses that: (1) plot coverage, occurrence and recruitment of nonnative vines were greater than that of native vines in unmanaged plots; (2) the management program was effective at reducing cover by nonnative vines; and (3) reduction of cover by nonnative vines improved recruitment of seedlings and saplings of native trees, shrubs, and herbs. In unmanaged plots, nonnative vines recruited more seedlings and had a significantly higher plot-cover index, but not a higher frequency of occurrence. Management significantly reduced cover by nonnative vines and had a significant overall positive effect on recruitment of seedlings and saplings of native trees, shrubs and herbs. Management also affected the seedling community (which included vines, trees, shrubs, and herbs) in some unanticipated ways, favoring early successional species for a longer period of time. The vine species with the greatest potential to \u201cstrangle\u201d gaps were those that rapidly formed dense cover, had shade tolerant seedling recruitment, and were animal-dispersed. This suite of traits was more common in the nonnative vines than in the native vines. Our results suggest that some vines may alter the spatiotemporal pattern of recruitment sites in a forest ecosystem following a natural disturbance by creating many very shady spots very quickly." }, { "instance_id": "R54867xR54826", "comparison_id": "R54867", "paper_id": "R54826", "text": "Shoreline development drives invasion of Phragmites australis and the loss of plant diversity on New England salt marshes The reed Phragmites australis Cav. is aggressively invading salt marshes along the Atlantic Coast of North America. We examined the interactive role of habitat alteration (i.e., shoreline development) in driving this invasion and its consequences for plant richness in New England salt marshes. We surveyed 22 salt marshes in Narragansett Bay, Rhode Island, and quantified shoreline development, Phragmites cover, soil salinity, and nitrogen availability. Shoreline development, operationally defined as removal of the woody vegetation bordering marshes, explained >90% of intermarsh variation in Phragmites cover. Shoreline development was also significantly correlated with reduced soil salinities and increased nitrogen availability, suggesting that removing woody vegetation bordering marshes increases nitrogen availability and decreases soil salinities, thus facilitating Phragmites invasion. Soil salinity (64%) and nitrogen availability (56%) alone explained a large proportion of variation in Phragmites cover, but together they explained 80% of the variation in Phragmites invasion success. Both univariate and aggregate (multidimensional scaling) analyses of plant community composition revealed that Phragmites dominance in developed salt marshes resulted in an almost three-fold decrease in plant species richness. Our findings illustrate the importance of maintaining integrity of habitat borders in conserving natural communities and provide an example of the critical role that local conservation can play in preserving these systems. In addition, our findings provide ecologists and natural resource managers with a mechanistic understanding of how human habitat alteration in one vegetation community can interact with species introductions in adjacent communities (i.e., flow-on or adjacency effects) to hasten ecosystem degradation."
}, { "instance_id": "R54867xR54647", "comparison_id": "R54867", "paper_id": "R54647", "text": "Camponotus punctulatus ant's demography: a temporal study across land-use types and spatial scales Abstract.Agricultural activities promote the explosion of diverse pest populations. In Argentina, the ant Camponotus punctulatus invades agricultural fields after production ceases. The temporal demography and spatial distribution of colonies of C. punctulatus were studied over a five year period using replicated plots of different land use types representing a gradient of increasing agricultural disturbance. We experimentally tested the hypothesis that the increase in C. punctulatus colony density was related to increasing levels of agricultural disturbance. Abandoned rice fields represented the situation with greatest disturbance. Sown pastures were intermediate. Natural grasslands represented no agricultural disturbance. The predictions were (1) the greater the soil disturbance produced by agriculture, the greater the susceptibility for invasion by C. punctulatus, (2) rice fields offers greater opportunities for establishment of colonizing species than sown pastures, and (3) disturbed land use areas that were more recently colonized as well as land use areas with greater soil disturbance will exhibit patterns of colony aggregation at a small scale but with time the patterns will become uniform. Initially, colonies in the abandoned rice fields had a higher annual mortality and larger turnover than in sown pastures. Over five years, abandoned rice fields sustained higher densities of colonies than sown pastures. The colonies were the largest and had the longest lifespans in abandoned ricefields. Natural grasslands had the lowest colony density, survivorship, and size but had variable levels of colonization. More than one type of spatial distribution was found in field replicates. At small spatial scales across disturbed land use types, replicates exhibited regular distributions. At greater spatial scales, spatial distributions were mostly random in sown pastures, there were many cases of aggregation in rice fields, although some cases of uniform distributions were also found in all disturbed land uses. These results highlight significant intraspecific variation in ant demography across types of land use, space, and time, and show a clear predisposition of C. punctulatus to invade and successfully establish in the most disturbed land use types. Hypotheses that can account for the changes in demography across land use types are discussed." }, { "instance_id": "R54867xR54630", "comparison_id": "R54867", "paper_id": "R54630", "text": "Factors influencing dynamics of two invasive C-4 grasses in seasonally dry Hawaiian woodlands The introduced C4 bunchgrass, Schizachyrium condensatum, is abundant in unburned, seasonally dry woodlands on the island of Hawaii, where it promotes the spread of fire. After fire, it is partially replaced by Melinis minutiflora, another invasive C4 grass. Seed bank surveys in unburned woodland showed that Melinis seed is present in locations without adult plants. Using a combination of germination tests and seedling outplant ex- periments, we tested the hypothesis that Melinis was unable to invade the unburned wood- land because of nutrient and/or light limitation. We found that Melinis germination and seedling growth are depressed by the low light levels common under Schizachyrium in unburned woodland. 
Outplanted Melinis seedlings grew rapidly to flowering and persisted for several years in unburned woodland without nutrient additions, but only if Schizachyrium individuals were removed. Nutrients alone did not facilitate Melinis establishment. Competition between Melinis and Schizachyrium naturally occurs when individuals of both species emerge from the seed bank simultaneously, or when seedlings of one species emerge in sites already dominated by individuals of the other species. When both species are grown from seed, we found that Melinis consistently outcompetes Schizachyrium, regardless of light or nutrient treatments. When seeds of Melinis were added to pots with well-established Schizachyrium (and vice versa), Melinis eventually invaded and overgrew adult Schizachyrium under high, but not low, nutrients. By contrast, Schizachyrium could not invade established Melinis pots regardless of nutrient level. A field experiment demonstrated that Schizachyrium individuals are suppressed by Melinis in burned sites through competition for both light and nutrients. Overall, Melinis is a dominant competitor over Schizachyrium once it becomes established, whether in a pot or in the field. We believe that the dominance of Schizachyrium, rather than Melinis, in the unburned woodland is the result of asymmetric competition due to the prior establishment of Schizachyrium in these sites. If Schizachyrium were not present, the unburned woodland could support dense stands of Melinis. Fire disrupts the priority effect of Schizachyrium and allows the dominant competitor (Melinis) to enter the system where it eventually replaces Schizachyrium through resource competition." }, { "instance_id": "R54867xR54815", "comparison_id": "R54867", "paper_id": "R54815", "text": "Assessing bird assemblages along an urban gradient in a Caribbean island (Margarita, Venezuela) Several studies have shown that urbanization usually leads to severe biotic homogenizing, i.e. the local extirpation of many native species and the expansion to regional scales of a small group of \u201curban-adaptors\u201d, some of them exotics. Margarita Island (Venezuela) is a tropical Caribbean island that has undergone an accelerated urban development in the last 40 years. Because the island also has a high bird diversity with several endemic and threatened species, we evaluated the effect of urban development on bird species richness, assemblage structure, seasonal changes and feeding guilds. We defined an urban gradient from areas with high vegetation cover (remnant woodlands) through areas with intermediate vegetation (traditional towns) to areas with next to no vegetation (recent suburbs). Each experimental unit was replicated 3 times and birds were surveyed during the dry and rainy seasons. Richness decreased as urbanization increased, being severely depleted in the recent suburbs. The bird assemblage consisted of native species, including six endemic sub-species, but only one exotic, Columba livia. There were no seasonal changes in assemblage structure. We identified the species most tolerant and most sensitive to urbanization. Omnivorous birds were common along the gradient and granivores were also tolerant to urban development. Specialized insectivores and frugivores were the most negatively affected groups.
The considerable amount of native woodland and other vegetation present in the traditional towns we evaluated, and their proximity to natural protected areas, favors the persistence of native bird species in these urban areas on Margarita Island." }, { "instance_id": "R54867xR54667", "comparison_id": "R54867", "paper_id": "R54667", "text": "Anthropogenic fires increase alien and native annual species in the Chilean coastal matorral Aim We tested the hypothesis that anthropogenic fires favour the successful establishment of alien annual species to the detriment of natives in the Chilean coastal matorral. Location Valpara\u00edso Region, central Chile. Methods We sampled seed rain, seedbank emergence and establishment of species in four paired burned and unburned areas and compared (using GLMM) fire resistance and propagule arrival of alien and native species. To assess the relative importance of seed dispersal and seedbank survival in explaining plant establishment after fire, we compared seed rain and seedbank structure with postfire vegetation using ordination analyses. Results Fire did not change the proportion of alien species in the coastal matorral. However, fire increased the number of annual species (natives and aliens) of which 87% were aliens. Fire reduced the alien seedbank and not the native seedbank, but alien species remained dominant in burned soil samples (66% of the total species richness). Seed rain was higher for alien annuals than for native annuals or perennials, thus contributing to their establishment after fire. Nevertheless, seed rain was less important than seedbank survival in explaining plant establishment in burned areas. Main conclusions Anthropogenic fires favoured alien and native annuals. Thus, fire did not increase the alien/native ratio but increased the richness of alien species. The successful establishment of alien annuals was attributable to their ability to maintain rich seedbanks in burned areas and to the greater propagule arrival compared to native species. The native seedbank also survived fire, indicating that the herbaceous community has become highly resilient after centuries of human disturbances. Our results demonstrate that fire is a relevant factor for the maintenance of alien-dominated grasslands in the matorral and highlight the importance of considering the interactive effect of seed rain and seedbank survival to understand plant invasion patterns in fire-prone ecosystems." }, { "instance_id": "R54867xR54822", "comparison_id": "R54867", "paper_id": "R54822", "text": "Invasion, competitive dominance, and resource use by exotic and native California grassland species The dynamics of invasive species may depend on their abilities to compete for resources and exploit disturbances relative to the abilities of native species. We test this hypothesis and explore its implications for the restoration of native ecosystems in one of the most dramatic ecological invasions worldwide, the replacement of native perennial grasses by exotic annual grasses and forbs in 9.2 million hectares of California grasslands. The long-term persistence of these exotic annuals has been thought to imply that the exotics are superior competitors. However, seed-addition experiments in a southern California grassland revealed that native perennial species, which had lower requirements for deep soil water, soil nitrate, and light, were strong competitors, and they markedly depressed the abundance and fecundity of exotic annuals after overcoming recruitment limitations.
Native species reinvaded exotic grasslands across experimentally imposed nitrogen, water, and disturbance gradients. Thus, exotic annuals are not superior competitors but rather may dominate because of prior disturbance and the low dispersal abilities and extreme current rarity of native perennials. If our results prove to be general, it may be feasible to restore native California grassland flora to at least parts of its former range." }, { "instance_id": "R54867xR54853", "comparison_id": "R54867", "paper_id": "R54853", "text": "Small mammals in a mosaic of forest remnants and anthropogenic habitats\u2014evaluating matrix quality in an Atlantic forest landscape The matrix of altered habitats that surrounds remnants in human dominated landscapes has been considered homogeneous and inhospitable. Recent studies, however, have shown the crucial role of the matrix in maintaining diversity in fragmented landscapes, acting as a mosaic of units with varying permeability to different species. Inclusion of matrix quality parameters is especially urgent in managing fragmented landscapes in the tropics where agriculture frontiers are still expanding. Using standardized surveys in 23 sites in an Atlantic forest landscape, we evaluated matrix use by small mammals, the most diverse ecological group of mammals in the Neotropics, and tested the hypothesis that endemic species are the most affected by the conversion of original forest into anthropogenic habitats. By comparing species distribution among forest remnants and the predominant adjacent habitats (native vegetation in initial stages of regeneration, eucalyptus plantations, areas of agriculture and rural areas with buildings), we found a strong dissimilarity in small mammal assemblages between native vegetation (including initial stages) and anthropogenic habitats, with only two species being able to use all habitats. Endemic small mammals tended to occupy native vegetation, whereas invading species from other countries or open biomes tended to occupy areas of non-native vegetation. Our results highlight that future destruction of native vegetation will favor invading or generalist species which could dominate highly disturbed landscapes, and that some matrix habitats, such as regenerating native vegetation, should be managed to increase connectivity among populations of endemic species." }, { "instance_id": "R54867xR54789", "comparison_id": "R54867", "paper_id": "R54789", "text": "Conservation of the Grassy White Box Woodlands: Relative Contributions of Size and Disturbance to Floristic Composition and Diversity of Remnants Before European settlement, grassy white box woodlands were the dominant vegetation in the east of the wheat-sheep belt of south-eastern Australia. Tree clearing, cultivation and pasture improvement have led to fragmentation of this once relatively continuous ecosystem, leaving a series of remnants which themselves have been modified by livestock grazing. Little-modified remnants are extremely rare. We examined and compared the effects of fragmentation and disturbance on the understorey flora of woodland remnants, through a survey of remnants of varying size, grazing history and tree clearing. In accordance with fragmentation theory, species richness generally increased with remnant size, and, for little-grazed remnants, smaller remnants were more vulnerable to weed invasion. Similarly, tree clearing and grazing encouraged weed invasion and reduced native species richness. 
Evidence for increased total species richness at intermediate grazing levels, as predicted by the intermediate disturbance hypothesis, was equivocal. Remnant quality was more severely affected by grazing than by remnant size. All little-grazed remnants had lower exotic species abundance and similar or higher native species richness than grazed remnants, despite their extremely small sizes (< 6 ha). Further, small, littlegrazed remnants maintained the general character of the pre-European woodland understorey, while grazing caused changes to the dominant species. Although generally small, the little-grazed remnants are the best representatives of the pre-European woodland understorey, and should be central to any conservation plan for the woodlands. Selected larger remnants are needed to complement these, however, to increase the total area of woodland conserved, and, because most little-grazed remnants are cleared, to represent the ecosystem in its original structural form. For the maintenance of native plant diversity and composition in little-grazed remnants, it is critical that livestock grazing continues to be excluded. For grazed remnants, maintenance of a site in its current state would allow continuation of past management, while restoration to a pre-European condition would require management directed towards weed removal, and could take advantage of the difference noted in the predominant life-cycle of native (perennial) versus exotic (annual or biennial) species." }, { "instance_id": "R54867xR54797", "comparison_id": "R54867", "paper_id": "R54797", "text": "Mammals of the northern Philippines: tolerance for habitat disturbance and resistance to invasive species in an endemic insular fauna Aim Island faunas, particularly those with high levels of endemism, usually are considered especially susceptible to disruption from habitat disturbance and invasive alien species. We tested this general hypothesis by examining the distribution of small mammals along gradients of anthropogenic habitat disturbance in northern Luzon Island, an area with a very high level of mammalian endemism. Location Central Cordillera, northern Luzon Island, Philippines. Methods Using standard trapping techniques, we documented the occurrence and abundance of 16 endemic and two non-native species along four disturbance gradients where habitat ranged from mature forest to deforested cropland. Using regression analysis and AICc for model selection, we assessed the influence of four predictor variables (geographic range, elevational range, body size and diet breadth) on the disturbance tolerance of species. Results Non-native species dominated areas with the most severe disturbance and were rare or absent in mature forest. Native species richness declined with increasing disturbance level, but responses of individual species varied. Elevational range (a measure of habitat breadth) was the best predictor of response of native species to habitat disturbance. Geographic range, body size and diet breadth were weakly correlated. Main conclusions The endemic small mammal fauna of northern Luzon includes species adapted to varying levels of natural disturbance and appears to be resistant to disruption by resident alien species. In these respects, it resembles a diverse continental fauna rather than a depauperate insular fauna. 
We conclude that the long and complex history of Luzon as an ancient member of the Philippine island arc system has involved highly dynamic ecological conditions resulting in a biota adapted to changing conditions. We predict that similar responses will be seen in other taxonomic groups and in other ancient island arc systems." }, { "instance_id": "R54867xR54652", "comparison_id": "R54867", "paper_id": "R54652", "text": "Plant and Small Vertebrate Composition and Diversity 36-39 Years After Root Plowing Abstract Root plowing is a common management practice to reduce woody vegetation and increase herbaceous forage for livestock on rangelands. Our objective was to test the hypotheses that four decades after sites are root plowed they have 1) lower plant species diversity, less heterogeneity, greater percent canopy cover of exotic grasses; and 2) lower abundance and diversity of amphibians, reptiles, and small mammals, compared to sites that were not disturbed by root plowing. Pairs of 4-ha sites were selected for sampling: in each pair of sites, one was root plowed in 1965 and another was not disturbed by root plowing (untreated). We estimated canopy cover of woody and herbaceous vegetation during summer 2003 and canopy cover of herbaceous vegetation during spring 2004. We trapped small mammals and herpetofauna in pitfall traps during late spring and summer 2001\u20132004. Species diversity and richness of woody plants were less on root-plowed than on untreated sites; however, herbaceous plant and animal species did not differ greatly between treatments. Evenness of woody vegetation was less on root-plowed sites, in part because woody legumes were more abundant. Abundance of small mammals and herpetofauna varied with annual rainfall more than it varied with root plowing. Although structural differences existed between vegetation communities, secondary succession of vegetation reestablishing after root plowing appears to be leading to convergence in plant and small animal species composition with untreated sites." }, { "instance_id": "R54867xR54638", "comparison_id": "R54867", "paper_id": "R54638", "text": "Exotic invasive species in urban wetlands: environmental correlates and implications for wetland management Summary 1. Wetlands in urban regions are subjected to a wide variety of anthropogenic disturbances, many of which may promote invasions of exotic plant species. In order to devise management strategies, the influence of different aspects of the urban and natural environments on invasion and community structure must be understood. 2. The roles of soil variables, anthropogenic effects adjacent to and within the wetlands, and vegetation structure on exotic species occurrence within 21 forested wetlands in north-eastern New Jersey, USA, were compared. The hypotheses were tested that different vegetation strata and different invasive species respond similarly to environmental factors, and that invasion increases with increasing direct human impact, hydrologic disturbance, adjacent residential land use and decreasing wetland area. Canonical correspondence analyses, correlation and logistic regression analyses were used to examine invasion by individual species and overall site invasion, as measured by the absolute and relative number of exotic species in the site flora. 3. Within each stratum, different sets of environmental factors separated exotic and native species. 
Nutrients, soil clay content and pH, adjacent land use and canopy composition were the most frequently identified factors affecting species, but individual species showed highly individualistic responses to the sets of environmental variables, often responding in opposite ways to the same factor. 4. Overall invasion increased with decreasing area but only when sites > 100 ha were included. Unexpectedly, invasion decreased with increasing proportions of industrial/commercial adjacent land use. 5. The hypotheses were only partially supported; invasion does not increase in a simple way with increasing human presence and disturbance. 6. Synthesis and applications . The results suggest that a suite of environmental conditions can be identified that are associated with invasion into urban wetlands, which can be widely used for assessment and management. However, a comprehensive ecosystem approach is needed that places the remediation of physical alterations from urbanization within a landscape context. Specifically, sediment, inputs and hydrologic changes need to be related to adjoining urban land use and to the overlapping requirements of individual native and exotic species." }, { "instance_id": "R54867xR54605", "comparison_id": "R54867", "paper_id": "R54605", "text": "Interannual variation of fish assemblage structure in a Mediterranean River: Implications of streamflow on the dominance of native or exotic species Streams in mediterranean-type climate regions are shaped by predictable seasonal events of flooding and drying over an annual cycle, but also present a strong interannual flow variation. The Guadiana River is one of the most important rivers in the Iberian Peninsula. The fish fauna presents 11 native freshwater species, including eight with high conservation status. Several exotic species are present, the most important being the American centrarchids pumpkinseed fish and largemouth bass. As a typical mediterranean-type river, the Guadiana has an irregular hydrological regime with severe drought periods and floods; the interannual variation of discharge presents a ratio of c. 100 to 1. From 1980 to 1995 several dry years were observed, culminating in the drought of 1991/92-1994/95. Analysing the variation of the fish assemblage structure during this period, exotic species (mostly pumpkinseed) progressively increased, strongly dominating in 1995. Indigenous populations dramatically decreased and a previously common endemic cyprinid (Anaecypris hispanica Steindachner) became endangered and one of the most threatened fishes of Europe. However, the following years presented above-average flows with several flood events and an inverse process occurred, with native species increasing their contribution in a short period. The importance of floods as a disturbance factor in the control of lentic or slow flowing water exotics is discussed. Native species apparently possess adaptive responses to high flows which exotics lack. This hypothesis is consistent with probability-of-use curves and preflood-postflood surveys. Results emphasize the importance of floods in the environmental flows of mediterranean-type rivers. In the absence of flooding disturbance, exotic fish populations predictably grow, increasing the pressure on native species; abiotic disturbance may supersede deterministic outcomes of predation or competition and influence community structure by reducing populations of the exotic species." 
}, { "instance_id": "R54867xR54734", "comparison_id": "R54867", "paper_id": "R54734", "text": "Importance of molehill disturbances for invasion by Bunias orientails in meadows and pastures Abstract Small-scale soil disturbances by fossorial animals can change physical and biotic conditions in disturbed patches and influence spatial and temporal dynamics, and the composition of plant communities. They create regeneration niches and colonization openings for native plants and, according to the intermediate disturbance hypothesis, they are expected to increase plant community diversity. However, it also has been reported that increased disturbance resource availability and decreased competition with native species may result in the invasion of communities by alien plant species, as predicted by the fluctuating resources theory of invasibility. In this study, we investigated the importance of European mole disturbances for the invasion of semi-natural fresh meadows and pastures by the alien plant, Bunias orientalis, which has mainly spread throughout Central Europe on anthropogenically disturbed sites. We hypothesized that the invader, being particularly well adapted to anthropogenic disturbances, enters into dense vegetation of meadows and pastures mainly on mole mounds. To assess the seedling recruitment of B. orientalis in relation to disturbance, we counted the number of seedlings that emerged on molehills and control plots in meadows and pastures. The establishment of juvenile (0\u20131 year) rosette plants on and off molehills was surveyed on 5 \u00d7 5 m plots. In accordance with our hypothesis, mole disturbances were found to serve as a gateway for B. orientalis by which the invader may colonize semi-natural grasslands. The seedlings of the species emerged almost solely on molehills and the young rosettes were established predominantly on mole mounds. Although the seedling density did not differ significantly between the meadows and pastures, the number of established plants in the pastures was considerably higher. We suggest that the invasion by B. orientalis in pastures may be facilitated by vegetative regeneration following root fragmentation by sheep pasturing." }, { "instance_id": "R54867xR54715", "comparison_id": "R54867", "paper_id": "R54715", "text": "Human activity facilitates altitudinal expansion of exotic plants along a road in montane grassland, South Africa ABSTRACT Question: Do anthropogenic activities facilitate the distribution of exotic plants along steep altitudinal gradients? Location: Sani Pass road, Grassland biome, South Africa. Methods: On both sides of this road, presence and abundance of exotic plants was recorded in four 25-m long road-verge plots and in parallel 25 m \u00d7 2 m adjacent land plots, nested at five altitudinal levels: 1500, 1800, 2100, 2400 and 2700 m a.s.l. Exotic community structure was analyzed using Canonical Correspondence Analysis while a two-level nested Generalized Linear Model was fitted for richness and cover of exotics. We tested the upper altitudinal limits for all exotics along this road for spatial clustering around four potential propagule sources using a t-test. Results: Community structure, richness and abundance of exotics were negatively correlated with altitude. Greatest invasion by exotics was recorded for adjacent land at the 1500 m level. Of the 45 exotics, 16 were found at higher altitudes than expected and observations were spatially clustered around potential propagule sources. 
Conclusions: Spatial clustering of upper altitudinal limits around human inhabited areas suggests that exotics originate from these areas, while exceeding expected altitudinal limits suggests that distribution ranges of exotics are presently underestimated. Exotics are generally characterised by a high propagule pressure and/or persistent seedbanks, thus future tarring of the Sani Pass may result in an increase of exotic species richness and abundance. This would initially result from construction-related soil disturbance and subsequently from increased traffic, water run-off, and altered fire frequency. We suggest examples of management actions to prevent this. Nomenclature: Germishuizen & Meyer (2003)." }, { "instance_id": "R54867xR54753", "comparison_id": "R54867", "paper_id": "R54753", "text": "Are invasive species the drivers or passengers of change in degraded ecosystems? Few invaded ecosystems are free from habitat loss and disturbance, leading to uncertainty whether dominant invasive species are driving community change or are passengers along for the environmental ride. The \u201cdriver\u201d model predicts that invaded communities are highly interactive, with subordinate native species being limited or excluded by competition from the exotic dominants. The \u201cpassenger\u201d model predicts that invaded communities are primarily structured by noninteractive factors (environmental change, dispersal limitation) that are less constraining on the exotics, which thus dominate. We tested these alternative hypotheses in an invaded, fragmented, and fire-suppressed oak savanna. We examined the impact of two invasive dominant perennial grasses on community structure using a reduction (mowing of aboveground biomass) and removal (weeding of above- and belowground biomass) experiment conducted at different seasons and soil depths. We examined the relative importance of competition vs. dispersal limitation with experimental seed additions. Competition by the dominants limits the abundance and reproduction of many native and exotic species based on their increased performance with removals and mowing. The treatments resulted in increased light availability and bare soil; soil moisture and N were unaffected. Although competition was limiting for some, 36 of 79 species did not respond to the treatments or declined in the absence of grass cover. Seed additions revealed that some subordinates are dispersal limited; competition alone was insufficient to explain their rarity even though it does exacerbate dispersal inefficiencies by lowering reproduction. While the net effects of the dominants were negative, their presence restricted woody plants, facilitated seedling survival with moderate disturbance (i.e., treatments applied in the fall), or was not the primary limiting factor for the occurrence of some species. Finally, the species most functionally distinct from the dominants (forbs, woody plants) responded most significantly to the treatments. This suggests that relative abundance is determined more by trade-offs relating to environmental conditions (long-term fire suppression) than to traits relating to resource capture (which should most impact functionally similar species). This points toward the passenger model as the underlying cause of exotic dominance, although their combined effects (suppressive and facilitative) on community structure are substantial." }, { "instance_id": "R54867xR54806", "comparison_id": "R54867", "paper_id": "R54806", "text": "Fire effects on plant diversity in serpentine vs.
sandstone chaparral Fire contributes to the maintenance of species diversity in many plant communities, but few studies have compared its impacts in similar communities that vary in such attributes as soils and productivity. We compared how a wildfire affected plant diversity in chaparral vegetation on serpentine and sandstone soils. We hypothesized that because biomass and cover are lower in serpentine chaparral, space and light are less limiting, and therefore postfire increases in plant species diversity would be lower than in sandstone chaparral. In 40 pairs of burned and unburned 250-m\u00b2 plots, we measured changes in the plant community after a fire for three years. The diversity of native and exotic species increased more in response to fire in sandstone than serpentine chaparral, at both the local (plot) and regional (whole study) scales. In serpentine compared with sandstone chaparral, specialized fire-dependent species were less prevalent, mean fire severity was lower, mean time since last fire was longer, postfire shrub recruitment was lower, and regrowth of biomass was slower. Within each chaparral type, the responses of diversity to fire were positively correlated with prefire shrub cover and with a number of measures of soil fertility. Fire severity was negatively related to the postfire change in diversity in sandstone chaparral, and unimodally related to the postfire change in diversity in serpentine chaparral. Our results suggest that the effects of fire on less productive plant communities like serpentine chaparral may be less pronounced, although longer lasting, than the effects of fire on similar but more productive communities." }, { "instance_id": "R54867xR54857", "comparison_id": "R54867", "paper_id": "R54857", "text": "Case studies of the expansion of Acacia dealbata in the valley of the river Mino (Galicia, Spain) Aim of study: Acacia dealbata is a naturalized tree of invasive behaviour that has expanded from small plots associated with vineyards into forest ecosystems. Our main objective is to find evidence to support the notion that disturbances, particularly forest fires, are important driving factors in the current expansion of A. dealbata. Area of study: We mapped its current distribution using three study areas and assessed the temporal changes registered in forest cover in these areas of the valley of the river Mino. Material and Methods: The analyses were based on visual interpretation of aerial photographs taken in 1985 and 2003 of three 1x1 km study areas and field works. Main result: 62.4%, 48.6% and 22.2% of the surface area were covered by A. dealbata in 2003 in pure or mixed stands. Furthermore, areas composed exclusively of A. dealbata make up 33.8%, 15.2% and 5.7% of the stands. The transition matrix analyses between the two dates support our hypothesis that the areas currently covered by A. dealbata make up a greater proportion of the forest area previously classified as unwooded or open forest than those without A. dealbata cover. Both of these surface types are the result of an important impact of fire in the region. Within each area, A. dealbata is mainly located on steeper terrain, which is more affected by fires. Research highlights: A. dealbata is becoming the dominant tree species over large areas and the invasion of this species gives rise to monospecific stands, which may have important implications for future fire regimes. Keywords: Fire regime; Mimosa; plant invasion; silver wattle."
}, { "instance_id": "R54867xR54599", "comparison_id": "R54867", "paper_id": "R54599", "text": "The roles of competition and disturbance in a marine invasion Two hypotheses for the decline of native species are the superior exploitation of disturbance by exotic species and the competitive displacement of native species by their exotic counterparts. Theory predicts that functional similarity will increase the intensity of competition between native and invasive species. Ecologically important \u201cfoundation\u201d species, Zostera marina and other seagrasses have globally declined during the past century. This study used transplant and vegetation removal experiments to test the hypotheses that disturbance and competitive interactions with an invasive congener (Z. japonica) are contributing to the decline of native Z. marina in the northeastern Pacific. Interspecific competition reduced Z. marina and Z. japonica above-ground biomass by 44 and 96%, respectively, relative to intraspecific competition. Disturbance substantially enhanced Z. japonica productivity and fitness, and concomitantly decreased Z. marina performance, effects that persisted two years following substratum disturbance. These results demonstrate that disturbance and competitive interactions with Z. japonica reduce Z. marina performance, and suggest that Z. japonica\u2019s success as an invasive species stems dually from its ability to persist in competition with Z. marina and its positive response to disturbance. These results highlight the importance of understanding the interconnected roles of species interactions and disturbance in the decline of seagrass habitats, and provide a rationale for amending conservation policy in Washington State. In the interest of conserving native eelgrass populations, the current policy of protecting both native and invasive Zostera spp. should be refined to differentiate between native and invader, and to rescind the protection of invasive eelgrass." }, { "instance_id": "R54867xR54709", "comparison_id": "R54867", "paper_id": "R54709", "text": "Epifaunal disturbance by periodic low levels of dissolved oxygen: native vs. invasive species response Hypoxia is increasing in marine and estuarine systems worldwide, primarily due to anthropogenic causes. Periodic hypoxia represents a pulse disturbance, with the potential to restruc- ture estuarine biotic communities. We chose the shallow, epifaunal community in the lower Chesa- peake Bay, Virginia, USA, to test the hypothesis that low dissolved oxygen (DO) (<4 mg l -1 ) affects community dynamics by reducing the cover of spatial dominants, creating space both for less domi- nant native species and for invasive species. Settling panels were deployed at shallow depths in spring 2000 and 2001 at Gloucester Point, Virginia, and were manipulated every 2 wk from late June to mid-August. Manipulation involved exposing epifaunal communities to varying levels of DO for up to 24 h followed by redeployment in the York River. Exposure to low DO affected both species com- position (presence or absence) and the abundance of the organisms present. Community dominance shifted away from barnacles as level of hypoxia increased. Barnacles were important spatial domi- nants which reduced species diversity when locally abundant. The cover of Hydroides dianthus, a native serpulid polychaete, doubled when exposed to periodic hypoxia. Increased H. 
dianthus cover may indicate whether a local region has experienced periodic, local DO depletion and thus provide an indicator of poor water-quality conditions. In 2001, the combined cover of the invasive and cryptogenic species in this community, Botryllus schlosseri (tunicate), Molgula manhattensis (tunicate), Ficopomatus enigmaticus (polychaete) and Diadumene lineata (anemone), was highest on the plates exposed to moderately low DO (2 mg l-1 < DO < 4 mg l-1). All 4 of these species are now found worldwide and exhibit life histories well adapted for establishment in foreign habitats. Low DO events may enhance success of invasive species, which further stress marine and estuarine ecosystems." }, { "instance_id": "R55219xR55044", "comparison_id": "R55219", "paper_id": "R55044", "text": "The interacting effects of diversity and propagule pressure on early colonization and population size We are now beginning to understand the role of intraspecific diversity on fundamental ecological phenomena. There exists a paucity of knowledge, however, regarding how intraspecific, or genetic diversity, may covary with other important factors such as propagule pressure. A combination of theoretical modelling and experimentation was used to explore the way propagule pressure and genetic richness may interact. We compare colonization rates of the Australian bivalve Saccostrea glomerata (Gould 1885). We cross propagule size and genetic richness in a factorial design in order to examine the generalities of our theoretical model. Modelling showed that diversity and propagule pressure should generally interact synergistically when positive feedbacks occur (e.g. aggregation). The strength of genotype effects depended on propagule size, or the numerical abundance of arriving individuals. When propagule size was very small (<4 individuals), however, greater genetic richness unexpectedly reduced colonization. The probability of S. glomerata colonization was 76% in genetically rich, larger propagules, almost 39 percentage points higher than in genetically poor propagules of similar size. This pattern was not observed in less dense, smaller propagules. We predict that density-dependent interactions between larvae in the water column may explain this pattern." }, { "instance_id": "R55219xR55152", "comparison_id": "R55219", "paper_id": "R55152", "text": "Biological control as an invasion process: disturbance and propagule pressure affect the invasion success of Lythrum salicaria biological control agents Understanding the mechanisms behind the successful colonization and establishment of introduced species is important for both preventing the invasion of unwanted species and improving release programs for biological control agents. However, it is often not possible to determine important introduction details, such as date, number of organisms, and introduction location when examining factors affecting invasion success. Here we use biological control introduction data to assess the role of propagule pressure, disturbance, and residence time on invasion success of four herbivorous insect species introduced for the control of the invasive wetland plant, Lythrum salicaria, in the Columbia River Estuary. Two sets of field surveys determined persistence at prior release sites, colonization of new sites, and abundance within colonized sites. We quantified propagule pressure in four ways to examine the effect of different measurements.
These included three measurements of introduction size (proximity to introduction site, introduction size at a local scale, and introduction size at a regional scale) and one measure of introduction number (number of introduction events in a region). Disturbance was examined along a tidal inundation gradient (distance from river mouth) and as habitat (island or mainland). Statistical models and model averaging were used to determine which factors were driving invasion success. In this study we found: (1) sparse evidence for the positive influence of propagule pressure on invasion success; (2) disturbance can negatively affect the invasion success of herbivorous insects; (3) the effects of disturbance and propagule pressure are species specific and vary among invasion stages, and (4) not all measures of propagule pressure show the same results, therefore single measures and proxies should be used cautiously." }, { "instance_id": "R55219xR55097", "comparison_id": "R55219", "paper_id": "R55097", "text": "Effect of propagule pressure on the establishment and spread of the little fire ant Wasmannia auropunctata in a Gabonese oilfield We studied the effect of propagule pressure on the establishment and subsequent spread of the invasive little fire ant Wasmannia auropunctata in a Gabonese oilfield in lowland rain forest. Oil well drilling, the major anthropogenic disturbance over the past 21 years in the area, was used as an indirect measure of propagule pressure. An analysis of 82 potential introductions at oil production platforms revealed that the probability of successful establishment significantly increased with the number of drilling events. Specifically, the shape of the dose\u2010response establishment curve could be closely approximated by a Poisson process with a 34% chance of infestation per well drilled. Consistent with our knowledge of largely clonal reproduction by W. auropunctata , the shape of the establishment curve suggested that the ants were not substantially affected by Allee effects, probably greatly contributing to this species\u2019 success as an invader. By contrast, the extent to which W. auropunctata spread beyond the point of initial introduction, and thus the extent of its damage to diversity of other ant species, was independent of propagule pressure. These results suggest that while establishment success depends on propagule pressure, other ecological or genetic factors may limit the extent of further spread. Knowledge of the shape of the dose\u2010response establishment curve should prove useful in modelling the future spread of W. auropunctata and perhaps the spread of other clonal organisms." }, { "instance_id": "R55219xR55059", "comparison_id": "R55219", "paper_id": "R55059", "text": "Reproductive potential and seedling establishment of the invasive alien tree Schinus molle (Anacardiaceae) in South Africa Schinus molle (Peruvian pepper tree) was introduced to South Africa more than 150 years ago and was widely planted, mainly along roads. Only in the last two decades has the species become naturalized and invasive in some parts of its new range, notably in semi-arid savannas. Research is being undertaken to predict its potential for further invasion in South Africa. We studied production, dispersal and predation of seeds, seed banks, and seedling establishment in relation to land uses at three sites, namely ungrazed savanna once used as a military training ground; a savanna grazed by native game; and an ungrazed mine dump. 
We found that seed production and seed rain density of S. molle varied greatly between study sites, but was high at all sites (384 864\u20131 233 690 seeds per tree per year; 3877\u20139477 seeds per square metre per year). We found seeds dispersed to distances of up to 320 m from female trees, and most seeds were deposited within 50 m of putative source trees. Annual seed rain density below canopies of Acacia tortilis, the dominant native tree at all sites, was significantly lower in grazed savanna. The quality of seed rain was much reduced by endophagous predators. Seed survival in the soil was low, with no survival recorded beyond 1 year. Propagule pressure appeared to drive the rate of recruitment: densities of seedlings and saplings were higher in ungrazed savanna and the ungrazed mine dump than in grazed savanna, as reflected by large numbers of young individuals, but adult : seedling ratios did not differ between savanna sites. Frequent and abundant seed production, together with effective dispersal of viable S. molle seed by birds to suitable establishment sites below trees of other species to overcome predation effects, facilitates invasion. Disturbance enhances invasion, probably by reducing competition from native plants." }, { "instance_id": "R55219xR55061", "comparison_id": "R55219", "paper_id": "R55061", "text": "Determinants of vertebrate invasion success in Europe and North America Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species\u2019 population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. Such antagonistic effects show the importance of considering the complete invasion process."
}, { "instance_id": "R55219xR55068", "comparison_id": "R55219", "paper_id": "R55068", "text": "The role of propagule pressure in the invasion success of bluegill sunfish, Lepomis macrochirus, in Japan The bluegill sunfish, Lepomis macrochirus, is a widespread exotic species in Japan that is considered to have originated from 15 fish introduced from Guttenberg, Iowa, in 1960. Here, the genetic and phenotypic traits of Japanese populations were examined, together with 11 native populations of the USA using 10 microsatellite markers and six meristic traits. Phylogenetic analysis reconfirmed a single origin of Japanese populations, among which populations established in the 1960s were genetically close to Guttenberg population, keeping high genetic diversity comparable to the ancestral population. In contrast, genetic diversity of later\u2010established populations significantly declined with genetic divergence from the ancestral population. Among the 1960s established populations, that from Lake Biwa showed a significant isolation\u2010by\u2010distance pattern with surrounding populations in which genetic bottlenecks increased with geographical distance from Lake Biwa. Although phenotypic divergence among populations was recognized in both neutral and adaptive traits, PST\u2013FST comparisons showed that it is independent of neutral genetic divergence. Divergent selection was suggested in some populations from reservoirs with unstable habitats, while stabilizing selection was dominant. Accordingly, many Japanese populations of L. macrochirus appear to have derived from Lake Biwa population, expanding their distribution with population bottlenecks. Despite low propagule pressure, the invasion success of L. macrochirus is probably because of its drastic population growth in Lake Biwa shortly after its introduction, together with artificial transplantations. It not only enabled the avoidance of a loss in genetic diversity but also formed a major gene pool that supported local adaptation with high phenotypic plasticity." }, { "instance_id": "R55219xR55112", "comparison_id": "R55219", "paper_id": "R55112", "text": "Establishment success of 25 rare wetland species introduced into restored habitats is best predicted by ecological distance to source habitats In response to ongoing local extinction of species and the current biodiversity crisis, the number of reintroduction programs aiming to establish new populations of rare species in the wild has increased. However, only a small proportion of these programs has been planned and monitored scientifically and comparative multi-species studies are missing in this context. Therefore, the relative importance of factors involved in reintroduction success is poorly known. In 2007, we assessed population growth since introduction as a measure of establishment success of 25 wetland species (rare or extinct in the wild nationwide) and a total of 50 populations in Switzerland that had been introduced at seven restored sites with apparently adequate environmental conditions between 1997 and 2005. We related establishment success to 32 life-history traits of these species obtained from the BiolFlor database, to initial number of introduced plants (propagule pressure with 1\u2013130 individuals introduced per population), and to the ecological distance between source sites and restored sites based on vegetation records. 
Our results clearly showed the importance of close ecological similarity between source and introduction sites for successful establishment of wetland species into restored pond habitats. In contrast, neither life-history traits nor propagule pressure were related to establishment success in our study. Based on our results, we strongly recommend enforcing ecological studies prior to reintroduction to accurately assess the suitability of restored sites. To unambiguously assess the key determinants of successful establishment, future reintroduction programs should be set-up according to experimental designs." }, { "instance_id": "R55219xR54992", "comparison_id": "R55219", "paper_id": "R54992", "text": "Propagule pressure and disturbance interact to overcome biotic resistance of marine invertebrate Propagule pressure is fundamental to invasion success, yet our understanding of its role in the marine domain is limited. Few studies have manipulated or controlled for propagule supply in the field, and consequently there is little empirical data to test for non-linearities or interactions with other processes. Supply of non-indigenous propagules is most likely to be elevated in urban estuaries, where vessels congregate and bring exotic species on fouled hulls and in ballast water. These same environments are also typically subject to elevated levels of disturbance from human activities, creating the potential for propagule pressure and disturbance to interact. By applying a controlled dose of free-swimming larvae to replicate assemblages, we were able to quantify a dose-response relationship at much finer spatial and temporal scales than previously achieved in the marine environment. We experimentally crossed controlled levels of propagule pressure and disturbance in the field, and found that both were required for invasion to occur. Only recruits that had settled onto bare space survived beyond three months, precluding invader persistence in undisturbed communities. In disturbed communities initial survival on bare space appeared stochastic, such that a critical density was required before the probability of at least one colony surviving reached a sufficient level. Those that persisted showed 75% survival over the following three months, signifying a threshold past which invaders were resilient to chance mortality. Urban estuaries subject to anthropogenic disturbance are common throughout the world, and similar interactions may be integral to invasion dynamics in these ecosystems." }, { "instance_id": "R55219xR55004", "comparison_id": "R55219", "paper_id": "R55004", "text": "Water and boating activity as dispersal vectors for Schinus terebinthifolius (Brazilian pepper) seeds in freshwater and estuarine habitats Schinus terebinthifolius (Brazilian pepper), a native of South America, is currently naturalized in 20 countries worldwide and can alter native systems by displacing flora and forming monotypic stands. The primary described mechanism of seed dispersal is through consumption of fruits by birds and mammals. We evaluated an alternative dispersal method by evaluating the potential for S. terebinthifolius growing in freshwater and estuarine environments to disperse via water currents. Specifically, we: (1) determined the duration fruits remained buoyant in three salinities, (2) estimated the viability of seeds after 7 days in water, (3) estimated the dispersal rate of floating solitary fruits, and (4) examined the role of boat wakes in moving seeds above mean high water at the shoreline. 
The length of time fruits floated in 0 ppt water (4.9 days) was significantly less than 15 ppt saltwater (6.2 days), and 30 ppt saltwater (6.9 days). After 7 days, over 13% of seeds remained viable in 0 ppt, 15 ppt, and 30 ppt water. By combining mean dispersal rates and the mean number of days fruits floated, we calculated individual fruits could be transported 16.9 km in 0 ppt and over 22 km in 15 and 30 ppt water. To increase germination, seeds must be stranded above the intertidal zone. Wind wakes alone never achieved this result; however, boat wakes plus wind wakes significantly increased the movement of fruits above the intertidal region into drier soils. The use of both vertebrate dispersal vectors and water dispersal may potentially increase the rate of invasion, establishment, and survival of S. terebinthifolius in freshwater and estuarine environments." }, { "instance_id": "R55219xR54973", "comparison_id": "R55219", "paper_id": "R54973", "text": "How many founders for a biological invasion? Predicting introduction outcomes from propagule pressure Ecological theory on biological invasions attempts to characterize the predictors of invasion success and the relative importance of the different drivers of population establishment. An outstanding question is how propagule pressure determines the probability of population establishment, where propagule pressure is the number of individuals of a species introduced into a specific location (propagule size) and their frequency of introduction (propagule number). Here, we used large-scale replicated mesocosm ponds over three reproductive seasons to identify how propagule size and number predict the probability of establishment of one of world's most invasive fish, Pseudorasbora parva, as well as its effect on the somatic growth of individuals during establishment. We demonstrated that, although a threshold of 11 introduced pairs of fish (a pair is 1 male, 1 female) was required for establishment probability to exceed 95%, establishment also occurred at low propagule size (1-5 pairs). Although single introduction events were as effective as multiple events at enabling establishment, the propagule sizes used in the multiple introductions were above the detected threshold for establishment. After three reproductive seasons, population abundance was also a function of propagule size, with rapid increases in abundance only apparent when propagule size exceeded 25 pairs. This was initially assisted by adapted biological traits, including rapid individual somatic growth that helped to overcome demographic bottlenecks." }, { "instance_id": "R55219xR55095", "comparison_id": "R55219", "paper_id": "R55095", "text": "The effect of propagule size on the invasion of an alien insect 1. The movement of species from their native ranges to alien environments is a serious threat to biological diversity. The number of individuals involved in an invasion provides a strong theoretical basis for determining the likelihood of establishment of an alien species. 2. Here a field experiment was used to manipulate the critical first stages of the invasion of an alien insect, a psyllid weed biocontrol agent, Arytainilla spartiophila Forster, in New Zealand and to observe the progress of the invasion over the following 6 years. 3. Fifty-five releases were made along a linear transect 135 km long: 10 releases of two, four, 10, 30 and 90 psyllids and five releases of 270 psyllids. Six years after their original release, psyllids were present in 22 of the 55 release sites. 
Analysis by logistic regression showed that the probability of establishment was significantly and positively related to initial release size, but that this effect was important only during the psyllids' first year in the field. 4. Although less likely to establish, some of the releases of two and four psyllids did survive 5 years in the field. Overall, releases that survived their first year had a 96% chance of surviving thereafter, providing the release site remained secure. The probability of colony loss due to site destruction remained the same throughout the experiment, whereas the probability of natural extinction reduced steeply over time. 5. During the first year colonies were undergoing a process of establishment and, in most cases, population size decreased. After this first year, a period of exponential growth ensued. 6. A lag period was observed before the populations increased dramatically in size. This was thought to be due to inherent lags caused by the nature of population growth, which causes the smaller releases to appear to have a longer lag period." }, { "instance_id": "R55219xR54989", "comparison_id": "R55219", "paper_id": "R54989", "text": "Effects of pre-existing submersed vegetation and propagule pressure on the invasion success of Hydrilla verticillata Summary 1 With biological invasions causing widespread problems in ecosystems, methods to curb the colonization success of invasive species are needed. The effective management of invasive species will require an integrated approach that restores community structure and ecosystem processes while controlling propagule pressure of non-native species. 2 We tested the hypotheses that restoring native vegetation and minimizing propagule pressure of invasive species slows the establishment of an invader. In field and greenhouse experiments, we evaluated (i) the effects of a native submersed aquatic plant species, Vallisneria americana, on the colonization success of a non-native species, Hydrilla verticillata; and (ii) the effects of H. verticillata propagule density on its colonization success. 3 Results from the greenhouse experiment showed that V. americana decreased H. verticillata colonization through nutrient draw-down in the water column of closed mesocosms, although data from the field experiment, located in a tidal freshwater region of Chesapeake Bay that is open to nutrient fluxes, suggested that V. americana did not negatively impact H. verticillata colonization. However, H. verticillata colonization was greater in a treatment of plastic V. americana look-alikes, suggesting that the canopy of V. americana can physically capture H. verticillata fragments. Thus pre-emption effects may be less clear in the field experiment because of complex interactions between competitive and facilitative effects in combination with continuous nutrient inputs from tides and rivers that do not allow nutrient draw-down to levels experienced in the greenhouse. 4 Greenhouse and field tests differed in the timing, duration and density of propagule inputs. However, irrespective of these differences, propagule pressure of the invader affected colonization success except in situations when the native species could draw-down nutrients in closed greenhouse mesocosms. In that case, no propagules were able to colonize. 5 Synthesis and applications. We have shown that reducing propagule pressure through targeted management should be considered to slow the spread of invasive species. 
This, in combination with restoration of native species, may be the best defence against non-native species invasion. Thus a combined strategy of targeted control and promotion of native plant growth is likely to be the most sustainable and cost-effective form of invasive species management." }, { "instance_id": "R55219xR55006", "comparison_id": "R55219", "paper_id": "R55006", "text": "Propagule pressure and persistence in experimental populations Average inoculum size and number of introductions are known to have positive effects on population persistence. However, whether these factors affect persistence independently or interact is unknown. We conducted a two-factor experiment in which 112 populations of parthenogenetic Daphnia magna were maintained for 41 days to study effects of inoculum size and introduction frequency on: (i) population growth, (ii) population persistence and (iii) time-to-extinction. We found that the interaction of inoculum size and introduction frequency\u2014the immigration rate\u2014affected all three dependent variables, while population growth was additionally affected by introduction frequency. We conclude that for this system the most important aspect of propagule pressure is immigration rate, with relatively minor additional effects of introduction frequency and negligible effects of inoculum size." }, { "instance_id": "R55219xR55046", "comparison_id": "R55219", "paper_id": "R55046", "text": "Role of Propagule Size in the Success of Incipient Colonies of the Invasive Argentine Ant : Factors that contribute to the successful establishment of invasive species are often poorly understood. Propagule size is considered a key determinant of establishment success, but experimental tests of its importance are rare. We used experimental colonies of the invasive Argentine ant ( Linepithema humile) that differed both in worker and queen number to test how these attributes influence the survivorship and growth of incipient colonies. All propagules without workers experienced queen mortality, in contrast to only 6% of propagules with workers. In small propagules (10\u20131,000 workers), brood production increased with worker number but not queen number. In contrast, per capita measures of colony growth decreased with worker number over these colony sizes. In larger propagules ( 1,000\u201311,000 workers), brood production also increased with increasing worker number, but per capita brood production appeared independent of colony size. Our results suggest that queens need workers to establish successfully but that propagules with as few as 10 workers can grow quickly. Given the requirements for propagule success in Argentine ants, it is not surprising how easily they spread via human commerce. Resumen: Los factores que contribuyen al establecimiento exitoso de especies invasoras son frecuentemente poco entendido. El tamano del propagulo es considerado un factor clave del exito del establecimiento, pero se han realizado pocas pruebas experimentales para determinar la importancia de este factor. Para estudiar como influyen estos atributos en la supervivencia y el crecimiento de colonias incipientes utilizamos colonias experimentales de la hormiga invasora argentina ( Linepithema humile) distintas en cuanto al numero de hormigas obreras como de reinas. Hubo mortalidad de reinas en todos los propagulos sin obreras y, en contraste, en solo 6% de los propagulos con obreras. 
En propagulos pequenos (10\u20131,000 obreras) la produccion de crias incremento con el numero de obreras, pero no con el numero de reinas. En contraste, las mediciones per capita del crecimiento de la colonia disminuyeron para estos tamanos de colonia. En propagulos grandes (1,000-11,000 obreras), la produccion de crias tambien incremento con el numero de obreras, pero la produccion de crias per capita parece ser independiente del tamano de la colonia. Nuestros resultados sugieren que las reinas necesitan obreras para establecerse exitosamente, pero los propagulos con tan poco como 10 obreras pueden crecer rapidamente. Dado que son los requerimientos para el exito de los propagulos de hormigas argentinas, no nos sorprende la facilidad con la cual estas hormigas se dispersan como resultado del comercio humano." }, { "instance_id": "R55219xR54977", "comparison_id": "R55219", "paper_id": "R54977", "text": "Introduction history and species characteristics partly explain naturalization success of North American woody species in Europe Summary 1 The search for general characteristics of invasive species has not been very successful yet. A reason for this could be that current invasion patterns are mainly reflecting the introduction history (i.e. time since introduction and propagule pressure) of the species. Accurate data on the introduction history are, however, rare, particularly for introduced alien species that have not established. As a consequence, few studies that tested for the effects of species characteristics on invasiveness corrected for introduction history. 2 We tested whether the naturalization success of 582 North American woody species in Europe, measured as the proportion of European geographic regions in which each species is established, can be explained by their introduction history. For 278 of these species we had data on characteristics related to growth form, life cycle, growth, fecundity and environmental tolerance. We tested whether naturalization success can be further explained by these characteristics. In addition, we tested whether the effects of species characteristics differ between growth forms. 3 Both planting frequency in European gardens and time since introduction significantly increased naturalization success, but the effect of the latter was relatively weak. After correction for introduction history and taxonomy, six of the 26 species characteristics had significant effects on naturalization success. Leaf retention and precipitation tolerance increased naturalization success. Tree species were only 56% as likely to naturalize as non-tree species (vines, shrubs and subshrubs), and the effect of planting frequency on naturalization success was much stronger for non-trees than for trees. On the other hand, the naturalization success of trees, but not for non-trees, increased with native range size, maximum plant height and seed spread rate. 4 Synthesis. Our results suggest that introduction history, particularly planting frequency, is an important determinant of current naturalization success of North American woody species (particularly of non-trees) in Europe. Therefore, studies comparing naturalization success among species should correct for introduction history. Species characteristics are also significant determinants of naturalization success, but their effects may differ between growth forms." 
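The record above quantifies naturalization success as the proportion of European regions in which a species has established and relates it to planting frequency and time since introduction. As a hedged illustration only (hypothetical column names, region count and data, not the study's), a proportion outcome of this kind can be modelled with a binomial GLM:

```python
# Illustrative sketch only: binomial GLM for "regions occupied out of regions
# considered" as a function of planting frequency and residence time.
# Everything here is simulated and hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_species, n_regions = 120, 20
df = pd.DataFrame({
    "planting_frequency": rng.integers(1, 100, size=n_species),
    "years_since_introduction": rng.integers(10, 250, size=n_species),
})
# Simulated occupancy with assumed effect sizes.
eta = -4.0 + 0.02 * df["planting_frequency"] + 0.01 * df["years_since_introduction"]
occupied = rng.binomial(n_regions, 1.0 / (1.0 + np.exp(-eta)))

# Two-column response (successes, failures) with an intercept in the design matrix.
endog = np.column_stack([occupied, n_regions - occupied])
exog = sm.add_constant(df[["planting_frequency", "years_since_introduction"]])
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.summary())
```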
}, { "instance_id": "R55219xR55066", "comparison_id": "R55219", "paper_id": "R55066", "text": "Restoration of species-rich grasslands on ex-arable land: Seed addition outweighs soil fertility reduction A common practice in biodiversity conservation is restoration of former species-rich grassland on ex-arable land. Major constraints for grassland restoration are high soil fertility and limited dispersal ability of plant species to target sites. Usually, studies focus on soil fertility or on methods to introduce plant seeds. However, the question is whether soil fertility reduction is always necessary for getting plant species established on target sites. In a three-year field experiment with ex-arable soil with intensive farming history, we tested single and combined effects of soil fertility reduction and sowing mid-successional plant species on plant community development and soil biological properties. A controlled microcosm study was performed to test short-term effects of soil fertility reduction measures on biomass production of mid-successional species. Soil fertility was manipulated by adding carbon (wood or straw) to incorporate plant-available nutrients into organic matter, or by removing nutrients through top soil removal (TSR). The sown species established successfully and their establishment was independent of carbon amendments. TSR reduced plant biomass, and effectively suppressed arable weeds, however, created a desert-like environment, inhibiting the effectiveness of sowing mid-successional plant species. Adding straw or wood resulted in short-term reduction of plant biomass, suggesting a temporal decrease in plant-available nutrients by microbial immobilisation. Straw and wood addition had little effects on soil biological properties, whereas TSR profoundly reduced numbers of bacteria, fungal biomass and nematode abundance. In conclusion, in ex-arable soils, on a short-term sowing is more effective for grassland restoration than strategies aiming at soil fertility reduction." }, { "instance_id": "R55219xR55150", "comparison_id": "R55219", "paper_id": "R55150", "text": "Propagule pressure and colony social organization are associated with the successful invasion and rapid range expansion of fire ants in China We characterized patterns of genetic variation in populations of the fire ant Solenopsis invicta in China using mitochondrial DNA sequences and nuclear microsatellite loci to test predictions as to how propagule pressure and subsequent dispersal following establishment jointly shape the invasion success of this ant in this recently invaded area. Fire ants in Wuchuan (Guangdong Province) are genetically differentiated from those found in other large infested areas of China. The immediate source of ants in Wuchuan appears to be somewhere near Texas, which ranks first among the southern USA infested states in the exportation of goods to China. Most colonies from spatially distant, outlying areas in China are genetically similar to one another and appear to share a common source (Wuchuan, Guangdong Province), suggesting that long\u2010distance jump dispersal has been a prevalent means of recent spread of fire ants in China. Furthermore, most colonies at outlier sites are of the polygyne social form (featuring multiple egg\u2010laying queens per nest), reinforcing the important role of this social form in the successful invasion of new areas and subsequent range expansion following invasion. 
Several analyses consistently revealed characteristic signatures of genetic bottlenecks for S. invicta populations in China. The results of this study highlight the invasive potential of this pest ant, suggest that the magnitude of international trade may serve as a predictor of propagule pressure and indicate that rates and patterns of subsequent range expansion are partly determined by the interplay between species traits and the trade and transportation networks." }, { "instance_id": "R55219xR54954", "comparison_id": "R55219", "paper_id": "R54954", "text": "Founder population size and number of source populations enhance colonization success in waterstriders Understanding the factors that underlie colonization success is crucial both for ecological theory and conservation practices. The most effective way to assess colonization ability is to introduce experimentally different sets of individuals in empty patches of suitable habitat and to monitor the outcome. We translocated mated female waterstriders, Aquarius najas, into 90 streams that were not currently inhabited by the species. We manipulated sizes of propagules (from 2 to 16 mated females) and numbers of origin populations (one or two). Three origin populations were genetically different from each other, but they were less than 150 km from the streams of translocation. The results demonstrate clearly that both the larger propagule size and the high number of source populations have positive effects on the probability of colonizing a new stream. Thus, in addition to the stochastic factors related to the propagule size it may be essential to consider also the diversity of genetic origin for colonization success." }, { "instance_id": "R55219xR54965", "comparison_id": "R55219", "paper_id": "R54965", "text": "Animal trade and non-indigenous species introduction: the world-wide spread of squirrels Aim In this study, a dataset on world-wide squirrel introductions has been used to locate the relative pathways and to determine the factors correlated with species establishment. Location The world. Methods The analysis includes a chronological table of introductions, a biogeographical analysis and an assessment of the likelihood of establishment according to species, propagule pressure, area of origin and characteristics of the recipient area. Results The main vector of such introductions was the intentional importation of live animals. Introductions increased in developed countries and proportionately to the volume of imported mammals. Moreover, areas characterized by higher numbers of native squirrels were more invaded. Squirrels were often introduced deliberately and only to a smaller extent escaped from captivity. The likelihood of their establishment increased proportionately to the number of animals released and decreased proportionately to the increase of the latitudinal distance between the recipient area and the native range of the species. The likelihood that the release of one pair of either Sciurus or Callosciurus species would establish a new population was higher than 50%. Main conclusion Squirrels proved to be successful invaders and their importation should be restricted so as to prevent further introductions." }, { "instance_id": "R55219xR55081", "comparison_id": "R55219", "paper_id": "R55081", "text": "Invasive species profiling? Exploring the characteristics of non-native fishes across invasion stages in California Summary 1. 
The global spread of non-native species is a major concern for ecologists, particularly in regards to aquatic systems. Predicting the characteristics of successful invaders has been a goal of invasion biology for decades. Quantitative analysis of species characteristics may allow invasive species profiling and assist the development of risk assessment strategies. 2. In the current analysis we developed a data base on fish invasions in catchments throughout California that distinguishes among the establishment, spread and integration stages of the invasion process, and separates social and biological factors related to invasion success. 3. Using Akaike's information criteria (AIC), logistic and multiple regression models, we show suites of biological variables, which are important in predicting establishment (parental care and physiological tolerance), spread (life span, distance from nearest native source and trophic status) and abundance (maximum size, physiological tolerance and distance from nearest native source). Two variables indicating human interest in a species (propagule pressure and prior invasion success) are predictors of successful establishment and prior invasion success is a predictor of spread and integration. 4. Despite the idiosyncratic nature of the invasion process, our results suggest some assistance in the search for characteristics of fish species that successfully transition between invasion stages." }, { "instance_id": "R55219xR55132", "comparison_id": "R55219", "paper_id": "R55132", "text": "Residence time and human-mediated propagule pressure at work in the alien flora of Galapagos Introduced species present the greatest threat to the unique terrestrial biodiversity of the Galapagos Islands. We assess the current status of plant invasion in Galapagos, predict the likelihood of future naturalizations and invasions from the existing introduced flora, and suggest measures to help limit future invasions. There has been a 1.46 fold increase in plant biodiversity in Galapagos due to alien plant naturalizations, reflecting a similar trend on islands elsewhere. There are 870 alien plant species recorded in the archipelago. Of evaluated species, 34% species have naturalized. Within this group are the invasive species (16% of evaluated) and the transformers (3.3% of evaluated). We show that, as expected, naturalized species have been present in the archipelago longer than non-naturalized species. We also find that a higher human-mediated propagule pressure is associated with a greater human population and with properties that have been settled longer. This, combined with the relatively recent introduction of most species, leads us to the conclusion that Galapagos is at an early stage of plant invasion. We predict that more species from the existing alien flora will find an opportunity to naturalize and invade as propagule pressure increases alongside rapid human population growth associated with immigration to serve the booming tourism industry. In order to reduce future invasion risk, we suggest reviewing inter-island quarantine measures and continuing community education efforts to reduce human-mediated propagule pressure.ResumenLas especies introducidas representan la mayor amenaza a la biodiversidad terrestre \u00fanica de las Islas Gal\u00e1pagos. 
En este estudio, evaluamos la condici\u00f3n actual de invasi\u00f3n de plantas en Gal\u00e1pagos, predecimos la posibilidad de nuevas naturalizaciones e invasiones de la flora introducida existente, y sugerimos maneras para limitar invasiones en el futuro. Ha habido un incremento de 1.46 veces en total de biodiversidad de plantas en Gal\u00e1pagos atribuido a las naturalizaciones de plantas introducidas; este patr\u00f3n se refleja en otras islas del mundo. Han sido reportadas 870 especies de plantas introducidas en el archipi\u00e9lago. De las especies evaluadas, 34% est\u00e1n naturalizadas; dentro de este grupo est\u00e1n las especies invasoras (16% de las evaluadas) y las transformadores (3.3% de las evaluadas). Como se esperaba, se muestra que las especies naturalizadas han estado presentes en el archipi\u00e9lago m\u00e1s tiempo que las especies no-naturalizadas. Tambi\u00e9n encontramos que el incremento del n\u00famero de individuos de las especies introducidas sembradas por los seres humanos est\u00e1 directamente relacionado con el tama\u00f1o de la poblaci\u00f3n humana y con la antig\u00fcedad de las propiedades privadas. Este hecho, en combinaci\u00f3n con la introducci\u00f3n relativamente reciente de la mayor\u00eda de las especies, nos lleva a la conclusi\u00f3n de que Gal\u00e1pagos est\u00e1 en las primeras etapas de invasi\u00f3n. Predecimos que m\u00e1s de las especies introducidas ya presentes van a encontrar una oportunidad para naturalizarse a medida que se incrementa el n\u00famero de individuos por el crecimiento acelerado de la poblaci\u00f3n humana debido al turismo. Para limitar el riesgo de m\u00e1s invasiones en el futuro, sugerimos hacer una revisi\u00f3n del sistema de cuarentena inter-islas, y continuar con los esfuerzos con campa\u00f1as de educaci\u00f3n en la comunidad y de esta manera disminuir el incremento en el n\u00famero de individuos de las especies ya presentes." }, { "instance_id": "R55219xR55105", "comparison_id": "R55219", "paper_id": "R55105", "text": "Is propagule size the critical factor in predicting introduction outcomes in passeriform birds? Influential analyses of the propagule pressure hypothesis have been based on multiple bird species introduced to one region (e.g. New Zealand). These analyses implicitly assume that species-level and site-level characteristics are less important than the number of individuals released. In this study we compared records of passerine introductions with propagule size information across multiple regions (New Zealand, Australia, and North America). We excluded species introduced to just one of the three regions or with significant uncertainty in the historical record, as well as species that succeeded or failed in all regions. Because it is often impossible to attribute success to any single event or combination of events, our analysis compared randomly selected propagule sizes of unsuccessful introductions with those of successful introductions. Using Monte Carlo repeated sampling we found no statistical support for the propagule pressure hypothesis, even when using assumptions biased toward showing an effect." }, { "instance_id": "R55219xR55072", "comparison_id": "R55219", "paper_id": "R55072", "text": "Planting history and propagule pressure as predictors of invasion by woody species in a temperate region We studied 28 alien tree species currently planted for forestry purposes in the Czech Republic to determine the probability of their escape from cultivation and naturalization. 
Indicators of propagule pressure (number of administrative units in which a species is planted and total planting area) and time of introduction into cultivation were used as explanatory variables in multiple regression models. Fourteen species escaped from cultivation, and 39% of the variance was explained by the number of planting units and the time of introduction, the latter being more important. Species introduced early had a higher probability of escape than those introduced later, with more than 95% probability of escape for those introduced before 1801 and <5% for those introduced after 1892. Probability of naturalization was more difficult to predict, and eight species were misclassified. A model omitting two species with the largest influence on the model yielded similar predictors of naturalization as did the probability of escape. Both phases of invasion therefore appear to be driven by planting and introduction history in a similar way. Our results demonstrate the importance of forestry for recruitment of invasive trees. Six alien forestry trees, classified as invasive in the Czech Republic, are currently reported in nature reserves. In addition, forestry authorities want to increase the diversity of alien species and planting area in the country." }, { "instance_id": "R55219xR54956", "comparison_id": "R55219", "paper_id": "R54956", "text": "The vulnerability of habitats to plant invasion: disentangling the roles of propagule pressure, time and sampling effort Aim To quantify the vulnerability of habitats to invasion by alien plants having accounted for the effects of propagule pressure, time and sampling effort. Location New Zealand. Methods We used spatial, temporal and habitat information taken from 9297 herbarium records of 301 alien plant species to examine the vulnerability of 11 terrestrial habitats to plant invasions. A null model that randomized species records across habitats was used to account for variation in sampling effort and to derive a relative measure of invasion based either on all records for a species or only its first record. The relative level of invasion was related to the average distance of each habitat from the nearest conurbation, which was used as a proxy for propagule pressure. The habitat in which a species was first recorded was compared to the habitats encountered for all records of that species to determine whether the initial habitat could predict subsequent habitat occupancy. Results Variation in sampling effort in space and time significantly masked the underlying vulnerability of habitats to plant invasions. Distance from the nearest conurbation had little effect on the relative level of invasion in each habitat, but the number of first records of each species significantly declined with increasing distance. While Urban, Streamside and Coastal habitats were over-represented as sites of initial invasion, there was no evidence of major invasion hotspots from which alien plants might subsequently spread. Rather, the data suggest that certain habitats (especially Roadsides) readily accumulate alien plants from other habitats. Main conclusions Herbarium records combined with a suitable null model provide a powerful tool for assessing the relative vulnerability of habitats to plant invasion. The first records of alien plants tend to be found near conurbations, but this pattern disappears with subsequent spread. Regardless of the habitat where a species was first recorded, ultimately most alien plants spread to Roadside and Sparse habitats. 
This information suggests that such habitats may be useful targets for weed surveillance and monitoring." }, { "instance_id": "R55219xR54975", "comparison_id": "R55219", "paper_id": "R54975", "text": "Short- and long-term effects of disturbance and propagule pressure on a biological invasion Summary 1. Invading species typically need to overcome multiple limiting factors simultaneously in order to become established, and understanding how such factors interact to regulate the invasion process remains a major challenge in ecology. 2. We used the invasion of marine algal communities by the seaweed Sargassum muticum as a study system to experimentally investigate the independent and interactive effects of disturbance and propagule pressure in the short term. Based on our experimental results, we parameterized an integrodifference equation model, which we used to examine how disturbances created by different benthic herbivores influence the longer term invasion success of S. muticum . 3. Our experimental results demonstrate that in this system neither disturbance nor propagule input alone was sufficient to maximize invasion success. Rather, the interaction between these processes was critical for understanding how the S. muticum invasion is regulated in the short term. 4. The model showed that both the size and spatial arrangement of herbivore disturbances had a major impact on how disturbance facilitated the invasion, by jointly determining how much space-limitation was alleviated and how readily disturbed areas could be reached by dispersing propagules. 5. Synthesis. Both the short-term experiment and the long-term model show that S. muticum invasion success is co-regulated by disturbance and propagule pressure. Our results underscore the importance of considering interactive effects when making predictions about invasion success." }, { "instance_id": "R56110xR56106", "comparison_id": "R56110", "paper_id": "R56106", "text": "Establishment success of introduced amphibians increases in the presence of congeneric species Darwin\u2019s naturalization hypothesis predicts that the success of alien invaders will decrease with increasing taxonomic similarity to the native community. Alternatively, shared traits between aliens and the native assemblage may preadapt aliens to their novel surroundings, thereby facilitating establishment (the preadaptation hypothesis). Here we examine successful and failed introductions of amphibian species across the globe and find that the probability of successful establishment is higher when congeneric species are present at introduction locations and increases with increasing congener species richness. After accounting for positive effects of congeners, residence time, and propagule pressure, we also find that invader establishment success is higher on islands than on mainland areas and is higher in areas with abiotic conditions similar to the native range. These findings represent the first example in which the preadaptation hypothesis is supported in organisms other than plants and suggest that preadaptation has played a critical role in enabling introduced species to succeed in novel environments." }, { "instance_id": "R56110xR56080", "comparison_id": "R56110", "paper_id": "R56080", "text": "The island biogeography of exotic bird species Aim: A recent upsurge of interest in the island biogeography of exotic species has followed from the argument that they may provide valuable information on the natural processes structuring island biotas. 
Here, we use data on the occurrence of exotic bird species across oceanic islands worldwide to demonstrate an alternative and previously untested hypothesis that these distributional patterns are a simple consequence of where humans have released such species, and hence of the number of species released. Location: Islands around the world. Methods: Statistical analysis of published information on the numbers of exotic bird species introduced to, and established on, islands around the world. Results: Established exotic birds showed very similar species-area relationships to native species, but different species-isolation relationships. However, in both cases the relationship for established exotics simply mimicked that for the number of exotic bird species introduced. Exotic bird introductions scaled positively with human population size and island isolation, and islands that had seen more native species extinctions had had more exotic species released. Main conclusion: The island biogeography of exotic birds is primarily a consequence of human, rather than natural, processes." }, { "instance_id": "R56110xR56092", "comparison_id": "R56110", "paper_id": "R56092", "text": "Global assessment of establishment success for amphibian and reptile invaders Context According to the tens rule, 10% of introduced species establish themselves. Aims We tested this component of the tens rule for amphibians and reptiles globally, in Europe and North America, where data are presumably of good quality, and on islands versus continents. We also tested whether there was a taxonomic difference in establishment success between amphibians and reptiles. Methods We examined data comprising 206 successful and 165 failed introduction records for 161 species of amphibians to 55 locations, and 560 successful and 641 failed introduction records for 469 species of reptiles to 116 locations around the world. Key results Globally, establishment success was not different between amphibians (67%) and reptiles (62%). Both means were well above the 10% value predicted by the tens rule. In Europe and North America, establishment success was lower, although still higher than 10%. For reptiles, establishment success was higher on islands than on continents. Our results question the tens rule and do not show taxonomic differences in establishment success. Implications Similar to studies on other taxa (birds and mammals), we found that establishment success was generally above 40%. This suggests that we should focus management on reducing the number of herptile species introduced because both reptiles and amphibians have a high likelihood of establishing. As data collection on invasions continue, testing establishment success in light of other factors, including propagule pressure, climate matching and taxonomic classifications, may provide additional insight into which species are most likely to establish in particular areas." }, { "instance_id": "R56110xR56078", "comparison_id": "R56110", "paper_id": "R56078", "text": "Global patterns in threats to vertebrates by biological invasions Biological invasions as drivers of biodiversity loss have recently been challenged. Fundamentally, we must know where species that are threatened by invasive alien species (IAS) live, and the degree to which they are threatened. We report the first study linking 1372 vertebrates threatened by more than 200 IAS from the completely revised Global Invasive Species Database.
New maps of the vulnerability of threatened vertebrates to IAS permit assessments of whether IAS have a major influence on biodiversity, and if so, which taxonomic groups are threatened and where they are threatened. We found that centres of IAS-threatened vertebrates are concentrated in the Americas, India, Indonesia, Australia and New Zealand. The areas in which IAS-threatened species are located do not fully match the current hotspots of invasions, or the current hotspots of threatened species. The relative importance of biological invasions as drivers of biodiversity loss clearly varies across regions and taxa, and changes over time, with mammals from India, Indonesia, Australia and Europe increasingly being threatened by IAS. The chytrid fungus primarily threatens amphibians, whereas invasive mammals primarily threaten other vertebrates. The differences in IAS threats between regions and taxa can help efficiently target IAS, which is essential for achieving the Strategic Plan 2020 of the Convention on Biological Diversity." }, { "instance_id": "R56945xR56610", "comparison_id": "R56945", "paper_id": "R56610", "text": "Invasive fruits, novel foods, and choice: an investigation of European Starling and American Robin frugivory Abstract We compared the feeding choices of an invasive frugivore, the European Starling (Sturnus vulgaris), with those of a native, the American Robin (Turdus migratorius). Using captive birds, we tested whether these species differ in their preferences when offered a choice between a native and an invasive fruit, and between a novel and a familiar food. We examined willingness to eat fruits of selected invasive plants and to select a novel food by measuring the time elapsed before feeding began. Both species demonstrated significant preferences for invasive fruits over similar native fruits in two of three choice tests. Both starlings and robins ate autumn olive (Elaeagnus umbellata) fruits significantly more willingly than Asiatic bittersweet (Celastrus orbiculatus). Starlings, but not robins, when choosing between a novel and a familiar food, strongly preferred the familiar food. We found no differences in willingness of birds to eat a novel food when it was the only food available. These results suggest that some fleshy-fruited invasive plants may receive more dispersal services than native plants with similar fruits, and that different frugivores may be seed dispersers for different invasive plants." }, { "instance_id": "R56945xR56583", "comparison_id": "R56945", "paper_id": "R56583", "text": "Positive interactions between invasive plants: The influence of Pyracantha angustifolia on the recruitment of native and exotic woody species Abstract: Positive interactions between species are known to play an important role in the dynamics of plant communities, including the enhancement of invasions by exotics. We studied the influence of the invasive shrub Pyracantha angustifolia (Rosaceae) on the recruitment of native and exotic woody species in a secondary shrubland in central Argentina mountains. We recorded woody sapling recruitment and micro-environmental conditions under the canopies of Pyracantha and the dominant native shrub Condalia montana (Rhamnaceae), and in the absence of shrub cover, considering these situations as three treatments. We found that native and exotic species richness were higher under Pyracantha than under the other treatments.
Ligustrum lucidum (Oleaceae), an exotic bird-dispersed shade-tolerant tree, was the most abundant species recruiting in the area, and its density was four times higher under the canopy of Pyracantha. This positive interaction may be related to Pyracantha's denser shading, to the mechanical protection of its canopy against ungulates, and/or to the simultaneous fruit ripening of both woody invaders." }, { "instance_id": "R56945xR56614", "comparison_id": "R56945", "paper_id": "R56614", "text": "Inhibition between invasives: a newly introduced predator moderates the impacts of a previously established invasive predator 1. With continued globalization, species are being transported and introduced into novel habitats at an accelerating rate. Interactions between invasive species may provide important mechanisms that moderate their impacts on native species. 2. The European green crab Carcinus maenas is an aggressive predator that was introduced to the east coast of North America in the mid-1800 s and is capable of rapid consumption of bivalve prey. A newer invasive predator, the Asian shore crab Hemigrapsus sanguineus, was first discovered on the Atlantic coast in the 1980s, and now inhabits many of the same regions as C. maenas within the Gulf of Maine. Using a series of field and laboratory investigations, we examined the consequences of interactions between these predators. 3. Density patterns of these two species at different spatial scales are consistent with negative interactions. As a result of these interactions, C. maenas alters its diet to consume fewer mussels, its preferred prey, in the presence of H. sanguineus. Decreased mussel consumption in turn leads to lower growth rates for C. maenas, with potential detrimental effects on C. maenas populations. 4. Rather than an invasional meltdown, this study demonstrates that, within the Gulf of Maine, this new invasive predator can moderate the impacts of the older invasive predator." }, { "instance_id": "R56945xR56599", "comparison_id": "R56945", "paper_id": "R56599", "text": "Invasion of the southern Gulf of St. Lawrence by the clubbed tunicate (Styela clava Herdman): Potential mechanisms for invasions of Prince Edward Island estuaries All but one of the nine non-native marine species that established populations in the southern Gulf of St. Lawrence (sGSL) in the past decade initially invaded the sGSL via coastal and estuarine waters of Prince Edward Island (PEI). Almost half of these species are tunicates, and all but one still occur only in PEI. Recent introductions include Styela clava Herdman in 1997, Botryllus schlosseri (Pallas) in 2001, Botrylloides violaceus Oka in 2002, and Ciona intestinalis (Linnaeus) in 2004. The goal of this paper was to investigate which characteristics of PEI estuaries may have resulted in their being more susceptible to tunicate invasions than estuaries elsewhere in the sGSL. At least one genus that recently established viable populations in PEI was previously introduced to the Gulf of St. Lawrence, apparently without establishing permanent populations. This implies that either propagule pressure has increased or environmental factors are more conducive to establishment now than they were previously. The fluctuating resource availability model predicts increased invasibility of environments that experience pulses of resources such as space or nutrients. 
Intense development of both agriculture and aquaculture in PEI, and high population density compared to other areas of the sGSL, are associated with high and fluctuating estuarine nutrient levels and a large surface area of artificial substrates (mussel socks) that is kept relatively free of competitors, and is replaced regularly. Changes in nutrient loading and the development of aquaculture have also occurred within the past two to three decades. The provision of artificial structure is likely a critical factor in the successful establishment of tunicates in PEI, because natural hard substrates are scarce. Facilitation by green crabs (Carcinus maenas L.) may be a contributing factor in the spread of Styela. Only one estuary lacking green crabs has an established population of Styela, and at least two known inoculations of Styela into estuaries without green crabs have failed. A likely mechanism for facilitation is the consumption by green crab of the snail Astyris lunata, a known Styela predator." }, { "instance_id": "R56945xR56903", "comparison_id": "R56945", "paper_id": "R56903", "text": "Quantifying \"apparent\" impact and distinguishing impact from invasiveness in multispecies plant invasions The quantification of invader impacts remains a major hurdle to understanding and managing invasions. Here, we demonstrate a method for quantifying the community-level impact of multiple plant invaders by applying Parker et al.'s (1999) equation (impact = range x local abundance x per capita effect or per unit effect) using data from 620 survey plots from 31 grasslands across west-central Montana, USA. In testing for interactive effects of multiple invaders on native plant abundance (percent cover), we found no evidence for invasional meltdown or synergistic interactions for the 25 exotics tested. While much concern exists regarding impact thresholds, we also found little evidence for nonlinear relationships between invader abundance and impacts. These results suggest that management actions that reduce invader abundance should reduce invader impacts monotonically in this system. Eleven of 25 invaders had significant per unit impacts (negative local-scale relationships between invader and native cover). In decomposing the components of impact, we found that local invader abundance had a significant influence on the likelihood of impact, but range (number of plots occupied) did not. This analysis helped to differentiate measures of invasiveness (local abundance and range) from impact to distinguish high-impact invaders from invaders that exhibit negligible impacts, even when widespread. Distinguishing between high- and low-impact invaders should help refine trait-based prediction of problem species. Despite the unique information derived from evaluation of per unit effects of invaders, invasiveness 'scores based on range and local abundance produced similar rankings to impact scores that incorporated estimates of per unit effects. Hence, information on range and local abundance alone was sufficient to identify problematic plant invaders at the regional scale. In comparing empirical data on invader impacts to the state noxious weed list, we found that the noxious weed list captured 45% of the high impact invaders but missed 55% and assigned the lowest risk category to the highest-impact invader. While such subjective weed lists help to guide invasive species management, empirical data are needed to develop more comprehensive rankings of ecological impacts. 
Using weed lists to classify invaders for testing invasion theory is not well supported." }, { "instance_id": "R56945xR56795", "comparison_id": "R56945", "paper_id": "R56795", "text": "Invasive parasites in multiple invasive hosts: the arrival of a new host revives a stalled prior parasite invasion The success of a biological invasion can depend upon other invasions; and in some cases, an earlier invader may fail to spread until facilitated by a second invader. Our study documents a case whereby an invasive parasite has remained patchily distributed for decades due to the fragmented nature of available hosts; but the recent arrival of a broadly distributed alternative invasive host species provides an opportunity for the parasite to expand its range considerably. At least 20 years ago, endoparasitic pentastomids (Raillietiella frenata) were brought with their native host, the invasive Asian house gecko Hemidactylus frenatus, to the port city of Darwin in tropical Australia. These geckos rarely disperse away from human habitation, restricting the transmission of their parasites to urban environments \u2013 and thus, their pentastomids have remained patchily distributed and have only been recorded in scant localities, primarily surrounding Darwin. The recent range expansion of the invasive cane toad Rhinella marina into the Darwin area has provided an alternative host for this pentastomid. Our results show that the cane toad is a competent host for Ra. frenata\u2013 toads shed fully embryonated pentastomid eggs in their faeces \u2013 and that pentastomids are now common in cane toads near Darwin. Likely reflecting the tendency for the parasite's traditional definitive host (the Asian house gecko) and only known intermediate host (the cockroach) to reside around buildings, we found the prevalence of this parasite follows an urban distribution. Because cane toads are widely distributed through urban and rural habitat and can shed viable pentastomid eggs, the toad invasion is likely to facilitate the parasite's spread across the tropics, into areas (and additional susceptible hosts) that were previously inaccessible to it." }, { "instance_id": "R56945xR56738", "comparison_id": "R56945", "paper_id": "R56738", "text": "Invasional meltdown. Invader-invader mutualism facilitates a secondary invasion In multiply invaded ecosystems, introduced species should interact with each other as well as with native species. Invader-invader interactions may affect the success of further invaders by altering attributes of recipient communities and propagule pressure. The invasional meltdown hypothesis (IMH) posits that positive interactions among invaders initiate positive population-level feedback that intensifies impacts and promotes secondary invasions. IMH remains controversial: few studies show feedback between invaders that amplifies their effects, and none yet demonstrate facilitation of entry and spread of secondary invaders. Our results show that supercolonies of an alien ant, promoted by mutualism with introduced honeydew-secreting scale insects, permitted invasion by an exotic land snail on Christmas Island, Indian Ocean. Modeling of land snail spread over 750 sites across 135 km2 over seven years showed that the probability of land snail invasion was facilitated 253-fold in ant supercolonies but impeded in intact forest where predaceous native land crabs remained abundant. Land snail occurrence at neighboring sites, a measure of propagule pressure, also promoted land snail spread. 
Site comparisons and experiments revealed that ant supercolonies, by killing land crabs but not land snails, disrupted biotic resistance and provided enemy-free space. Predation pressure on land snails was lower (28.6%), survival 115 times longer, and abundance 20-fold greater in supercolonies than in intact forest. Whole-ecosystem suppression of supercolonies reversed the probability of land snail invasion by allowing recolonization of land crabs; land snails were much less likely (0.79%) to invade sites where supercolonies were suppressed than where they remained intact. Our results provide strong empirical evidence for IMH by demonstrating that mutualism between invaders reconfigures key interactions in the recipient community. This facilitates entry of secondary invaders and elevates propagule pressure, propagating their spread at the whole-ecosystem level. We show that identification and management of key facilitative interactions in invaded ecosystems can be used to reverse impacts and restore resistance to further invasions." }, { "instance_id": "R56945xR56682", "comparison_id": "R56945", "paper_id": "R56682", "text": "Facilitative effects of introduced Pacific oysters on native macroalgae are limited by a secondary invader, the seaweed Sargassum muticum Abstract Introduced habitat-providing organisms such as epibenthic bivalves may facilitate the invasion and expansion of further non-native species which may modify the effects of the primary invader on the native system. In the sedimentary intertidal Wadden Sea (south-eastern North Sea) introduced Pacific oysters (Crassostrea gigas) have overgrown native blue mussel beds (Mytilus edulis). These oyster beds are now providing the major attachment substratum for macroalgae. Recently, oysters have expanded their distribution into the shallow subtidal zone of the Wadden Sea, and there support a rich associated species community including the Japanese seaweed Sargassum muticum, which was presumably introduced together with the oysters. With a block-designed field experiment, we explored the effects of S. muticum on the associated community of soft-bottom C. gigas beds in the shallow subtidal. Replicated oyster plots of 1 m\u00b2 were arranged with a density of 0, 7, 15 or 45 S. muticum m\u22122, respectively. We found no effects of different S. muticum densities on the epi- and endobenthic community compositions associated with the oyster plots. However, the overall coverage of sessile organisms settling on the oyster shells was significantly reduced at high S. muticum densities. The occurrence of abundant native macro-algal species such as Polysiphonia nigrescens, Antithamnion plumula and Elachista fucicola decreased with increasing S. muticum densities. Sessile invertebrates, by contrast, were only marginally affected and we found no effects of S. muticum canopy on diversity and abundance of endofauna organisms. We conclude that increasing densities of S. muticum on C. gigas beds in the shallow subtidal zone of the Wadden Sea limit the occurrence of native macroalgae which otherwise would benefit from the additional hard substratum provided by the oysters. Thus, a secondary invader may abolish the effects of the primary invader for native species by occupying the newly formed niche."
}, { "instance_id": "R56945xR56571", "comparison_id": "R56945", "paper_id": "R56571", "text": "Broom and honeybees in Australia: An alien liaison Facilitative interactions between non-indigenous species are gaining recognition as a major driver of invasion success. Cytisus scoparius (L.) Link (Fabaceae), or Scotch broom, is a cosmopolitan invasive shrub that lacks the capacity for vegetative reproduction and is a good model to study facilitative interactions. Its success in pioneer environments is determined by constraints on its reproduction. We determined whether pollinators were required for seed set in C. scoparius at Barrington Tops, NSW, Australia, where the species has infested ca. 14,000 ha across the plateau. Field and laboratory experiments showed that C. scoparius is an obligate outcrossing species at Barrington Tops. Monitoring of plants (10.7 h) showed that the flowers of C. scoparius have to be tripped to effect seed set and the only pollinator to do this was the introduced honeybee, Apis mellifera L. Most floral visits by honeybees result in fruit set (84 %) and because fruits have many ovules (10 - 18 per ovary) a single bee on an average foraging day can effect the production of over 6000 seeds. A review of C. scoparius pollination across four continents revealed major differences in pollen quantity, which may explain differences in the efficiencies of honeybees as pollinators of C. scoparius. The incorporation of pollinator management in an integrated approach for the control of C. scoparius is discussed." }, { "instance_id": "R56945xR56871", "comparison_id": "R56945", "paper_id": "R56871", "text": "Experimental test of the invasional meltdown hypothesis: na exotic herbivore facilitates na exotic plant, but the plant does not reciprocally facilitate the herbivore Summary Ecosystems with multiple exotic species may be affected by facilitative invader interactions, which could lead to additional invasions (invasional meltdown hypothesis). Experiments show that one-way facilitation favours exotic species and observational studies suggest that reciprocal facilitation among exotic species may lead to an invasional meltdown. We conducted a mesocosm experiment to determine whether reciprocal facilitation occurs in wetland communities. We established communities with native wetland plants and aquatic snails. Communities were assigned to treatments: control (only natives), exotic snail (Pomacea maculata) invasion, exotic plant (Alternanthera philoxeroides) invasion, sequential invasion (snails then plants or plants then snails) or simultaneous invasion (snails and plants). Pomacea maculata preferentially consumed native plants, so A. philoxeroides comprised a larger percentage of plant mass and native plant mass was lowest in sequential (snail then plant) invasion treatments. Even though P. maculata may indirectly facilitate A. philoxeroides, A. philoxeroides did not reciprocally facilitate P. maculata. Rather, ecosystems invaded by multiple exotic species may be affected by one-way facilitation or reflect exotic species\u2019 common responses to abiotic factors or common paths of introduction." 
}, { "instance_id": "R56945xR56678", "comparison_id": "R56945", "paper_id": "R56678", "text": "Effects of introduced Canada geese (Branta canadensis) on native plant communities of the southern gulf islands, British Columbia Abstract: Recent experiments suggest that introduced, non-migratory Canada geese (Branta canadensis) may be facilitating the spread of exotic grasses and decline of native plant species abundance on small islets in the Georgia Basin, British Columbia, which otherwise harbour outstanding examples of threatened maritime meadow ecosystems. We examined this idea by testing if the presence of geese predicted the abundance of exotic grasses and native competitors at 2 spatial scales on 39 islands distributed throughout the Southern Gulf and San Juan Islands of Canada and the United States, respectively. At the plot level, we found significant positive relationships between the percent cover of goose feces and exotic annual grasses. However, this trend was absent at the scale of whole islands. Because rapid population expansion of introduced geese in the region only began in the 1980s, our results are consistent with the hypothesis that the deleterious effects of geese on the cover of exotic annual grasses have yet to proceed beyond the local scale, and that a window of opportunity now exists in which to implement management strategies to curtail this emerging threat to native ecosystems. Research is now needed to test if the removal of geese results in the decline of exotic annual grasses." }, { "instance_id": "R56945xR56923", "comparison_id": "R56945", "paper_id": "R56923", "text": "Early life stages of exotic gobiids as new hosts for unionid glochidia Summary Introduction of an exotic species has the potential to alter interactions between fish and bivalves; yet our knowledge in this field is limited, not least by lack of studies involving fish early life stages (ELS). Here, for the first time, we examine glochidial infection of fish ELS by native and exotic bivalves in a system recently colonised by two exotic gobiid species (round goby Neogobius melanostomus, tubenose goby Proterorhinus semilunaris) and the exotic Chinese pond mussel Anodonta woodiana. The ELS of native fish were only rarely infected by native glochidia. By contrast, exotic fish displayed significantly higher native glochidia prevalence and mean intensity of infection than native fish (17 versus 2% and 3.3 versus 1.4 respectively), inferring potential for a parasite spillback/dilution effect. Exotic fish also displayed a higher parasitic load for exotic glochidia, inferring potential for invasional meltdown. Compared to native fish, presence of gobiids increased the total number of glochidia transported downstream on drifting fish by approximately 900%. We show that gobiid ELS are a novel, numerous and \u2018attractive\u2019 resource for unionid glochidia. As such, unionids could negatively affect gobiid recruitment through infection-related mortality of gobiid ELS and/or reinforce downstream unionid populations through transport on drifting gobiid ELS. These implications go beyond what is suggested in studies of older life stages, thereby stressing the importance of an holistic ontogenetic approach in ecological studies." 
}, { "instance_id": "R56945xR56650", "comparison_id": "R56945", "paper_id": "R56650", "text": "Removal of invasive shrubs reduces exotic earthworm populations Invasive species are a leading threat to native ecosystems, and research regarding their effective control is at the forefront of applied ecology. Exotic facilitation has been credited with advancing the success of several aggressive invasive species. Here, we suggest using the knowledge of exotic facilitations to control invasive earthworm populations. In northern hardwood forests, the invasive shrubs Rhamnus cathartica (buckthorn) and Lonicera x bella (honeysuckle) produce high quality leaf litter, and their abundance is positively correlated with exotic earthworms, which increase nutrient cycling rates. We performed an invasive plant removal experiment in two northern hardwood forest stands, one dominated by buckthorn and the other by honeysuckle. Removal of invasive shrubs reduced exotic earthworm populations by roughly 50% for the following 3 years. By targeting invasive species that are part of positive feedback loops, land managers can multiply the positive effects of invasive species removal." }, { "instance_id": "R56945xR56640", "comparison_id": "R56945", "paper_id": "R56640", "text": "Role of invasive Melilotus officinalis in two native plant communities This study examines the impact of the exotic nitrogen-fixing legume Melilotus officinalis (L.) Lam. on native and exotic species cover in two Great Plains ecosystems in Badlands National Park, South Dakota. Melilotus is still widely planted and its effects on native ecosystems are not well studied. Melilotus could have direct effects on native plants, such as through competition or facilitation. Alternatively, Melilotus may have indirect effects on natives, e.g., by favoring exotic species which in turn have a negative effect on native species. This study examined these interactions across a 4-year period in two contrasting vegetation types: Badlands sparse vegetation and western wheatgrass (Pascopyrum smithii) mixed-grass prairie. Structural equation models were used to analyze the pathways through which Melilotus, native species, and other exotic species interact over a series of 2-year time steps. Melilotus can affect native and exotic species both in the current year and in the years after its death (a lag effect). A lag effect is possible because the death of a Melilotus plant can leave an open, potentially nitrogen-enriched site on the landscape. The results showed that the relationship between Melilotus and native and exotic species varied depending on the habitat and the year. In Badlands sparse vegetation, there was a consistent, strong, and positive relationship between Melilotus cover and native and exotic species cover suggesting that Melilotus is acting as a nurse plant and facilitating the growth of other species. In contrast, in western wheatgrass prairie, Melilotus was acting as a weak competitor and had no consistent effect on other species. In both habitats, there was little evidence for a direct lag effect of Melilotus on other species. Together, these results suggest both facilitative and competitive roles for Melilotus, depending on the vegetation type it invades." 
}, { "instance_id": "R56945xR56911", "comparison_id": "R56945", "paper_id": "R56911", "text": "Invasion of the redback spider Latrodectus hasseltii (Araneae: Theridiidae) into human-modified sand dune ecosystems in Japan Invasions of some areas of Japan by the exotic redback spider Latrodectus hasseltii Thorell (Araneae: Theridiidae) have been reported. While most of these invasions have occurred in urban areas, anthropogenic habitat modifications may provide an opportunity for L. hasseltii to invade semi-natural ecosystems, but the ecological impacts of L. hasseltii have only rarely been studied. We therefore examined the distribution of L. hasseltii in sand dune ecosystems and its potential impacts on other animals. In addition, we surveyed the occurrence of spiders on the exotic yucca Yucca gloriosa L. (Asparagaceae), another invader of sand dune ecosystems. Latrodectus hasseltii was observed in six of 18 sand dunes in the Chita Peninsula, central Japan, and was the dominant web-building spider at one site. The web contents of L. hasseltii consisted of various arthropod species, including the threatened ground beetle Scarites sulcatus Olivier (Carabidae). In all, 24 of 172 patches of exotic yucca were occupied by L. hasseltii, suggesting that colonization by exotic plants may facilitate the invasion of L. hasseltii into sand dunes. This is the first report of the invasion of L. hasseltii into semi-natural habitats in Japan, and these results suggest that L. hasseltii poses a threat to the conservation of coastal insects inhabiting human-modified sand dune ecosystems." }, { "instance_id": "R56945xR56869", "comparison_id": "R56945", "paper_id": "R56869", "text": "A single ectomycorrhizal fungal species can enable a Pinus invasion Like all obligately ectomycorrhizal plants, pines require ectomycorrhizal fungal symbionts to complete their life cycle. Pines introduced into regions far from their native range are typically incompatible with local ectomycorrhizal fungi, and, when they invade, coinvade with fungi from their native range. While the identities and distributions of coinvasive fungal symbionts of pine invasions are poorly known, communities that have been studied are notably depauperate. However, it is not yet clear whether any number of fungal coinvaders is able to support a Pinaceae invasion, or whether very depauperate communities are unable to invade. Here, we ask whether there is evidence for a minimum species richness of fungal symbionts necessary to support a pine/ectomycorrhizal fungus coinvasion. We sampled a Pinus contorta invasion front near Coyhaique, Chile, using molecular barcoding to identify ectomycorrhizal fungi. We report that the site has a total richness of four species, and that many invasive trees appear to be supported by only a single ectomycorrhizal fungus, Suillus luteus. We conclude that a single ectomycorrhizal (ECM) fungus can suffice to enable a pine invasion." }, { "instance_id": "R56945xR56670", "comparison_id": "R56945", "paper_id": "R56670", "text": "Do rabbits eat voles? Apparent competition, habitat heterogeneity and large-scale coexistence under mink predation Habitat heterogeneity is predicted to profoundly influence the dynamics of indirect interspecific interactions; however, despite potentially significant consequences for multi-species persistence, this remains almost completely unexplored in large-scale natural landscapes. 
Moreover, how spatial habitat heterogeneity affects the persistence of interacting invasive and native species is also poorly understood. Here we show how the persistence of a native prey (water vole, Arvicola terrestris) is determined by the spatial distribution of an invasive prey (European rabbit, Oryctolagus cuniculus) and directly infer how this is defined by the mobility of a shared invasive predator (American mink, Neovison vison). This study uniquely demonstrates that variation in habitat connectivity in large-scale natural landscapes creates spatial asynchrony, enabling coexistence between apparent competitive native and invasive species. These findings highlight that unexpected interactions may be involved in species declines, and also that in such cases habitat heterogeneity should be considered in wildlife management decisions." }, { "instance_id": "R56945xR56933", "comparison_id": "R56945", "paper_id": "R56933", "text": "A synergistic trio of invasive mammals? Facilitative interactions among beavers, muskrats, and mink at the southern end of the Americas With ecosystems increasingly having co-occurring invasive species, it is becoming more important to understand invasive species interactions. At the southern end of the Americas, American beavers (Castor canadensis), muskrats (Ondatra zibethicus), and American mink (Neovison vison) were independently introduced. We used generalized linear models to investigate how muskrat presence related to beaver-modified habitats on Navarino Island, Chile. We also investigated the trophic interactions of the mink with muskrats and beavers by studying mink diet. Additionally, we proposed a conceptual species interaction framework involving these invasive species on the new terrestrial community. Our results indicated a positive association between muskrat presence and beaver-modified habitats. Model average coefficients indicated that muskrats preferred beaver-modified freshwater ecosystems, compared to undammed, naturally flowing streams. In addition, mammals and fish represented the main prey items for mink. Although fish were mink\u2019s dominant prey in marine coastal habitats, muskrats represented >50 % of the biomass of mink diet in inland environments. We propose that beavers affect river flow and native vegetation, changing forests into wetlands with abundant grasses and rush vegetation. Thus, beavers facilitate the existence of muskrats, which in turn sustain inland mink populations. The latter have major impacts on the native biota, especially on native birds and small rodents. The facilitative interactions among beavers, muskrats, and mink that we explored in this study, together with other non-native species, suggest that an invasive meltdown process may exist; however further research is needed to confirm this hypothesis. Finally, we propose a community-level management to conserve the biological integrity of native ecosystems." }, { "instance_id": "R56945xR56551", "comparison_id": "R56945", "paper_id": "R56551", "text": "An emergent multiple predator effect may enhance biotic resistance in a stream fish assemblage While two cyprinid fishes introduced from nearby drainages have become widespread and abundant in the Eel River of northwestern California, a third nonindigenous cyprinid has remained largely confined to \u226425 km of one major tributary (the Van Duzen River) for at least 15 years.
The downstream limit of this species, speckled dace, does not appear to correspond with any thresholds or steep gradients in abiotic conditions, but it lies near the upstream limits of three other fishes: coastrange sculpin, prickly sculpin, and nonindigenous Sacramento pikeminnow. We conducted a laboratory stream experiment to explore the potential for emergent multiple predator effects to influence biotic resistance in this situation. Sculpins in combination with Sacramento pikeminnow caused greater mortality of speckled dace than predicted based on their separate effects. In contrast to speckled dace, 99% of sculpin survived trials with Sacramento pikeminnow, in part because sculpin usually occupied benthic cover units while Sacramento pikeminnow occupied the water column. A 10-fold difference in benthic cover availability did not detectably influence biotic interactions in the experiment. The distribution of speckled dace in the Eel River drainage may be limited by two predator taxa with very different patterns of habitat use and a shortage of alternative habitats." }, { "instance_id": "R56945xR56658", "comparison_id": "R56945", "paper_id": "R56658", "text": "Experimental test of the impacts of feral hogs on forest dynamics and processes in the southeastern US Abstract The foraging activities of nonindigenous feral hogs (Sus scrofa) create widespread, conspicuous soil disturbances. Hogs may impact forest regeneration dynamics through both direct effects, such as consumption of seeds, or indirectly via changes in disturbance frequency or intensity. Because they incorporate litter and live plant material into the soil, hogs may also influence ground cover and soil nutrient concentrations. We investigated the impacts of exotic feral hogs in a mixed pine-hardwood forest in the Big Thicket National Preserve (Texas, USA) where they are abundant. We established sixteen 10 m \u00d7 10 m plots and fenced eight of them to exclude feral hogs for 7 years. Excluding hogs increased the diversity of woody plants in the understory. Large seeded (>250 mg) species known to be preferred forage of feral hogs all responded positively to hog exclusion, thus consumption of Carya (hickory nuts), Quercus (acorns), and Nyssa seeds (tupelo) by hogs may be causing this pattern. The only exotic woody species, Sapium sebiferum (Chinese tallow tree), was more than twice as abundant with hogs present, perhaps as a response to increased disturbance. Hogs increased the amount of bare soil by decreasing the amounts of plant cover and surface litter. Plots with hogs present had lower soil C:N, possibly due to accelerated rates of nitrogen mineralization. These results demonstrate that hogs may influence future overstory composition and reduce tree diversity in this forest. Management of hogs may be desirable in this and other forests where large-seeded species are an important component of the ecosystem. Further, by accelerating litter breakdown and elevating nitrogen in the soil, hogs have the potential to impact local vegetation composition via nitrogen inputs as well." }, { "instance_id": "R56945xR56740", "comparison_id": "R56945", "paper_id": "R56740", "text": "Reduced fecundity by one invader in the presence of another. A potential mechanism leading to species replacement Abstract As invasive species proliferate and expand their ranges, they often interact either with natives or with other invasives across a broad geographic range. 
Moreover, because geographic ranges span a diversity of environments, the outcome of interactions between species pairs may vary spatially. The European green crab Carcinus maenas and the Asian shore crab Hemigrapsus sanguineus are both introduced species in North America where they co-occur over a large portion of the Atlantic coast. While interactions between the two crabs in the southern portion of this range within Long Island Sound resulted in the elimination of C. maenas within 2\u20133 years of H. sanguineus' arrival, species replacement appears to be taking much longer in northern areas within the Gulf of Maine. Previous work implicates predation by H. sanguineus on C. maenas recruits as the mechanism underlying species replacement. Here we explore an alternative or additional mechanism underlying this species replacement that can also account for the observed spatial variation in the timescale of species replacement between northern and southern areas. Specifically, we demonstrate that a previously documented shift in C. maenas diet which occurs in the presence of H. sanguineus can cause a reduction in C. maenas fecundity. This, combined with near-shore current patterns may explain the regional differences in the outcome of this species interaction." }, { "instance_id": "R56945xR56776", "comparison_id": "R56945", "paper_id": "R56776", "text": "Role of top-down and bottom-up forces on the invasibility of intertidal macroalgal assemblages Despite the available information regarding the negative effects of non-indigenous species (NIS) on ecosystem structure and functioning, the mechanisms controlling NIS invasion remain poorly understood. Here, we investigated the relative roles of top-down and bottom-up control on the invasion of intertidal macroalgal assemblages by the macroalga Sargassum muticum (Yendo) Fensholt. Using a factorial experiment, nutrient availability and intensity of herbivory were manipulated along an intertidal rocky shore. We found that early recruitment of S. muticum was enhanced by low nutrient enrichment but no effect of grazers was observed. In contrast, at the end of the experiment (9 months after invasion) top-down control, together with the number of NIS and the percentage cover of ephemerals, was a significant predictor for the invasion success of S. muticum. In addition, both top-down and bottom-up forces played a significant role in structuring macroalgal assemblages, which indirectly could have influenced invasion success. Hence, by shaping community structure, main and interactive effects of bottom-up and top-down forces may indirectly act on invasion. Our study highlights the importance of the recipient community structure on the invasion process and emphasizes the specific regulation of top-down and bottom-up forces in different stages of S. muticum invasion. \u00a9 2012 Published by Elsevier B.V." }, { "instance_id": "R56945xR56744", "comparison_id": "R56945", "paper_id": "R56744", "text": "Invasional meltdown. Pollination of the invasive liana Passiflora tripartita var mollissima (Passifloraceae) in New Zealand Banana passionfruit (Passiflora tripartita var. mollissima) is an invasive vine in New Zealand where it lacks its natural hummingbird pollinator. We investigated the mating system and reproductive traits that facilitate its spread in the Marlborough Sounds. Flower observations revealed that visitors were almost exclusively introduced honeybees and bumblebees, indicating an invasive mutualism.
We investigated the pollination system of banana passionfruit by comparing fruit set, fruit size, seed set, and germination success between hand-selfed, hand-crossed, bagged and open flowers, and inbreeding depression in seedlings grown in competition. Fruit set was reduced by 83% when pollinators were excluded (3.0% fruit set, compared with 18.0% for unmanipulated flowers) indicating reliance on pollinators for reproduction. While banana passionfruit is partially self-compatible, fruit set was significantly reduced in hand-selfed flowers (17.5%) compared with crossed flowers (29.5%), and we found significant pollen limitation (hand-crossed vs unmanipulated, Pollen Limitation Index = 0.39). There was no significant inbreeding depression found in fruit size, seeds per fruit, germination success, seedling growth or seedling survival. Combining these data showed that natural unmanipulated flowers produce more seedlings per flower (1.7) than bagged flowers (0.9), but fewer than hand-selfed (3.0) and hand-crossed (5.3) flowers. Thus, reproduction in Passiflora tripartita var. mollissima is facilitated by an (imperfect) new association with exotic bees." }, { "instance_id": "R56945xR56708", "comparison_id": "R56945", "paper_id": "R56708", "text": "Interactions between two introduced species: Zostera japonica (dwarf eelgrass) facilitates itself and reduces condition of Ruditapes philippinarum (Manila clam) on intertidal flats Dwarf eelgrass (duckgrass; Zostera japonica) and Manila clams (Ruditapes philippinarum) are two introduced species that co-occur on intertidal flats of the northeast Pacific. Through factorial manipulation of clam (0, 62.5, 125 clams m\u22122) and eelgrass density (present, removed by hand, harrowed), we examined intra- and interspecific effects on performance, as well as modification of the physical environment. The presence of eelgrass reduced water flow by up to 40% and was also observed to retain water at low tide, which may ameliorate desiccation and explain why eelgrass grew faster in the presence of conspecifics (positive feedback). Although shell growth of small (20\u201350 mm) clams was not consistently affected by either treatment in this 2-month experiment, clam condition improved when eelgrass was removed. Reciprocally, clams at aquaculture densities had no effect on eelgrass growth, clam growth and condition, or porewater nutrients. Overall, only Z. japonica demonstrated strong population-level interactions. Interspecific results support an emerging paradigm that invasive marine ecosystem engineers often negatively affect infauna. Positive feedbacks for Z. japonica may characterize its intraspecific effects particularly at the stressful intertidal elevation of this study (+1 m above mean lower low water)." }, { "instance_id": "R56945xR56732", "comparison_id": "R56945", "paper_id": "R56732", "text": "Identification of alien predators that should not be removed for controlling invasive crayfish threatening endangered odonates 1. When multiple invasive species coexist in the same ecosystem and their diets change as they grow, determining whether to eradicate any particular invader is difficult because of complex predator\u2013prey interactions. 2. 
A stable isotope food-web analysis was conducted to explore an appropriate management strategy for three potential alien predators (snakehead Channa argus, bullfrog Rana catesbeiana, red-eared slider turtle Trachemys scripta elegans) of invasive crayfish Procambarus clarkii that had severely reduced the densities of endangered odonates in a pond in Japan. 3. The stable isotope analysis demonstrated that medium- and small-sized snakeheads primarily depended on crayfish and stone moroko Pseudorasbora parva. Both adult and juvenile bullfrogs depended on terrestrial arthropods, and juveniles exhibited a moderate dependence on crayfish. The turtle showed little dependence on crayfish. 4. These results suggest that eradication of snakeheads risks the possibility of mesopredator release, while such risk appears to be low in other alien predators. Copyright \u00a9 2011 John Wiley & Sons, Ltd." }, { "instance_id": "R56945xR56662", "comparison_id": "R56945", "paper_id": "R56662", "text": "The diversity of juvenile salmonids does not affect their competitive impact on a native galaxiid We used an invaded stream fish community in southern Chile to experimentally test whether the diversity of exotic species affects their competitive impact on a native species. In artificial enclosures an established invasive, rainbow trout, Oncorhynchus mykiss, and a potential invader, Atlantic salmon, Salmo salar, reduced the growth rate of native peladilla, Aplochiton zebra, by the same amount. In enclosures with both exotic salmonids, the growth rates of all three species were the same as in single exotic treatments. While neither species identity nor diversity appeared to affect competitive interactions in this experiment, the impact of salmonid diversity may vary with the type of interspecific interaction and/or the species identity of the exotics. Our experiment links two prominent concepts in invasion biology by testing whether the result of invasional meltdown, an increase in the diversity of exotic species, affects their impact through interspecific competition, the mechanism invoked by the biotic resistance hypothesis." }, { "instance_id": "R56945xR56557", "comparison_id": "R56945", "paper_id": "R56557", "text": "The portability of foodweb dynamics: reassembling an Australian eucalypt-psyllid-bird association within California Aims. To evaluate the role of native predators (birds) within an Australian foodweb (lerp psyllids and eucalyptus trees) reassembled in California. Location. Eucalyptus groves within Santa Cruz, California. Methods. We compared bird diversity and abundance between a eucalyptus grove infested with lerp psyllids and a grove that was uninfested, using point counts. We documented shifts in the foraging behaviour of birds between the groves using structured behavioural observations. Additionally, we judged the effect of bird foraging on lerp psyllid abundance using exclosure experiments. Results. We found a greater richness and abundance of Californian birds within a psyllid-infested eucalyptus grove compared to a matched non-infested grove, and that Californian birds modify their foraging behaviour within the infested grove in order to concentrate on ingesting psyllids. This suggests that Californian birds could provide indirect top-down benefits to eucalyptus trees similar to those observed in Australia. However, using bird exclosure experiments, we found no evidence of top-down control of lerp psyllids by Californian birds. Main conclusions.
We suggest that physiological and foraging differences between Californian and Australian psyllid-eating birds account for the failure to observe top-down control of psyllid populations in California. The increasing rate of non-indigenous species invasions has produced local biotas that are almost entirely composed of non-indigenous species. This example illustrates the complex nature of cosmopolitan native-exotic food webs, and the ecological insights obtainable through their study. \u00a9 2004 Blackwell Publishing Ltd." }, { "instance_id": "R56945xR56648", "comparison_id": "R56945", "paper_id": "R56648", "text": "Implications of beaver Castor canadensis and trout introductions on native fish in the Cape Horn biosphere reserve, Chile Abstract Invasive species threaten global biodiversity, but multiple invasions make predicting the impacts difficult because of potential synergistic effects. We examined the impact of introduced beaver Castor canadensis, brook trout Salvelinus fontinalis, and rainbow trout Oncorhynchus mykiss on native stream fishes in the Cape Horn Biosphere Reserve, Chile. The combined effects of introduced species on the structure of the native freshwater fish community were quantified by electrofishing 28 stream reaches within four riparian habitat types (forest, grassland, shrubland, and beaver-affected habitat) in 23 watersheds and by measuring related habitat variables (water velocity, substrate type, depth, and the percentage of pools). Three native stream fish species (puye Galaxias maculatus [also known as inanga], Aplochiton taeniatus, and A. zebra) were found along with brook trout and rainbow trout, but puye was the only native species that was common and widespread. The reaches affected by beaver impoundmen..." }, { "instance_id": "R56945xR56652", "comparison_id": "R56945", "paper_id": "R56652", "text": "Tree leaf litter composition and nonnative earthworms influence plant invasion in experimental forest floor mesocosms Dominant tree species influence community and ecosystem components through the quantity and quality of their litter. Effects of litter may be modified by activity of ecosystem engineers such as earthworms. We examined the interacting effects of forest litter type and earthworm presence on invasibility of plants into forest floor environments using a greenhouse mesocosm experiment. We crossed five litter treatments mimicking historic and predicted changes in dominant tree composition with a treatment of either the absence or presence of nonnative earthworms. We measured mass loss of each litter type and growth of a model nonnative plant species (Festuca arundinacea, fescue) sown into each mesocosm. Mass loss was greater for litter of tree species characterized by lower C:N ratios. Earthworms enhanced litter mass loss, but only for species with lower C:N, leading to a significant litter \u00d7 earthworm interaction. Fescue biomass was significantly greater in treatments with litter of low C:N and greater mass loss, suggesting that rapid decomposition of forest litter may be more favorable to understory plant invasions. Earthworms were expected to enhance invasion by increasing mass loss and removing the physical barrier of litter. However, earthworms typically reduced invasion success, but not under invasive tree litter, where the presence of earthworms facilitated invasion success compared to other litter treatments where earthworms were present.
We conclude that past and predicted future shifts in dominant tree species may influence forest understory invasibility. The presence of nonnative earthworms may either suppress or facilitate invasibility depending on the dominant overstory tree species and the litter layers they produce." }, { "instance_id": "R56945xR56750", "comparison_id": "R56945", "paper_id": "R56750", "text": "Evaluating the effect of American mink, an alien invasive species, on the abundance of a native community. Is coexistence possible? Loss of biodiversity due to biological invasions is one of the most critical issues our society is facing. American mink is one of the most nefarious invasive non-native species and has major consequences for diversity, ecosystems and economics. A project to evaluate the impact of American mink has been carried out in Catalonia since 2000 under the aegis of regional and national government and a European LIFE programme. In this study, we tested whether temporal variations in the relative abundance of native species were related to American mink. In addition, we compared the abundance of natives before and after mink arrival. Among the competitors (spotted genet and European polecat), mink abundance and arrival had a significant negative effect on their populations. However, among black rat and fish prey, only three native fish species had a negative temporal relation with the abundance of mink and three fish species showed a significant difference in their abundance before and after mink arrival. The effect of mink was significant among species with a higher niche overlap (polecat and genet versus mink). The persistence and coexistence of the alien and native species seems to depend on heterogeneity, in terms of niche segregation among these species." }, { "instance_id": "R56945xR56917", "comparison_id": "R56945", "paper_id": "R56917", "text": "Feeding behaviour, predatory functional response and trophic interactions of the invasive Chinese mitten crab (Eriocheir sinensis) and signal crayfish (Pacifastacus leniusculus) 1. Freshwaters are subject to particularly high rates of species introductions; hence, invaders increasingly co-occur and may interact to enhance impacts on ecosystem structure and function. As trophic interactions are a key mechanism by which invaders influence communities, we used a combination of approaches to investigate the feeding preferences and community impacts of two globally invasive large benthic decapods that co-occur in freshwaters: the signal crayfish (Pacifastacus leniusculus) and Chinese mitten crab (Eriocheir sinensis). 2. In laboratory preference tests, both consumed similar food items, including chironomids, isopods and the eggs of two coarse fish species. In a comparison of predatory functional responses with a native crayfish (Austropotamobius pallipes), juvenile E. sinensis had a greater predatory intensity than the native A. pallipes on the keystone shredder Gammarus pulex, and also displayed a greater preference than P. leniusculus for this prey item. 3. In outdoor mesocosms (n = 16) used to investigate community impacts, the abundance of amphipods, isopods, chironomids and gastropods declined in the presence of decapods, and a decapod > gastropod > periphyton trophic cascade was detected when both species were present. Eriocheir sinensis affected a wider range of animal taxa than P. leniusculus. 4.
Stable-isotope and gut-content analysis of wild-caught adult specimens of both invaders revealed a wide and overlapping range of diet items including macrophytes, algae, terrestrial detritus, macroinvertebrates and fish. Both decapods were similarly enriched in 15N and occupied the same trophic level as Ephemeroptera, Odonata and Notonecta. Eriocheir sinensis \u03b413C values were closely aligned with macrophytes indicating a reliance on energy from this basal resource, supported by evidence of direct consumption from gut contents. Pacifastacus leniusculus \u03b413C values were intermediate between those of terrestrial leaf litter and macrophytes, suggesting reliance on both allochthonous and autochthonous energy pathways. 5. Our results suggest that E. sinensis is likely to exert a greater per capita impact on the macroinvertebrate communities in invaded systems than P. leniusculus, with potential indirect effects on productivity and energy flow through the community." }, { "instance_id": "R56945xR56638", "comparison_id": "R56945", "paper_id": "R56638", "text": "Positive interactions among plant species for pollinator service: assessing the 'magnet species' concept with invasive species Plants with poorly attractive flowers or with little floral rewards may have inadequate pollinator service, which in turn reduces seed output. However, pollinator service of less attractive species could be enhanced when they are associated with species with highly attractive flowers (so called \u2018magnet-species\u2019). Although several studies have reported the magnet species effect, few of them have evaluated whether this positive interaction results in an enhancement of the seed output for the beneficiary species. Here, we compared pollinator visitation rates and seed output of the invasive annual species Carduus pycnocephalus when growing associated with shrubs of the invasive Lupinus arboreus and when growing alone, and hypothesized that L. arboreus acts as a magnet species for C. pycnocephalus. Results showed that C. pycnocephalus individuals associated with L. arboreus had higher pollinator visitation rates and higher seed output than individuals growing alone. The higher visitation rates of C. pycnocephalus associated to L. arboreus were maintained after accounting for flower density, which consistently supports our hypothesis on the magnet species effect of L. arboreus. Given that both species are invasives, the facilitated pollination and reproduction of C. pycnocephalus by L. arboreus could promote its naturalization in the community, suggesting a synergistic invasional process contributing to an \u2018invasional meltdown\u2019. The magnet effect of Lupinus on Carduus found in this study seems to be one of the first examples of indirect facilitative interactions via increased pollination among invasive species." }, { "instance_id": "R56945xR56746", "comparison_id": "R56945", "paper_id": "R56746", "text": "Ecology of brushtail possums in New Zealand dryland ecosystem The introduced brushtail possum (Trichosurus vulpecula) is a major environmental and agricultural pest in New Zealand but little information is available on the ecology of possums in drylands, which cover c. 19% of the country. Here, we describe a temporal snapshot of the diet and feeding preferences of possums in a dryland habitat in New Zealand's South Island, as well as movement patterns and survival rates. We also briefly explore spatial patterns in capture rates. We trapped 279 possums at an average capture rate of 9 possums per 100 trap nights.
Capture rates on individual trap lines varied from 0 to 38%, decreased with altitude, and were highest in the eastern (drier) parts of the study area. Stomach contents were dominated by forbs and sweet briar (Rosa rubiginosa); both items were consumed preferentially relative to availability. Possums also strongly preferred crack willow (Salix fragilis), which was uncommon in the study area and consumed only occasionally, but in large amounts. Estimated activity areas of 29 possums radio-tracked for up to 12 months varied from 0.2 to 19.5 ha (mean 5.1 ha). Nine possums (4 male, 5 female) undertook dispersal movements (\u22651000 m), the longest of which was 4940 m. The most common dens of radio-collared possums were sweet briar shrubs, followed by rock outcrops. Estimated annual survival was 85% for adults and 54% for subadults. Differences between the diets, activity areas and den use of possums in this study and those in forest or farmland most likely reflect differences in availability and distribution of resources. Our results suggest that invasive willow and sweet briar may facilitate the existence of possums by providing abundant food and shelter. In turn, possums may facilitate the spread of weeds by acting as a seed vector. This basic ecological information will be useful in modelling and managing the impacts of possum populations in drylands." }, { "instance_id": "R56945xR56935", "comparison_id": "R56945", "paper_id": "R56935", "text": "Synergistic impacts by an invasive amphipod and an invasive fish explain native gammarid extinction Background Worldwide freshwater ecosystems are increasingly affected by invasive alien species. In particular, Ponto-Caspian gobiid fishes and amphipods are suspected to have pronounced effects on aquatic food webs. However, there is a lack of systematic studies mechanistically testing the potential synergistic effects of invasive species on native fauna. In this study we investigated the interrelations between the invasive amphipod Dikerogammarus villosus and the invasive fish species Neogobius melanostomus in their effects on the native amphipod Gammarus pulex. We hypothesized selective predation by the fish as a driver for displacement of native species resulting in potential extinction of G. pulex. The survival of G. pulex in the presence of N. melanostomus in relation to the presence of D. villosus and availability of shelter was analyzed in the context of behavioural differences between the amphipod species. Results Gammarus pulex had a significantly higher susceptibility to predation by N. melanostomus compared to D. villosus in all experiments, suggesting preferential predation by this fish on native gammarids. Furthermore, the presence of D. villosus significantly increased the vulnerability of G. pulex to fish predation. Habitat structure was an important factor for swimming activity of amphipods and their mortality, resulting in a threefold decrease in amphipods consumed with shelter habitat structures provided. Behavioral differences in swimming activity were additionally responsible for higher predation rates on G. pulex. Intraguild predation could be neglected within short experimental durations. Conclusions The results of this study provide evidence for synergistic effects of the two invasive Ponto-Caspian species on the native amphipod as an underlying process of species displacements during invasion processes.
Prey behaviour and monotonous habitat structures additionally contribute to the decline of the native gammarid fauna in the upper Danube River and elsewhere." }, { "instance_id": "R56945xR56720", "comparison_id": "R56945", "paper_id": "R56720", "text": "Plant-based food resources, trophic interactions among alien species, and the abundance of an invasive ant Recent research on invasive ants suggests that their success may be facilitated by increased resources at introduced locations stemming from the emergence of novel trophic interactions with abundant honeydew-producing Hemiptera. Moreover, those Hemiptera may themselves often be introduced or invasive. To test the importance of mutualisms for invasive species, we conducted a study in the southeastern United States of factors hypothesized to affect the abundance of an invasive ant native to South America, Solenopsis invicta. The study was conducted within grazing pastures, where S. invicta can be extremely abundant while also exhibiting substantial variability in abundance. A path analysis showed that the abundance of S. invicta was strongly and positively affected by the abundance of an invasive honeydew-producing mealybug native to Asia, Antonina graminis, and by the mealybugs\u2019 host grasses because of their strong positive effect on mealybug abundance. Abundance of the mealybug was primarily attributable to an invasive host grass native to Africa, Cynodon dactylon. The abundance of S. invicta was also positively affected by the abundance of other arthropods that they are likely to consume, and those arthropods were positively affected by the abundance of both the A. graminis host grasses and other plants. Thus the study shows that the distribution and abundance of different plant species could have important effects on the abundance of S. invicta through their effect on the ants\u2019 food resources. The results are also consistent with the hypothesis that the emergence of novel trophic interactions among invasive species can promote the abundance of invasive ants." }, { "instance_id": "R56945xR56642", "comparison_id": "R56945", "paper_id": "R56642", "text": "New mutualism for old: indirect disruption and direct facilitation of seed dispersal following Argentine ant invasion The indirect effects of biological invasions on native communities are poorly understood. Disruption of native ant communities following invasion by the Argentine ant (Linepithema humile) is widely reported to lead indirectly to the near complete collapse of seed dispersal services. In coastal scrub in southeastern Australia, we examined seed dispersal and handling of two native and two invasive alien plant species at Argentine ant-invaded or -uninvaded sites. The Argentine ant virtually eliminates the native keystone disperser Rhytidoponera victoriae, but seed dispersal did not collapse following invasion. Indeed, Argentine ants directly accounted for 92% of all ant-seed interactions and sustained overall seed dispersal rates. Nevertheless, dispersal quantity and quality among seed species differed between Argentine ant-invaded and -uninvaded sites. Argentine ants removed significantly fewer native Acacia retinodes seeds, but significantly more small seeds of invasive Polygala myrtifolia than did native ants at uninvaded sites. They also handled significantly more large seeds of A. sophorae, but rarely moved them >5 cm, instead recruiting en masse, consuming elaiosomes piecemeal and burying seeds in situ.
In contrast, Argentine ants transported and interred P. myrtifolia seeds in their shallow nests. Experiments with artificial diaspores that varied in diaspore and elaiosome masses, but kept seed morphology and elaiosome quality constant, showed that removal by L. humile depended on the interaction of seed size and percentage elaiosome reward. Small diaspores were frequently taken, independent of high or low elaiosome reward, but large artificial diaspores with high reward instead elicited mass recruitment by Argentine ants and were rarely moved. Thus, Argentine ants appear to favour some diaspore types and reject others based largely on diaspore size and percentage reward. Such variability in response indirectly reduces native seed dispersal and can directly facilitate the spread of an invasive alien shrub." }, { "instance_id": "R56945xR56626", "comparison_id": "R56945", "paper_id": "R56626", "text": "Enemy release or invasional meltdown? Deer preference for exotic and native trees on Isla Victoria, Argentina How interactions between exotic species affect invasion impact is a fundamental issue on both theoretical and applied grounds. Exotics can facilitate establishment and invasion of other exotics (invasional meltdown) or they can restrict them by re-establishing natural population control (as predicted by the enemy-release hypothesis). We studied forest invasion on an Argentinean island where 43 species of Pinaceae, including 60% of the world's recorded invasive Pinaceae, were introduced c. 1920 but where few species are colonizing pristine areas. In this area two species of Palearctic deer, natural enemies of most Pinaceae, were introduced 80 years ago. Expecting deer to help to control the exotics, we conducted a cafeteria experiment to assess deer preferences among the two dominant native species (a conifer, Austrocedrus chilensis, and a broadleaf, Nothofagus dombeyi) and two widely introduced exotic tree species (Pseudotsuga menziesii and Pinus ponderosa). Deer browsed much more intensively on native species than on exotic conifers, in terms of number of individuals attacked and degree of browsing. Deer preference for natives could potentially facilitate invasion by exotic pines. However, we hypothesize that the low rates of invasion currently observed can result at least partly from high densities of exotic deer, which, despite their preference for natives, can prevent establishment of both native and exotic trees. Other factors, not mutually exclusive, could produce the observed pattern. Our results underscore the difficulty of predicting how one introduced species will affect the impact of another one." }, { "instance_id": "R56945xR56748", "comparison_id": "R56945", "paper_id": "R56748", "text": "A nematode, fungus, and aphid interact via shared host plant. Implications for soybean management Soybean, Glycine max (L.) Merrill (Fabaceae), is an introduced crop to America and initially benefited from a small number of pests threatening its production. Since its rapid expansion in production beginning in the 1930s, several pests have been introduced from the native range of soybean. Our knowledge of how these pests interact and the implications for management is limited.
We examined how three common economic soybean pests, the nematode Heterodera glycines Ichinohe (Nematoda: Heteroderidae), the fungus Cadophora gregata Harrington & McNew (Incertae sedis), and the aphid Aphis glycines Matsumura (Hemiptera: Aphididae), interact on soybean cyst nematode\u2010susceptible (SCN\u2010S) and soybean cyst nematode\u2010resistant cultivars carrying the PI 88788 resistance source (SCN\u2010R). From 2008 to 2010, six soybean cultivars were infested with either a single pest or all three pests in combination in a micro\u2010plot field experiment. Pest performance was measured in a \u2018single pest\u2019 treatment and compared with pest performance in the \u2018multiple pest\u2019 treatment, allowing us to measure the impact of SCN resistance and the presence of other soybean pests on each pest\u2019s performance. Performance of H. glycines (80% reduction in reproduction) and A. glycines (19.8% reduction in plant exposure) was reduced on SCN\u2010R cultivars. Regardless of cultivar, the presence of multiple pests significantly decreased the performance of A. glycines, but significantly increased H. glycines performance. The presence of multiple pests decreased the performance of C. gregata on SCN\u2010S soybean cultivars (20.6% reduction in disease rating)." }, { "instance_id": "R56945xR56893", "comparison_id": "R56945", "paper_id": "R56893", "text": "Impact of Ligustrum lucidum on the soil seed bank in invaded subtropical seasonally dry woodlands (C\u00f3rdoba, Argentina) The impact of invasive species on below ground flora may differ from that on the above ground vegetation. Recent reviews of invaded and native communities emphasize the need for more comprehensive information on the impacts of plant invasion on soil seed banks. Ligustrum lucidum is one of the most important invasive woody species in several ecosystems of Argentina; however, its impact on soil seed bank communities has not been studied. Here we analyzed differences in species richness, total seed density and species composition (total, native and exotic species) in the soil seed bank of native and invaded woodlands, in two different seasons. We also analyzed differences in similarity between standing vegetation and soil seed banks of both woodland types. The study was carried out in the Chaco Serrano woodlands of C\u00f3rdoba, central Argentina. Our main results indicate differences in L. lucidum woodland composition and a reduction in both richness and total density of species in the soil seed bank compared to the native woodlands, independently of the sampling season. Moreover, a higher abundance of certain exotic species in the soil seed bank was observed in L. lucidum woodlands, particularly in spring. Finally, low similarity between soil seed bank and the established vegetation was observed in both woodland types. From a management perspective, it seems that passive restoration from soil seed banks of L. lucidum might be coupled with active addition of some native woody species and control of other exotic species." 
}, { "instance_id": "R56945xR56797", "comparison_id": "R56945", "paper_id": "R56797", "text": "Examination of the effects of largemouth bass (Micropterus salmoides) and bluegill (Lepomis marochirus) on the ecosystem attributes of lake Kawahara-oike, Nagasaki Japan article i nfo The introduction of largemouth bass (Micropterus salmoides) and bluegill sunfish (Lepomis macrochirus )i nto the freshwater ecosystems of Japan has resulted in the suppression and/or replacement of native species, generating considerable concerns among resource managers. The impacts of largemouth bass and bluegill on native fauna have been examined in aquaria and isolated farm ponds, but there is limited work examining the likelihood to fundamentallymodifyingJapan'slakes.Theobjective of thepresentstudy istoexaminethedirectandsynergistic ecological effects of largemouth bass and bluegill on the biotic communities of Lake Kawahara-oike, Nagasaki, Japan, using an ecosystem (Ecopath) modeling approach. Specifically, we examine whether the two fish species have played a critical role in shaping the trophodynamics of the lake. We attempt to shed light on the trophic interactions between largemouth bass and bluegill and subsequently evaluate to what extent these interactions facilitate their establishment at the expense of native species. We also examine how these changes propagate through the Lake Kawahara-oike food web. Our study suggests that the introduction of bluegill has induced a range of changes at multiple trophic levels. The present analysis also provides evidence that largemouth bass was unable to exert significant top-down control on the growth rates of the bluegill population. Largemouth bass and bluegill appear to prevail over the native fish species populations and can apparently coexist in large numbers in invaded lakes. Future management strategies controlling invasive species are urgently required, if the integrity of native Japanese fish communities is to be protected." }, { "instance_id": "R56945xR56612", "comparison_id": "R56945", "paper_id": "R56612", "text": "Preferences of invasive Ponto-Caspian and native European gammarids for zebra mussel (Dreissena polymorpha, Bivalvia) shell habitat We investigated habitat preferences of two invasive Ponto-Caspian gammarids (Dikerogammarus haemobaphes and Pontogammarus robustoides) and a native European species (Gammarus fossarum) in laboratory experiments. The habitats consisted of the following objects: (1) living zebra mussels; (2) empty mussel shells (clean or coated with nail varnish) with both valves glued together using aquarium silicone sealant to imitate a living mussel; (3) stones (clean or varnished); (4) empty plates. Ten objects of the same type were glued to a plastic plate (10 \u00d7 10 cm) with methyl acrylic glue. The plates were placed in experimental tanks in various combinations. A single gammarid was put into the tank and its position was determined after 24 h. The studied species responded differently to the presence of zebra mussels. D. haemobaphes preferred living mussels rather than their empty shells and these two habitats over stones and empty plates. It responded positively to shell shape, selecting varnished shells rather than varnished stones, and to shell surface properties, selecting clean shells rather than varnished shells. It did not respond to waterborne mussel exudates. P. robustoides did not exhibit any preferences for the above-mentioned substrata. G. 
fossarum was attracted by empty mussel shells (but not by living mussels). It responded only to their shape, not to surface properties. The strong affinity for zebra mussels, exhibited by D. haemobaphes, might help it survive and develop stable populations in newly invaded areas." }, { "instance_id": "R56945xR56608", "comparison_id": "R56945", "paper_id": "R56608", "text": "Facilitation and interference underlying the association between the woody invaders Pyracantha angustifolia and Ligustrum lucidum ABSTRACT Questions: 1. Is there any post-dispersal positive effect of the exotic shrub Pyracantha angustifolia on the success of Ligustrum lucidum seedlings, as compared to the effect of the native Condalia montana or the open herbaceous patches between shrubs? 2. Is the possible facilitation by Pyracantha and/or Condalia related to differential emergence, growth, or survival of Ligustrum seedlings under their canopies? Location: Cordoba, central Argentina. Methods: We designed three treatments, in which ten mature individuals of Pyracantha, ten of the dominant native shrub Condalia montana, and ten patches without shrub cover were involved. In each treatment we planted seeds and saplings of Ligustrum collected from nearby natural populations. Seedlings emerging from the planted seeds were harvested after one year to measure growth. Survival of the transplanted saplings was recorded every two month during a year. Half of the planted seeds and transplanted saplings were cage-protected from rodents. Results..." }, { "instance_id": "R56945xR56774", "comparison_id": "R56945", "paper_id": "R56774", "text": "Invasive interactions: can Argentine ants indirectly increase the reproductive output of a weed? The direct and indirect interactions of invasive ants with plants, insect herbivores, and Hemiptera are complex. While ant and Hemiptera interactions with native plants have been well studied, the effects of invasive ant\u2013scale insect mutualisms on the reproductive output of invasive weeds have not. The study system consisted of Argentine ants (Linepithema humile), boneseed (Chrysanthemoides monilifera monilifera), and sap-sucking scale insects (Hemiptera: Saissetia oleae and Parasaissetia nigra), all of which are invasive in New Zealand. We examined the direct and indirect effects of Argentine ants on scale insects and other invertebrates (especially herbivores) and on plant reproductive output. Argentine ants spent one-third of their time specifically associated with scale insects in tending behaviours. The invertebrate community was significantly different between uninfested and infested plants, with fewer predators and herbivores on ant-infested plants. Herbivore damage was significantly reduced on plants with Argentine ants, but sooty mould colonisation was greater where ants were present. Herbivore damage increased when ants were excluded from plants. Boneseed plants infested with Argentine ants produced significantly more fruits than plants without ants. The increase in reproductive output in the presence of ants may be due to increased pollination as the result of pollinators being forced to relocate frequently to avoid attack by ants, resulting in an increase in pollen transfer and higher fruit/seed set. The consequences of Argentine ant invasion can be varied; not only does their invasion have consequences for maintaining biodiversity, ant invasion may also affect weed and pest management strategies." 
}, { "instance_id": "R56945xR56915", "comparison_id": "R56945", "paper_id": "R56915", "text": "Positive plant and bird diversity response to experimental deer population reduction after decades of uncontrolled browsing Aim During the 20th century, deer (family Cervidae), both native and introduced populations, dramatically increased in abundance in many parts of the world and became seen as major threats to biodiversity in forest ecosystems. Here, we evaluated the consequences that restoring top-down herbivore population control has on plants and birds.Location Forest ecosystems of Haida Gwaii (British Columbia, Canada) where introduced black-tailed deer (Odocoileus hemionus) have dramatically limited tree regeneration and simplified understorey plant, insect and bird assemblages.Methods We experimentally assessed ecosystem-wide responses of plant and bird communities to a ~80% reduction of deer abundance on two mediumsized islands (146 and 249 ha). We monitored changes in plant and bird communities for the 13 years following the start of culling and used two islands without culling and a set of exclosures as controls. Results Native plant communities increased in cover and richness after culling, while introduced plants decreased. Birds that depend on understorey vegetation for feeding and/or breeding increased significantly after deer were reduced in abundance but species not dependent on understorey vegetation did not. Finally, on control islands, plant and bird communities were stable or declined throughout the study period.Main conclusions Biodiversity losses caused by current continental-scale trends of increasing deer populations are potentially reversible. We demonstrate that controlling large herbivore populations (native or introduced) offers significant conservation benefits to forest understorey plant communities, even to those most negatively affected by uncontrolled browsing. We also report, for the first time, strong evidence that higher trophic levels (birds) can respond rapidly and positively to herbivore density control." }, { "instance_id": "R56945xR56939", "comparison_id": "R56945", "paper_id": "R56939", "text": "Foraging behavior interactions between two non-native social wasps, Vespula germanica and V. vulgaris (Hymenoptera: Vespidae): implications for invasion success? Vespula vulgaris is an invasive scavenging social wasp that has very recently arrived in Patagonia (Argentina), a territory previously invaded \u2013 35 yrs earlier \u2013 by another wasp, Vespula germanica. Although V. vulgaris wasps possess features that could be instrumental in overcoming obstacles through several invasion stages, the presence of preestablished populations of V. germanica could affect their success. We studied the potential role played by V. germanica on the subsequent invasion process of V. vulgaris wasps in Patagonia by focusing on the foraging interaction between both species. This is because food searching and exploitation are likely to overlap strongly among Vespula wasps. We carried out choice tests where two types of baits were presented in a pairwise manner. We found experimental evidence supporting the hypothesis that V. germanica and V. vulgaris have an asymmetrical response to baits with stimuli simulating the presence of each other. V. germanica avoided baits with either visual or olfactory cues indicating the V. vulgaris presence. However, V. vulgaris showed no preference between baits with or lacking V. germanica stimuli. 
These results suggest that the presence of an established population of V. germanica may not contribute to added biotic resistance to V. vulgaris invasion." }, { "instance_id": "R56945xR56845", "comparison_id": "R56945", "paper_id": "R56845", "text": "Plant community associations of two invasive thistles We assessed the field-scale plant community associations of Carduus nutans and C. acanthoides, two similar, economically important invasive thistles. Several plant species were associated with the presence of Carduus thistles while others, including an important pasture species, were associated with Carduus-free areas. Thus, even within fields, areas invaded by Carduus thistles have different vegetation than uninvaded areas, either because some plants can resist invasion or because invasion changes the local plant community. Our results will allow us to target future research about the role of vegetation structure in resisting and responding to invasion." }, { "instance_id": "R56945xR56704", "comparison_id": "R56945", "paper_id": "R56704", "text": "Rhizobial hitchhikers from Down Under: invasional meltdown in a plant-bacteria mutualism? Aim This study analysed the diversity and identity of the rhizobial symbionts of co-existing exotic and native legumes in a coastal dune ecosystem invaded by Acacia longifolia. Location An invaded coastal dune ecosystem in Portugal and reference bradyrhizobial strains from the Iberian Peninsula and other locations. Methods Symbiotic nitrogen-fixing bacteria were isolated from root nodules of plants of the Australian invasive Acacia longifolia and the European natives Cytisus grandiflorus, Cytisus scoparius and Ulex europaeus. Total DNA of each isolate was amplified by polymerase chain reaction (PCR) with the primer BOX A1R. Subsequent PCR-sequencing and phylogenetic analyses of the internal transcribed spacer region and the nifD and nodA genes were performed for all different strains. Results The four plant species analysed were nodulated by bacteria from three different Bradyrhizobium lineages, although most of the isolates belonged to the Bradyrhizobium japonicum lineage sensu lato. Ninety-five per cent of the bradyrhizobia isolated from A. longifolia, C. grandiflorus and U. europaeus in the invaded ecosystem had nifD and nodA genes of Australian origin. Seven isolates obtained in this study define a new distinctive nifD group of Bradyrhizobium from western and Mediterranean Europe. Main conclusions These results reveal the introduction of exotic bacteria with the invasive plant species, their persistence in the new geographical area and the nodulation of native legumes by rhizobia containing exotic symbiotic genes. The disruption of native mutualisms and the mutual facilitation of the invasive spread of the introduced plant and bradyrhizobia could constitute the first report of an invasional meltdown documented for a plant\u2013bacteria mutualism." }, { "instance_id": "R56945xR56821", "comparison_id": "R56945", "paper_id": "R56821", "text": "Predation and functional responses of Carcinus maenas and Cancer magister in the presence of the introduced Cephalaspidean Philine orientalis An increasing number of examples suggest that interactions among introduced species are ecologically important and relevant to the management of invaded systems. 
We investigated the potential for the introduced cephalaspidean sea slug Philine orientalis to interfere with the feeding of the introduced European green crab (Carcinus maenas) and the native Dungeness crab (Cancer magister). We observed co-occurrence of crab species and P. orientalis at field sites in Bodega Harbor and Tomales, San Pablo, and San Francisco Bays. In laboratory and field experiments, we determined whether crab feeding was suppressed by P. orientalis and the duration of this suppression for individual crabs. We also used foraging response models to explore changes in the feeding rate of crabs with varying densities of P. orientalis and small bivalve prey. We found that P. orientalis deterred predation by green and Dungeness crabs on small clams in laboratory feeding trials, but not in field experiments with green crabs and P. orientalis. Foraging models predicted that P. orientalis would only affect crab feeding in the field under specific conditions of crab, P. orientalis, and prey densities. These foraging models bridged an important gap between lab and field experiments and allowed us to predict how changes in species abundances at two trophic levels might alter the importance of crab suppression by P. orientalis." }, { "instance_id": "R56945xR56778", "comparison_id": "R56945", "paper_id": "R56778", "text": "Influence of two exotic earthworm species with different foraging strategies on abundance and composition of boreal microarthropods Abstract In North America, many species of European earthworms have been introduced to northern forests. Facilitative or competitive interactions between these earthworm species may result in non-additive effects on native plant and animal species. We investigated the combined versus individual effects of the litter-dwelling earthworm Dendrobaena octaedra Savigny, 1826 and the deep-burrowing species Lumbricus terrestris L., 1758 on microarthropod assemblages from boreal forest soil by conducting a mesocosm experiment. Soil cores from earthworm-free areas of northern Alberta, Canada, were inoculated with D. octaedra alone, L. terrestris alone, both worm species together, or no earthworms. After 4.5 months, microarthropods were extracted from the soil, counted, and identified to higher taxa. Oribatid mites were further identified to family and genus. Abundance of microarthropods was significantly lower in the treatment containing both species than in the no earthworm treatment and the L. terrestris treatment. Oribatida and Prostigmata/Astigmata differed significantly among treatments and were lowest in the treatment containing both earthworm species, followed by the D. octaedra treatment, although post-hoc pairwise comparisons were not significant. Within the Oribatida, composition differed between the control and L. terrestris treatments as compared to the D. octaedra and both-species treatments, with Suctobelbella and Tectocepheus in particular having higher abundances in the control treatment. Effects of the two earthworm species on microarthropods were neither synergistic nor antagonistic. Our results indicate that earthworms can have strong effects on microarthropod assemblages in boreal forest soils. Future research should examine whether these changes have cascading effects on nutrient cycling, microbial communities, or plant growth." 
}, { "instance_id": "R56945xR56595", "comparison_id": "R56945", "paper_id": "R56595", "text": "Variation in herbivore-mediated indirect effects of an invasive plant on a native plant Theory predicts that damage by a shared herbivore to a secondary host plant species may either be higher or lower in the vicinity of a preferred host plant species. To evaluate the importance of ecological factors, such as host plant proximity and density, in determining the direction and strength of such herbivore-mediated indirect effects, we quantified oviposition by the exotic weevil Rhinocyllus conicus on the native wavyleaf thistle Cirsium undulatum in midgrass prairie on loam soils in the upper Great Plains, USA. Over three years (2001-2003), the number of eggs laid by R. conicus on C. undulatum always decreased significantly with distance (0-220 m) from a musk thistle (Carduus nutans L.) patch. Neither the level of R. conicus oviposition on C. undulatum nor the strength of the distance effect was predicted by local musk thistle patch density or by local C. undulatum density (<5 m). The results suggest that high R. conicus egg loads on C. undulatum near musk thistle resulted from the native thistle's co-occurrence with the coevolved preferred exotic host plant and not from the weevil's response to local host plant density. Mean egg loads on C. undulatum also were greater at sites with higher R. conicus densities. We conclude that both preferred-plant proximity and shared herbivore density strongly affected the herbivore-mediated indirect interaction, suggesting that such interactions are important pathways by which invasive exotic weeds can indirectly impact native plants." }, { "instance_id": "R56945xR56867", "comparison_id": "R56945", "paper_id": "R56867", "text": "Comparisons of isotopic niche widths of some invasive and indigenous fauna in a South African river Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (\u02dc0.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12\u20130.58), while the diet of T. 
granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24\u20130.60) and Azolla filiculoides (0.09\u20130.33) as well as detrital Barringtonia racemosa leaves (0.00\u20130.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents." }, { "instance_id": "R56945xR56851", "comparison_id": "R56945", "paper_id": "R56851", "text": "Range expansion of Agrilus convexicollis in European Russia expedited by the invasion of the emerald ash borer, Agrilus planipennis (Coleoptera: Buprestidae) Abstract The jewel beetle Agrilus convexicollis Redtenbacher, 1849 (Buprestidae) occurs in many European and North Mediterranean countries and feeds mainly on dying shoots and branches of ash trees (Fraxinus excelsior, F. ornus and F. oxyphylla). A range map of A. convexicollis with 479 exact localities from the literature and museum collections is compiled. Historically, this species was not known to be present in the central region of European Russia. Since 2007, however, specimens of A. convexicollis have been collected in seven central European Russia localities, effectively expanding the northern border of the previously known range by approximately 665 km. All recently established localities of A. convexicollis are within the region invaded by emerald ash borer (A. planipennis Fairmaire), an East Asian pest of ashes that was first detected in European Russia in 2003. In addition, almost all A. convexicollis specimens from central European Russia (both adults and larvae) were collected from declining F. pennsylvanica (an introduced North American ash) infested with A. planipennis. This is a new host record for A. convexicollis. We suspect that the recent range expansion of A. convexicollis in central European Russia has been facilitated by the A. planipennis invasion, which has caused widespread decline and mortality of ash trees in the region. This work illustrates how the invasion of one species can facilitate the range expansion of another." }, { "instance_id": "R56945xR56931", "comparison_id": "R56945", "paper_id": "R56931", "text": "Conquerors or exiles? Impact of interference competition among invasive Ponto-Caspian gammarideans on their dispersal rates Ponto-Caspian gammarids have invaded European waters, affecting local communities by predation and competition. Their ranges and dispersal rates vary across Europe, which may result from their interspecific interactions, accelerating or reducing migrations. We checked this hypothesis by testing interference competition among co-occurring invaders: Dikerogammarus villosus, D. haemobaphes and Pontogammarus robustoides. We used 140-cm long tanks (gravel substratum), divided into seven compartments. We introduced 25 \u201cresidents\u201d into the outermost compartment, separated with a barrier. After 1 h, we introduced 25 \u201cintruders\u201d. After the next 1 h, we removed the barrier and the gammarids dispersed in the tank. After 4 or 20 h, we counted the gammarids in the compartments. We tested all pairwise species combinations and single-species controls. Dikerogammarus villosus displaced other species (P. 
robustoides only after 4 h) and reduced its own motility after 20 h in their presence. Pontogammarus robustoides stimulated the short-time migrations of D. villosus intruders and of D. haemobaphes. As P. robustoides migrated spontaneously much more than Dikerogammarus spp., its impact decreased after longer time. Dikerogammarus haemobaphes stimulated the short-time movement of P. robustoides intruders but reduced the long-time relocation of this species. In general, gammarid dispersal increased in the presence of stronger competitors (D. villosus and P. robustoides, especially residents) and decreased in response to weaker competitors (D. haemobaphes). Thus, competitive interactions may affect dispersal of invasive gammarids and contribute to the fastest spread of the weakest competitor, D. haemobaphes, observed in the field, whereas the strongest species, D. villosus, was the latest newcomer in many novel areas." }, { "instance_id": "R56945xR56706", "comparison_id": "R56945", "paper_id": "R56706", "text": "Non-native ecosystem engineer alters estuarine communities Many ecosystems are created by the presence of ecosystem engineers that play an important role in determining species' abundance and species composition. Additionally, a mosaic environment of engineered and non-engineered habitats has been shown to increase biodiversity. Non-native ecosystem engineers can be introduced into environments that do not contain or have lost species that form biogenic habitat, resulting in dramatic impacts upon native communities. Yet, little is known about how non-native ecosystem engineers interact with natives and other non-natives already present in the environment, specifically whether non-native ecosystem engineers facilitate other non-natives, and whether they increase habitat heterogeneity and alter the diversity, abundance, and distribution of benthic species. Through sampling and experimental removal of reefs, we examine the effects of a non-native reef-building tubeworm, Ficopomatus enigmaticus, on community composition in the central Californian estuary, Elkhorn Slough. Tubeworm reefs host significantly greater abundances of many non-native polychaetes and amphipods, particularly the amphipods Monocorophium insidiosum and Melita nitida, compared to nearby mudflats. Infaunal assemblages under F. enigmaticus reefs and around reef's edges show very low abundance and taxonomic diversity. Once reefs are removed, the newly exposed mudflat is colonized by opportunistic non-native species, such as M. insidiosum and the polychaete Streblospio benedicti, making removal of reefs a questionable strategy for control. These results show that provision of habitat by a non-native ecosystem engineer may be a mechanism for invasional meltdown in Elkhorn Slough, and that reefs increase spatial heterogeneity in the abundance and composition of benthic communities." }, { "instance_id": "R56945xR56941", "comparison_id": "R56945", "paper_id": "R56941", "text": "Habitat augmentation drives secondary invasion: an experimental approach to determine the mechanism of invasion success The entry of secondary invaders into, or their expansion within, native communities is contingent on the changes wrought by other (primary) invaders. When primary invaders have altered more than one property of the recipient community, standard descriptive and modeling approaches only provide a best guess of the mechanism permitting the secondary invasion. 
In rainforest on Christmas Island, we conducted a manipulative field experiment to determine the mechanism of invasion success for a community of land snails dominated by non-native species. The invasion of rainforest by the yellow crazy ant (Anoplolepis gracilipes) has facilitated these land snails, either by creating enemy-free space and/or increased habitat and resources (in the form of leaf litter) through the removal of the native omnivorous-detritivorous red land crab (Gecarcoidea natalis). We manipulated predator densities (high and low) and leaf litter (high and low) in replicated blocks of four treatment combinations at two sites. Over the course of one wet season (five months), we found that plots with high leaf litter biomass contained significantly more snails than those with low biomass, regardless of whether those plots had high or low predation pressure, at both the site where land crabs have always been abundant, and at the site where they have been absent for many years prior to the experiment. Each site was dominated by small snail species (<2 mm length), and through handling size and predation experiments we demonstrated that red crabs tend not to handle and eat snails of that size. These results suggest that secondary invasion by this community of non-native land snails is facilitated most strongly by habitat and resource augmentation, an indirect consequence of red land crab removal, and that the creation of enemy-free space is not important. By using a full-factorial experimental approach, we have confidently determined-rather than inferred-the mechanism by which primary invaders indirectly facilitate a community of secondary invaders." }, { "instance_id": "R56945xR56768", "comparison_id": "R56945", "paper_id": "R56768", "text": "Comparative feeding ecology of invasive Ponto-Caspian gobies Invasions of Ponto-Caspian gobiid fishes are suspected to cause regime shifts in freshwater ecosystems. This study compared the trophic niche differentiations of Neogobius melanostomus and Ponticola kessleri in the upper Danube River using stable isotope analyses (\u03b413C and \u03b415N), gut content analyses and morphometric analyses of the digestive tract. Both species were identified as predacious omnivores with high dietary overlap and a generalistic feeding strategy. Amphipods (especially invasive Dikerogammarus spp.) contributed 2/3 to the index of food importance. \u03b415N-signatures of N. melanostomus revealed an ontogenetic diet shift and significantly exceeded those in P. kessleri by ~1.5\u2030, indicating a niche separation of half a trophic level. P. kessleri had shorter uncoiled intestinal tracts than N. melanostomus, indicating a narrower niche and adaptation to animal food. Trophic niches in both species expanded during the growth period with increasing intraguild predation and cannibalism in P. kessleri and increasing molluscivory in N. melanostomus. P. kessleri showed a higher degree of specialization and more stable feeding patterns across seasons, whereas N. melanostomus adapted its diet according to the natural prey availability. The feeding patterns of both species observed in the upper Danube River strongly differ from those in their native ranges, underlining their great plasticity. Both goby species consumed mainly other non-native species (~92% of gut contents) and seemed to benefit from previous invasions of prey species like Dikerogammarus villosus. 
The invasive success of gobies and their prey mirror fundamental ecological changes in large European freshwater ecosystems." }, { "instance_id": "R56945xR56909", "comparison_id": "R56945", "paper_id": "R56909", "text": "Over-invasion in a freshwater ecosystem: newly introduced virile crayfish (Orconectes virilis) outcompete established invasive signal crayfish (Pacifastacus leniusculus) Abstract Biological invasions are a key threat to freshwater biodiversity, and identifying determinants of invasion success is a global conservation priority. The establishment of introduced species is predicted to be hindered by pre-existing, functionally similar invasive species. Over a five-year period we, however, find that in the River Lee (UK), recently introduced non-native virile crayfish (Orconectes virilis) increased in range and abundance, despite the presence of established alien signal crayfish (Pacifastacus leniusculus). In regions of sympatry, virile crayfish had a detrimental effect on signal crayfish abundance but not vice versa. Competition experiments revealed that virile crayfish were more aggressive than signal crayfish and outcompeted them for shelter. Together, these results provide early evidence for the potential over-invasion of signal crayfish by competitively dominant virile crayfish. Based on our results and the limited distribution of virile crayfish in Europe, we recommend that efforts to contain them within the Lee catchment be implemented immediately." }, { "instance_id": "R56945xR56616", "comparison_id": "R56945", "paper_id": "R56616", "text": "Non-native habitat as home for non-native species: comparison of communities associated with invasive tubeworm and native oyster reefs Introduction vectors for marine non-native species, such as oyster culture and boat fouling, often select for organisms dependent on hard substrates during some or all life stages. In soft-sediment estuaries, hard substrate is a limited resource, which can increase with the introduction of hard habitat-creating non-native species. Positive interactions between non-native, habitat-creating species and non-native species utilizing such habitats could be a mechanism for enhanced invasion success. Most previous studies on aquatic invasive habitat-creating species have demonstrated positive responses in associated communities, but few have directly addressed responses of other non-native species. We explored the association of native and non-native species with invasive habitat-creating species by comparing communities associated with non-native, reef-building tubeworms Ficopomatus enigmaticus and native oysters Ostrea conchaphila in Elkhorn Slough, a central California estuary. Non-native habitat supported greater densities of associated organisms\u2014primarily highly abundant non-native amphipods (e.g. Monocorophium insidiosum, Melita nitida), tanaid (Sinelebus sp.), and tube-dwelling polychaetes (Polydora spp.). Detritivores were the most common trophic group, making up disproportionately more of the community associated with F. enigmaticus than was the case in the O. conchaphila community. Analysis of similarity (ANOSIM) showed that native species' community structure varied significantly among sites, but not between biogenic habitats. In contrast, non-natives varied with biogenic habitat type, but not with site. Thus, reefs of the invasive tubeworm F. enigmaticus interact positively with other non-native species." 
}, { "instance_id": "R56945xR56636", "comparison_id": "R56945", "paper_id": "R56636", "text": "Diet of American mink Mustela vison and its potential impact on the native fauna of Navarino Island, Cape Horn Biosphere, Chile Article discussing the diet of the invasive American mink (Mustela vison) and its ecological impacts on the Cape Horn Biosphere Reserve in Chile." }, { "instance_id": "R56945xR56674", "comparison_id": "R56945", "paper_id": "R56674", "text": "Invasive species impacts on ecosystem structure and function: A comparison of the Bay of Quinte, Canada, and Oneida Lake, USA, before and after zebra mussel invasion As invasion rates of exotic species increase, an ecosystem level understanding of their impacts is imperative for predicting future spread and consequences. We have previously shown that network analyses are powerful tools for understanding the effects of exotic species perturbation on ecosystems. We now use the network analysis approach to compare how the same perturbation affects another ecosystem of similar trophic status. We compared food web characteristics of the Bay of Quinte, Lake Ontario (Canada), to previous research on Oneida Lake, New York (USA) before and after zebra mussel (Dreissena polymorpha) invasion. We used ecological network analysis (ENA) to rigorously quantify ecosystem function through an analysis of direct and indirect food web transfers. We used a social network analysis method, cohesion analysis (CA), to assess ecosystem structure by organizing food web members into subgroups of strongly interacting predators and prey. Together, ENA and CA allowed us to understand how food web structure and function respond simultaneously to perturbation. In general, zebra mussel effects on the Bay of Quinte, when compared to Oneida Lake, were similar in direction, but greater in magnitude. Both systems underwent functional changes involving focused flow through a small number of taxa and increased use of benthic sources of production; additionally, both systems structurally changed with subgroup membership changing considerably (33% in Oneida Lake) or being disrupted entirely (in the Bay of Quinte). However, the response of total ecosystem activity (as measured by carbon flow) differed between both systems, with increasing activity in the Bay of Quinte, and decreasing activity in Oneida Lake. Thus, these analyses revealed parallel effects of zebra mussel invasion in ecosystems of similar trophic status, yet they also suggested that important differences may exist. As exotic species continue to disrupt the structure and function of our native ecosystems, food web network analyses will be useful for understanding their far-reaching effects." }, { "instance_id": "R57101xR57057", "comparison_id": "R57101", "paper_id": "R57057", "text": "Predicting the Australian weed status of southern African plants A method of predicting weed status was developed for southern African plants naturalized in Australia, based upon information on extra-Australian weed status, distribution and taxonomy. Weed status in Australia was associated with being geographically widespread in southern Africa, being found in a wide range of climates in southern Africa, being described as a weed or targeted by herbicides in southern Africa, with early introduction and establishment in Australia, and with weediness in regions other than southern Africa. Multiple logistic regressions were used to identify the variables that best predicted weed status. 
The best fitting regressions were for weeds present for a long time in Australia (more than 140 years). They utilized three variables, namely weed status, climatic range in southern Africa and the existence of congeneric weeds in southern Africa. The highest level of variation explained (43%) was obtained for agricultural weeds using a single variable, weed status in southern Africa. Being recorded as a weed in Australia was related to climatic range and the existence of congeneric weeds in southern Africa (40% of variation explained). No variables were suitable predictors of non-agricultural (environmental) weeds. The regressions were used to predict future weed status of plants either not introduced or recently arrived in Australia. Recently-arrived species which were predicted to become weeds are Acacia karroo Hayne (Mimosaceae), Arctotis venustra T. Norl. (Asteraceae), Sisymbrium thellungii O.E. Schulz (Brassicaceae) and Solanum retroflexum Dun. (Solanaceae). Twenty species not yet arrived in Australia were predicted to have a high likelihood of becoming weeds. Analysis of the residuals of the regressions indicated two long-established species which might prove to be good targets for biological control: Mesembryanthemum crystallinum L. (Aizoaceae) and Watsonia meriana (L.) Mill. (Iridaceae)." }, { "instance_id": "R57101xR56955", "comparison_id": "R57101", "paper_id": "R56955", "text": "Predicting establishment success for alien reptiles and amphibians: a role for climate matching We examined data comprising 1,028 successful and 967 failed introduction records for 596 species of alien reptiles and amphibians around the world to test for factors influencing establishment success. We found significant variations between families and between genera. The number of jurisdictions where a species was introduced was a significant predictor of the probability the species had established in at least one jurisdiction. All species that had been introduced to more than 10 jurisdictions (34 species) had established at least one alien population. We also conducted more detailed quantitative comparisons for successful (69 species) and failed (116 species) introductions to three jurisdictions (Great Britain, California and Florida) to test for associations with climate match, geographic range size, and history of establishment success elsewhere. Relative to failed species, successful species had better climate matches between the jurisdiction where they were introduced and their geographic range elsewhere in the world. Successful species were also more likely to have high establishment success rates elsewhere in the world. Cross-validations indicated our full model correctly categorized establishment success with 78\u201380% accuracy. Our findings may guide risk assessments for the import of live alien reptiles and amphibians to reduce the rate new species establish in the wild." }, { "instance_id": "R57101xR56984", "comparison_id": "R57101", "paper_id": "R56984", "text": "Introduction pathways and establishment rates of invasive aquatic species in Europe Species invasion is one of the leading mechanisms of global environmental change, particularly in freshwater ecosystems. We used the Food and Agriculture Organization's Database of Invasive Aquatic Species to study invasion rates and to analyze invasion pathways within Europe. 
Of the 123 aquatic species introduced into six contrasting European countries, the average percentage established is 63%, well above the 5%\u201320% suggested by Williamson's \"tens\" rule. The introduction and establishment transitions are independent of each other, and species that became widely established did so because their introduction was attempted in many countries, not because of a better establishment capability. The most frequently introduced aquatic species in Europe are freshwater fishes. We describe clear introduction pathways of aquatic species into Europe and three types of country are observed: \"recipient and donor\" (large, midlatitude European countries, such as France, the United Kingdom, and Germany, that give and receive the most introductions), \"recipient\" (most countries, but particularly southern countries, which give few species but receive many), and \"neither recipient nor donor\" (only two countries). A path analysis showed that the numbers of species given and received are mediated by the size (area) of the country and population density, but not gross domestic product per capita." }, { "instance_id": "R57101xR56986", "comparison_id": "R57101", "paper_id": "R56986", "text": "Alien mammals in Europe: updated numbers and trends, and assessment of the effects on biodiversity This study provides an updated picture of mammal invasions in Europe, based on detailed analysis of information on introductions occurring from the Neolithic to recent times. The assessment considered all information on species introductions, known extinctions and successful eradication campaigns, to reconstruct a trend of alien mammals' establishment in the region. Through a comparative analysis of the data on introduction, with the information on the impact of alien mammals on native and threatened species of Europe, the present study also provides an objective assessment of the overall impact of mammal introductions on European biodiversity, including information on impact mechanisms. The results of this assessment confirm the constant increase of mammal invasions in Europe, with no indication of a reduction of the rate of introduction. The study also confirms the severe impact of alien mammals, which directly threaten a significant number of native species, including many highly threatened species. The results could help to prioritize species for response, as required by international conventions and obligations." }, { "instance_id": "R57101xR57065", "comparison_id": "R57101", "paper_id": "R57065", "text": "Globalisation in marine ecosystems: the story of non-indigenous marine species across European seas The introduction of non-indigenous species (NIS) across the major European seas is a dynamic non-stop process. Up to September 2004, 851 NIS (the majority being zoobenthic organisms) have been reported in European marine and brackish waters, the majority during the 1960s and 1970s. The Mediterranean is by far the major recipient of exotic species with an average of one introduction every 4 wk over the past 5 yr. Of the 25 species recorded in 2004, 23 were reported in the Mediterranean and only two in the Baltic. The most updated patterns and trends in the rate, mode of introduction and establishment success of introductions were examined, revealing a process similar to introductions in other parts of the world, but with the uniqueness of migrants through the Suez Canal into the Mediterranean (Lessepsian or Erythrean migration). 
Shipping appears to be the major vector of introduction (excluding the Lessepsian migration). Aquaculture is also an important vector with target species outnumbered by those introduced unintentionally. More than half of immigrants have been established in at least one regional sea. However, for a significant part of the introductions both the establishment success and mode of introduction remain unknown. Finally, comparing trends across taxa and seas is not as accurate as could have been wished because there are differences in the spatial and taxonomic effort in the study of NIS. These differences lead to the conclusion that the number of NIS remains an underestimate, calling for continuous updating and systematic research." }, { "instance_id": "R57101xR57051", "comparison_id": "R57101", "paper_id": "R57051", "text": "Non-indigenous terrestrial vertebrates in Israel and adjacent areas We investigated characteristics of established non-indigenous (ENI) terrestrial vertebrates in Israel and adjacent areas, as well as attributes of areas they occupy. Eighteen non-indigenous birds have established populations in this region since 1850. A database of their attributes was compiled, analyzed, and compared to works from elsewhere. Most ENI bird species are established locally; a few are spreading or widespread. There has been a recent large increase in establishment. All ENI birds are of tropical origin, mostly from the Ethiopian and Oriental regions; the main families are Sturnidae, Psittacidae, Anatidae, and Columbidae. Most species have been deliberately brought to Israel in captivity and subsequently released or escaped. Most of these birds are commensal with humans to some degree, are not typically migratory, and have mean body mass larger than that of the entire order. ENI birds are not distributed randomly. There are centers in the Tel-Aviv area and along the Rift Valley, which is also a corridor of spread. Positive correlations were found between ENI bird richness and mean annual temperature and urbanization. Mediterranean forests and desert regions have fewer ENI species than expected. Apart from birds we report on non-indigenous species of reptiles (2) and mammals (2) in this region." }, { "instance_id": "R57101xR57010", "comparison_id": "R57101", "paper_id": "R57010", "text": "Alien flora of Europe: species diversity, temporal trends, geographical patterns and research needs The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004\u20132008), funded by the 6th Framework Programme of the European Union and aimed at \u201ccreating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments\u201d. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. 
This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964\u20131980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. 
Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprising the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important in influencing the introduced species pool. Finally, the paper evaluates the state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species." }, { "instance_id": "R57101xR57081", "comparison_id": "R57101", "paper_id": "R57081", "text": "Plant introductions in Australia: how can we resolve \u2018weedy\u2019 conflicts of interest? Over 27,000 exotic plant species have been introduced to Australia, predominantly for use in gardening, agriculture and forestry. Less than 1% of such introductions have been solely accidental. Plant introductions also occur within Australia, as exotic and native species are moved across the country. Plant-based industries contribute around $50 billion to Australia\u2019s economy each year, play a significant social role and can also provide environmental benefits such as mitigating dryland salinity. However, one of the downsides of a new plant introduction is the potential to become a new weed. Overall, 10% of exotic plant species introduced since European settlement have naturalised, but this rate is higher for agricultural and forestry plants. Exotic plant species have become agricultural, noxious and natural ecosystem weeds at rates of 4%, 1% and 7% respectively. Whilst garden plants have the lowest probability of becoming weeds this is more than compensated by their vast numbers of introductions, such that gardening is the greatest source of weeds in Australia. Resolving conflicts of interest with plant introductions needs a collaborative effort between those stakeholders who would benefit (i.e. grow the plant) and those who would potentially lose (i.e. gain a weed) to compare the weed risk, feasibility of management and benefits of the species in question. For proposed plant imports to Australia, weed risk is presently the single consideration under international trade rules. Hence the focus is on ensuring the optimal performance of the border Weed Risk Assessment System. For plant species already present in Australia there are inconsistencies in managing weed risk between the States/Territories. This is being addressed with the development of a national standard for weed risk management. 
For agricultural and forestry species of high economic value but significant weed risk, the feasibility of standard risk management approaches needs to be investigated. Invasive garden plants need national action." }, { "instance_id": "R57101xR56949", "comparison_id": "R57101", "paper_id": "R56949", "text": "Biological control attempts by introductions against pest insects in the field in Canada Abstract This is an analysis of the attempts to colonize at least 208 species of parasites and predators on about 75 species of pest insects in the field in Canada. There was colonization by about 10% of the species that were introduced in totals of under 5,000 individuals, 40% of those introduced in totals of between 5,000 and 31,200, and 78% of those introduced in totals of over 31,200. Indications exist that initial colonizations may be favoured by large releases and by selection of release sites that are semi-isolated and not ecologically complex but that colonizations are hindered when the target species differs taxonomically from the species from which introduced agents originated and when the release site lacks factors needed for introduced agents to survive or when it is subject to potentially-avoidable physical disruptions. There was no evidence that the probability of colonization was increased when the numbers of individuals released were increased by laboratory propagation. About 10% of the attempts were successful from the economic viewpoint. Successes may be overestimated if the influence of causes of coincidental, actual, or supposed changes in pest abundance are overlooked. Most of the successes were by two or more kinds of agents of which at least one attacked species additional to the target pests. Unplanned consequences of colonization have not been sufficiently harmful to warrant precautions to the extent advocated by Turnbull and Chant but are sufficiently potentially dangerous to warrant the restriction of all colonization attempts to biological control experts. It is concluded that most failures were caused by inadequate procedures, rather than by any weaknesses inherent in the method, that those inadequacies can be avoided in the future, and therefore that biological control of pest insects has much unrealized potential for use in Canada." }, { "instance_id": "R57101xR57000", "comparison_id": "R57101", "paper_id": "R57000", "text": "Ecological predictions and risk assessment for alien fishes in North America Methods of risk assessment for alien species, especially for nonagricultural systems, are largely qualitative. Using a generalizable risk assessment approach and statistical models of fish introductions into the Great Lakes, North America, we developed a quantitative approach to target prevention efforts on species most likely to cause damage. Models correctly categorized established, quickly spreading, and nuisance fishes with 87 to 94% accuracy. We then identified fishes that pose a high risk to the Great Lakes if introduced from unintentional (ballast water) or intentional pathways (sport, pet, bait, and aquaculture industries)." }, { "instance_id": "R57101xR57067", "comparison_id": "R57101", "paper_id": "R57067", "text": "The role of opportunity in the unintentional introduction of nonnative ants A longstanding goal in the study of biological invasions is to predict why some species are successful invaders, whereas others are not. 
To understand this process, detailed information is required concerning the pool of species that have the opportunity to become established. Here we develop an extensive database of ant species unintentionally transported to the continental United States and use these data to test how opportunity and species-level ecological attributes affect the probability of establishment. This database includes an amount of information on failed introductions that may be unparalleled for any group of unintentionally introduced insects. We found a high diversity of species (232 species from 394 records), 12% of which have become established in the continental United States. The probability of establishment increased with the number of times a species was transported (propagule pressure) but was also influenced by nesting habit. Ground nesting species were more likely to become established compared with arboreal species. These results highlight the value of developing similar databases for additional groups of organisms transported by humans to obtain quantitative data on the first stages of the invasion process: opportunity and transport." }, { "instance_id": "R57101xR57055", "comparison_id": "R57101", "paper_id": "R57055", "text": "Global climate change and accuracy of prediction of species' geographical ranges: establishment success of introduced ladybirds (Coccinellidae, Chilocorus spp.) worldwide Aim Predictions of how the geographical ranges of species change implicitly assume that range can be determined without invoking climate change. The aim here was to determine how accurate predictions of range change might be before entertaining global climatic change. Location Worldwide. Methods All the documented global biological control translocations of ladybirds (Coccinellidae: Chilocorus spp.) were analysed with the ecoclimatic program, CLIMEX. This program determines species distributions in relation to climate, and can be used to express the favourableness of different localities for a species. CLIMEX is also a useful exploratory tool for determining the likelihood of establishment of species introduced from one area to another. Results Predictive models were developed based on the likelihood of establishment of fifteen Chilocorus spp. relative to their physiological characteristics and climatic tolerances. This likelihood was compared with actual establishment with a resultant range of 0% accuracy to 100% accuracy. Only four (26.7%) species' climatic tolerances could be predicted with 100% certainty. The general lack of accurate prediction was because climate is not always the overriding feature determining whether a species will establish or not. Other determinants, such as localized response to microclimate, phenology, host type and availability, presence of natural enemies and hibernation sites play a varying role over and above climate in determining whether a species will establish at a new locality." }, { "instance_id": "R57101xR56957", "comparison_id": "R57101", "paper_id": "R56957", "text": "Biological pollution in the Mediterranean Sea: invasive versus introduced macrophytes The authors have listed 85 species of macrophytes that have probably been introduced to the Mediterranean. 
Among them, nine species can be considered as invasive, i.e., playing a conspicuous role in the recipient ecosystems, taking the place of keystone species and/or being economically harmful: Acrothamnion preissii, Asparagopsis armata, Lophocladia lallemandii, Womersleyella setacea (Rhodophyta), Sargassum muticum, Stypopodium schimperi (Fucophyceae), Caulerpa racemosa, Caulerpa taxifolia and Halophila stipulacea (Plantae). These data fit well with Williamson and Fitter's \"tens rule\", which states that, on average, 1 out of 10 introduced species becomes invasive. Though some features (e.g. life traits, geographical origin) can increase the likelihood of a successful invasion, the success of invaders is far from being predictable. Since the beginning of the 20th century, the number of introduced species to the Mediterranean has nearly doubled every 20 years. Should these kinetics continue, and according to the tens rule, it can be expected that 5-10 newly introduced macrophytes shall become invasive in the next 20 years." }, { "instance_id": "R57101xR56970", "comparison_id": "R57101", "paper_id": "R56970", "text": "Sexual plumage differences and the outcome of game bird (Aves: Galliformes) introductions on oceanic islands Galliformes, after Passeriformes, is the group of birds that has been most introduced to oceanic islands. Among Passeriformes, whether the species\u2019 plumage is sexually monochromatic or dichromatic, along with other factors such as introduction effort and interspecific competition, has been identified as a factor that limits introduction success. In this study, we tested the hypothesis that sexually dichromatic plumage reduces the probability of success for 51 species from 26 genera of game birds that were introduced onto 12 oceanic islands. Analyses revealed no significant differences in probability of introduction success between monochromatic and dichromatic species at either the generic or specific levels. We also found no significant difference between these two groups in size of native geographic range, wing length or human introduction effort. Our results do not support the hypothesis that sexually dichromatic plumage (probably a response to sexual selection) predicts introduction outcomes of game birds as has been reported for passerine birds. These findings suggest that passerine and non-passerine birds differ fundamentally in terms of factors that could influence introduction outcome, and should therefore be evaluated separately as opposed to lumping these two groups as \u2018land birds\u2019." }, { "instance_id": "R57101xR57070", "comparison_id": "R57101", "paper_id": "R57070", "text": "A global meta-analysis of the ecological impacts of nonnative crayfish Abstract. Nonnative crayfish have been widely introduced and are a major threat to freshwater biodiversity and ecosystem functioning. Despite documentation of the ecological effects of nonnative crayfish from >3 decades of case studies, no comprehensive synthesis has been done to test quantitatively for their general or species-specific effects on recipient ecosystems. We provide the first global meta-analysis of the ecological effects of nonnative crayfish under experimental settings to compare effects among species and across levels of ecological organization. Our meta-analysis revealed strong, but variable, negative ecological impacts of nonnative crayfish with strikingly consistent effects among introduced species. 
In experimental settings, nonnative crayfish generally affect all levels of freshwater food webs. Nonnative crayfish reduce the abundance of basal resources like aquatic macrophytes, prey on invertebrates like snails and mayflies, and reduce abundances and growth of amphibians and fish, but they do not consistently increase algal biomass. Nonnative crayfish tend to have larger positive effects on growth of algae and larger negative effects on invertebrates and fish than native crayfish, but effect sizes vary considerably. Our study supports the assessment of crayfish as strong interactors in food webs that have significant effects across native taxa via polytrophic, generalist feeding habits. Nonnative crayfish species identity may be less important than extrinsic attributes of the recipient ecosystems in determining effects of nonnative crayfish. We identify some understudied and emerging nonnative crayfish that should be studied further and suggest expanding research to encompass more comparisons of native vs nonnative crayfish and different geographic regions. The consistent and general negative effects of nonnative crayfish warrant efforts to discourage their introduction beyond native ranges." }, { "instance_id": "R57501xR57343", "comparison_id": "R57501", "paper_id": "R57343", "text": "Native and naturalized plant diversity are positively correlated in scrub communities of California and Chile An emerging body of literature suggests that the richness of native and naturalized plant species are often positively correlated. It is unclear, however, whether this relationship is robust across spatial scales, and how a disturbance regime may affect it. Here, I examine the relationships of both richness and abundance between native and naturalized species of plants in two mediterranean scrub communities: coastal sage scrub (CSS) in California and xeric-sloped matorral (XSM) in Chile. In each vegetation type I surveyed multiple sites, where I identified vascular plant species and estimated their relative cover. Herbaceous species richness was higher in XSM, while cover of woody species was higher in CSS, where woody species have a strong impact upon herbaceous species. As there were few naturalized species with a woody growth form, the analyses performed here relate primarily to herbaceous species. Relationships between the herbaceous cover of native and naturalized species were not significant in CSS, but were nearly significant in XSM. The herbaceous species richness of native and naturalized plants were not significantly correlated on sites that had burned less than one year prior to sampling in CSS, and too few sites were available to examine this relationship in XSM. In post 1-year burn sites, however, herbaceous richness of native and naturalized species were positively correlated in both CSS and XSM. This relationship occurred at all spatial scales, from 400 m2 to 1 m2 plots. The consistency of this relationship in this study, together with its reported occurrence in the literature, suggests that this relationship may be general. Finally, the residuals from the correlations between native and naturalized species richness and cover, when plotted against site age (i.e. time since the last fire), show that richness and cover of naturalized species are strongly favoured on recently burned sites in XSM; this suggests that herbaceous species native to Chile are relatively poorly adapted to fire." 
}, { "instance_id": "R57501xR57284", "comparison_id": "R57501", "paper_id": "R57284", "text": "Scale dependent relationships between native plant diversity and the invasion of croftonweed (Eupatorium adenophorum) in southwest China Croftonweed is an invasive plant in southwest China. We examined the relationships between its invasion patterns and native plant diversity at different spatio-temporal scales. At the 25 m 2 scale, invasion success was negatively correlated with native plant diversity, indicating that resource availability might be the dominant factor regulating community invasibility. At the 400-m 2 scale, both negative and positive relationships were detected, possibly identifying a spatial scale threshold where extrinsic environmental factors became more important to community invasibility. At the vegetation province scale, variations in physical environment outweighed the importance of intrinsic biotic factors and positive relationships between diversity and invader success were found. Native plant diversity also inhibited croftonweed over the course of community succession and at the early stages of invasion at local spatial scales. However, the changing relationship might be an artifact of sampling at different spatial scales." }, { "instance_id": "R57501xR57313", "comparison_id": "R57501", "paper_id": "R57313", "text": "Habitat stress, species pool size and biotic resistance influence exotic plant richness in the Flooding Pampa grasslands Summary 1 Theory and empirical evidence suggest that community invasibility is influenced by propagule pressure, physical stress and biotic resistance from resident species. We studied patterns of exotic and native species richness across the Flooding Pampas of Argentina, and tested for exotic richness correlates with major environmental gradients, species pool size, and native richness, among and within different grassland habitat types. 2 Native and exotic richness were positively correlated across grassland types, increasing from lowland meadows and halophyte steppes, through humid to mesophyte prairies in more elevated topographic positions. Species pool size was positively correlated with local richness of native and exotic plants, being larger for mesophyte and humid prairies. Localities in the more stressful meadow and halophyte steppe habitats contained smaller fractions of their landscape species pools. 3 Native and exotic species numbers decreased along a gradient of increasing soil salinity and decreasing soil depth, and displayed a unimodal relationship with soil organic carbon. When covarying habitat factors were held constant, exotic and native richness residuals were still positively correlated across sites. Within grassland habitat types, exotic and native species richness were positively associated in meadows and halophyte steppes but showed no consistent relationship in the least stressful, prairie habitat types. 4 Functional group composition differed widely between native and exotic species pools. Patterns suggesting biotic resistance to invasion emerged only within humid prairies, where exotic richness decreased with increasing richness of native warm-season grasses. This negative relationship was observed for other descriptors of invasion such as richness and cover of annual cool-season forbs, the commonest group of exotics. 5 Our results support the view that ecological factors correlated with differences in invasion success change with the range of environmental heterogeneity encompassed by the analysis. 
Within narrow habitat ranges, invasion resistance may be associated with either physical stress or resident native diversity. Biotic resistance through native richness, however, appeared to be effective only at intermediate locations along a stress/fertility gradient. 6 We show that certain functional groups, not just total native richness, may be critical to community resistance to invasion. Identifying such native species groups is important for directing management and conservation efforts." }, { "instance_id": "R57501xR57380", "comparison_id": "R57501", "paper_id": "R57380", "text": "Temporal trends and effects of diversity on occurrence of exotic macrophytes in a large reservoir Two exotic invasive macrophyte species (the emergent Urochloa subquadripara - tenner-grass - and the submersed Hydrilla verticillata - hydrilla) were investigated in a large sub-tropical reservoir. We analyzed their occurrences over an extended period and tested the hypothesis that macrophyte richness decreases their invasibility. The alternative hypothesis that the occurrence of these exotics is affected by fetch and underwater radiation (important determinants of macrophyte assemblage composition in this reservoir) was also tested. Incidence data (presence/absence) was obtained over 9.5 years at 235 stations. Logistic regression was applied to test whether the likelihood of occurrence of these two species was affected by macrophyte richness, fetch or underwater radiation. Tenner-grass was recorded at a high frequency and quickly recovered from disturbances caused by water drawdown. In contrast, H. verticillata was first recorded in 3 sites in January 2007, but it spread quickly, reaching 30.5% of the sites 19 months later. The main channel of the Parana River was the main source of propagules for this species. The likelihood of occurrence of tenner-grass was positively affected by macrophyte richness but negatively affected by fetch. Thus, wave disturbance is probably more important than diversity in preventing invasion by this species. Hydrilla, by contrast, was negatively affected by macrophyte richness and positively affected by fetch and underwater radiation. Although this result might indicate that macrophyte diversity prevents hydrilla invasion, this is probably not true because hydrilla colonized deeper sites where few species of plant exist. Resistance to disturbances caused by water drawdown (tenner-grass) and waves (hydrilla) as well as persistency of tenner-grass and fast spread of hydrilla make these exotic species a cause for concern because of their potential impacts on water uses and maintenance of diversity." }, { "instance_id": "R57501xR57376", "comparison_id": "R57501", "paper_id": "R57376", "text": "Can biotic resistance be utilized to reduce establishment rates of non-indigenous species in constructed waters? Understanding the mechanisms that facilitate establishment of non-indigenous species is imperative for devising techniques to reduce invasion rates. Passively dispersing non-indigenous organisms, including zooplankton, seemingly invade constructed waters (e.g., ornamental ponds, dams and reservoirs) at faster rates than natural lakes. A common attribute of these invaded water bodies is their relatively young age, leading to the assertion that low biotic resistance may lead to their higher vulnerability. 
Our aim was to determine if seeding of young water bodies with sediments containing diapausing stages of native zooplankton could accelerate community development, leading to greater biotic resistance to the establishment of new species. Twenty outdoor tanks were filled with water (1,400 L) and nutrients added to attain eutrophic conditions. Ten treatment tanks had sediments added, sourced from local water bodies. In the remaining ten, sediments were autoclaved, and received zooplankton via natural dispersal only. In an initial 12 month monitoring period, species richness increased at a greater rate in the treatment tanks (at 12 months average standing richness per tank = 3.8, accumulated richness = 8.2) than control tanks (2.6 and 5.0, P < 0.05). Treatment tanks developed assemblages with greater proportions of species adapted to pelagic conditions, such as planktonic cladocerans and copepods, while control tanks generally comprised of smaller, littoral dwelling, rotifers. Analysis of similarities indicated community composition differed between the control and treatment groups at 12 months (P < 0.01). Two copepod, four rotifer and one cladoceran species were intentionally added to tanks at 12 months. In the 3 month post-introduction period, five of these species established populations in the control tanks, while only two species established in the treatment tanks. The calanoid copepod Skistodiaptomus pallidus, for example, a non-indigenous species confined to constructed waters in New Zealand, established exclusively in tanks where native calanoid copepod species were absent (primarily control tanks). Our study suggests that biotic resistance could play an important role in reducing the establishment rate of non-indigenous zooplankton. It also provides evidence that seeding constructed water bodies with sediments containing diapausing eggs of native species may provide an effective management tool to reduce establishment rates of non-indigenous zooplankton." }, { "instance_id": "R57501xR57205", "comparison_id": "R57501", "paper_id": "R57205", "text": "The rich generally get richer, but there are exceptions: Correlations between species richness of native plant species and alien weeds in Mexico Studies on the resistance of communities to plant invasions at different spatial scales have yielded contradictory results that have been attributed to scale-dependent factors. Some of these studies argue either for or against Elton's notion of biotic resistance against invasions through diversity. We studied the correlation between alien weeds and native species, dividing the latter group into weedy and non-weedy species, integrating various factors that influence diversity into an analysis on the scale of the federal states of Mexico. The resulting multiple-regression models for native and alien weed species are robust (adjusted R2 = 0.87 and R2 = 0.69, respectively) and show a strong partial correlation of the number of weed species (native and alien) with the number of non-weed native species. These results agree with studies showing a positive correlation between the number of native and alien species on larger scales. Both models also include human population density as an important predictor variable, but this is more important for alien weeds (\u03b2 = 0.62) than for native weeds (\u03b2 = 0.32). In the regression model for native weed species richness, the non-cultivated (fallow) area (\u03b2 = 0.24) correlated positively with native weed richness. 
In the model for alien weed species richness, the native weed species richness was an important variable (\u03b2 = \u22120.51), showing a negative partial correlation (rpart = \u22120.4). This result is consistent with Elton's biotic resistance hypothesis, suggesting that biotic resistance is scale independent but that this may be masked by other factors that influence the diversity of both weeds and non-weeds." }, { "instance_id": "R57501xR57175", "comparison_id": "R57501", "paper_id": "R57175", "text": "Native communities determine the identity of exotic invaders even at scales at which communities are unsaturated Aim To determine why some communities are more invasible than others and how this depends on spatial scale. Our previous work in serpentine ecosystems showed that native and exotic diversity are negatively correlated at small scales, but became positively correlated at larger scales. We hypothesized that this pattern was the result of classic niche partitioning at small scales where the environment is homogeneous, and a shift to the dominance of coexistence mechanisms that depend on spatial heterogeneity in the environment at large scales. Location Serpentine ecosystem, Northern California. Methods We test the above hypotheses using the phylogenetic relatedness of natives and exotics. We hypothesized that (1) at small scales, native and exotic species should be more distantly related than expected from a random assemblage model because with biotic resistance, successful invaders should have niches that are different from those of the natives present and (2) at large scales, native and exotic species should not be more distantly related than expected. Result We find strong support for the first hypothesis providing further evidence of biotic resistance at small scales. However, at large scales, native and exotic species were also more distantly related than expected. Importantly, however, natives and exotics were more distantly related at small scales than they were at large scales, suggesting that in the transition from small to large scales, biotic resistance is relaxed but still present. Communities at large scales were not saturated in the sense that more species could enter the community, increasing species richness. However, species did not invade indiscriminately. Exotic species closely related to species already established the community were excluded. Main conclusions Native communities determine the identity of exotic invaders even at large spatial scales where communities are unsaturated. These results hold promise for predicting which species will invade a community given the species present." }, { "instance_id": "R57501xR57303", "comparison_id": "R57501", "paper_id": "R57303", "text": "Native macrophyte density and richness affect the invasiveness of a tropical Poaceae species The role of the native species richness and density in ecosystem invasibility is a matter of concern for both ecologists and managers. We tested the hypothesis that the invasiveness of Urochloa arrecta (non-native in the Neotropics) is negatively affected by the species richness and abundance of native aquatic macrophytes in freshwater ecosystems. We first created four levels of macrophyte richness in a greenhouse (richness experiment), and we then manipulated the densities of the same native species in a second experiment (density experiment). When the native macrophytes were adults, fragments of U. arrecta were added, and their growth was assessed. 
Our results from the richness experiment corroborated the hypothesis of a negative relationship between the native species richness and the growth of U. arrecta, as measured by sprout length and root biomass. However, the resistance to invasion was not attributed to the presence of a particular native species with a greater competitive ability. In the density experiment, U. arrecta growth decreased significantly with an increased density of all five of the native species. Density strongly affected the performance of the Poaceae in a negative manner, suggesting that patches that are densely colonized by native macrophytes and less subject to disturbances will be more resistant to invasion than those that are poorly colonized and more commonly subjected to disturbances. Our density experiment also showed that some species exhibit a higher competitive ability than others (sampling effect). Although native richness and abundance clearly limit the colonization and establishment of U. arrecta, these factors cannot completely prevent the invasion of aquatic ecosystems by this Poaceae species." }, { "instance_id": "R57501xR57373", "comparison_id": "R57501", "paper_id": "R57373", "text": "The rich get richer: patterns of plant invasions in the United States Observations from islands, small-scale experiments, and mathematical models have generally supported the paradigm that habitats of low plant diversity are more vulnerable to plant invasions than areas of high plant diversity. We summarize two independent data sets to show exactly the opposite pattern at multiple spatial scales. More significant, and alarming, is that hotspots of native plant diversity have been far more heavily invaded than areas of low plant diversity in most parts of the United States when considered at larger spatial scales. Our findings suggest that we cannot expect such hotspots to repel invasions, and that the threat of invasion is significant and predictably greatest in these areas." }, { "instance_id": "R57501xR57103", "comparison_id": "R57501", "paper_id": "R57103", "text": "Saturation and invasion resistance of non-interactive plant communities Open plant assemblages in shoals of western Caucasian rivers were used as examples to analyze the relationship between the species saturation and the number and total abundance of alien species in non-interactive communities. Invasion of exotic species into highly saturated communities has been demonstrated to be, on average, less probable than their invasion into unsaturated communities. A hypothesis explaining the relationship between these parameters has been put forward. According to the hypothesis, the number of alien species in a specific locality in a community is determined by their ratio to the number of native species in the species pools of these communities; and their mean abundance, by the ratio of the total number of species to the number of individuals in the localities. Both ratios are smaller in saturated biocenoses, which determines a relatively small admixture of alien species in them." }, { "instance_id": "R57501xR57350", "comparison_id": "R57501", "paper_id": "R57350", "text": "Dominance not richness determines invasibility of tallgrass prairie Many recent studies suggest that more diverse communities are more resistant to invasion. Community characteristics that most strongly influence invasion are uncertain, however, due to covariation of diversity with competition and crowding. 
We examined separately the effects of species richness and dominance on invasion by an exotic legume, Melilotus officinalis, in intact, native Kansas grassland. We manipulated dominance of C4 grasses by reducing their abundance (i.e. ramet densities) by approximately 25 and 50%. In addition, richness was reduced by removing species that were mainly rare and uncommon as might be expected with environmental changes such as drought and fragmentation. In both years of the study (2001/2002), invasibility, measured as peak establishment of Melilotus, was not affected by a 3-fold reduction in species richness, nor was there an interaction between loss of species and reduced dominance on invasion. In contrast, reductions in abundance of the dominants significantly reduced invasibility of the grassland plots in both years. Because the abundance of dominants was highly correlated with measures of competition (i.e. ratio of dominant biomass to total biomass) and crowding (total stem densities), this pattern was opposite to that expected if competition were indeed limiting invasion. Rather, invasion appeared to be facilitated by the dominant species, most likely because reduced dominance increased environmental stress. Our results suggest that dominance is the key community characteristic determining invasibility, because highly competitive and space-filling species can either enhance or reduce susceptibility to invasion depending on whether dominants create a more competitive environment or alleviate stressful conditions." }, { "instance_id": "R57501xR57307", "comparison_id": "R57501", "paper_id": "R57307", "text": "Plant diversity increases resistance to invasion in the absence of covarying extrinsic factors Biological invasion is a widespread, but poorly understood phenomenon. Elton's hypothesis, supported by theory, experiment, and anecdotal evidence, suggests that an important determinant of invasion success is resident biodiversity, arguing that high diversity increases the competitive environment of communities and makes them more difficult to invade. Observational studies of plant invasions, however, find little support for this hypothesis and argue strongly against it. Lack of control of extrinsic factors (e.g., disturbance, climate, or soil fertility) that covary with biodiversity and invasion in observational studies makes it difficult to determine if their findings truly refute Elton's hypothesis. We examined performance of Crepis tectorum (an invasive, annual composite weed) in experimental prairie grassland plots and greenhouse plant assemblages in which resident species richness was directly manipulated. Under these conditions, unlike observational studies, no covarying extrinsic factors could interfere with interpreting results. We found a strong inverse association between resident diversity and invader performance as predicted by Elton's hypothesis. Higher resident diversity increased crowding, decreased available light, and decreased available nutrients, all of which increased the competitive environment of diverse plant assemblages and reduced C. tectorum success. Examination of individual resident species impacts on C. tectorum performance demonstrated that this diversity effect was not due to the sampling effect. These results suggest that both Elton's hypothesis and its competitive mechanism may operate in nature, but covarying extrinsic factors may obscure the negative impact of diversity on invader success." 
}, { "instance_id": "R57501xR57324", "comparison_id": "R57501", "paper_id": "R57324", "text": "Mechanisms of resistance of Mediterranean annual communities to invasion by Conyza bonariensis: effects of native functional composition Recent studies have shown that a high species or functional group richness may not always lead to a greater resistance of plant communities to invasion, whereas species and/or functional group composition can more reliably predict invasion resistance. The aim of this study was to understand the mechanisms through which functional group composition can influence the resistance of Mediterranean annual communities to invasion by the exotic Conyza bonariensis. To analyse the effects of functional composition on the performance of individuals introduced as seedlings we first examined the relationships between the demographic and vegetative parameters of C. bonariensis and the biomass achieved by each functional group (grasses, legumes and Asteraceae rosettes) in synthetic communities. As a further step to approach the mechanisms involved in community resistance to invasion, we included in the analyses measurements of functional variables taken within the synthetic communities. In agreement with earlier results and theory suggesting that high nutrient availability can favour invasions, an abundant legume biomass in communities increased the final biomass and net fecundity of C. bonariensis, due to positive effects on soil nitrate concentration. Survival and establishment of C. bonariensis were mainly favoured by a high biomass of Asteraceae. Additional results from measurements of herbivory suggested that C. bonariensis survival wasn't related to abiotic conditions but may be owed to a protection against herbivores in plots with abundant Asteraceae. Establishment was on the other hand likely to be hindered by the effects of abundant grass and legume foliage on light quality, and therefore easier within an Asteraceae canopy. We conclude that invasion of Mediterranean old fields by species with biologies similar to C. bonariensis could be limited by favouring communities dominated by annual grasses." }, { "instance_id": "R57501xR57271", "comparison_id": "R57501", "paper_id": "R57271", "text": "Negative native-exotic diversity relationship in oak savannas explained by human influence and climate Recent research has proposed a scale-dependence to relationships between native diversity and exotic invasions. At fine spatial scales, native-exotic richness relationships should be negative as higher native richness confers resistance to invasion. At broad scales, relationships should be positive if natives and exotics respond similarly to extrinsic factors. Yet few studies have examined both native and exotic richness patterns across gradients of human influence, where impacts could affect native and exotic species differently. We examined native-exotic richness relationships and extrinsic drivers of plant species richness and distributions across an urban development gradient in remnant oak savanna patches. In sharp contrast to most reported results, we found a negative relationship at the regional scale, and no relationship at the local scale. The negative regional-scale relationship was best explained by extrinsic factors, surrounding road density and climate, affecting natives and exotics in opposite ways, rather than a direct effect of native on exotic richness, or vice versa. 
Models of individual species distributions also support the result that road density and climate have largely opposite effects on native and exotic species, although simple life history traits (life form, dispersal mode) do not predict which habitat characteristics are important for particular species. Roads likely influence distributions and species richness by increasing both exotic propagule pressure and disturbance to native species. Climate may partially explain the negative relationship due to differing climatic preferences within the native and exotic species pools. As gradients of human influence are increasingly common, negative broad-scale native-exotic richness relationships may be frequent in such landscapes." }, { "instance_id": "R57501xR57365", "comparison_id": "R57501", "paper_id": "R57365", "text": "Plant species invasions along the latitudinal gradient in the United States It has been long established that the richness of vascular plant species and many animal taxa decreases with increasing latitude, a pattern that very generally follows declines in actual and potential evapotranspiration, solar radiation, temperature, and thus, total productivity. Using county-level data on vascular plants from the United States (3000 counties in the conterminous 48 states), we used the Akaike Information Criterion (AIC) to evaluate competing models predicting native and nonnative plant species density (number of species per square kilometer in a county) from various combinations of biotic variables (e.g., native bird species density, vegetation carbon, normalized difference vegetation index), environmental/topographic variables (elevation, variation in elevation, the number of land cover classes in the county, radiation, mean precipitation, actual evapotranspiration, and potential evapotranspiration), and human variables (human population density, cropland, and percentage of disturbed lands in a county). We found no evidence of a latitudinal gradient for the density of native plant species and a significant, slightly positive latitudinal gradient for the density of nonnative plant species. We found stronger evidence of a significant, positive productivity gradient (vegetation carbon) for the density of native plant species and nonnative plant species. We found much stronger significant relationships when biotic, environmental/topographic, and human variables were used to predict native plant species density and nonnative plant species density. Biotic variables generally had far greater influence in multivariate models than human or environmental/topographic variables. Later, we found that the best, single, positive predictor of the density of nonnative plant species in a county was the density of native plant species in a county. While further study is needed, it may be that, while humans facilitate the initial establishment invasions of nonnative plant species, the spread and subsequent distributions of nonnative species are controlled largely by biotic and environmental factors." }, { "instance_id": "R57501xR57219", "comparison_id": "R57501", "paper_id": "R57219", "text": "Ecological filtering of exotic plants in an Australian sub-alpine environment Abstract We investigated some of the factors influencing exotic invasion of native sub-alpine plant communities at a site in southeast Australia. Structure, floristic composition and invasibility of the plant communities and attributes of the invasive species were studied. 
To determine the plant characteristics correlated with invasiveness, we distinguished between roadside invaders, native community invaders and non-invasive exotic species, and compared these groups across a range of traits including functional group, taxonomic affinity, life history, mating system and morphology. Poa grasslands and Eucalyptus-Poa woodlands contained the largest number of exotic species, although all communities studied appeared resilient to invasion by most species. Most community invaders were broad-leaved herbs while roadside invaders contained both herbs and a range of grass species. Over the entire study area the richness and cover of native and exotic herbaceous species were positively related, but exotic herbs were more negatively related to cover of specific functional groups (e.g. trees) than native herbs. Compared with the overall pool of exotic species, those capable of invading native plant communities were disproportionately polycarpic, Asteracean and cross-pollinating. Our data support the hypothesis that strong ecological filtering of exotic species generates an exotic assemblage containing few dominant species and which functionally converges on the native assemblage. These findings contrast with those observed in the majority of invaded natural systems. We conclude that the invasion of closed sub-alpine communities must be viewed in terms of the unique attributes of the invading species, the structure and composition of the invaded communities and the strong extrinsic physical and climatic factors typical of the sub-alpine environment. Nomenclature: Australian Plant Name Index (APNI); http://www.anbg.gov.au/cgi-bin/apni Abbreviations: KNP = Kosciuszko National Park; MRPP = Multi response permutation procedure; VE = Variance explained." }, { "instance_id": "R57501xR57378", "comparison_id": "R57501", "paper_id": "R57378", "text": "Continental rain forest fragments in Singapore resist invasion by exotic plants Aim In general, the plant communities of oceanic islands suffer more from exotic plant invasions than their continental equivalents. At least part of this difference may be contributed by differences in non-biological factors, such as the antiquity and intensity of human impacts and the absence of internal barriers to dispersal, rather than differences in inherent invasibility. We tested the resistance of species-rich continental rain forests to plant invasion on a small, continental island that has been subject to prolonged and intensive human impact. Location Singapore is a 683-km 2 equatorial island <1 km from the Asian mainland and with a population of 4 million people. It has a continental biota but has been subject to human impacts as intense as on any oceanic island. Methods We sampled twenty-nine sites in seven vegetation types, ranging from urban wasteland to fragments of primary lowland rain forest. In each sample plot, all plant species were identified, exotic cover was estimated, and a range of environmental variables measured. Additional qualitative surveys for exotic invasion were made in other forest areas in Singapore. The data were analysed by Spearman\u2019s rank correlation coefficient. Results The number of exotic species recorded at a site was unrelated to the number of native species. 
Across all sites, percentage canopy opening had the highest correlation with the number of exotic species, while soil pH (which largely reflects the incorporation of calcareous construction wastes) had the highest correlation if the mangrove sites were excluded. There were no exotics in mangrove forest and only a tropical American, bird-dispersed shrub, Clidemia hirta (L.) D. Don (Melastomataceae: Koster\u2019s Curse), in primary and tall secondary forest patches. The species-poor early stages of woody plant succession on highly degraded soils were also very resistant to exotic plant invasion. Main conclusions Long-isolated rain forest fragments in an exotic-dominated continental island landscape resist invasion by exotic plants, suggesting that the problems on oceanic islands may reflect an inherently greater invasibility. This study also adds to the increasing evidence that the floras of tropical rain forest fragments in South-east Asia are remarkably resilient on a time-scale of decades to a century or more." }, { "instance_id": "R57501xR57199", "comparison_id": "R57501", "paper_id": "R57199", "text": "Araucaria Forest conservation: mechanisms providing resistance to invasion by exotic timber trees Since only 12.6% of the Brazilian Araucaria Forest remains and timber tree monocultures are expanding, biological invasion is a potential threat to the conservation of natural forest remnants. Here, we test (1) the susceptibility of Araucaria Forest to invasion by Pinus taeda and Eucalyptus saligna, (2) the efficiency of different mechanisms controlling the early establishment of these two exotic timber tree species, and (3) the potential of the native timber tree Araucaria angustifolia to establish successfully in ecologically-managed monocultures of Araucaria, Pinus and Eucalyptus. In Araucaria Forest, more than a thousand Pinus seeds landed annually in a hectare; however, experimentally exposed seeds were 100% removed in only 6 days. Furthermore, all experimentally transplanted seedlings of Pinus taeda and Eucalyptus saligna died in less than a year in Araucaria Forest, but not in the monocultures. Correlative evidence suggests that this mortality was associated with plant community richness, plant abundance, and soil fertility. Araucaria angustifolia, in contrast, showed an establishment success in ecologically-managed tree monocultures as high as that exhibited in its natural habitat. The current resistance of Araucaria Forest to invasion by exotic timber trees is good news for the conservation of Araucaria Forest remnants and for its keystone species. The understanding of the mechanisms providing such resistance against invasion points towards management tools for minimizing future threats." }, { "instance_id": "R57501xR57227", "comparison_id": "R57501", "paper_id": "R57227", "text": "Native Predators Do Not Influence Invasion Success of Pacific Lionfish on Caribbean Reefs Biotic resistance, the process by which new colonists are excluded from a community by predation from and/or competition with resident species, can prevent or limit species invasions. We examined whether biotic resistance by native predators on Caribbean coral reefs has influenced the invasion success of red lionfishes (Pterois volitans and Pterois miles), piscivores from the Indo-Pacific. 
Specifically, we surveyed the abundance (density and biomass) of lionfish and native predatory fishes that could interact with lionfish (either through predation or competition) on 71 reefs in three biogeographic regions of the Caribbean. We recorded protection status of the reefs, and abiotic variables including depth, habitat type, and wind/wave exposure at each site. We found no relationship between the density or biomass of lionfish and that of native predators. However, lionfish densities were significantly lower on windward sites, potentially because of habitat preferences, and in marine protected areas, most likely because of ongoing removal efforts by reserve managers. Our results suggest that interactions with native predators do not influence the colonization or post-establishment population density of invasive lionfish on Caribbean reefs." }, { "instance_id": "R57501xR57250", "comparison_id": "R57501", "paper_id": "R57250", "text": "The roles of biotic resistance and nitrogen deposition in regulating non-native understory plant diversity Understanding the mechanisms that allow for plant invasions is important for both ecologists and land managers, due to both the environmental and economic impacts of native biodiversity losses. We conducted an observational field study in 2008 to examine the relationship between native and non-native forest understory plant species and to investigate the influence of soil nitrogen (N) on plant community richness and diversity. In 2009, we conducted a companion fertilization experiment to investigate how various forms of N deposition (inorganic and organic) influenced native and non-native species richness and diversity. We found that native species richness and diversity were negatively correlated with 1) non-native species richness and diversity and 2) higher total soil inorganic N. In the deposition experiment, adding organic N fertilizers decreased native richness and diversity compared to inorganic N fertilizers. Together, these results indicate that increasing soil N can be detrimental to native species; however, native species richness and diversity may counteract the N-stimulation of non-native species. Furthermore, the negative effects of organic N deposition on native plants may be just as strong, if not stronger, than the effects of inorganic N deposition." }, { "instance_id": "R57501xR57216", "comparison_id": "R57501", "paper_id": "R57216", "text": "Abiotic constraints eclipse biotic resistance in determining invasibility along experimental vernal pool gradients Effective management of invasive species requires that we understand the mechanisms determining community invasibility. Successful invaders must tolerate abiotic conditions and overcome resistance from native species in invaded habitats. Biotic resistance to invasions may reflect the diversity, abundance, or identity of species in a community. Few studies, however, have examined the relative importance of abiotic and biotic factors determining community invasibility. In a greenhouse experiment, we simulated the abiotic and biotic gradients typically found in vernal pools to better understand their impacts on invasibility. Specifically, we invaded plant communities differing in richness, identity, and abundance of native plants (the \"plant neighborhood\") and depth of inundation to measure their effects on growth, reproduction, and survival of five exotic plant species. 
Inundation reduced growth, reproduction, and survival of the five exotic species more than did plant neighborhood. Inundation reduced survival of three species and growth and reproduction of all five species. Neighboring plants reduced growth and reproduction of three species but generally did not affect survival. Brassica rapa, Centaurea solstitialis, and Vicia villosa all suffered high mortality due to inundation but were generally unaffected by neighboring plants. In contrast, Hordeum marinum and Lolium multiflorum, whose survival was unaffected by inundation, were more impacted by neighboring plants. However, the four measures describing plant neighborhood differed in their effects. Neighbor abundance impacted growth and reproduction more than did neighbor richness or identity, with growth and reproduction generally decreasing with increasing density and mass of neighbors. Collectively, these results suggest that abiotic constraints play the dominant role in determining invasibility along vernal pool and similar gradients. By reducing survival, abiotic constraints allow only species with the appropriate morphological and physiological traits to invade. In contrast, biotic resistance reduces invasibility only in more benign environments and is best predicted by the abundance, rather than diversity, of neighbors. These results suggest that stressful environments are not likely to be invaded by most exotic species. However, species, such as H. marinum, that are able to invade these habitats require careful management, especially since these environments often harbor rare species and communities." }, { "instance_id": "R57501xR57163", "comparison_id": "R57501", "paper_id": "R57163", "text": "Invasion in space and time: non-native species richness and relative abundance respond to interannual variation in productivity and diversity Ecologists have long sought to understand the relationships among species diversity, community productivity and invasion by non-native species. Here, four long-term observational datasets were analyzed using repeated measures statistics to determine how plant species richness and community resource capture (i.e. productivity) influenced invasion. Multiple factors influenced the results, including the metric used to quantify invasion, interannual variation and spatial scale. Native richness was positively correlated with non-native richness, but was usually negatively correlated with non-native abundance, and these patterns were stronger at the larger spatial scale. Logistic regressions indicated that the probability of invasion was reduced both within and following years with high productivity, except at the desert grassland site where high productivity was associated with increased invasion. Our analysis suggests that while non-natives were most likely to establish in species rich communities, their success was diminished by high resource capture by the resident community." }, { "instance_id": "R57501xR57305", "comparison_id": "R57501", "paper_id": "R57305", "text": "Patterns of invasion of an urban remnant of a species-rich grassland in southeastern Australia by non-native plant species . The invasion by non-native plant species of an urban remnant of a species-rich Themeda triandra grassland in southeastern Australia was quantified and related to abiotic influences. Richness and cover of non-native species were highest at the edges of the remnant and declined to relatively uniform levels within the remnant. 
Native species richness and cover were lowest at the edge adjoining a roadside but then showed little relation to distance from edge. Roadside edge quadrats were floristically distinct from most other quadrats when ordinated by Detrended Correspondence Analysis. Soil phosphorus was significantly higher at the roadside edge but did not vary within the remnant itself. All other abiotic factors measured (NH4, NO3, S, pH and % organic carbon) showed little variation across the remnant. Non-native species richness and cover were strongly correlated with soil phosphorus levels. Native species were negatively correlated with soil phosphorus levels. Canonical Correspondence Analysis identified the perennial non-native grasses of high biomass as species most dependent on high soil nutrient levels. Such species may be resource-limited in undisturbed soils. Three classes of non-native plants have invaded this species-rich grassland: (1) generalist species (> 50 % frequency), mostly therophytes with non-specialized habitat or germination requirements; (2) resource-limited species comprising perennial species of high biomass that are dependent on nutrient increases and/or soil disturbances before they can invade the community and; (3) species of intermediate frequency (1\u201330 %), of low to high biomass potential, that appear to have non-specialized habitat requirements but are currently limited by seed dispersal, seedling establishment or the current site management. Native species richness and cover are most negatively affected by increases in non-native cover. Declines are largely evident once the non-native cover exceeds 40 %. Widespread, generalist non-native species are numerous in intact sites and will have to be considered a permanent part of the flora of remnant grasslands. Management must aim to minimize increases in cover of any non-native species or the disturbances that favour the establishment of competitive non-native grasses if the native grassland flora is to be conserved in small, fragmented remnants." }, { "instance_id": "R57501xR57296", "comparison_id": "R57501", "paper_id": "R57296", "text": "Species evenness and invasion resistance of experimental grassland communities Concern for biodiversity loss coupled with the accelerated rate of biological invasions has provoked much interest in assessing how native plant species diversity affects invasibility. Although experimental studies extensively document the effects of species richness on invader performance, the role of species evenness in such studies is rarely examined. Species evenness warrants more attention because the relative abundances of species can account for substantially more of the variance in plant community diversity and tend to change more rapidly and more frequently in response to disturbances than the absolute numbers of species. In this study, we experimentally manipulated species evenness within native prairie grassland mesocosms. We assessed how evenness affected primary productivity, light availability and the resistance of native communities to invasion. The primary productivity of native communities increased significantly with species evenness, and this increase in productivity was accompanied by significant decreases in light availability. However, evenness had no effect on native community resistance to invasion by three common exotic invasive species. 
In this study, niche complementarity provides a potential mechanism for the effects of evenness on productivity and light availability, but these effects apparently were not strong enough to alter the invasibility of the experimental communities. Our results suggest that species evenness enhances community productivity but provides no benefit to invasion resistance in otherwise functionally diverse communities." }, { "instance_id": "R57501xR57245", "comparison_id": "R57501", "paper_id": "R57245", "text": "Filling in the gaps: modelling native species richness and invasions using spatially incomplete data Detailed knowledge of patterns of native species richness, an important component of biodiversity, and non-native species invasions is often lacking even though this knowledge is essential to conservation efforts. However, we cannot afford to wait for complete information on the distribution and abundance of native and harmful invasive species. Using information from counties well surveyed for plants across the USA, we developed models to fill data gaps in poorly surveyed areas by estimating the density (number of species km-2) of native and non-native plant species. Here, we show that native plant species density is non-random, predictable, and is the best predictor of non-native plant species density. We found that eastern agricultural sites and coastal areas are among the most invaded in terms of non-native plant species densities, and that the central USA appears to have the greatest ratio of non-native to native species. These large-scale models could also be applied to smaller spatial scales or other taxa to set priorities for conservation and invasion mitigation, prevention, and control efforts." }, { "instance_id": "R57501xR57311", "comparison_id": "R57501", "paper_id": "R57311", "text": "Native plant diversity increases herbivory to non-natives There is often an inverse relationship between the diversity of a plant community and the invasibility of that community by non-native plants. Native herbivores that colonize novel plants may contribute to diversity\u2013invasibility relationships by limiting the relative success of non-native plants. Here, we show that, in large collections of non-native oak trees at sites across the USA, non-native oaks introduced to regions with greater oak species richness accumulated greater leaf damage than in regions with low oak richness. Underlying this trend was the ability of herbivores to exploit non-native plants that were close relatives to their native host. In diverse oak communities, non-native trees were on average more closely related to native trees and received greater leaf damage than those in depauperate oak communities. Because insect herbivores colonize non-native plants that are similar to their native hosts, in communities with greater native plant diversity, non-natives experience greater herbivory." }, { "instance_id": "R57501xR57107", "comparison_id": "R57501", "paper_id": "R57107", "text": "Loss of native herbaceous species due to woody plant encroachment facilitates the establishment of an invasive grass Although negative relationships between diversity (frequently measured as species richness) and invasibility at neighborhood or community scales have often been reported, realistic natural diversity gradients have rarely been studied at this scale. We recreated a naturally occurring gradient in species richness to test the effects of species richness on community invasibility. 
In central Texas savannas, as the proportion of woody plants increases (a process known as woody plant encroachment), herbaceous habitat is both lost and fragmented, and native herbaceous species richness declines. We examined the effects of these species losses on invasibility in situ by removing species that occur less frequently in herbaceous patches as woody plant encroachment advances. This realistic species removal was accompanied by a parallel and equivalent removal of biomass with no changes in species richness. Over two springs, the nonnative bunchgrass Bothriochloa ischaemum germinated significantly more often in the biomass-removal treatment than in unmanipulated control plots, suggesting an effect of native plant density independent of diversity. Additionally, significantly more germination occurred in the species-removal treatment than in the biomass-removal treatment. Changes in species richness had a stronger effect on B. ischaemum germination than changes in plant density, demonstrating that niche-related processes contributed more to biotic resistance in this system than did species-neutral competitive interactions. Similar treatment effects were found on transplant growth. Thus we show that woody plant encroachment indirectly facilitates the establishment of an invasive grass by reducing native diversity. Although we found a negative relationship between species richness and invasibility at the scale of plots with similar composition and environmental conditions, we found a positive relationship between species richness and invasibility at larger scales. This apparent paradox is consistent with reports from other systems and may be the result of variation in environmental factors at larger scales similarly influencing both invasibility and richness. The habitat loss and fragmentation associated with woody plant encroachment are two of many processes that commonly threaten biodiversity, including climate change. Many of these processes are similarly likely to increase invasibility via their negative effects on native diversity." }, { "instance_id": "R57501xR57153", "comparison_id": "R57501", "paper_id": "R57153", "text": "Spread of introduced Caulerpa species in macroalgal habitats A short-term field experiment was designed to identify layers of Mediterranean macroalgal assemblage conducive to successful spread of two introduced Caulerpa species (Bryopsidales, Chlorophyta). By manipulation of species presence, three experimental assemblages were obtained: (1) encrusting algae, having removed the turf and erect species; (2) encrusting and turfing algae, having removed erect species; (3) encrusting, turfing and erect algae, that is, unmanipulated assemblages, which served as a control. Fragments of the two introduced species Caulerpa taxifolia (Vahl) C. Agardh and Caulerpa racemosa (Forsskal) J. Agardh were transplanted in each of the three assemblages. Width of the colony, blade density and percentage of the substratum covered by the two species were measured. The susceptibility of the indigenous community to the spread of Caulerpa species was related to type of assemblage. Blade density and amount of substratum covered by the two Caulerpa species were different between species and generally greater for C. taxifolia than for C. racemosa. Overall, the spread of these species was strongly dependent on the type but not directly on the complexity of the assemblage. 
Turf was more favourable than encrusting species alone, while the least advantageous habitat was where the macroalgal assemblage is composed of encrusting, turf and erect species. In other words, increased number of species in the assemblage reduces invasion of the Caulerpa species but the type of algae in the assemblage is likely to be more important than number of species. The presence of turf promotes the spread of Caulerpa species." }, { "instance_id": "R57501xR57167", "comparison_id": "R57501", "paper_id": "R57167", "text": "Elton's hypothesis revisited: an experimental test using cogongrass In the 1950s Charles Elton hypothesized that more diverse communities should be less susceptible to invasion by exotic species (biodiversity\u2013invasibility hypothesis). The biodiversity\u2013invasibility hypothesis postulates that species-rich communities are less vulnerable to invasion because vacant niches are less common and the intensity of interspecific competition is more severe. Field studies were conducted at two sites, a logged site and an unlogged site in Santa Rosa County, Florida, U.S.A, to test Elton\u2019s hypothesis using cogongrass (Imperata cylindrica), a non-indigenous grass invading large areas of the Southeastern United States. The logged site was under 17-year-old loblolly pine prior to clear cutting. The unlogged site, a longleaf pine forest, was at the Blackwater River State Forest. Both the logged site and unlogged site showed no significant relationship between the rate of cogongrass spread and native plant species richness, functional richness, and cover of the invaded community. Increased species or functional richness may increase the use of resources; however, the extensive rhizome/root network possessed by cogongrass and its ability to thrive under shade may allow for its persistence in a diverse community. The results from both the logged and unlogged sites do not support the general hypothesis of Elton that invasion resistance and compositional stability increase with diversity. Biodiversity does not appear to be an important factor for cogongrass invasion in the southern United States. Extrinsic factors in this study prevent the ability to draw a defined causal relationship between native plant diversity and invasibility. Underlying reasons for why no relationship was observed may be simply due to the tremendous competitive ability of cogongrass or the narrow range of species richness, functional richness and cover observed in our study." }, { "instance_id": "R57501xR57141", "comparison_id": "R57501", "paper_id": "R57141", "text": "Evidence for spider community resilience to invasion by non-native spiders The negative impacts of non-native species are well documented; however, the ecological outcomes of invasions can vary widely. In order to determine the resilience of local communities to invasion by non-native spiders, we compared spider assemblages from areas with varying numbers of non-native spiders in California coastal sage scrub. Spiders were collected from pitfall traps over 2 years. Productive lowland coastal sites contained both the highest proportion of non-natives and the greatest number of spiders overall. We detected no negative associations between native and non-native spiders and therefore suggest that non-native spiders are not presently impacting local ground-dwelling spiders. Strong positive correlations between abundances of some natives and non-natives may be the result of similar habitat preferences or of facilitation between species. 
We propose that the effects of non-native species depend on resource availability and site productivity, which, in turn, affect community resilience. Our results support the contention that both invasibility and resilience are higher in diverse, highly linked communities with high resource availability rather than the classical view that species poor communities are more invasible." }, { "instance_id": "R57501xR57126", "comparison_id": "R57501", "paper_id": "R57126", "text": "Patterns of phylogenetic diversity are linked to invasion impacts, not invasion resistance, in a native grassland Question: There are often more invasive species in communities that are less phylogenetically diverse or distantly related to the invaders. This is thought to indicate reduced biotic resistance, but recent theory predicts that phylogenetic relationships have more influence on competitive outcomes when interactions are more pair-wise than diffuse. Therefore, phylogenetic relationships should change when the invader becomes dominant and interactions are more pairwise, rather than alter biotic resistance, which is the outcome of diffuse interactions with the resident community; however both processes can produce similar phylogenetic structures within communities. We ask whether phylogenetic structure is more associated with biotic resistance or invasion impacts following Bromus inermis (brome) invasion and identify the mechanisms behind changes to phylogenetic structure. Location: Native grassland in Alberta, Canada. Methods: We tested whether phylogenetic structure affected biotic resistance by transplanting brome seedlings into intact vegetation and quantified invasion impacts on community structure by surveying across multiple invasion edges. Additionally, we tested whether relatedness, rarity, average patch size, evolutionary distinctiveness or environmental tolerances determined species\u2019 response to brome invasion. Results: Neither phylogenetic diversity, nor relatedness to brome, influenced the strength of biotic resistance; resource availability was the strongest determinant of resistance. However, communities did become less diverse and phylogenetically over-dispersed following brome invasion, but not because of the loss of related species. Brome invasion was associated with declines in common species from common lineages and increases in shade-tolerant species and rare species from species-poor lineages. Conclusions: Ourresults suggest that invasion is morelikelytoaffectthe phylogenetic structure of the community than the phylogenetic structure of the community will affect invasion. However, they also suggest that the degree of relatedness between the invader and the resident community is unlikely todrive these effects on phylogenetic community structure. Consistent with previous studies, invasion effects were stronger for common species as they have reduced shade tolerance and cannot persist in a subordinate role. This suggests that invasion effects on phylogenetic community structure will depend on which species exhibit traits that enable persistence with the invader and how these traits are distributed across the phylogeny." 
}, { "instance_id": "R57501xR57133", "comparison_id": "R57501", "paper_id": "R57133", "text": "Functional group diversity, resource preemption and the genesis of invasion resistance in a community of marine algae Although many studies have investigated how community characteristics such as diversity and disturbance relate to invasibility, the mechanisms underlying biotic resistance to introduced species are not well understood. I manipulated the functional group composition of native algal communities and invaded them with the introduced, Japanese seaweed Sargassum muticum to understand how individual functional groups contributed to overall invasion resistance. The results suggested that space preemption by crustose and turfy algae inhibited S. muticum recruitment and that light preemption by canopy and understory algae reduced S. muticum survivorship. However, other mechanisms I did not investigate could have contributed to these two results. In this marine community the sequential preemption of key resources by different functional groups in different stages of the invasion generated resistance to invasion by S. muticum. Rather than acting collectively on a single resource the functional groups in this system were important for preempting either space or light, but not both resources. My experiment has important implications for diversity-invasibility studies, which typically look for an effect of diversity on individual resources. Overall invasion resistance will be due to the additive effects of individual functional groups (or species) summed over an invader's life cycle. Therefore, the cumulative effect of multiple functional groups (or species) acting on multiple resources is an alternative mechanism that could generate negative relationships between diversity and invasibility in a variety of biological systems." }, { "instance_id": "R57501xR57281", "comparison_id": "R57501", "paper_id": "R57281", "text": "Evenness-invasibility relationships differ between two extinction scenarios in tallgrass prairie Experiments that have manipulated species richness with random draws of species from a larger species pool have usually found that invasibility declines as richness increases. These results have usually been attributed to niche complementarity, and interpreted to mean that communities will become less resistant to invaders as species go locally extinct. However, it is not clear how relevant these studies are to real-world situations where species extinctions are non-random, and where species diversity declines due to increased rarity (i.e. reduced evenness) without having local extinctions. We experimentally varied species richness from 1 to 4, and evenness from 0.44 to 0.97 with two different extinction scenarios in two-year old plantings using seedling transplants in western Iowa. In both scenarios, evenness was varied by changing the level of dominance of the tall grass Andropogon gerardii. In one scenario, which simulated a loss of short species from Andropogon communities, we directly tested for complementarity in light capture due to having species in mixtures with dissimilar heights. We contrasted this scenario with a second set of mixtures that contained all tall species. In both cases, we controlled for factors such as rooting depth and planting density. Mean invader biomass was higher in monocultures (5.4 g m 2 week 1 ) than in 4-species mixtures (3.2 g m 2 week 1 ). Reduced evenness did not affect invader biomass in mixtures with dissimilar heights. 
However, the amount of invader biomass decreased by 60% as evenness increased across mixtures with all tall species. This difference was most pronounced early in the growing season when high evenness plots had greater light capture than low evenness plots. These results suggest that the effects of reduced species diversity on invasibility are 1) not related to complementarity through height dissimilarity, and 2) variable depending on the phenological traits of the species that are becoming rare or going locally extinct." }, { "instance_id": "R57501xR57301", "comparison_id": "R57501", "paper_id": "R57301", "text": "Beyond biodiversity: individualistic controls of invasion in a self-assembled community Recent experimental and simulation results, and competition-based ecological theory, predict a simple relationship between species richness and the invasibility of communities at small spatial scales – likelihood of invasion decreases with increasing richness. Here we show data from 42 continuous years of sampling old field succession that reveal quite different dynamics of plant invasion. Contrary to experimental studies, when richness was important in explaining invasion probability, it was typically positively associated with species invasion. Invasion of several species had a unimodal response to resident species richness, which appeared to be a mixture of compositional influences and a richness effect. Interestingly, invasions by native and exotic species did not fundamentally differ. Control of species invasion in this system is individualistic, caused by a variety of community-level mechanisms rather than a single prevailing richness effect." }, { "instance_id": "R57501xR57309", "comparison_id": "R57501", "paper_id": "R57309", "text": "Weak vs. strong invaders of natural plant communities: Assessing invasibility and impact In response to the profound threat of exotic species to natural systems, much attention has been focused on the biotic resistance hypothesis, which predicts that diverse communities should better resist invasions. While studies of natural communities generally refute this hypothesis, reporting positive relationships between native species diversity and invasibility, some local-scale studies have instead obtained negative relationships. Most treatments of the topic have failed to recognize that all exotic invaders do not behave alike: while “weak” invaders become minor components of communities, “strong” invaders become community dominants at the expense of native species. At the same time, the specific impacts of strong invaders on communities are poorly documented yet critical to understanding implications of diversity loss. With these shortfalls in mind, we examined local-scale relationships between native and exotic plant taxa in bunchgrass communities of western Montana, USA. We found that measures of..." }, { "instance_id": "R57501xR57338", "comparison_id": "R57501", "paper_id": "R57338", "text": "Shifts in grassland invasibility: effects of soil resources, disturbance, composition, and invader size There is an emerging recognition that invasibility is not an intrinsic community trait, but is a condition that fluctuates from interactions between environmental forces and residential characters. Elucidating the spatiotemporal complexities of invasion requires inclusion of multiple, ecologically variable factors within communities of differing structure. 
Water and nutrient amendments, disturbance, and local composition affect grassland invasibility but no study has simultaneously integrated these, despite evidence that they frequently interact. Using a split-plot factorial design, we tested the effects of these factors on the invasibility of C3 pasture communities by smooth pigweed Amaranthus hybridus L., a problematic C4 forb. We sowed seeds and transplanted 3-week old seedlings of A. hybridus into plots containing monocultures and mixtures of varying composition, subjected plots to water, soil disturbance, and synthetic bovine urine (SBU) treatments, and measured A. hybridus emergence, recruitment, and growth rate. Following SBU addition, transplanted seedling growth increased in all plots but differed among legume and nonlegume monocultures and mixtures of these plant types. However, SBU decreased the number and recruitment rate of emerged seedlings because high residential growth reduced light availability. Nutrient pulses can therefore have strong but opposing effects on invasibility, depending on when they coincide with particular life history stages of an invader. Indeed, in SBU-treated plots, small differences in height of transplanted seedlings early on produced large differences in their final biomass. All facilitative effects of small-scale disturbance on invasion success diminished when productivity-promoting factors were present, suggesting that disturbance patch size is important. Precipitation-induced invasion resistance of C3 pastures by a C4 invader was partly supported. In grazed grasslands, these biotic and environmental factors vary across scales and interact in complex ways to affect invasibility, thus a dynamic patch mosaic of differential invasion resistance likely occurs in single fields. We propose that disturbance patch size, grazing intensity, soil resource availability, and resident composition are inextricably linked to grassland invasions and comment on the utility of community attributes as reliable predictors of invasibility. Lastly, we suggest temporal as well as spatial coincidences of multiple invasion facilitators dictate the window of opportunity for invasion." }, { "instance_id": "R57501xR57290", "comparison_id": "R57501", "paper_id": "R57290", "text": "Effects of native species diversity and resource additions on invader impact Theory and empirical work have demonstrated that diverse communities can inhibit invasion. Yet, it is unclear how diversity influences invader impact, how impact varies among exotics, and what the relative importance of diversity is versus extrinsic factors that themselves can influence invasion. To address these issues, we established plant assemblages that varied in native species and functional richness and crossed this gradient in diversity with resource (water) addition. Identical assemblages were either uninvaded or invaded with one of three exotic forbs: spotted knapweed (Centaurea maculosa), dalmatian toadflax (Linaria dalmatica), or sulfur cinquefoil (Potentilla recta). To determine impacts, we measured the effects of exotics on native biomass and, for spotted knapweed, on soil moisture and nitrogen levels. Assemblages with high species richness were less invaded and less impacted than less diverse assemblages. Impact scaled with exotic biomass; spotted knapweed had the largest impact on native biomass compared with the other exotics. Although invasion depressed native biomass, the net result was to increase total community yield. 
Water addition increased invasibility (for knapweed only) but had no effect on invader impact. Together, these results suggest that diversity inhibits invasion and reduces impact more than resource additions facilitate invasion or impact." }, { "instance_id": "R58002xR57943", "comparison_id": "R58002", "paper_id": "R57943", "text": "Comparison of invertebrate herbivores on native and non-native Senecio species: Implications for the enemy release hypothesis The enemy release hypothesis posits that non-native plant species may gain a competitive advantage over their native counterparts because they are liberated from co-evolved natural enemies from their native area. The phylogenetic relationship between a non-native plant and the native community may be important for understanding the success of some non-native plants, because host switching by insect herbivores is more likely to occur between closely related species. We tested the enemy release hypothesis by comparing leaf damage and herbivorous insect assemblages on the invasive species Senecio madagascariensis Poir. to that on nine congeneric species, of which five are native to the study area, and four are non-native but considered non-invasive. Non-native species had less leaf damage than natives overall, but we found no significant differences in the abundance, richness and Shannon diversity of herbivores between native and non-native Senecio L. species. The herbivore assemblage and percentage abundance of herbivore guilds differed among all Senecio species, but patterns were not related to whether the species was native or not. Species-level differences indicate that S. madagascariensis may have a greater proportion of generalist insect damage (represented by phytophagous leaf chewers) than the other Senecio species. Within a plant genus, escape from natural enemies may not be a sufficient explanation for why some non-native species become more invasive than others." }, { "instance_id": "R58002xR57735", "comparison_id": "R58002", "paper_id": "R57735", "text": "Testing the enemy release hypothesis: a comparison of foliar insect herbivory of the exotic Norway maple (Acer platanoides L.) and the native sugar maple (A-saccharum L.) Norway maple (Acer platanoides) is a Eurasian introduced tree species which has invaded the North American range of its native congener, sugar maple (A. saccharum). One hypothesis used to explain the success of an invasive species is the enemy release hypothesis (ERH), which states that invasive species are often particularly successful in their new range because they lack the enemies of their native range. In this study, we hypothesized that Norway maple would have less insect damage than sugar maple due to such enemy release. Autumn 2005 and summer 2006 leaves of Norway and sugar maple were collected from six sites in New Jersey and Pennsylvania to compare percent leaf area loss, gall damage, fungal damage, and specific leaf area (cm2/g). Although both species had low overall mean levels of leaf damage (0.4\u20132.5%), in both years/seasons Norway maple had significantly less leaf damage than sugar maple. Insects were also collected to compare insect assemblies present on each tree species. The numbers of insect taxa and individuals found on each species were nearly equivalent. Overall, the results of this study are consistent with the enemy release hypothesis for Norway maple. In addition, sugar maples when surrounded by Norway maples tended to show reduced herbivory. 
This suggests that the spread of Norway maple in North America, by reducing amounts of insect herbivory, may have further ecosystem-wide impacts." }, { "instance_id": "R58002xR57590", "comparison_id": "R58002", "paper_id": "R57590", "text": "The invertebrate fauna on broom, Cytisus scoparius, in two native and two exotic habitats This study quantifies the invertebrate fauna found on broom, Cytisus scoparius (L.) Link, in two countries where it grows as a native plant (France and England) and two countries where it grows as an alien plant (New Zealand and Australia). The data are used to test three hypotheses concerning the predicted differences in invertebrate community structure in native versus exotic habitats: (1) Are generalist phytophages dominant in exotic habitats and specialist phytophages dominant in native habitats? (2) Are there empty phytophage niches in exotic habitats? (3) As a plant species accumulates phytophages, do these in turn accumulate natural enemies? The broom fauna was sampled at five sites in each country by beating five broom bushes per site. The sampling efficiency of beating was quantified at one field site and it was shown to collect 87 % of invertebrate abundance, 95 % of invertebrate biomass and 100 % of phytophagous species found on the branches. Generalist phytophages were dominant on broom in exotic habitats and specialists dominant on broom in the native habitats. Thus, the two countries where broom grows as a native plant had higher numbers of total phytophage species and a higher abundance of specialist phytophages per bush. There was no significant difference in the average abundance of generalist phytophage species found per bush in native and alien habitats. Phytophages were assigned to seven feeding niches: suckers, root feeders, external chewers, flower feeders, seed feeders, miners and pollen feeders. Empty niches were found in the exotic habitats; species exploiting structurally specific parts of the host plant, such as flowers and seeds, were absent in the countries where broom grows as an alien plant. The pattern of niche occupancy was similar between native and exotic habitats when just the generalist phytophages were considered. As phytophage abundance and biomass increased, there were concomitant increases in natural enemy abundance and biomass. Thus, it appears that as plants accumulate phytophages, the phytophages in turn accumulate natural enemies and a food web develops around the plant. Moreover, in the native countries, the history of association between the natural enemies and their prey has been sufficient for specialist predators and parasitoids, feeding on the specialist phytophages, to have evolved." }, { "instance_id": "R58002xR57596", "comparison_id": "R58002", "paper_id": "R57596", "text": "Post-dispersal losses to seed predators: an experimental comparison of native and exotic old field plants Invasions by exotic plants may be more likely if exotics have low rates of attack by natural enemies, including post-dispersal seed predators (granivores). We investigated this idea with a field experiment conducted near Newmarket, Ontario, in which we experimentally excluded vertebrate and terrestrial insect seed predators from seeds of 43 native and exotic old-field plants. Protection from vertebrates significantly increased recovery of seeds; vertebrate exclusion produced higher recovery than controls for 30 of the experimental species, increasing overall seed recovery from 38.2 to 45.6%. 
Losses to vertebrates varied among species, significantly increasing with seed mass. In contrast, insect exclusion did not significantly improve seed recovery. There was no evidence that aliens benefitted from a reduced rate of post-dispersal seed predation. The impacts of seed predators did not differ significantly between natives and exotics, which instead showed very similar responses to predator exclusion treatments. These results indicate that while vertebrate granivores had important impacts, especially on large-seeded species, exotics did not generally benefit from reduced rates of seed predation. Instead, differences between natives and exotics were small compared with interspecific variation within these groups." }, { "instance_id": "R58002xR57968", "comparison_id": "R58002", "paper_id": "R57968", "text": "Pollinators and predators at home and away: do they determine invasion success for Australian Acacia in New Zealand? Aim Interactions with pollinators and pre-dispersal seed predators are important determinants of reproductive output and could influence the success of plant species introduced to areas outside their native range. We identified the role of these interactions in determining reproductive output and invasion outcomes for species of Australian Acacia introduced to New Zealand. Location Australia and New Zealand. Methods We studied three species of Australian Acacia with different invasion success in New Zealand. 
In both Australia and New Zealand, we measured pollination success as the number of pods per inflorescence and the proportion of aborted seeds per pod, determined losses to pre-dispersal seed predators, and measured overall seed output. For each species, we compared performance in New Zealand with that in Australia, then examined whether there was any variation among species in their relative performance in each country. Results The number of pods per inflorescence and proportion of seeds aborted were similar in each country and among species. There was little difference in pre-dispersal seed predation rate between Australia and New Zealand for Acacia dealbata, an invasive species, and Acacia baileyana, a species widely naturalized in New Zealand. However, pre-dispersal seed predation rate was lower in New Zealand for Acacia pravissima, currently considered to be a casual species there. Both the invasive A. dealbata and the casual A. pravissima produced more seeds per tree in New Zealand than Australia. Main conclusions Differences in reproductive success between the native and introduced range could not explain the differences in invasion success among the three Acacia species. Although per capita reproductive output was higher in New Zealand for two species, neither mutualistic interactions with pollinators nor antagonistic interactions with pre-dispersal seed predators explained those differences. The high seed output of A. pravissima suggests it has the potential to become invasive. These findings highlight the value of broad comparative studies in elucidating the drivers of invasion." }, { "instance_id": "R58002xR57962", "comparison_id": "R58002", "paper_id": "R57962", "text": "Parasites of non-native freshwater fishes introduced into England and Wales suggest enemy release and parasite acquisition Abstract When non-native species are introduced into a new range, their parasites can also be introduced, with these potentially spilling-over into native hosts. However, in general, evidence suggests that a high proportion of their native parasites are lost during introduction and infections by some new parasites from the native range might occur, potentially resulting in parasite spill-back to native species. These processes were investigated here using parasite surveys and literature review on seven non-native freshwater fishes introduced into England and Wales. Comparison of the mean numbers of parasite species and genera per population for each fish species in England and Wales with their native ranges revealed <9 % of the native parasite fauna were present in their populations in England and Wales. There was no evidence suggesting these introduced parasites had spilled over into sympatric native fishes. The non-native fishes did acquire parasites following their introduction, providing potential for parasite spill-back to sympatric fishes, and resulted in non-significant differences in overall mean numbers of parasites per population between the two ranges. Through this acquisition, the non-native fishes also had mean numbers of parasite species and genera per population that were not significantly different to sympatric native fishes. Thus, the non-native fishes in England and Wales showed evidence of enemy release, acquired new parasites following introduction providing potential for spill-back, but showed no evidence of parasite spill-over." 
}, { "instance_id": "R58002xR57880", "comparison_id": "R58002", "paper_id": "R57880", "text": "Do alien plants escape from natural enemies of congeneric residents? Yes but not from all As predicted by the enemy release hypothesis, plants are supposedly less attacked by herbivores in their introduced range than in their native range. However, the nature of the natural enemies, in particular their degree of specificity may also affect the level of enemy escape. It is therefore expected that ectophagous invertebrate species, being generally considered as more generalists than endophagous species, are more prompt to colonise alien plants. In Swiss, Siberian and Russian Far East arboreta, we tested whether alien woody plants are less attacked by native herbivorous insects than native congeneric woody plant species. We also tested the hypothesis that leaf miners and gall makers show stronger preference for native woody plants than external leaf chewers. In all investigated regions, leaf miners and gall makers were more abundant and showed higher species richness on native woody plants than on congeneric alien plants. In contrast, external leaf chewers did not cause more damage to native plants than to alien plants, possibly because leaf chewers are, in general, less species specific than leaf miners and gall makers. These results, obtained over a very large number of plant-enemy systems, generally support the hypothesis that alien plants partly escape from phytophagous invertebrates but also show that different feeding guilds may react differently to the introduction of alien plants." }, { "instance_id": "R58002xR57959", "comparison_id": "R58002", "paper_id": "R57959", "text": "No release for the wicked: enemy release is dynamic and not associated with invasiveness The enemy release hypothesis predicts that invasive species will receive less damage from enemies, compared to co-occurring native and noninvasive exotic species in their introduced range. However, release operating early in invasion could be lost over time and with increased range size as introduced species acquire new enemies. We used three years of data, from 61 plant species planted into common gardens, to determine whether (1) invasive, noninvasive exotic, and native species experience differential damage from insect herbivores. and mammalian browsers, and (2) enemy release is lost with increased residence time and geographic spread in the introduced range. We find no evidence suggesting enemy release is a general mechanism contributing to invasiveness in this region. Invasive species received the most insect herbivory, and damage increased with longer residence times and larger range sizes at three spatial scales. Our results show that invasive and exotic species fail to escape enemies, particularly over longer temporal and larger spatial scales." }, { "instance_id": "R58002xR57598", "comparison_id": "R58002", "paper_id": "R57598", "text": "Lack of pre-dispersal seed predators in introduced Asteraceae in New Zealand The idea that naturalised invading plants have fewer phytophagous insects associated with them in their new environment relative to their native range is often assumed, but quantitative data are few and mostly refer to pests on crop species. In this study, the incidence of seed-eating insect larvae in flowerheads of naturalised Asteraceae in New Zealand is compared with that in Britain where the species are native. 
Similar surveys were carried out in both countries by sampling 200 flowerheads of three populations of the same thirteen species. In the New Zealand populations only one seed-eating insect larva was found in 7800 flowerheads (0.013% infected flowerheads, all species combined) in contrast with the British populations which had 487 (6.24%) flowerheads infested. Possible reasons for the low colonization level of the introduced Asteraceae by native insects in New Zealand are 1) the relatively recent introduction of the plants (100-200 years), 2) their phylogenetic distance from the native flora, and 3) the specialised nature of the bud-infesting habit of the insects." }, { "instance_id": "R58002xR57856", "comparison_id": "R58002", "paper_id": "R57856", "text": "Foliar herbivory and its effects on plant growth in native and exotic species in the Patagonian steppe Studies of herbivory and its consequences on the growth of native and exotic plants could help elucidate some processes involved in plant invasions. Introduced species are likely to experience reduced herbivory in their new range due to the absence of specialist enemies and, thus, may have higher benefits if they reduce the investment in resistance and increase their compensatory capacity. In order to evaluate the role of herbivory in disturbed areas within the Patagonian steppe, we quantified and compared the leaf levels of herbivory of four native and five exotic species and recorded the associated insect fauna. We also performed greenhouse experiments in which we simulated herbivory in order to evaluate the compensatory capacity of native and exotic species under different herbivory levels that resembled naturally occurring damage. Natural herbivory levels in the field were similar between the studied exotic and native plants. Field observations confirmed that they both shared some herbivore insects, most of which are generalists. In the greenhouse experiments, both exotic and native plants fully compensated for herbivory. Our results suggest that the studied exotic plants are not released from herbivory in the Patagonian steppe but are able to fully compensate for it. The capacity to recover from herbivory coupled with other potential adaptations, such as a better performance under disturbance and greater competitive ability than that of the native species, may represent some of the mechanisms responsible for the success of plant invasion in the Patagonian steppe." }, { "instance_id": "R58002xR57912", "comparison_id": "R58002", "paper_id": "R57912", "text": "Parasites and genetic diversity in an invasive bumblebee Biological invasions are facilitated by the global transportation of species and climate change. Given that invasions may cause ecological and economic damage and pose a major threat to biodiversity, understanding the mechanisms behind invasion success is essential. Both the release of non-native populations from natural enemies, such as parasites, and the genetic diversity of these populations may play key roles in their invasion success. We investigated the roles of parasite communities, through enemy release and parasite acquisition, and genetic diversity in the invasion success of the non-native bumblebee, Bombus hypnorum, in the United Kingdom. The invasive B. hypnorum had higher parasite prevalence than most, or all native congeners for two high-impact parasites, probably due to higher susceptibility and parasite acquisition. Consequently parasites had a higher impact on B. 
hypnorum queens\u2019 survival and colony-founding success than on native species. Bombus hypnorum also had lower functional genetic diversity at the sex-determining locus than native species. Higher parasite prevalence and lower genetic diversity have not prevented the rapid invasion of the United Kingdom by B. hypnorum. These data may inform our understanding of similar invasions by commercial bumblebees around the world. This study suggests that concerns about parasite impacts on the small founding populations common to re-introduction and translocation programs may be less important than currently believed." }, { "instance_id": "R58002xR57860", "comparison_id": "R58002", "paper_id": "R57860", "text": "Herbivory by an introduced Asian weevil negatively affects population growth of an invasive Brazilian shrub in Florida The enemy release hypothesis (ERH) is often cited to explain why some plants successfully invade natural communities while others do not. This hypothesis maintains that plant populations are regulated by coevolved enemies in their native range but are relieved of this pressure where their enemies have not been co-introduced. Some studies have shown that invasive plants sustain lower levels of herbivore damage when compared to native species, but how damage affects fitness and population dynamics remains unclear. We used a system of co-occurring native and invasive Eugenia congeners in south Florida (USA) to experimentally test the ERH, addressing deficiencies in our understanding of the role of natural enemies in plant invasion at the population level. Insecticide was used to experimentally exclude insect herbivores from invasive Eugenia uniflora and its native co-occurring congeners in the field for two years. Herbivore damage, plant growth, survival, and population growth rates for the three species were then compared for control and insecticide-treated plants. Our results contradict the ERH, indicating that E. uniflora sustains more herbivore damage than its native congeners and that this damage negatively impacts stem height, survival, and population growth. In addition, most damage to E. uniflora, a native of Brazil, is carried out by Myllocerus undatus, a recently introduced weevil from Sri Lanka, and M. undatus attacks a significantly greater proportion of E. uniflora leaves than those of its native congeners. This interaction is particularly interesting because M. undatus and E. uniflora share no coevolutionary history, having arisen on two separate continents and come into contact on a third. Our study is the first to document negative population-level effects for an invasive plant as a result of the introduction of a novel herbivore. Such inhibitory interactions are likely to become more prevalent as suites of previously noninteracting species continue to accumulate and new communities assemble worldwide." }, { "instance_id": "R58002xR57765", "comparison_id": "R58002", "paper_id": "R57765", "text": "Enemy release does not increase performance of Cirsium arvense in New Zealand Cirsium arvense (L.) Scop. (Californian, Canada, or creeping thistle) is an exotic perennial herb indigenous to Eurasia that successfully established in New Zealand (NZ) approximately 130 years ago. Presently, C. arvense is considered one of the worst invasive weeds in NZ arable and pastoral productions systems. A mechanism commonly invoked to explain the apparent increased vigour of introduced weeds is release from natural enemies. 
The enemy-release hypothesis (ERH) predicts that plants in an introduced range should experience reduced herbivory, particularly from specialists, and that release from this natural enemy pressure facilitates increased plant performance in the introduced range. In 2007, surveys were carried out in 13 populations in NZ (7 in the North Island and 6 in the South Island) and in 12 populations in central Europe to quantify and compare growth characteristics of C. arvense in its native versus introduced range. Altitude and mean annual precipitation for each population were used as covariates in an attempt to explain differences or similarities in plant traits among ranges. All plant traits varied significantly among populations within a range. Shoot dry weight was greater in the South Island compared to Europe, which is in line with the prediction of increased plant performance in the introduced range; however, this was explained by environmental conditions. Contrary to expectations, the North Island was not different from Europe for all plant traits measured, and after adjustment for covariates showed decreased shoot density and dry weight compared to the native range. Therefore, environmental factors appear to be more favourable for growth of C. arvense in both the North and South Islands. In accordance with the ERH, there was significantly greater endophagous herbivory in the capitula and stems of shoots in Europe compared to both NZ ranges. In NZ, capitulum attack from Rhinocyllus conicus was found only in the North Island, and no stem-mining attack was found anywhere in NZ. Thus, although C. arvense experiences significantly reduced natural enemy pressure in both the North and South Islands of NZ there is no evidence that it benefits from this enemy release." }, { "instance_id": "R58002xR57803", "comparison_id": "R58002", "paper_id": "R57803", "text": "Testing hypotheses for exotic plant success: parallel experiments in the native and introduced ranges A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. 
In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants." }, { "instance_id": "R58002xR57986", "comparison_id": "R58002", "paper_id": "R57986", "text": "Herbivory and the success of Ligustrum lucidum: evidence from a comparison between native and novel ranges Invasive plant species may benefit from a reduction in herbivory in their introduced range. The reduced herbivory may cause a reallocation of resources from defence to fitness. Here, we evaluated leaf herbivory of an invasive tree species (Ligustrum lucidum Aiton) in its native and novel ranges, and determined the potential changes in leaf traits that may be associated with the patterns of herbivory. We measured forest structure, damage by herbivores and leaf traits in novel and native ranges, and on the basis of the literature, we identified the common natural herbivores of L. lucidum. We also performed an experiment offering leaves from both ranges to a generalist herbivore (Spodoptera frugiperda). L. lucidum was more abundant and experienced significantly less foliar damage in the novel than in the native range, in spite of the occurrence of several natural herbivores. The reduced lignin content and lower lignin : N ratio in novel leaves, together with the higher herbivore preference for leaves of this origin in the laboratory experiment, indicated lower herbivore resistance in novel than in native populations. The reduced damage by herbivores is not the only factor explaining invasion success, but it may be an important cause that enhances the invasiveness of L. lucidum." }, { "instance_id": "R58002xR57808", "comparison_id": "R58002", "paper_id": "R57808", "text": "Range-expanding populations of a globally introduced weed experience negative plant-soil feedbacks Background Biological invasions are fundamentally biogeographic processes that occur over large spatial scales. Interactions with soil microbes can have strong impacts on plant invasions, but how these interactions vary among areas where introduced species are highly invasive vs. naturalized is still unknown. In this study, we examined biogeographic variation in plant-soil microbe interactions of a globally invasive weed, Centaurea solstitialis (yellow starthistle). We addressed the following questions (1) Is Centaurea released from natural enemy pressure from soil microbes in introduced regions? and (2) Is variation in plant-soil feedbacks associated with variation in Centaurea's invasive success? Methodology/Principal Findings We conducted greenhouse experiments using soils and seeds collected from native Eurasian populations and introduced populations spanning North and South America where Centaurea is highly invasive and noninvasive. Soil microbes had pervasive negative effects in all regions, although the magnitude of their effect varied among regions. These patterns were not unequivocally congruent with the enemy release hypothesis. 
Surprisingly, we also found that Centaurea generated strong negative feedbacks in regions where it is the most invasive, while it generated neutral plant-soil feedbacks where it is noninvasive. Conclusions/Significance Recent studies have found reduced below-ground enemy attack and more positive plant-soil feedbacks in range-expanding plant populations, but we found increased negative effects of soil microbes in range-expanding Centaurea populations. While such negative feedbacks may limit the long-term persistence of invasive plants, such feedbacks may also contribute to the success of invasions, either by having disproportionately negative impacts on competing species, or by yielding relatively better growth in uncolonized areas that would encourage lateral spread. Enemy release from soil-borne pathogens is not sufficient to explain the success of this weed in such different regions. The biogeographic variation in soil-microbe effects indicates that different mechanisms may operate on this species in different regions, thus establishing geographic mosaics of species interactions that contribute to variation in invasion success." }, { "instance_id": "R58002xR57950", "comparison_id": "R58002", "paper_id": "R57950", "text": "Phytophagous Insects on Native and Non-Native Host Plants: Combining the Community Approach and the Biogeographical Approach During the past centuries, humans have introduced many plant species in areas where they do not naturally occur. Some of these species establish populations and in some cases become invasive, causing economic and ecological damage. Which factors determine the success of non-native plants is still incompletely understood, but the absence of natural enemies in the invaded area (Enemy Release Hypothesis; ERH) is one of the most popular explanations. One of the predictions of the ERH, a reduced herbivore load on non-native plants compared with native ones, has been repeatedly tested. However, many studies have either used a community approach (sampling from native and non-native species in the same community) or a biogeographical approach (sampling from the same plant species in areas where it is native and where it is non-native). Either method can sometimes lead to inconclusive results. To resolve this, we here add to the small number of studies that combine both approaches. We do so in a single study of insect herbivory on 47 woody plant species (trees, shrubs, and vines) in the Netherlands and Japan. We find higher herbivore diversity, higher herbivore load and more herbivory on native plants than on non-native plants, generating support for the enemy release hypothesis." }, { "instance_id": "R58002xR57602", "comparison_id": "R58002", "paper_id": "R57602", "text": "Release from parasites as natural enemies: increased performance of a globally introduced marine crab Introduced species often seem to perform better than conspecifics in their native range. This is apparent in the high densities they may achieve or the larger individual sizes they attain. A prominent hypothesis explaining the success of introduced terrestrial species is that they are typically free of or are less affected by the natural enemies (competitors, predators, and parasites) they encounter in their introduced range compared to their native range. To test this hypothesis in a marine system, we conducted a global assessment of the effect of parasitism and predation on the ecological performance of European green crab populations. 
In Europe, where the green crab is native, crab body size and biomass were negatively associated with the prevalence of parasitic castrators. When we compared native crab populations with those from introduced regions, limb loss (an estimator of predation) was not significantly lower in introduced regions, parasites infected introduced populations substantially less and crabs in introduced regions were larger and exhibited a greater biomass. Our results are consistent with the general prediction that introduced species suffer less from parasites compared to populations where they are native. This may partly explain why the green crab is such a successful invader and, subsequently, why it is a pest in so many places." }, { "instance_id": "R58002xR57948", "comparison_id": "R58002", "paper_id": "R57948", "text": "The parasite community of gobiid fishes (Actinopterygii: Gobiidae) from the Lower Volga River region Abstract The parasitic fauna in the lower Volga River basin was investigated for four gobiid species: the nonindigenous monkey goby Neogobius fluviatilis (Pallas, 1814), the round goby N. melanostomus (Pallas, 1814), the Caspian bighead goby Ponticola gorlap (Iljin, 1949), and the tubenose goby Proterorhinus cf. semipellucidus (Kessler, 1877). In total, 19 species of goby parasites were identified, of which two - Bothriocephalus opsariichthydis Yamaguti, 1934 and Nicolla skrjabini (Iwanitzki, 1928) - appeared to have been introduced from other geographic regions. The monkey goby had significantly fewer parasitic species (6), but relatively high levels of infection, in comparison to the native species. Parasitism of the Caspian bighead goby, which is the only predatory fish among the studied gobies, differed from the others according to the results of discriminant analysis. The parasitic fauna of the tubenose goby more closely resembled those of Caspian Sea gobiids, rather than the Black Sea monkey goby." }, { "instance_id": "R58002xR57639", "comparison_id": "R58002", "paper_id": "R57639", "text": "Realized vs apparent reduction in enemies of the European starling Release from parasites, pathogens or predators (i.e. enemies) is a widely cited \u2018rule of thumb\u2019 to explain the proliferation of nonindigenous species in their introduced regions (i.e. the \u2018enemy release hypothesis\u2019, or ERH). Indeed, profound effects of some parasites and predators on host populations are well documented. However, some support for the ERH comes from studies that find a reduction in the species richness of enemies in the introduced range, relative to the native range, of particular hosts. For example, data on helminth parasites of the European starling in both its native Eurasia and in North America support a reduction of parasites in the latter. However, North American \u2018founder\u2019 starlings were likely not chosen randomly from across Eurasia. This could result in an overestimation of enemy release since enemies affect their hosts on a population level. We control for the effects of subsampling colonists and find, contrary to previous reports, no evidence that introduced populations of starlings experienced a reduction in the species richness of helminth parasites after colonization of North America. These results highlight the importance of choosing appropriate contrast groups in biogeographical analyses of biological invasions to minimize the confounding effects of \u2018propagule biases\u2019." 
}, { "instance_id": "R58002xR57922", "comparison_id": "R58002", "paper_id": "R57922", "text": "Biotic resistance to plant invasion in grassland: Does seed predation increase with resident plant diversity? Abstract Seed predation impacts heavily on plant populations and community composition in grasslands. In particular, generalist seed predators may contribute to biotic resistance, i.e. the ability of resident species in a community to reduce the success of non-indigenous plant invaders. However, little is known of predators\u2019 preferences for seeds of indigenous or non-indigenous plant species or how seed predation varies across communities. We hypothesize that seed predation does not differ between indigenous and non-indigenous plant species and that seed predation is positively related to plant species diversity in the resident community. The seed removal of 36 indigenous and non-indigenous grassland species in seven extensively or intensively managed hay meadows across Switzerland covering a species-richness gradient of 18\u201350 plant species per unit area (c. 2 m 2 ) was studied. In mid-summer 2011, c. 24,000 seeds were exposed to predators in Petri dishes filled with sterilized soil, and the proportions of seeds removed were determined after three days\u2019 exposure. These proportions varied among species (9.2\u201362.5%) and hay meadows (17.8\u201348.6%). Seed removal was not related to seed size. Moreover, it did not differ between indigenous and non-indigenous species, suggesting that mainly generalist seed predators were active. However, seed predation was positively related to plant species richness across a gradient in the range of 18\u201338 species per unit area, representing common hay meadows in Switzerland. Our results suggest that generalist post-dispersal seed predation contributes to biotic resistance and may act as a filter to plant invasion by reducing the propagule pressure of non-local plant species." }, { "instance_id": "R58002xR57877", "comparison_id": "R58002", "paper_id": "R57877", "text": "Does time since introduction influence enemy release of an invasive weed? Release from natural enemies is considered to potentially play an important role in the initial establishment and success of introduced plants. With time, the species richness of herbivores using non-native plants may increase [species-time relationship (STR)]. We investigated whether enemy release may be limited to the early stages of invasion. Substituting space for time, we sampled invertebrates and measured leaf damage on the invasive species Senecio madagascariensis Poir. at multiple sites, north and south of the introduction site. Invertebrate communities were collected from plants in the field, and reared from collected plant tissue. We also sampled invertebrates and damage on the native congener Senecio pinnatifolius var. pinnatifolius A. Rich. This species served as a control to account for environmental factors that may vary along the latitudinal gradient and as a comparison for evaluating the enemy release hypothesis (ERH). In contrast to predictions of the ERH, greater damage and herbivore abundances and richness were found on the introduced species S. madagascariensis than on the native S. pinnatifolius. Supporting the STR, total invertebrates (including herbivores) decreased in abundance, richness and Shannon diversity from the point of introduction to the invasion fronts of S. madagascariensis. Leaf damage showed the opposite trend, with highest damage levels at the invasion fronts. 
Reared herbivore loads (as opposed to external collections) were greater on the invader at the point of introduction than on sites further from this region. These results suggest there is a complex relationship between the invader and invertebrate community response over time. S. madagascariensis may be undergoing rapid changes at its invasion fronts in response to environmental and herbivore pressure." }, { "instance_id": "R58002xR57654", "comparison_id": "R58002", "paper_id": "R57654", "text": "Phytophagous insects of giant hogweed Heracleum mantegazzianum (Apiaceae) in invaded areas of Europe and in its native area of the Caucasus Giant hogweed, Heracleum mantegazzianum (Apiaceae), was introduced from the Caucasus into Western Europe more than 150 years ago and later became an invasive weed which created major problems for European authorities. Phytophagous insects were collected in the native range of the giant hogweed (Caucasus) and were compared to those found on plants in the invaded parts of Europe. The list of herbivores was compiled from surveys of 27 localities in nine countries during two seasons. In addition, literature records for herbivores were analysed for a total of 16 Heracleum species. We recorded a total of 265 herbivorous insects on Heracleum species and we analysed them to describe the herbivore assemblages, locate vacant niches, and identify the most host-specific herbivores on H. mantegazzianum. When combining our investigations with similar studies of herbivores on other invasive weeds, all studies show a higher proportion of specialist herbivores in the native habitats compared to the invaded areas, supporting the \"enemy release hypothesis\" (ERH). When analysing the relative size of the niches (measured as plant organ biomass), we found less herbivore species per biomass on the stem and roots, and more on the leaves (Fig. 5). Most herbivores were polyphagous generalists, some were found to be oligophagous (feeding within the same family of host plants) and a few had only Heracleum species as host plants (monophagous). None were known to feed exclusively on H. mantegazzianum. The oligophagous herbivores were restricted to a few taxonomic groups, especially within the Hemiptera, and were particularly abundant on this weed." }, { "instance_id": "R58002xR57850", "comparison_id": "R58002", "paper_id": "R57850", "text": "Biogeographical comparison of the invasive Lepidium draba in its native, expanded and introduced ranges Invasive plants are expected to perform better and consequently be more abundant in their introduced compared to their native ranges. However, few studies have simultaneously compared plant and population traits along with biotic and abiotic environmental parameters for invasive and native plant populations. We compared 17 native Eastern European, 14 expanded Western European and 31 introduced US populations of the invasive Lepidium draba over 2 years. Most parameters were similar between the two European ranges, but differed for the US. Density, cover, and biomass of L. draba were greater in the US while cover of other vegetation was lower. Bare-ground and litter cover were greater for US populations in 1 year, as was L. draba shoot height and seed output. Availability of labile soil nitrogen was also greater in the US range. Endophagous shoot herbivory was greater in Western Europe compared to the US in 1 year. As expected, specialist herbivores were only found in Europe.
Differences between ranges were not explained by varying environmental conditions (climate, altitude and latitude). In summary our results indicate that lower interspecific competition, higher resource availability and the lack of specialist natural enemies may all contribute to the increased performance of L. draba in its introduced US range. Additionally, L. draba is well adapted to disturbance events, which may further benefit its competitiveness at degraded sites. In general our results were consistent between years, which reinforces their validity. However, some of the differences were only significant in one of the 2 years, which, on the other hand, emphasizes the importance of conducting biogeographic comparisons over multiple years." }, { "instance_id": "R58002xR57977", "comparison_id": "R58002", "paper_id": "R57977", "text": "Using a botanical garden to assess factors influencing the colonization of exotic woody plants by phyllophagous insects The adoption of exotic plants by indigenous herbivores in the region of introduction can be influenced by numerous factors. A botanical garden in Western Siberia was used to test various hypotheses on the adaptation of indigenous phyllophagous insects to exotic plant invasions, focusing on two feeding guilds, external leaf chewers and leaf miners. A total of 150 indigenous and exotic woody plant species were surveyed for insect damage, abundance and species richness. First, exotic woody plants were much less damaged by chewers and leaf miners than native plants, and the leaf miners\u2019 species richness was much lower on exotic than native plants. Second, exotic woody plants having a congeneric species in the region of introduction were more damaged by chewers and hosted a more abundant and species-rich community of leaf miners than plants without native congeneric species. Third, damage by chewers significantly increased with the frequency of planting of exotic host plants outside the botanical garden, and leaf miners\u2019 abundance and species richness significantly increased with residence time in the garden. Finally, no significant relationship was found between insect damage or abundance and the origin of the exotic plants. Besides the ecological implications of the results, this study also illustrates the potential of botanical gardens to test ecological hypotheses on biological invasions and insect\u2013plant interactions on a large set of plant species." }, { "instance_id": "R58002xR57612", "comparison_id": "R58002", "paper_id": "R57612", "text": "Diversity and abundance patterns of phytophagous insect communities on alien and native host plants in the Brassicaceae The herbivore load (abundance and species richness of herbivores) on alien plants is supposed to be one of the keys to understand the invasiveness of species. We investigate the phytophagous insect communities on cabbage plants (Brassicaceae) in Europe. We compare the communities of endophagous and ectophagous insects as well as of Coleoptera and Lepidoptera on native and alien cabbage plant species. Contrary to many other reports, we found no differences in the herbivore load between native and alien hosts. The majority of insect species attacked alien as well as native hosts. Across insect species, there was no difference in the patterns of host range on native and on alien hosts. Likewise the similarity of insect communities across pairs of host species was not different between natives and aliens.
We conclude that the general similarity in the community patterns between native and alien cabbage plant species is due to the chemical characteristics of this plant family. All cabbage plants share glucosinolates. This may facilitate host switches from natives to aliens. Hence the presence of native congeners may influence invasiveness of alien plants." }, { "instance_id": "R58002xR57990", "comparison_id": "R58002", "paper_id": "R57990", "text": "Gyrodactylus spp. diversity in native and introduced minnow (Phoxinus phoxinus) populations: no support for \"the enemy release\" hypothesis Background: Translocation of native species and introduction of non-native species are potentially harmful to the existing biota by introducing e.g. diseases, parasites and organisms that may negatively affect the native species. The enemy release hypothesis states that parasite species will be lost from host populations when the host is introduced into new environments. Methods: We tested the enemy release hypothesis by comparing 14 native and 29 introduced minnow (Phoxinus phoxinus) populations in Norway with regard to the ectoparasitic Gyrodactylus species community and load (on caudal fin). Here, we used a nominal logistic regression on presence/absence of Gyrodactylus spp. and a generalized linear model on the summed number of Gyrodactylus spp. on infected populations, with individual minnow heterozygosity (based on 11 microsatellites) as a covariate. In addition, a sample-based rarefaction analysis was used to test if the Gyrodactylus-species specific load differed between native and introduced minnow populations. An analysis of molecular variance was performed to test for hierarchical population structure between the two groups, and to test for signals of population bottlenecks the two-phase model in the Wilcoxon signed-rank test was used. To test for demographic population expansion events in the introduced minnow population, we used the kg-test under a stepwise mutation model. Results: The native and introduced minnow populations had similar species compositions of Gyrodactylus, lending no support to the enemy release hypothesis. The two minnow groups did not differ in the likelihood of being infected with Gyrodactylus spp. Considering only infected minnow populations, it was evident that native populations had a significantly higher mean abundance of Gyrodactylus spp. than introduced populations. The results showed that homozygotic minnows had a higher Gyrodactylus spp. infection than more heterozygotic hosts. Using only infected individuals, the two minnow groups did not differ in their mean number of Gyrodactylus spp. However, a similar negative association between heterozygosity and abundance was observed in the native and introduced group. There was no evidence for demographic bottlenecks in the minnow populations, implying that introduced populations retained a high degree of genetic variation, indicating that the number of introduced minnows may have been large or that introductions have been happening repeatedly. This could partly explain the similar species composition of Gyrodactylus in the native and introduced minnow populations. Conclusions: In this study it was observed that native and introduced minnow populations did not differ in their species community of Gyrodactylus spp., lending no support to the enemy release hypothesis. A negative association between individual minnow host heterozygosity and the number of Gyrodactylus spp. was detected.
Our results suggest that the enemy release hypothesis does not necessarily limit fish parasite dispersal, further emphasizing the importance of invasive fish species dispersal control." }, { "instance_id": "R58002xR57677", "comparison_id": "R58002", "paper_id": "R57677", "text": "Does enemy release matter for invasive plants? evidence from a comparison of insect herbivore damage among invasive, non-invasive and native congeners One of the most popular single-factor hypotheses that have been proposed to explain the naturalization and spread of introduced species is the enemy release hypothesis (ERH). One ramification of the ERH is that invasive plants sustain less herbivore damage than their native counterparts in the invaded range. However, introduced plants, invasive or not, may experience less herbivore damage than the natives. Therefore, to test the role of natural enemies in the success of invasive plants, studies should include both invasive as well as non-invasive introduced species. In this study, we employed a novel three-way comparison, in which we compared herbivore damage among native, introduced invasive, and introduced non-invasive Eugenia (Myrtaceae) in South Florida. We found that introduced Eugenia, both invasive and non-invasive, sustained less herbivore damage, especially damage by oligophagous and endophagous insects, than native Eugenia. However, the difference in insect damage between introduced invasive and introduced non-invasive Eugenia was not significant. Escape from herbivores may not account for the spread of invasive Eugenia. We would not have been able to draw this conclusion without inclusion of the non-invasive Eugenia species in the study." }, { "instance_id": "R58002xR57700", "comparison_id": "R58002", "paper_id": "R57700", "text": "The invasive shrub Buddleja davidii performs better in its introduced range It is commonly assumed that invasive plants grow more vigorously in their introduced than in their native range, which is then attributed to release from natural enemies or to microevolutionary changes, or both. However, few studies have tested this assumption by comparing the performance of invasive species in their native vs. introduced ranges. Here, we studied abundance, growth, reproduction, and herbivory in 10 native Chinese and 10 invasive German populations of the invasive shrub Buddleja davidii (Scrophulariaceae; butterfly bush). We found strong evidence for increased plant vigour in the introduced range: plants in invasive populations were significantly taller and had thicker stems, larger inflorescences, and heavier seeds than plants in native populations. These differences in plant performance could not be explained by a more benign climate in the introduced range. Since leaf herbivory was substantially reduced in invasive populations, our data rather suggest that escape from natural enemies, associated with increased plant growth and reproduction, contributes to the invasion success of B. davidii in Central Europe." }, { "instance_id": "R58002xR57916", "comparison_id": "R58002", "paper_id": "R57916", "text": "Parasitization of invasive gobiids in the eastern part of the Central trans-European corridor of invasion of Ponto-Caspian hydrobionts Four gobiid species, Babka gymnotrachelus, Neogobius melanostomus, Neogobius fluviatilis, and Proterorhinus semilunaris, were parasitologically studied in different localities of the Dnieper and Vistula river basins. The highest number of parasitic species was found in N. fluviatilis (35 taxa). 
The parasite fauna of N. melanostomus, B. gymnotrachelus, and P. semilunaris consists of 23, 22, and 15 taxa, respectively. The species accumulation curves show stable accumulation of parasite species by all four fish hosts along the studied part of the corridor, from the Dnieper Estuary to the Vistula River delta. The plot also reveals that the studied gobies lose the parasites common in the host native range and accept new parasites from the colonized area. In the case of N. melanostomus, it complies with the enemy release hypothesis, as the parasite load was low in the invaded area if compared to the native range. The three other alien gobies are vectors for Gyrodactylus proterorhini in the Baltic basin. Moreover, populations of this alien monogenean tend to be more abundant in their new range in comparison with the Black Sea basin. In general, the number of parasite species in the colonized area was of the same rank as in the native one for N. fluviatilis, and even higher for B. gymnotrachelus. This results from accumulating new parasite species along the gobiid invasion route. In particular, N. fluviatilis, B. gymnotrachelus, and P. semilunaris lost some of their native parasites and gained the local ones after entering the post-dam part of the Vistula River; it can be interpreted as a partial escape from parasites." }, { "instance_id": "R58002xR57785", "comparison_id": "R58002", "paper_id": "R57785", "text": "Caterpillar assemblages on introduced blue spruce: differences from native Norway spruce Blue spruce (Picea pungens Engelm.) is native to the central and southern Rocky Mountains of the USA (DAUBENMIRE, 1972), from where it has been introduced to other parts of North America, Europe, etc. In Central Europe, blue spruce was mostly planted in ornamental settings in urban areas, Christmas tree plantations and forests too. In the Slovak Republic, blue spruce has a patchy distribution. Its scattered stands cover an area of 2,618 ha and 0.14% of the forest area (data from the National Forest Centre, Zvolen). Compared to the Slovak Republic, the area afforested with blue spruce in the Czech Republic is much larger \u20138,741 ha and 0.4% of the forest area (KRIVANEK et al., 2006, UHUL, 2006). Plantations of blue spruce in the Czech Republic were largely established in the western and north-western parts of the country (BERAN and SINDELAR, 1996; BALCAR et al., 2008b)." }, { "instance_id": "R58002xR57717", "comparison_id": "R58002", "paper_id": "R57717", "text": "Use of the introduced bivalve, Musculista senhousia, by generalist parasites of native New Zealand bivalves Abstract Introduced species are often thought to do well because of an escape from natural enemies. However, once established, they can acquire a modest assemblage of enemies, including parasites, in their new range. Here we quantified prevalence and effects of infection with copepods (family Myicolidae) and pea crabs (Pinnotheres novaezelandiae), in three mussel species, the non\u2010native Musculista senhousia, and two native mussels, Perna canaliculus and Xenostrobus pulex, at Bucklands Beach, Auckland, New Zealand. Copepod prevalence was highest in X. pulex (17.9%), whereas pea crab prevalence was highest in P. canaliculus (33.6%). Both parasites infected M. senhousia, but at a much lower prevalence. Dry tissue weight was significantly lower in P. canaliculus infected with pea crabs. In addition, we experimentally investigated host species selection by pea crabs.
In an experimental apparatus, pea crabs showed a significant attraction to P. canaliculus, but not to X. pulex or M. senhousia. When the mussels were presented in combination, pea crabs showed a weak attraction for X. pulex. Pea crab attraction to M. senhousia was not significant. It appears that the introduced M. senhousia largely escapes the detrimental effects of infection with either parasite species compared with native mussels occurring in sympatry." }, { "instance_id": "R58002xR57973", "comparison_id": "R58002", "paper_id": "R57973", "text": "Evidence for enemy release and increased seed production and size for two invasive Australian acacias Summary Invasive plants are hypothesized to have higher fitness in introduced areas due to their release from pathogens and herbivores and the relocation of resources to reproduction. However, few studies have tested this hypothesis in native and introduced regions. A biogeographical approach is fundamental to understanding the mechanisms involved in plant invasions and to detecting rapid evolutionary changes in the introduced area. Reproduction was assessed in native and introduced ranges of two invasive Australian woody legumes, Acacia dealbata and A. longifolia. Seed production, pre-dispersal seed predation, seed and elaiosome size and seedling size were assessed in 7\u201310 populations from both ranges, taking into account the effect of differences in climate. There was a significantly higher percentage of fully developed seeds per pod, a lower proportion of aborted seeds and the absence of pre-dispersal predation in the introduced range for both Acacia species. Acacia longifolia produced more seeds per pod in the invaded range, whereas A. dealbata produced more seeds per tree in the invaded range. Seeds were bigger in the invaded range for both species, and elaiosome:seed ratio was smaller for A. longifolia in the invaded range. Seedlings were also larger in the invaded range, suggesting that the increase in seed size results in greater offspring growth. There were no differences in the climatic conditions of sites occupied by A. longifolia in both regions. Minimum temperature was higher in Portuguese A. dealbata populations, but this difference did not explain the increase in seed production and seed size in the introduced range. It did have, however, a positive effect on the number of pods per tree. Synthesis. Acacia dealbata and A. longifolia escape pre-dispersal predation in the introduced range and display a higher production of fully developed seeds per fruit and bigger seeds. These differences may explain the invasion of both species because they result in an increased seedling growth and the production of abundant soil seedbanks in the introduced area." }, { "instance_id": "R58002xR57996", "comparison_id": "R58002", "paper_id": "R57996", "text": "A Comparison of Herbivore Damage on Three Invasive Plants and Their Native Congeners: Implications for the Enemy Release Hypothesis ABSTRACT One explanation for the success of exotic plants in their introduced habitats is that, upon arriving to a new continent, plants escaped their native herbivores or pathogens, resulting in less damage and lower abundance of enemies than closely related native species (enemy release hypothesis).
We tested whether the three exotic plant species, Rubus phoenicolasius (wineberry), Fallopia japonica (Japanese knotweed), and Persicaria perfoliata (mile-a-minute weed), suffered less herbivory or pathogen attack than native species by comparing leaf damage and invertebrate herbivore abundance and diversity on the invasive species and their native congeners. Fallopia japonica and R. phoenicolasius received less leaf damage than their native congeners, and F. japonica also contained a lower diversity and abundance of invertebrate herbivores. If the observed decrease in damage experienced by these two plant species contributes to increased fitness, then escape from enemies may provide at least a partial explanation for their invasiveness. However, P. perfoliata actually received greater leaf damage than its native congener. Rhinoncomimus latipes, a weevil previously introduced in the United States as a biological control for P. perfoliata, accounted for the greatest abundance of insects collected from P. perfoliata. Therefore, it is likely that the biocontrol R. latipes was responsible for the greater damage on P. perfoliata, suggesting this insect may be effective at controlling P. perfoliata populations if its growth and reproduction is affected by the increased herbivore damage." }, { "instance_id": "R58002xR57780", "comparison_id": "R58002", "paper_id": "R57780", "text": "Origin, local experience, and the impact of biotic interactions on native and introduced Senecio species A key gap in understanding the long-term success of invasive species is how biotic interactions change with the duration of experience in the introduced range. We examined biotic interactions using a common garden experiment with native, hybrid, and exotic Senecio species representing a range of experience in the UK. Introduced species had fewer aphids and pathogens and more root colonization by mycorrhizal fungi compared to natives; hybrids generally had intermediate levels of interactions. The duration of experience in the introduced range was reflected by an increasing degree of variability in enemy release. These findings support the enemy release hypothesis and indicate the potential for changes in enemy release as time and experience in the new range increase." }, { "instance_id": "R58002xR57600", "comparison_id": "R58002", "paper_id": "R57600", "text": "Impact of fire on leaf nutrients, arthropod fauna and herbivory of native and exotic eucalypts in Kings Park, Perth, Western Australia The vegetation of Kings Park, near the centre of Perth, Western Australia, once had an overstorey of Eucalyptus marginata (jarrah) or Eucalyptus gomphocephala (tuart), and many trees still remain in the bushland parts of the Park. Avenues and roadsides have been planted with eastern Australian species, including Eucalyptus cladocalyx (sugar gum) and Eucalyptus botryoides (southern mahogany), both of which have become invasive. The present study examined the effect of a recent burn on the level of herbivory on these native and exotic eucalypts. Leaf damage, shoot extension and number of new leaves were measured on tagged shoots of saplings of each tree species in unburnt and burnt areas over an 8-month period. Leaf macronutrient levels were quantified and the number of arthropods on saplings was measured at the end of the recording period by chemical knockdown. 
Leaf macronutrients were mostly higher in all four species in the burnt area, and this was associated with generally higher numbers of canopy arthropods and greater levels of leaf damage. It is suggested that the pulse of soil nutrients after the fire resulted in more nutrient-rich foliage, which in turn was more palatable to arthropods. The resulting high levels of herbivory possibly led to reduced shoot extension of E. gomphocephala, E. botryoides and, to a lesser extent, E. cladocalyx. This acts as a negative feedback mechanism that lessens the tendency for lush, post-fire regrowth to outcompete other species of plants. There was no consistent difference in the levels of the various types of leaf damage or of arthropods on the native and the exotic eucalypts, suggesting that freedom from herbivory is not contributing to the invasiveness of the two exotic species." }, { "instance_id": "R58002xR57843", "comparison_id": "R58002", "paper_id": "R57843", "text": "Contrasting patterns of herbivore and predator pressure on invasive and native plants Invasive non-native plant species often harbor fewer herbivorous insects than related native plant species. However, little is known about how herbivorous insects on non-native plants are exposed to carnivorous insects, and even less is known on plants that have recently expanded their ranges within continents due to climate warming. In this study we examine the herbivore load (herbivore biomass per plant biomass), predator load (predator biomass per plant biomass) and predator pressure (predator biomass per herbivore biomass) on an inter-continental non-native and an intra-continental range-expanding plant species and two congeneric native species. All four plant species co-occur in riparian habitat in north-western Europe. Insects were collected in early, mid and late summer from three populations of all four species. Before counting and weighing, the insects were classified by trophic guild as carnivores (predators), herbivores, and transients. Herbivores were further subdivided into leaf-miners, sap-feeders, chewers and gallers. Total herbivore loads were smaller on inter-continental non-native and intra-continental range-expanding plants than on the congeneric natives. However, the differences depended on time within growing season, as well as on the feeding guild of the herbivore. Although the predator load on non-native plants was not larger than on natives, both non-native plant species had greater predator pressure on the herbivores than the natives. We conclude that both these non-native plant species have better bottom-up as well as top-down control of herbivores, but that effects depend on time within growing season and (for the herbivore load) on herbivore feeding guild. Therefore, when evaluating insects on non-native plants, variation within season and differences among feeding guilds need to be taken into account." }, { "instance_id": "R58002xR57774", "comparison_id": "R58002", "paper_id": "R57774", "text": "Effects of large enemies on success of exotic species in marine fouling communities of Washington, USA The enemy release hypothesis, which posits that exotic species are less regulated by enemies than native species, has been well-supported in terrestrial systems but rarely tested in marine systems. Here, the enemy release hypothesis was tested in a marine system by excluding large enemies (>1.3 cm) in dock fouling communities in Washington, USA.
After documenting the distribution and abundance of potential enemies such as chitons, gastropods and flatworms at 4 study sites, exclusion experiments were conducted to test the hypotheses that large grazing enemies (1) reduced recruitment rates in the exotic ascidian Botrylloides violaceus and native species, (2) reduced B. violaceus and native species abundance, and (3) altered fouling community structure. Experiments demonstrated that, as predicted by the enemy release hypothesis, exclusion of large enemies did not significantly alter B. violaceus recruitment or abundance and it did significantly increase abundance or recruitment of 2 common native species. However, large enemy exclusion had no significant effects on most native species or on overall fouling community structure. Furthermore, neither B. violaceus nor total exotic species abundance correlated positively with abundance of large enemies across sites. I therefore conclude that release from large enemies is likely not an important mechanism for the success of exotic species in Washington fouling communities." }, { "instance_id": "R58002xR57826", "comparison_id": "R58002", "paper_id": "R57826", "text": "Population regulation by enemies of the grass Brachypodium sylvaticum: demography in native and invaded ranges The enemy-release hypothesis (ERH) states that species become more successful in their introduced range than in their native range because they leave behind natural enemies in their native range and are thus \"released\" from enemy pressures in their introduced range. The ERH is popularly cited to explain the invasive properties of many species and is the underpinning of biological control. We tested the prediction that plant populations are more strongly regulated by natural enemies (herbivores and pathogens) in their native range than in their introduced range with enemy-removal experiments using pesticides. These experiments were replicated at multiple sites in both the native and invaded ranges of the grass Brachypodium sylvaticum. In support of the ERH, enemies consistently regulated populations in the native range. There were more tillers and more seeds produced in treated vs. untreated plots in the native range, and few seedlings survived in the native range. Contrary to the ERH, total measured leaf damage was similar in both ranges, though the enemies that caused it differed. There was more damage by generalist mollusks and pathogens in the native range, and more damage by generalist insect herbivores in the invaded range. Demographic analysis showed that population growth rates were lower in the native range than in the invaded range, and that sexually produced seedlings constituted a smaller fraction of the total in the native range. Our removal experiment showed that enemies regulate plant populations in their native range and suggests that generalist enemies, not just specialists, are important for population regulation." }, { "instance_id": "R58002xR57738", "comparison_id": "R58002", "paper_id": "R57738", "text": "Testing the enemy release hypothesis: trematode parasites in the non-indigenous Manila clam Ruditapes philippinarum The present study tested the \u2018Enemy Release Hypothesis\u2019 (ERH) which states that the success of an introduced species is related to the scarcity of natural enemies in the introduced range compared with the native range. Digeneans are dominant macroparasites of molluscs; therefore, the interaction between R.
philippinarum and these parasites was ideal for investigation. A two-year monitoring in Arcachon Bay (SW France) was performed to estimate digenean loads in R. philippinarum and in three infaunal native bivalves (R. decussatus, Paphia aurea, Cerastoderma edule). A laboratory experiment allowed comparison of infection success among these bivalves (except P. aurea) by generalist digenean larvae (Himasthla elongata cercariae). R. philippinarum digenean abundance in Arcachon Bay was much lower than in native bivalves, with values depending on species, sites and time. Similarly, mean digenean species richness per host individual was always lower in R. philippinarum than in sympatric bivalves. A comparison of digenean metacercariae abundance between R. decussatus and C. edule in Mundaka Estuary (Spain) showed that both species had similar digenean loads but that R. decussatus was depleted in digenean species encysting in host tissues (the non-gymnophallid species). Experimental infection confirmed that the two species of the genus Ruditapes (and not R. philippinarum only) were resistant to encysting digeneans, with an infection success 3\u20135 times lower than that of C. edule. The lack of infection that was observed in the field would therefore be the consequence of a tissue barrier, R. philippinarum epithelium being too tough for cercariae penetration. Concordantly, according to the literature, digenean infection in the native range of R. philippinarum is also low. Consequently, the ERH, as an explanation for R. philippinarum success in Europe, is not totally consistent in the case of digenean trematodes as enemies, R. philippinarum hosting low load of digeneans in its native as well as colonized range." }, { "instance_id": "R58002xR57620", "comparison_id": "R58002", "paper_id": "R57620", "text": "Herbivory, disease, recruitment limitation, and success of alien and native tree species The Enemies Hypothesis predicts that alien plants have a competitive advantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree) and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater increases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on competitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis.
Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory." }, { "instance_id": "R58002xR57787", "comparison_id": "R58002", "paper_id": "R57787", "text": "Low prevalence of haemosporidian parasites in the introduced house sparrow (Passer domesticus) in Brazil Abstract Species that are introduced to novel environments can lose their native pathogens and parasites during the process of introduction. The escape from the negative effects associated with these natural enemies is commonly employed as an explanation for the success and expansion of invasive species, which is termed the enemy release hypothesis (ERH). In this study, nested PCR techniques and microscopy were used to determine the prevalence and intensity (respectively) of Plasmodium spp. and Haemoproteus spp. in introduced house sparrows and native urban birds of central Brazil. Generalized linear mixed models were fitted by Laplace approximation considering a binomial error distribution and logit link function. Location and species were considered as random effects and species categorization (native or non-indigenous) as fixed effects. We found that native birds from Brazil presented significantly higher parasite prevalence in accordance with the ERH. We also compared our data with the literature, and found that house sparrows native to Europe exhibited significantly higher parasite prevalence than introduced house sparrows from Brazil, which also supports the ERH. Therefore, it is possible that house sparrows from Brazil might have experienced a parasitic release during the process of introduction, which might also be related to a demographic release (e.g. release from the negative effects of parasites on host population dynamics)." }, { "instance_id": "R58002xR57975", "comparison_id": "R58002", "paper_id": "R57975", "text": "A case of complete loss of gill parasites in the invasive cichlid Oreochromis mossambicus This study investigates the recent evolution of a rich parasite community associated with one of the world\u2019s most invasive species, the cichlid fish Oreochromis mossambicus. Populations from the species\u2019 native range (Mozambique) are compared to a population from New Caledonia (Western Pacific), an island where the species was introduced in 1954. The results support the complete local extinction of the gill parasite community in the course of the invasion process. Up to six gill parasite species per locality were documented in the O. mossambicus native range, and previous surveys consistently reported at least one parasite species introduced along with African cichlid species established out of Africa. The absence of parasites in New Caledonia is therefore exceptional. This can be attributed to local factors, such as a strong initial population bottleneck, the likely absence of multiple host introductions, and the frequent occurrence of brackish watersheds that might enhance the probability for natural deparasitation." }, { "instance_id": "R58002xR57872", "comparison_id": "R58002", "paper_id": "R57872", "text": "Arthropod Communities on Native and Nonnative Early Successional Plants ABSTRACT Early successional ruderal plants in North America include numerous native and nonnative species, and both are abundant in disturbed areas.
The increasing presence of nonnative plants may negatively impact a critical component of food web function if these species support fewer or a less diverse arthropod fauna than the native plant species that they displace. We compared arthropod communities on six species of common early successional native plants and six species of nonnative plants, planted in replicated native and nonnative plots in a farm field. Samples were taken twice each year for 2 yr. In most arthropod samples, total biomass and abundance were substantially higher on the native plants than on the nonnative plants. Native plants produced as much as five times more total arthropod biomass and up to seven times more species per 100 g of dry leaf biomass than nonnative plants. Both herbivores and natural enemies (predators and parasitoids) predominated on native plants when analyzed separately. In addition, species richness was about three times greater on native than on nonnative plants, with 83 species of insects collected exclusively from native plants, and only eight species present only on nonnatives. These results support a growing body of evidence suggesting that nonnative plants support fewer arthropods than native plants, and therefore contribute to reduced food resources for higher trophic levels." }, { "instance_id": "R58002xR57753", "comparison_id": "R58002", "paper_id": "R57753", "text": "Release from soil pathogens plays an important role in the success of invasive Carpobrotus in the Mediterranean Introduced plant species can become locally dominant and threaten native flora and fauna. This dominance is often thought to be a result of release from specialist enemies in the invaded range, or the evolution of increased competitive ability. Soil borne microorganisms have often been overlooked as enemies in this context, but a less deleterious plant soil interaction in the invaded range could explain local dominance. Two plant species, Carpobrotus edulis and the hybrid Carpobrotus X cf. acinaciformis, are considered major pests in the Mediterranean basin. We tested if release from soil-borne enemies and/or evolution of increased competitive ability could explain this dominance. Comparing biomass production in non-sterile soil with that in sterilized soil, we found that inoculation with rhizosphere soil from the native range reduced biomass production by 32% while inoculation with rhizosphere soil from the invaded range did not have a significant effect on plant biomass. Genotypes from the invaded range, including a hybrid, did not perform better than plants from the native range in sterile soil. Hence evolution of increased competitive ability and hybridization do not seem to play a major role. We conclude that the reduced negative net impact of the soil community in the invaded range may contribute to the success of Carpobrotus species in the Mediterranean basin." }, { "instance_id": "R58002xR57706", "comparison_id": "R58002", "paper_id": "R57706", "text": "Herbivory on invasive exotic plants and their non-invasive relatives The Enemy Release Hypothesis links exotic plant success to escape from enemies such as herbivores and pathogens. Recent work has shown that exotic plants that more fully escape herbivores and pathogens are more likely to become highly invasive, compared to plants with higher enemy loads in their novel ranges. 
We predicted that highly invasive plants from the Asteraceae and the Brassicaceae would be less acceptable, in laboratory no-choice feeding trials, to the generalist herbivore the American grasshopper, Schistocerca americana. We also compared herbivory on invasive and non-invasive plants from the genus Centaurea in no-choice feeding trials using the red-legged grasshopper Melanoplus femurrubrum and in a common garden in the field. In accordance with our predictions, highly invasive plants were fed on less by grasshoppers in the laboratory. They also received less damage in the field, suggesting that they contain feeding deterrents that render them less acceptable to generalist herbivores than non-invasive plants." }, { "instance_id": "R58002xR57698", "comparison_id": "R58002", "paper_id": "R57698", "text": "Using parasites to inform ecological history: Comparisons among three congeneric marine snails Species introduced to novel regions often leave behind many parasite species. Signatures of parasite release could thus be used to resolve cryptogenic (uncertain) origins such as that of Littorina littorea, a European marine snail whose history in North America has been debated for over 100 years. Through extensive field and literature surveys, we examined species richness of parasitic trematodes infecting this snail and two co-occurring congeners, L. saxatilis and L. obtusata, both considered native throughout the North Atlantic. Of the three snails, only L. littorea possessed significantly fewer trematode species in North America, and all North American trematodes infecting the three Littorina spp. were a nested subset of Europe. Surprisingly, several of L. littorea's missing trematodes in North America infected the other Littorina congeners. Most likely, long separation of these trematodes from their former host resulted in divergence of the parasites' recognition of L. littorea. Overall, these patterns of parasitism suggest a recent invasion from Europe to North America for L. littorea and an older, natural expansion from Europe to North America for L. saxatilis and L. obtusata." }, { "instance_id": "R6755xR6523", "comparison_id": "R6755", "paper_id": "R6523", "text": "Towards a Linked-Data based Visualization Wizard Datasets published in the LOD cloud are recommended to follow some best practices in order to be 4-5 star Linked Data compliant. They can often be consumed and accessed by different means such as API access, bulk download or as linked data fragments, but most of the time, a SPARQL endpoint is also provided. While the LOD cloud keeps growing, having a quick glimpse of those datasets is getting harder and there is a need to develop new methods for automatically detecting what an arbitrary dataset is about and for recommending visualizations for data samples. We consider that \"a visualization is worth a million triples\", and in this paper, we propose a novel approach that mines the content of datasets and automatically generates visualizations. Our approach is directly based on the usage of SPARQL queries that will detect the important categories of a dataset and that will specifically consider the properties used by the objects which have been interlinked via owl:sameAs links. We then propose to associate a type of visualization with those categories. We have implemented this approach into a so-called Linked Data Vizualization Wizard (LDVizWiz)."
}, { "instance_id": "R6756xR6425", "comparison_id": "R6756", "paper_id": "R6425", "text": "An Open Source Software for Exploring and Manipulating Networks Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization." }, { "instance_id": "R6757xR6395", "comparison_id": "R6757", "paper_id": "R6395", "text": "CASIA@ V2: a MLN-based question answering system over linked data We present a question answering system (CASIA@V2) over Linked Data (DBpedia), which translates natural language questions into structured queries automatically. Existing systems usually adopt a pipeline framework, which con- tains four major steps: 1) Decomposing the question and detecting candidate phrases; 2) mapping the detected phrases into semantic items of Linked Data; 3) grouping the mapped semantic items into semantic triples; and 4) generat- ing the rightful SPARQL query. We present a jointly learning framework using Markov Logic Network(MLN) for phrase detection, phrases mapping to seman- tic items and semantic items grouping. We formulate the knowledge for resolving the ambiguities in three steps of QALD as first-order logic clauses in a MLN. We evaluate our approach on QALD-4 test dataset and achieve an F-measure score of 0.36, an average precision of 0.32 and an average recall of 0.40 over 50 questions." }, { "instance_id": "R6757xR6268", "comparison_id": "R6757", "paper_id": "R6268", "text": "Intui2: a prototype system for question answering over linked data An ever increasing amount of Linked Data is made available every day. Public triple stores offer the possibility of querying hundreds of millions of triples. But this information can only be retrieved using specialized query languages like SPARQL, so for the majority of Internet users, it is still unavailable. This paper presents a prototype system aimed at streamlining the access to the information stored as RDF. The system takes as input a natural language question formulated in English and generates an equivalent SPARQL query. The mapping is based on the analysis of the syntactic patterns present in the input question. In the initial evaluation results, against the 99 questions in the QALD-3 DBpedia test set, the system provides a correct answer to 30 questions and a partial answer for another 3 questions, achieving an F-measure of 0.32." }, { "instance_id": "R6757xR6313", "comparison_id": "R6757", "paper_id": "R6313", "text": "Natural language queries over heterogeneous linked data graphs The demand to access large amounts of heterogeneous structured data is emerging as a trend for many users and applications. However, the effort involved in querying heterogeneous and distributed third-party databases can create major barriers for data consumers. At the core of this problem is the semantic gap between the way users express their information needs and the representation of the data. 
This work aims to provide a natural language interface and an associated semantic index to support an increased level of vocabulary independency for queries over Linked Data/Semantic Web datasets, using a distributional-compositional semantics approach. Distributional semantics focuses on the automatic construction of a semantic model based on the statistical distribution of co-occurring words in large-scale texts. The proposed query model targets the following features: (i) a principled semantic approximation approach with low adaptation effort (independent from manually created resources such as ontologies, thesauri or dictionaries), (ii) comprehensive semantic matching supported by the inclusion of large volumes of distributional (unstructured) commonsense knowledge into the semantic approximation process and (iii) expressive natural language queries. The approach is evaluated using natural language queries on an open domain dataset and achieved avg. recall=0.81, mean avg. precision=0.62 and mean reciprocal rank=0.49." }, { "instance_id": "R6757xR6383", "comparison_id": "R6757", "paper_id": "R6383", "text": "Xser@ QALD-4: answering natural language questions via phrasal semantic parsing Understanding natural language questions and converting them into structured queries have been considered as a crucial way to help users access large scale structured knowledge bases. However, the task usually involves two main challenges: recognizing users\u2019 query intention and mapping the involved semantic items against a given knowledge base (KB). In this paper, we propose an efficient pipeline framework to model a user\u2019s query intention as a phrase level dependency DAG which is then instantiated regarding a specific KB to construct the final structured query. Our model benefits from the efficiency of linear structured prediction models and the separation of KB-independent and KB-related modelings. We evaluate our model on two datasets, and the experimental results showed that our method outperforms the state-of-the-art methods on the Free917 dataset, and, with limited training data from Free917, our model can smoothly adapt to new challenging dataset, WebQuestion, without extra training efforts while maintaining promising performances." }, { "instance_id": "R6757xR6274", "comparison_id": "R6757", "paper_id": "R6274", "text": "Natural Language Interfaces to Ontologies: Combining Syntactic Analysis and Ontology-Based Lookup through the User Interaction With large datasets such as Linked Open Data available, there is a need for more user-friendly interfaces which will bring the advantages of these data closer to the casual users. Several recent studies have shown user preference to Natural Language Interfaces (NLIs) in comparison to others. Although many NLIs to ontologies have been developed, those that have reasonable performance are domain-specific and tend to require customisation for each new domain which, from a developer's perspective, makes them expensive to maintain. We present our system FREyA, which combines syntactic parsing with the knowledge encoded in ontologies in order to reduce the customisation effort. If the system fails to automatically derive an answer, it will generate clarification dialogs for the user. The user's selections are saved and used for training the system in order to improve its performance over time. FREyA is evaluated using Mooney Geoquery dataset with very high precision and recall." 
}, { "instance_id": "R68535xR67866", "comparison_id": "R68535", "paper_id": "R67866", "text": "Emergent constraints on transient climate response (TCR) and equilibrium climate sensitivity\u00a0(ECS) from historical warming in CMIP5 and CMIP6 models Abstract. Climate sensitivity to CO2 remains the key uncertainty in projections of future climate change. Transient climate response (TCR) is the metric of temperature sensitivity that is most relevant to warming in the next few decades and contributes the biggest uncertainty to estimates of the carbon budgets consistent with the Paris targets. Equilibrium climate sensitivity (ECS) is vital for understanding longer-term climate change and stabilisation targets. In the IPCC 5th Assessment Report (AR5), the stated \u201clikely\u201d ranges (16 %\u201384 % confidence) of TCR (1.0\u20132.5 K) and ECS (1.5\u20134.5 K) were broadly consistent with the ensemble of CMIP5 Earth system models (ESMs) available at the time. However, many of the latest CMIP6 ESMs have larger climate sensitivities, with 5 of 34 models having TCR values above 2.5 K and an ensemble mean TCR of 2.0\u00b10.4 K. Even starker, 12 of 34 models have an ECS value above 4.5 K. On the face of it, these latest ESM results suggest that the IPCC likely ranges may need revising upwards, which would cast further doubt on the feasibility of the Paris targets. Here we show that rather than increasing the uncertainty in climate sensitivity, the CMIP6 models help to constrain the likely range of TCR to 1.3\u20132.1 K, with a central estimate of 1.68 K. We reach this conclusion through an emergent constraint approach which relates the value of TCR linearly to the global warming from 1975 onwards. This is a period when the signal-to-noise ratio of the net radiative forcing increases strongly, so that uncertainties in aerosol forcing become progressively less problematic. We find a consistent emergent constraint on TCR when we apply the same method to CMIP5 models. Our constraints on TCR are in good agreement with other recent studies which analysed CMIP ensembles. The relationship between ECS and the post-1975 warming trend is less direct and also non-linear. However, we are able to derive a likely range of ECS of 1.9\u20133.4 K from the CMIP6 models by assuming an underlying emergent relationship based on a two-box energy balance model. Despite some methodological differences; this is consistent with a previously published ECS constraint derived from warming trends in CMIP5 models to 2005. Our results seem to be part of a growing consensus amongst studies that have applied the emergent constraint approach to different model ensembles and to different aspects of the record of global warming." }, { "instance_id": "R68871xR23368", "comparison_id": "R68871", "paper_id": "R23368", "text": "Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. 
Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere temperatures and winds, cloud heights, precipitation, and sea level pressure. Data\u2013model comparisons continue, however, to highlight persistent problems in the marine stratocumulus regions." }, { "instance_id": "R6947xR6701", "comparison_id": "R6947", "paper_id": "R6701", "text": "Multi-document summarisation using generic relation extraction Experiments are reported that investigate the effect of various source document representations on the accuracy of the sentence extraction phase of a multi-document summarisation task. A novel representation is introduced based on generic relation extraction (GRE), which aims to build systems for relation identification and characterisation that can be transferred across domains and tasks without modification of model parameters. Results demonstrate performance that is significantly higher than a non-trivial baseline that uses tf*idf-weighted words and at least as good as a comparable but less general approach from the literature. Analysis shows that the representations compared are complementary, suggesting that extraction performance could be further improved through system combination." }, { "instance_id": "R6947xR6739", "comparison_id": "R6947", "paper_id": "R6739", "text": "Exploiting Category-Specific Information for Multi-Document Summarization We show that by making use of information common to document sets belonging to a common category, we can improve the quality of automatically extracted content in multi-document summaries. This simple property is widely applicable in multi-document summarization tasks, and can be encapsulated by the concept of category-specific importance (CSI). Our experiments show that CSI is a valuable metric to aid sentence selection in extractive summarization tasks. We operationalize the computation of CSI for sentences through the introduction of two new features that can be computed without needing any external knowledge. We also generalize this approach, showing that when manually-curated document-to-category mappings are unavailable, performing automatic categorization of document sets also improves summarization performance. We have incorporated these features into a simple, freely available, open-source extractive summarization system, called SWING. In the recent TAC-2011 guided summarization task, SWING outperformed all other participant summarization systems as measured by automated ROUGE measures." }, { "instance_id": "R6947xR6697", "comparison_id": "R6947", "paper_id": "R6697", "text": "OHSU Summarization and Entity Linking Systems We present two distinct text analysis systems. We first present two supervised sentence ranking approaches for use in extractive update summarization. For the first, we use the same general machine learning approach described in Fisher and Roark (2008) for update summarization.
In the second, we use a similar machine learning approach, but include sub-sentential units produced by our discourse segmenter, see Fisher and Roark (2007b), as possible units for inclusion in a summary. Interestingly, we find that one approach performs significantly better in the production of the base summary, while the other approach performs significantly better in the update summary. We then present a large-corpus entity linking system. This system expands queries using internal links within Wikipedia and links entities with minimum-spanning-tree clustering. We present and evaluate empirical results on the TAC 2009 knowledge-base-population data, and demonstrate competitive results with a simple system." }, { "instance_id": "R6947xR6729", "comparison_id": "R6947", "paper_id": "R6729", "text": "MCMR: Maximum coverage and minimum redundant text summarization model In this paper, we propose an unsupervised text summarization model which generates a summary by extracting salient sentences in the given document(s). In particular, we model text summarization as an integer linear programming problem. One of the advantages of this model is that it can directly discover key sentences in the given document(s) and cover the main content of the original document(s). This model also guarantees that the summary cannot contain multiple sentences that convey the same information. The proposed model is quite general and can also be used for single- and multi-document summarization. We implemented our model on the multi-document summarization task. Experimental results on DUC2005 and DUC2007 datasets showed that our proposed approach outperforms the baseline systems." }, { "instance_id": "R6947xR6709", "comparison_id": "R6947", "paper_id": "R6709", "text": "Towards recency ranking in web search In web search, recency ranking refers to ranking documents by relevance which takes freshness into account. In this paper, we propose a retrieval system which automatically detects and responds to recency sensitive queries. The system detects recency sensitive queries using a high precision classifier. The system responds to recency sensitive queries by using a machine learned ranking model trained for such queries. We use multiple recency features to provide temporal evidence which effectively represents document recency. Furthermore, we propose several training methodologies important for training recency sensitive rankers. Finally, we develop new evaluation metrics for recency sensitive queries. Our experiments demonstrate the efficacy of the proposed approaches." }, { "instance_id": "R6948xR6563", "comparison_id": "R6948", "paper_id": "R6563", "text": "Automatic condensation of electronic publications by sentence selection Abstract As electronic information access becomes the norm, and the variety of retrievable material increases, automatic methods of summarizing or condensing text will become critical. This paper describes a system that performs domain-independent automatic condensation of news from a large commercial news service encompassing 41 different publications. This system was evaluated against a system that condensed the same articles using only the first portion of the texts (the lead), up to the target length of the summaries. Three lengths of articles were evaluated for 250 documents by both systems, totalling 1500 suitability judgements in all. The outcome of perhaps the largest evaluation of human vs machine summarization performed to date was unexpected. 
The lead-based summaries outperformed the \u201cintelligent\u201d summaries significantly, achieving acceptability ratings of over 90%, compared to 74.4%. This paper briefly reviews the literature, details the implications of these results, and addresses the remaining hopes for content-based summarization. We expect the results presented here to be useful to other researchers currently investigating the viability of summarization through sentence selection heuristics." }, { "instance_id": "R6948xR6575", "comparison_id": "R6948", "paper_id": "R6575", "text": "Generating Natural Language Summaries from Multiple On-Line Sources We present a methodology for summarization of news about current events in the form of briefings that include appropriate background (historical) information. The system that we developed, SUMMONS, uses the output of systems developed for the DARPA Message Understanding Conferences to generate summaries of multiple documents on the same or related events, presenting similarities and differences, contradictions, and generalizations among sources of information. We describe the various components of the system, showing how information from multiple articles is combined, organized into a paragraph, and finally, realized as English sentences. A feature of our work is the extraction of descriptions of entities such as people and places for reuse to enhance a briefing." }, { "instance_id": "R6948xR6607", "comparison_id": "R6948", "paper_id": "R6607", "text": "GLEANS: A Generator of Logical Extracts and Abstracts for Nice Summaries" }, { "instance_id": "R6948xR6582", "comparison_id": "R6948", "paper_id": "R6582", "text": "Information Fusion in the Context of Multi-Document Summarization We present a method to automatically generate a concise summary by identifying and synthesizing similar elements across related text from a set of multiple documents. Our approach is unique in its usage of language generation to reformulate the wording of the summary." }, { "instance_id": "R69680xR69646", "comparison_id": "R69680", "paper_id": "R69646", "text": "Extracting refined rules from knowledge-based neural networks Neural networks, despite their empirically proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge must be inserted into a neural network. Second, the network must be refined. Third, the refined knowledge must be extracted from the network. We have previously described a method for the first step of this process. Standard neural learning techniques can accomplish the second step. In this article, we propose and empirically evaluate a method for the final, and possibly most difficult, step. Our method efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules 1) closely reproduce the accuracy of the network from which they are extracted; 2) are superior to the rules produced by methods that directly refine symbolic rules; 3) are superior to those produced by previous techniques for extracting rules from trained neural networks; and 4) are \u201chuman comprehensible.\u201d Thus, this method demonstrates that neural networks can be used to effectively refine symbolic knowledge. 
Moreover, the rule-extraction technique developed herein contributes to the understanding of how symbolic and connectionist approaches to artificial intelligence can be profitably integrated." }, { "instance_id": "R69680xR69617", "comparison_id": "R69680", "paper_id": "R69617", "text": "Ontology-based deep learning for human behavior prediction with explanations in health social networks Human behavior modeling is a key component in application domains such as healthcare and social behavior research. In addition to accurate prediction, having the capacity to understand the roles of human behavior determinants and to provide explanations for the predicted behaviors is also important. Having this capacity increases trust in the systems and the likelihood that the systems actually will be adopted, thus driving engagement and loyalty. However, most prediction models do not provide explanations for the behaviors they predict. In this paper, we study the research problem, human behavior prediction with explanations, for healthcare intervention systems in health social networks. We propose an ontology-based deep learning model (ORBM+) for human behavior prediction over undirected and nodes-attributed graphs. We first propose a bottom-up algorithm to learn the user representation from health ontologies. Then the user representation is utilized to incorporate self-motivation, social influences, and environmental events together in a human behavior prediction model, which extends a well-known deep learning method, the Restricted Boltzmann Machine. ORBM+ not only predicts human behaviors accurately, but also, it generates explanations for each predicted behavior. Experiments conducted on both real and synthetic health social networks have shown the tremendous effectiveness of our approach compared with conventional methods." }, { "instance_id": "R69680xR69560", "comparison_id": "R69680", "paper_id": "R69560", "text": "A symbolic approach for explaining errors in image classification tasks Machine learning algorithms, despite their increasing success in handling object recognition tasks, still seldom perform without error. Often the process of understanding why the algorithm has fail ..." }, { "instance_id": "R69680xR69659", "comparison_id": "R69680", "paper_id": "R69659", "text": "Towards semantic data mining with g-segs This paper introduces the term semantic data mining to denote a data mining approach where domain ontologies are used as background knowledge for data mining. It is motivated by successful applications of SEGS (search for enriched gene sets), a system that uses biological ontologies as background knowledge to construct descriptions of interesting gene sets in experimental microarray data. We generalized this domain-specific system to perform subgroup discovery on arbitrary data, annotated by ontologies. We present a prototype of the new semantic data mining system named g-SEGS, implemented in the Orange4WS environment, and an illustrative example showing the application potential of semantic data mining." }, { "instance_id": "R69680xR69604", "comparison_id": "R69680", "paper_id": "R69604", "text": "Learning knowledge graphs for question answering through conversational dialog We describe how a question-answering system can learn about its domain from conversational dialogs. Our system learns to relate concepts in science questions to propositions in a fact corpus, stores new concepts and relations in a knowledge graph (KG), and uses the graph to solve questions. 
We are the first to acquire knowledge for question-answering from open, natural language dialogs without a fixed ontology or domain model that predetermines what users can say. Our relation-based strategies complete more successful dialogs than a query expansion baseline, our task-driven relations are more effective for solving science questions than relations from general knowledge sources, and our method is practical enough to generalize to other domains." }, { "instance_id": "R69680xR69663", "comparison_id": "R69680", "paper_id": "R69663", "text": "Semantic text mining with linked data Linked Data is an open data space that emerges from the publication and interlinking of structured data on the Web using the Semantic Web technologies. How to utilize this wealth of data is currently a focused research theme of the Semantic Web community. In this paper, we aim to utilize Linked Data to generate semantic annotations for frequent patterns extracted from textual documents. First, we extract semantic relations from textual documents and merge them into a set of semantic graphs. Then, we apply a frequent subgraph discovery algorithm on the set of graphs to generate frequent patterns. Finally, we annotate the discovered patterns using Linked Data. Our approach can be applied in such domains as terrorist network analysis and biological network analysis. The efficacy of our approach is demonstrated through an empirical experiment that discovers and validates relationships between political figures from a large number of news articles on the Web." }, { "instance_id": "R69680xR69592", "comparison_id": "R69680", "paper_id": "R69592", "text": "Conditional focused neural question answering with large-scale knowledge bases How can we enable computers to automatically answer questions like \"Who created the character Harry Potter\"? Carefully built knowledge bases provide rich sources of facts. However, it remains a challenge to answer factoid questions raised in natural language due to numerous expressions of one question. In particular, we focus on the most common questions --- ones that can be answered with a single fact in the knowledge base. We propose CFO, a Conditional Focused neural-network-based approach to answering factoid questions with knowledge bases. Our approach first zooms in on a question to find more probable candidate subject mentions, and infers the final answers with a unified conditional probabilistic framework. Powered by deep recurrent neural networks and neural embeddings, our proposed CFO achieves an accuracy of 75.7% on a dataset of 108k questions - the largest public one to date. It outperforms the current state of the art by an absolute margin of 11.8%." }, { "instance_id": "R69680xR69609", "comparison_id": "R69680", "paper_id": "R69609", "text": "Towards a knowledge graph based speech interface Applications which use human speech as an input require a speech interface with high recognition accuracy. The words or phrases in the recognised text are annotated with a machine-understandable meaning and linked to knowledge graphs for further processing by the target application. These semantic annotations of recognised words can be represented as subject-predicate-object triples which collectively form a graph often referred to as a knowledge graph. 
This type of knowledge representation facilitates to use speech interfaces with any spoken input application, since the information is represented in logical, semantic form, retrieving and storing can be followed using any web standard query languages. In this work, we develop a methodology for linking speech input to knowledge graphs and study the impact of recognition errors in the overall process. We show that for a corpus with lower WER, the annotation and linking of entities to the DBpedia knowledge graph is considerable. DBpedia Spotlight, a tool to interlink text documents with the linked open data is used to link the speech recognition output to the DBpedia knowledge graph. Such a knowledge-based speech recognition interface is useful for applications such as question answering or spoken dialog systems." }, { "instance_id": "R69680xR69565", "comparison_id": "R69680", "paper_id": "R69565", "text": "Reasoning about object affordances in a knowledge base representation Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zero-shot affordance prediction and object recognition given human poses." }, { "instance_id": "R69680xR69597", "comparison_id": "R69680", "paper_id": "R69597", "text": "Fvqa: Fact-based visual question answering Visual Question Answering (VQA) has attracted much attention in both computer vision and natural language processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA (Fact-based VQA), a VQA dataset which requires, and supports, much deeper reasoning. FVQA primarily contains questions that require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. Each supporting-fact is represented as a structural triplet, such as . We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting-facts." }, { "instance_id": "R69680xR69678", "comparison_id": "R69680", "paper_id": "R69678", "text": "Logical rule induction and theory learning using neural theorem proving A hallmark of human cognition is the ability to continually acquire and distill observations of the world into meaningful, predictive theories. 
In this paper we present a new mechanism for logical theory acquisition which takes a set of observed facts and learns to extract from them a set of logical rules and a small set of core facts which together entail the observations. Our approach is neuro-symbolic in the sense that the rule predicates and core facts are given dense vector representations. The rules are applied to the core facts using a soft unification procedure to infer additional facts. After k steps of forward inference, the consequences are compared to the initial observations and the rules and core facts are then encouraged towards representations that more faithfully generate the observations through inference. Our approach is based on a novel neural forward-chaining differentiable rule induction network. The rules are interpretable and learned compositionally from their predicates, which may be invented. We demonstrate the efficacy of our approach on a variety of ILP rule induction and domain theory learning datasets." }, { "instance_id": "R69680xR69665", "comparison_id": "R69680", "paper_id": "R69665", "text": "Interpreting data mining results with linked data for learning analytics: motivation, case study and direction Learning Analytics by nature relies on computational information processing activities intended to extract from raw data some interesting aspects that can be used to obtain insights into the behaviours of learners, the design of learning experiences, etc. There is a large variety of computational techniques that can be employed, all with interesting properties, but it is the interpretation of their results that really forms the core of the analytics process. In this paper, we look at a specific data mining method, namely sequential pattern extraction, and we demonstrate an approach that exploits available linked open data for this interpretation task. Indeed, we show through a case study relying on data about students' enrolment in course modules how linked data can be used to provide a variety of additional dimensions through which the results of the data mining method can be explored, providing, at interpretation time, new input into the analytics process." }, { "instance_id": "R69680xR69623", "comparison_id": "R69680", "paper_id": "R69623", "text": "Knowledge-based transfer learning explanation Machine learning explanation can significantly boost machine learning's application in decision making, but the usability of current methods is limited in human-centric explanation, especially for transfer learning, an important machine learning branch that aims at utilizing knowledge from one learning domain (i.e., a pair of dataset and prediction task) to enhance prediction model training in another learning domain. In this paper, we propose an ontology-based approach for human-centric explanation of transfer learning. Three kinds of knowledge-based explanatory evidence, with different granularities, including general factors, particular narrators and core contexts are first proposed and then inferred with both local ontologies and external knowledge bases. The evaluation with US flight data and DBpedia has presented their confidence and availability in explaining the transferability of feature representation in flight departure delay forecasting." }, { "instance_id": "R69680xR69670", "comparison_id": "R69680", "paper_id": "R69670", "text": "Generating possible interpretations for statistics from linked open data Statistics are very present in our daily lives. 
Every day, new statistics are published, showing the perceived quality of living in different cities, the corruption index of different countries, and so on. Interpreting those statistics, on the other hand, is a difficult task. Often, statistics collect only very few attributes, and it is difficult to come up with hypotheses that explain, e.g., why the perceived quality of living in one city is higher than in another. In this paper, we introduce Explain-a-LOD, an approach which uses data from Linked Open Data for generating hypotheses that explain statistics. We show an implemented prototype and compare different approaches for generating hypotheses by analyzing the perceived quality of those hypotheses in a user study." }, { "instance_id": "R69680xR69661", "comparison_id": "R69680", "paper_id": "R69661", "text": "Semantic subgroup explanations Subgroup discovery (SD) methods can be used to find interesting subsets of objects of a given class. While subgroup describing rules are themselves good explanations of the subgroups, domain ontologies can provide additional descriptions to data and alternative explanations of the constructed rules. Such explanations in terms of higher level ontology concepts have the potential of providing new insights into the domain of investigation. We show that this additional explanatory power can be ensured by using recently developed semantic SD methods. We present a new approach to explaining subgroups through ontologies and demonstrate its utility on a motivational use case and on a gene expression profiling use case where groups of patients, identified through SD in terms of gene expression, are further explained through concepts from the Gene Ontology and KEGG orthology. We qualitatively compare the methodology with the supporting factors technique for characterizing subgroups. The developed tools are implemented within a new browser-based data mining platform ClowdFlows." }, { "instance_id": "R69680xR69607", "comparison_id": "R69680", "paper_id": "R69607", "text": "Knowledge-based conversational agents and virtual story telling We describe an architecture for building speech-enabled conversational agents, deployed as self-contained Web services, with ability to provide inference processing on very large knowledge bases and its application to voice enabled chatbots in a virtual storytelling environment. The architecture integrates inference engines, natural language pattern matching components and story-specific information extraction from RDF/XML files. Our Web interface is dynamically generated by server side agents supporting multi-modal interface components (speech and animation). Prolog refactorings of the WordNet lexical knowledge base, FrameNet and the Open Mind common sense knowledge repository are combined with internet meta-search to provide high-quality knowledge sources to our conversational agents. An example of conversational agent with speech capabilities is deployed on the Web at http://logic.csci.unt.edu:8080/wordnet_agent/frame.html. The agent is also accessible for live multi-user text-based chat, through a Yahoo Instant Messenger protocol adaptor, from wired or wireless devices, as the jinni_agent Yahoo IM \"handle\"." }, { "instance_id": "R70287xR51386", "comparison_id": "R70287", "paper_id": "R51386", "text": "In vitro screening of a FDA approved chemical library reveals potential inhibitors of SARS-CoV-2 replication Abstract A novel coronavirus, named SARS-CoV-2, emerged in 2019 in China and rapidly spread worldwide. 
As no approved therapeutic exists to treat COVID-19, the disease associated with SARS-CoV-2, there is an urgent need to propose molecules that could quickly enter the clinic. Repurposing of approved drugs is a strategy that can bypass the time-consuming stages of drug development. In this study, we screened the PRESTWICK CHEMICAL LIBRARY composed of 1,520 approved drugs in an infected cell-based assay. The robustness of the screen was assessed by the identification of drugs that already demonstrated in vitro antiviral effect against SARS-CoV-2. Thereby, 90 compounds were identified as positive hits from the screen and were grouped according to their chemical composition and their known therapeutic effect. Then EC50 and CC50 were determined for a subset of 15 compounds from a panel of 23 selected drugs covering the different groups. Eleven compounds such as macrolide antibiotics, proton pump inhibitors, antiarrhythmic agents or CNS drugs emerged showing antiviral potency with 2 < EC50 \u2264 20 \u00b5M. By providing new information on molecules inhibiting SARS-CoV-2 replication in vitro, this study provides information for the selection of drugs to be further validated in vivo. Disclaimer: This study corresponds to the early stages of antiviral development and the results do not support by themselves the use of the selected drugs to treat SARS-CoV-2 infection." }, { "instance_id": "R70287xR51267", "comparison_id": "R70287", "paper_id": "R51267", "text": "A SARS-CoV-2 protein interaction map reveals targets for drug repurposing A newly described coronavirus named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is the causative agent of coronavirus disease 2019 (COVID-19), has infected over 2.3 million people, led to the death of more than 160,000 individuals and caused worldwide social and economic disruption. There are no antiviral drugs with proven clinical efficacy for the treatment of COVID-19, nor are there any vaccines that prevent infection with SARS-CoV-2, and efforts to develop drugs and vaccines are hampered by the limited knowledge of the molecular details of how SARS-CoV-2 infects cells. Here we cloned, tagged and expressed 26 of the 29 SARS-CoV-2 proteins in human cells and identified the human proteins that physically associated with each of the SARS-CoV-2 proteins using affinity-purification mass spectrometry, identifying 332 high-confidence protein\u2013protein interactions between SARS-CoV-2 and human proteins. Among these, we identify 66 druggable human proteins or host factors targeted by 69 compounds (of which, 29 drugs are approved by the US Food and Drug Administration, 12 are in clinical trials and 28 are preclinical compounds). We screened a subset of these in multiple viral assays and found two sets of pharmacological agents that displayed antiviral activity: inhibitors of mRNA translation and predicted regulators of the sigma-1 and sigma-2 receptors. Further studies of these host-factor-targeting agents, including their combination with drugs that directly target viral enzymes, could lead to a therapeutic regimen to treat COVID-19. A human\u2013SARS-CoV-2 protein interaction map highlights cellular processes that are hijacked by the virus and that can be targeted by existing drugs, including inhibitors of mRNA translation and predicted regulators of the sigma receptors." 
}, { "instance_id": "R70287xR70021", "comparison_id": "R70287", "paper_id": "R70021", "text": "Discovery, Synthesis, And Structure-Based Optimization of a Series ofN-(tert-Butyl)-2-(N-arylamido)-2-(pyridin-3-yl) Acetamides (ML188) as Potent Noncovalent Small Molecule Inhibitors of the Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) 3CL Protease A high-throughput screen of the NIH molecular libraries sample collection and subsequent optimization of a lead dipeptide-like series of severe acute respiratory syndrome (SARS) main protease (3CLpro) inhibitors led to the identification of probe compound ML188 (16-(R), (R)-N-(4-(tert-butyl)phenyl)-N-(2-(tert-butylamino)-2-oxo-1-(pyridin-3-yl)ethyl)furan-2-carboxamide, Pubchem CID: 46897844). Unlike the majority of reported coronavirus 3CLpro inhibitors that act via covalent modification of the enzyme, 16-(R) is a noncovalent SARS-CoV 3CLpro inhibitor with moderate MW and good enzyme and antiviral inhibitory activity. A multicomponent Ugi reaction was utilized to rapidly explore structure-activity relationships within S(1'), S(1), and S(2) enzyme binding pockets. The X-ray structure of SARS-CoV 3CLpro bound with 16-(R) was instrumental in guiding subsequent rounds of chemistry optimization. 16-(R) provides an excellent starting point for the further design and refinement of 3CLpro inhibitors that act by a noncovalent mechanism of action." }, { "instance_id": "R70287xR69999", "comparison_id": "R70287", "paper_id": "R69999", "text": "Human organ chip-enabled pipeline to rapidly repurpose therapeutics during viral pandemics The rising threat of pandemic viruses, such as SARS-CoV-2, requires development of new preclinical discovery platforms that can more rapidly identify therapeutics that are active in vitro and also translate in vivo . Here we show that human organ-on-a-chip (Organ Chip) microfluidic culture devices lined by highly differentiated human primary lung airway epithelium and endothelium can be used to model virus entry, replication, strain-dependent virulence, host cytokine production, and recruitment of circulating immune cells in response to infection by respiratory viruses with great pandemic potential. We provide a first demonstration of drug repurposing by using oseltamivir in influenza A virus-infected organ chip cultures and show that co-administration of the approved anticoagulant drug, nafamostat, can double oseltamivir\u2019s therapeutic time window. With the emergence of the COVID-19 pandemic, the Airway Chips were used to assess the inhibitory activities of approved drugs that showed inhibition in traditional cell culture assays only to find that most failed when tested in the Organ Chip platform. When administered in human Airway Chips under flow at a clinically relevant dose, one drug \u2013 amodiaquine - significantly inhibited infection by a pseudotyped SARS-CoV-2 virus. Proof of concept was provided by showing that amodiaquine and its active metabolite (desethylamodiaquine) also significantly reduce viral load in both direct infection and animal-to-animal transmission models of native SARS-CoV-2 infection in hamsters. These data highlight the value of Organ Chip technology as a more stringent and physiologically relevant platform for drug repurposing, and suggest that amodiaquine should be considered for future clinical testing." 
}, { "instance_id": "R70287xR70061", "comparison_id": "R70287", "paper_id": "R70061", "text": "Dieckol, a SARS-CoV 3CLpro inhibitor, isolated from the edible brown algae Ecklonia cava Graphical abstract Phlorotannins have been isolated and are the first of their type showing competitively inhibitory activity toward SARS-CoV 3CLpro. Dieckol showed the most potent SARS-CoV 3CLpro trans/cis-cleavage inhibitory effects with high association rate." }, { "instance_id": "R70287xR69946", "comparison_id": "R70287", "paper_id": "R69946", "text": "Identification of myricetin and scutellarein as novel chemical inhibitors of the SARS coronavirus helicase, nsP13 Abstract Severe acute respiratory syndrome (SARS) is an infectious disease with a strong potential for transmission upon close personal contact and is caused by the SARS-coronavirus (CoV). However, there are no natural or synthetic compounds currently available that can inhibit SARS-CoV. We examined the inhibitory effects of 64 purified natural compounds against the activity of SARS helicase, nsP13, and the hepatitis C virus (HCV) helicase, NS3h, by conducting fluorescence resonance energy transfer (FRET)-based double-strand (ds) DNA unwinding assay or by using a colorimetry-based ATP hydrolysis assay. While none of the compounds, examined in our study inhibited the DNA unwinding activity or ATPase activity of human HCV helicase protein, we found that myricetin and scutellarein potently inhibit the SARS-CoV helicase protein in vitro by affecting the ATPase activity, but not the unwinding activity, nsP13. In addition, we observed that myricetin and scutellarein did not exhibit cytotoxicity against normal breast epithelial MCF10A cells. Our study demonstrates for the first time that selected naturally-occurring flavonoids, including myricetin and scultellarein might serve as SARS-CoV chemical inhibitors." }, { "instance_id": "R70287xR69940", "comparison_id": "R70287", "paper_id": "R69940", "text": "Identification of novel drug scaffolds for inhibition of SARS-CoV 3-Chymotrypsin-like protease using virtual and high-throughput screenings Abstract We have used a combination of virtual screening (VS) and high-throughput screening (HTS) techniques to identify novel, non-peptidic small molecule inhibitors against human SARS-CoV 3CLpro. A structure-based VS approach integrating docking and pharmacophore based methods was employed to computationally screen 621,000 compounds from the ZINC library. The screening protocol was validated using known 3CLpro inhibitors and was optimized for speed, improved selectivity, and for accommodating receptor flexibility. Subsequently, a fluorescence-based enzymatic HTS assay was developed and optimized to experimentally screen approximately 41,000 compounds from four structurally diverse libraries chosen mainly based on the VS results. False positives from initial HTS hits were eliminated by a secondary orthogonal binding analysis using surface plasmon resonance (SPR). The campaign identified a reversible small molecule inhibitor exhibiting mixed-type inhibition with a K i value of 11.1 \u03bcM. Together, these results validate our protocols as suitable approaches to screen virtual and chemical libraries, and the newly identified compound reported in our study represents a promising structural scaffold to pursue for further SARS-CoV 3CLpro inhibitor development." 
}, { "instance_id": "R70584xR70564", "comparison_id": "R70584", "paper_id": "R70564", "text": "A Bayesian network for early diagnosis of sepsis patients: a basis for a clinical decision support system Sepsis is a severe medical condition caused by an inordinate immune response to an infection. Early detection of sepsis symptoms is important to prevent the progression into the more severe stages of the disease, which kills one in four it effects. Electronic medical records of 1492 patients containing 233 cases of sepsis were used in a clustering analysis to identify features that are indicative of sepsis and can be further used for training a Bayesian inference network. The Bayesian network was constructed using the systemic inflammatory response syndrome criteria, mean arterial pressure, and lactate levels for sepsis patients. The resulting network reveals a clear correlation between lactate levels and sepsis. Furthermore, it was shown that lactate levels may be predicative of the SIRS criteria. In this light, Bayesian networks of sepsis patients hold the promise of providing a clinical decision support system in the future." }, { "instance_id": "R70630xR70626", "comparison_id": "R70630", "paper_id": "R70626", "text": "Data-driven Temporal Prediction of Surgical Site Infection Analysis of data from Electronic Health Records (EHR) presents unique challenges, in particular regarding nonuniform temporal resolution of longitudinal variables. A considerable amount of patient information is available in the EHR - including blood tests that are performed routinely during inpatient follow-up. These data are useful for the design of advanced machine learning-based methods and prediction models. Using a matched cohort of patients undergoing gastrointestinal surgery (101 cases and 904 controls), we built a prediction model for post-operative surgical site infections (SSIs) using Gaussian process (GP) regression, time warping and imputation methods to manage the sparsity of the data source, and support vector machines for classification. For most blood tests, wider confidence intervals after imputation were obtained in patients with SSI. Predictive performance with individual blood tests was maintained or improved by joint model prediction, and non-linear classifiers performed consistently better than linear models." }, { "instance_id": "R70630xR70610", "comparison_id": "R70630", "paper_id": "R70610", "text": "Strategies for handling missing clinical data for automated surgical site infection detection from the electronic health record Proper handling of missing data is important for many secondary uses of electronic health record (EHR) data. Data imputation methods can be used to handle missing data, but their use for analyzing EHR data is limited and specific efficacy for postoperative complication detection is unclear. Several data imputation methods were used to develop data models for automated detection of three types (i.e., superficial, deep, and organ space) of surgical site infection (SSI) and overall SSI using American College of Surgeons National Surgical Quality Improvement Project (NSQIP) Registry 30-day SSI occurrence data as a reference standard. Overall, models with missing data imputation almost always outperformed reference models without imputation that included only cases with complete data for detection of SSI overall achieving very good average area under the curve values. 
Missing data imputation appears to be an effective means for improving postoperative SSI detection using EHR clinical data." }, { "instance_id": "R76783xR76746", "comparison_id": "R76783", "paper_id": "R76746", "text": "Knowledge Graph Embedding: A Survey of Approaches and Applications Knowledge graph (KG) embedding is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG. It can benefit a variety of downstream tasks such as KG completion and relation extraction, and hence has quickly gained massive attention. In this article, we provide a systematic review of existing techniques, including not only the state-of-the-arts but also those with latest trends. Particularly, we make the review based on the type of information used in the embedding task. Techniques that conduct embedding using only facts observed in the KG are first introduced. We describe the overall framework, specific model design, typical training procedures, as well as pros and cons of such techniques. After that, we discuss techniques that further incorporate additional information besides facts. We focus specifically on the use of entity types, relation paths, textual descriptions, and logical rules. Finally, we briefly introduce how KG embedding can be applied to and benefit a wide variety of downstream tasks such as KG completion, relation extraction, question answering, and so forth." }, { "instance_id": "R78163xR78064", "comparison_id": "R78163", "paper_id": "R78064", "text": "COVID-19 prevalence estimation: Four most affected African countries The world at large has been confronted with several disease outbreaks which have posed, and are still posing, a serious menace to public health globally. Recently, COVID-19, a new kind of coronavirus, emerged from Wuhan city in China and was declared a pandemic by the World Health Organization. There have been about 8,622,985 reported cases and 457,355 deaths globally as of 15.05 GMT, June 19, 2020. South Africa, Egypt, Nigeria and Ghana are the most affected African countries with this outbreak. Thus, there is a need to monitor and predict COVID-19 prevalence in this region for effective control and management. Different statistical tools and time series models, such as the linear regression model and auto-regressive integrated moving average (ARIMA) models, have been applied for disease prevalence/incidence prediction in different disease outbreaks. However, in this study, we adopted the ARIMA model to forecast the trend of COVID-19 prevalence in the aforementioned African countries. The datasets examined in this analysis spanned from February 21, 2020, to June 16, 2020, and were extracted from the World Health Organization website. ARIMA models with minimum Akaike information criterion correction (AICc) and statistically significant parameters were selected as the best models. Accordingly, the ARIMA (0,2,3), ARIMA (0,1,1), ARIMA (3,1,0) and ARIMA (0,1,2) models were chosen as the best models for South Africa, Nigeria, Ghana and Egypt, respectively. Forecasting was made based on the best models. It is noteworthy to claim that the ARIMA models are appropriate for predicting the prevalence of COVID-19. We noticed a form of exponential growth in the trend of this virus in Africa in the days to come. Thus, the government and health authorities should pay attention to the pattern of COVID-19 in Africa. 
Necessary plans and precautions should be put in place to curb this pandemic in Africa." }, { "instance_id": "R78163xR78055", "comparison_id": "R78163", "paper_id": "R78055", "text": "Predict new cases of the coronavirus 19; in Michigan, U.S.A. or other countries using Crow-AMSAA method BACKGROUND

Statistical predictions are useful to predict events based on statistical models. The data is useful to determine outcomes based on inputs and calculations. The Crow-AMSAA method will be explored to predict new cases of Coronavirus 19 (COVID-19). This method is currently used within engineering reliability design to predict failures and evaluate the reliability growth. The author intends to use this model to predict the COVID-19 cases by using daily reported data from Michigan, New York City, U.S.A. and other countries. The piecewise Crow-AMSAA (CA) model fits the data very well for the infected cases and deaths at different phases during the start of the COVID-19 outbreak. The slope \u03b2 of the Crow-AMSAA line indicates the speed of the transmission or death rate. The traditional epidemiological model is based on the exponential distribution, but the Crow-AMSAA is the Non Homogeneous Poisson Process (NHPP), which can be used to model complex problems like COVID-19, especially when various mitigation strategies such as social distancing, isolation and lockdowns were implemented by governments at different places.

OBJECTIVE

This paper uses the piecewise Crow-AMSAA method to fit the COVID-19 confirmed cases in Michigan, New York City, U.S.A. and other countries.

METHODS

Piecewise Crow-AMSAA method to fit the COVID-19 confirmed cases.

RESULTS

From the Crow-AMSAA analysis above, at the very beginning of the COVID-19 outbreak the infectious cases did not follow the Crow-AMSAA prediction line, but once the outbreak started the confirmed cases do follow the CA line; the slope \u03b2 value indicates the pace of the transmission rate or death rate in each case. The piecewise Crow-AMSAA describes the different phases of spreading. This indicates that the speed of transmission could change according to government interference, social distancing orders or other factors. Comparing the piecewise CA \u03b2 slopes in China (\u03b2: 1.683--0.834--0.092) and in the U.S.A. (\u03b2: 5.138--10.48--5.259), the infection rate in the U.S.A. is much higher than the infection rate in China. From the piecewise CA plots and summary table 1 of the CA slope \u03b2s, the spread of COVID-19 behaves differently at different places and countries where governments implemented different policies to slow down the spreading.

CONCLUSIONS

From the analysis of data and conclusions drawn from confirmed cases and deaths of COVID-19 in Michigan, New York City, U.S.A., China and other countries, the piecewise Crow-AMSAA method can be used to model the spreading of COVID-19.

" }, { "instance_id": "R78163xR78157", "comparison_id": "R78163", "paper_id": "R78157", "text": "Clinical and epidemiological features of COVID-19 family clusters in Beijing, China ABSTRACT Background Since its discovery, SARS-CoV-2 has been spread throughout China before becoming a global pandemic. In Beijing, family clusters are the main mode of human-human transmission accounting for 57.6% of the total confirmed cases. Method We present the epidemiological and clinical features of the clusters of three large and one small families. Result Our results revealed that SARS-CoV-2 is transmitted quickly through contact with index case, and a total of 22/24 infections were observed. Among those infected, 20/22 had mild symptoms and only two had moderate to severe clinical manifestations. Children in the families generally showed milder symptoms. The incubation period varied from 2 to 13 days, and the shedding of virus from the upper respiratory tract lasted from 5 to over 30 days. A prolonged period of virus shedding (>30 days) in upper respiratory tract was observed in 6/24 cases. Conclusion SARS-CoV-2 is transmitted quickly in the form of family clusters. While the infection rate is high within the cluster, the disease manifestations, latent period, and virus shedding period varied greatly. We therefore recommend rigorously testing contacts even during the no-symptom phase and consider whether viral shedding has ceased before stopping isolation measures for an individual." }, { "instance_id": "R78163xR78051", "comparison_id": "R78163", "paper_id": "R78051", "text": "Using statistics and mathematical modelling to understandinfectious disease outbreaks: COVID-19 as an example Abstract During an infectious disease outbreak, biases in the data and complexities of the underlying dynamics pose significant challenges in mathematically modelling the outbreak and designing policy. Motivated by the ongoing response to COVID-19, we provide a toolkit of statistical and mathematical models beyond the simple SIR-type differential equation models for analysing the early stages of an outbreak and assessing interventions. In particular, we focus on parameter estimation in the presence of known biases in the data, and the effect of non-pharmaceutical interventions in enclosed subpopulations, such as households and care homes. We illustrate these methods by applying them to the COVID-19 pandemic." }, { "instance_id": "R8342xR8330", "comparison_id": "R8342", "paper_id": "R8330", "text": "An ontology of scientific experiments The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. 
The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science." }, { "instance_id": "R8342xR8318", "comparison_id": "R8342", "paper_id": "R8318", "text": "ScholOnto: an ontology-based digital library server for research documents and discourse Abstract. The internet is rapidly becoming the first place for researchers to publish documents, but at present they receive little support in searching, tracking, analysing or debating concepts in a literature from scholarly perspectives. This paper describes the design rationale and implementation of ScholOnto, an ontology-based digital library server to support scholarly interpretation and discourse. It enables researchers to describe and debate via a semantic network the contributions a document makes, and its relationship to the literature. The paper discusses the computational services that an ontology-based server supports, alternative user interfaces to support interaction with a large semantic network, usability issues associated with knowledge formalisation, new work practices that could emerge, and related work." } ] }