,id,submitter,authors,title,comments,journal-ref,doi,report-no,categories,license,abstract,versions,update_date,authors_parsed,keywords 0,1407.7101,Md. Selim Al Mamun,"Md. Selim Al Mamun, Indrani Mandal, Md. Hasanuzzaman",Efficient Design of Reversible Sequential Circuit,IOSR Journal of Computer Engineering (IOSR-JCE) 5.6(2012),,10.9790/0661-0564247,,cs.ET,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Reversible logic has come to the forefront of theoretical and applied research today. Although many researchers are investigating techniques to synthesize reversible combinational logic, there is little work in the area of sequential reversible logic. Latches and flip-flops are the most significant memory elements for the forthcoming sequential memory elements. In this paper, we proposed two new reversible logic gates MG-1 and MG-2. We then proposed new design techniques for latches and flip-flops with the help of the new proposed gates. The proposed designs are better than the existing ones in terms of number of gates, garbage outputs and delay. ","[{'version': 'v1', 'created': 'Sat, 26 Jul 2014 07:05:33 GMT'}]",2014-07-29,"[['Mamun', 'Md. Selim Al', ''], ['Mandal', 'Indrani', ''], ['Hasanuzzaman', 'Md.', '']]","['Garbage Output', 'Latch', 'MG gate', 'Quantum Cost', 'Reversible Logic']" 1,1008.1659,EPTCS,"Ronny Polley (Uni Halle), Ludwig Staiger (Uni Halle)",The Maximal Subword Complexity of Quasiperiodic Infinite Words,"In Proceedings DCFS 2010, arXiv:1008.1270","EPTCS 31, 2010, pp. 169-176",10.4204/EPTCS.31.19,,cs.FL cs.DM cs.IT math.IT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We provide an exact estimate on the maximal subword complexity for quasiperiodic infinite words. To this end we give a representation of the set of finite and of infinite words having a certain quasiperiod q via a finite language derived from q. It is shown that this language is a suffix code having a bounded delay of decipherability. Our estimate of the subword complexity now follows from this result, previously known results on the subword complexity and elementary results on formal power series. ","[{'version': 'v1', 'created': 'Tue, 10 Aug 2010 08:33:59 GMT'}]",2010-08-11,"[['Polley', 'Ronny', '', 'Uni Halle'], ['Staiger', 'Ludwig', '', 'Uni Halle']]","['quasiperiodic words', 'codes', 'subword complexity', 'structure generating function']" 2,1203.4933,Kishorjit Nongmeikapam Mr.,"Kishorjit Nongmeikapam, Lairenlakpam Nonglenjaoba, Yumnam Nirmal and Sivaji Bandyopadhyay","Reduplicated MWE (RMWE) helps in improving the CRF based Manipuri POS Tagger","15 pages, 4 tables, 2 figures, the link http://airccse.org/journal/jcsit/1011csit05.pdf. arXiv admin note: text overlap with arXiv:1111.2399",,10.5121/ijitcs.2012.210,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper gives a detail overview about the modified features selection in CRF (Conditional Random Field) based Manipuri POS (Part of Speech) tagging. Selection of features is so important in CRF that the better are the features then the better are the outputs. This work is an attempt or an experiment to make the previous work more efficient. Multiple new features are tried to run the CRF and again tried with the Reduplicated Multiword Expression (RMWE) as another feature. The CRF run with RMWE because Manipuri is rich of RMWE and identification of RMWE becomes one of the necessities to bring up the result of POS tagging. The new CRF system shows a Recall of 78.22%, Precision of 73.15% and F-measure of 75.60%. With the identification of RMWE and considering it as a feature makes an improvement to a Recall of 80.20%, Precision of 74.31% and F-measure of 77.14%. ","[{'version': 'v1', 'created': 'Thu, 22 Mar 2012 09:50:51 GMT'}]",2012-03-23,"[['Nongmeikapam', 'Kishorjit', ''], ['Nonglenjaoba', 'Lairenlakpam', ''], ['Nirmal', 'Yumnam', ''], ['Bandyopadhyay', 'Sivaji', '']]","['CRF', 'RMWE', 'POS', 'Features', 'Stemming', 'Root']" 3,1710.11200,Renato J Cintra,"N. Rajapaksha, A. Madanayake, R. J. Cintra, J. Adikari, V. S. Dimitrov",VLSI Computational Architectures for the Arithmetic Cosine Transform,"8 pages, 2 figures, 6 tables","IEEE Transactions on Computers, vol. 64, no. 9, Sep 2015",10.1109/TC.2014.2366732,,cs.AR cs.DS cs.MM math.NA stat.ME,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The discrete cosine transform (DCT) is a widely-used and important signal processing tool employed in a plethora of applications. Typical fast algorithms for nearly-exact computation of DCT require floating point arithmetic, are multiplier intensive, and accumulate round-off errors. Recently proposed fast algorithm arithmetic cosine transform (ACT) calculates the DCT exactly using only additions and integer constant multiplications, with very low area complexity, for null mean input sequences. The ACT can also be computed non-exactly for any input sequence, with low area complexity and low power consumption, utilizing the novel architecture described. However, as a trade-off, the ACT algorithm requires 10 non-uniformly sampled data points to calculate the 8-point DCT. This requirement can easily be satisfied for applications dealing with spatial signals such as image sensors and biomedical sensor arrays, by placing sensor elements in a non-uniform grid. In this work, a hardware architecture for the computation of the null mean ACT is proposed, followed by a novel architectures that extend the ACT for non-null mean signals. All circuits are physically implemented and tested using the Xilinx XC6VLX240T FPGA device and synthesized for 45 nm TSMC standard-cell library for performance assessment. ","[{'version': 'v1', 'created': 'Mon, 30 Oct 2017 19:06:19 GMT'}]",2017-11-01,"[['Rajapaksha', 'N.', ''], ['Madanayake', 'A.', ''], ['Cintra', 'R. J.', ''], ['Adikari', 'J.', ''], ['Dimitrov', 'V. S.', '']]","['Discrete cosine transform', 'Arithmetic cosine transform', 'fast algorithms', 'VLSI']" 4,0904.0352,Rami Puzis,"Shlomi Dolev, Yuval Elovici, Rami Puzis, Polina Zilberman","Incremental Deployment of Network Monitors Based on Group Betweenness Centrality",,"Information Processing Letters, 109(20), 1172-1176 (2009)",10.1016/j.ipl.2009.07.019,,cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In many applications we are required to increase the deployment of a distributed monitoring system on an evolving network. In this paper we present a new method for finding candidate locations for additional deployment in the network. This method is based on the Group Betweenness Centrality (GBC) measure that is used to estimate the influence of a group of nodes over the information flow in the network. The new method assists in finding the location of k additional monitors in the evolving network, such that the portion of additional traffic covered is at least (1-1/e) of the optimal. ","[{'version': 'v1', 'created': 'Thu, 2 Apr 2009 09:32:51 GMT'}, {'version': 'v2', 'created': 'Sun, 12 Jul 2009 10:01:36 GMT'}, {'version': 'v3', 'created': 'Fri, 2 Oct 2020 13:32:31 GMT'}]",2020-10-05,"[['Dolev', 'Shlomi', ''], ['Elovici', 'Yuval', ''], ['Puzis', 'Rami', ''], ['Zilberman', 'Polina', '']]","['Graph Algorithms', 'Distributed Systems', 'Interconnection Networks', 'Network']" 5,1001.0639,Arnaud Labourel,"Jurek Czyzowicz, David Ilcinkas (LaBRI, INRIA Bordeaux - Sud-Ouest), Arnaud Labourel (LaBRI), Andrzej Pelc",Optimal Exploration of Terrains with Obstacles,,,10.1007/978-3-642-13731-0_1,,cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A mobile robot represented by a point moving in the plane has to explore an unknown terrain with obstacles. Both the terrain and the obstacles are modeled as arbitrary polygons. We consider two scenarios: the unlimited vision, when the robot situated at a point p of the terrain explores (sees) all points q of the terrain for which the segment pq belongs to the terrain, and the limited vision, when we require additionally that the distance between p and q be at most 1. All points of the terrain (except obstacles) have to be explored and the performance of an exploration algorithm is measured by the length of the trajectory of the robot. For unlimited vision we show an exploration algorithm with complexity O(P + D?k), where P is the total perimeter of the terrain (including perimeters of obstacles), D is the diameter of the convex hull of the terrain, and k is the number of obstacles. We do not assume knowledge of these parameters. We also prove a matching lower bound showing that the above complexity is optimal, even if the terrain is known to the robot. For limited vision we show exploration algorithms with complexity O(P + A + ?Ak), where A is the area of the terrain (excluding obstacles). Our algorithms work either for arbitrary terrains, if one of the parameters A or k is known, or for c-fat terrains, where c is any constant (unknown to the robot) and no additional knowledge is assumed. (A terrain T with obstacles is c-fat if R/r ? c, where R is the radius of the smallest disc containing T and r is the radius of the largest disc contained in T .) We also prove a matching lower bound ?(P + A + ?Ak) on the complexity of exploration for limited vision, even if the terrain is known to the robot. ","[{'version': 'v1', 'created': 'Tue, 5 Jan 2010 07:29:11 GMT'}]",2015-05-14,"[['Czyzowicz', 'Jurek', '', 'LaBRI, INRIA Bordeaux - Sud-Ouest'], ['Ilcinkas', 'David', '', 'LaBRI, INRIA Bordeaux - Sud-Ouest'], ['Labourel', 'Arnaud', '', 'LaBRI'], ['Pelc', 'Andrzej', '']]","['mobile robot', 'exploration', 'polygon', 'obstacle']" 6,1802.07489,Sylwia Polberg,"Anthony Hunter, Sylwia Polberg, Matthias Thimm","Epistemic Graphs for Representing and Reasoning with Positive and Negative Influences of Arguments",,,10.1016/j.artint.2020.103236,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper introduces epistemic graphs as a generalization of the epistemic approach to probabilistic argumentation. In these graphs, an argument can be believed or disbelieved up to a given degree, thus providing a more fine--grained alternative to the standard Dung's approaches when it comes to determining the status of a given argument. Furthermore, the flexibility of the epistemic approach allows us to both model the rationale behind the existing semantics as well as completely deviate from them when required. Epistemic graphs can model both attack and support as well as relations that are neither support nor attack. The way other arguments influence a given argument is expressed by the epistemic constraints that can restrict the belief we have in an argument with a varying degree of specificity. The fact that we can specify the rules under which arguments should be evaluated and we can include constraints between unrelated arguments permits the framework to be more context--sensitive. It also allows for better modelling of imperfect agents, which can be important in multi--agent applications. ","[{'version': 'v1', 'created': 'Wed, 21 Feb 2018 10:05:49 GMT'}, {'version': 'v2', 'created': 'Tue, 14 Jan 2020 11:45:14 GMT'}]",2020-01-15,"[['Hunter', 'Anthony', ''], ['Polberg', 'Sylwia', ''], ['Thimm', 'Matthias', '']]","['abstract argumentation', 'epistemic argumentation', 'bipolar argumentation']" 7,1612.08012,Arnaud Arindra Adiyoso Setio,"Arnaud Arindra Adiyoso Setio, Alberto Traverso, Thomas de Bel, Moira S.N. Berens, Cas van den Bogaard, Piergiorgio Cerello, Hao Chen, Qi Dou, Maria Evelina Fantacci, Bram Geurts, Robbert van der Gugten, Pheng Ann Heng, Bart Jansen, Michael M.J. de Kaste, Valentin Kotov, Jack Yu-Hung Lin, Jeroen T.M.C. Manders, Alexander S\'onora-Mengana, Juan Carlos Garc\'ia-Naranjo, Evgenia Papavasileiou, Mathias Prokop, Marco Saletta, Cornelia M Schaefer-Prokop, Ernst T. Scholten, Luuk Scholten, Miranda M. Snoeren, Ernesto Lopez Torres, Jef Vandemeulebroucke, Nicole Walasek, Guido C.A. Zuidhof, Bram van Ginneken, Colin Jacobs","Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the LUNA16 challenge",,,10.1016/j.media.2017.06.015,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have only been few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of the two tracks: 1) the complete nodule detection track where a complete CAD system should be developed, or 2) the false positive reduction track where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems. ","[{'version': 'v1', 'created': 'Fri, 23 Dec 2016 15:47:27 GMT'}, {'version': 'v2', 'created': 'Thu, 5 Jan 2017 08:26:13 GMT'}, {'version': 'v3', 'created': 'Fri, 30 Jun 2017 07:56:47 GMT'}, {'version': 'v4', 'created': 'Sat, 15 Jul 2017 12:11:40 GMT'}]",2017-07-18,"[['Setio', 'Arnaud Arindra Adiyoso', ''], ['Traverso', 'Alberto', ''], ['de Bel', 'Thomas', ''], ['Berens', 'Moira S. N.', ''], ['Bogaard', 'Cas van den', ''], ['Cerello', 'Piergiorgio', ''], ['Chen', 'Hao', ''], ['Dou', 'Qi', ''], ['Fantacci', 'Maria Evelina', ''], ['Geurts', 'Bram', ''], ['van der Gugten', 'Robbert', ''], ['Heng', 'Pheng Ann', ''], ['Jansen', 'Bart', ''], ['de Kaste', 'Michael M. J.', ''], ['Kotov', 'Valentin', ''], ['Lin', 'Jack Yu-Hung', ''], ['Manders', 'Jeroen T. M. C.', ''], ['Sónora-Mengana', 'Alexander', ''], ['García-Naranjo', 'Juan Carlos', ''], ['Papavasileiou', 'Evgenia', ''], ['Prokop', 'Mathias', ''], ['Saletta', 'Marco', ''], ['Schaefer-Prokop', 'Cornelia M', ''], ['Scholten', 'Ernst T.', ''], ['Scholten', 'Luuk', ''], ['Snoeren', 'Miranda M.', ''], ['Torres', 'Ernesto Lopez', ''], ['Vandemeulebroucke', 'Jef', ''], ['Walasek', 'Nicole', ''], ['Zuidhof', 'Guido C. A.', ''], ['van Ginneken', 'Bram', ''], ['Jacobs', 'Colin', '']]","['pulmonary nodules', 'computed tomography', 'computer-aided detection', 'medical image challenges', 'deeplearning', 'convolutional networks']" 8,1610.00257,Maria Kulikova V.,Maria V. Kulikova,"Square-root algorithms for maximum correntropy estimation of linear discrete-time systems in presence of non-Gaussian noise","The paper is accepted for publication in Systems & Control Letters, 2017","Systems & Control Letters, 108: 8-15, 2017",10.1016/j.sysconle.2017.07.016,,cs.SY math.OC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Recent developments in the realm of state estimation of stochastic dynamic systems in the presence of non-Gaussian noise have induced a new methodology called the maximum correntropy filtering. The filters designed under the maximum correntropy criterion (MCC) utilize a similarity measure (or correntropy) between two random variables as a cost function. They are shown to improve the estimators' robustness against outliers or impulsive noises. In this paper we explore the numerical stability of linear filtering technique proposed recently under the MCC approach. The resulted estimator is called the maximum correntropy criterion Kalman filter (MCC-KF). The purpose of this study is two-fold. First, the previously derived MCC-KF equations are revise and the related Kalman-like equality conditions are proved. Based on this theoretical finding, we improve the MCC-KF technique in the sense that the new method possesses a better estimation quality with the reduced computational cost compared with the previously proposed MCC-KF variant. Second, we devise some square-root implementations for the newly-designed improved estimator. The square-root algorithms are well known to be inherently more stable than the conventional Kalman-like implementations, which process the full error covariance matrix in each iteration step of the filter. Additionally, following the latest achievements in the KF community, all square-root algorithms are formulated here in the so-called array form. All the MCC-KF variants developed in this paper are demonstrated to outperform the previously proposed MCC-KF version in two numerical examples. ","[{'version': 'v1', 'created': 'Sun, 2 Oct 2016 10:58:58 GMT'}, {'version': 'v2', 'created': 'Mon, 2 Jan 2017 16:32:01 GMT'}, {'version': 'v3', 'created': 'Fri, 14 Apr 2017 11:46:14 GMT'}, {'version': 'v4', 'created': 'Tue, 5 Sep 2017 12:15:35 GMT'}]",2017-09-06,"[['Kulikova', 'Maria V.', '']]","['Maximum correntropy criterion', 'Kalman filter', 'square-root filtering', 'robust estimation']" 9,1208.6324,Ines Klimann,Ines Klimann (LIAFA),"The finiteness of a group generated by a 2-letter invertible-reversible Mealy automaton is decidable",,"30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013), Kiel : Germany (2013)",10.4230/LIPIcs.STACS.2013.502,,cs.FL math.GR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We prove that a semigroup generated by a reversible two-state Mealy automaton is either finite or free of rank 2. This fact leads to the decidability of finiteness for groups generated by two-state or two-letter invertible-reversible Mealy automata and to the decidability of freeness for semigroups generated by two-state invertible-reversible Mealy automata. ","[{'version': 'v1', 'created': 'Thu, 30 Aug 2012 22:15:36 GMT'}, {'version': 'v2', 'created': 'Tue, 22 Oct 2013 09:12:05 GMT'}]",2013-10-23,"[['Klimann', 'Ines', '', 'LIAFA']]","['Mealy automata', 'automaton semigroups', 'decidability of finiteness', 'decidability of freeness', 'Nerode equivalence']" 10,1304.5974,Kevin Xu,Kevin S. Xu and Alfred O. Hero III,"Dynamic stochastic blockmodels: Statistical models for time-evolving networks",,"Proceedings of the 6th International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction (2013) 201-210",10.1007/978-3-642-37210-0_22,,cs.SI cs.LG physics.soc-ph stat.ME,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Significant efforts have gone into the development of statistical models for analyzing data in the form of networks, such as social networks. Most existing work has focused on modeling static networks, which represent either a single time snapshot or an aggregate view over time. There has been recent interest in statistical modeling of dynamic networks, which are observed at multiple points in time and offer a richer representation of many complex phenomena. In this paper, we propose a state-space model for dynamic networks that extends the well-known stochastic blockmodel for static networks to the dynamic setting. We then propose a procedure to fit the model using a modification of the extended Kalman filter augmented with a local search. We apply the procedure to analyze a dynamic social network of email communication. ","[{'version': 'v1', 'created': 'Mon, 22 Apr 2013 15:07:19 GMT'}]",2013-04-23,"[['Xu', 'Kevin S.', ''], ['Hero', 'Alfred O.', 'III']]","['dynamic network', 'stochastic blockmodel', 'state-space model']" 11,1701.05402,Mohammad Asif Mr.,"M. A. Habibi, M. Ulman, J. Van\v{e}k, J. Pavl\'ik","Measurement and Analysis of Quality of Service of Mobile Networks in Afghanistan End User Perspective",in AGRIS on-line Papers in Economics' and Informatics. December 2016,,10.7160/aol.2016.080407,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Enhanced Quality of Service (QoS) and satisfaction of mobile phone user are major concerns of a service provider. In order to manage network efficiently and to provide enhanced end to end Quality of Experience (QoE), operator is expected to measure and analyze QoS from various perspectives and at different relevant points of network. The scope of this paper is measurement and statistically analysis of QoS of mobile networks from end user perspective in Afghanistan. The study is based on primary data collected on random basis from 1,515 mobile phone users of five cellular operators. The paper furthermore proposes adequate technical solutions to mobile operators in order to address existing challenges in the area of QoS and to remain competitive in the market. Based on the result of processed data, considering geographical locations, population and telecom regulations of the government, authors recommend deployment of small cells (SCs), increasing number of regular performance tests, optimal placement of base stations, increasing number of carriers, and high order sectorization as proposed technical solutions. ","[{'version': 'v1', 'created': 'Thu, 19 Jan 2017 13:12:36 GMT'}]",2017-01-20,"[['Habibi', 'M. A.', ''], ['Ulman', 'M.', ''], ['Vaněk', 'J.', ''], ['Pavlík', 'J.', '']]","['Quality of service', 'quality of experience', 'quality of service parameters', 'mobile network', 'end user', 'data measurement', 'statistical analysis', 'Afghanistan']" 12,1706.01406,Alessandro Aimar,"Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, Moritz B. Milde, Federico Corradi, Alejandro Linares-Barranco, Shih-Chii Liu, Tobi Delbruck","NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps",,,10.1109/TNNLS.2018.2852335,,cs.CV cs.NE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though Graphical Processing Units (GPUs) are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1x1 to 7x7. NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq FPGA platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Post-synthesis simulations using Mentor Modelsim in a 28nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and achieves a power efficiency of over 3TOp/s/W in a core area of 6.3mm$^2$. As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real time interactive demonstrations. ","[{'version': 'v1', 'created': 'Mon, 5 Jun 2017 16:20:24 GMT'}, {'version': 'v2', 'created': 'Tue, 6 Mar 2018 10:05:33 GMT'}]",2020-10-27,"[['Aimar', 'Alessandro', ''], ['Mostafa', 'Hesham', ''], ['Calabrese', 'Enrico', ''], ['Rios-Navarro', 'Antonio', ''], ['Tapiador-Morales', 'Ricardo', ''], ['Lungu', 'Iulia-Alexandra', ''], ['Milde', 'Moritz B.', ''], ['Corradi', 'Federico', ''], ['Linares-Barranco', 'Alejandro', ''], ['Liu', 'Shih-Chii', ''], ['Delbruck', 'Tobi', '']]","['Convolutional Neural Networks', 'VLSI', 'FPGA', 'computer vision', 'artificial intelligence']" 13,1809.00711,Edith Zavala,"Edith Zavala, Xavier Franch, Jordi Marco",Adaptive Monitoring: A Systematic Mapping,"57 pages, 20 figures, 8 tables, Inf. Softw. Technol., Aug. 2018, pre-print, CC-BY-NC-ND 4.0 license, https://doi.org/10.1016/j.infsof.2018.08.013",,10.1016/j.infsof.2018.08.013,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Context: Adaptive monitoring is a method used in a variety of domains for responding to changing conditions. It has been applied in different ways, from monitoring systems' customization to re-composition, in different application domains. However, to the best of our knowledge, there are no studies analyzing how adaptive monitoring differs or resembles among the existing approaches. Method: We have conducted a systematic mapping study of adaptive monitoring approaches following recommended practices. We have applied automatic search and snowballing sampling on different sources and used rigorous selection criteria to retrieve the final set of papers. Moreover, we have used an existing qualitative analysis method for extracting relevant data from studies. Finally, we have applied data mining techniques for identifying patterns in the solutions. Conclusions: This cross-domain overview of the current state of the art on adaptive monitoring may be a solid and comprehensive baseline for researchers and practitioners in the field. Especially, it may help in identifying opportunities of research, for instance, the need of proposing generic and flexible software engineering solutions for supporting adaptive monitoring in a variety of systems. ","[{'version': 'v1', 'created': 'Mon, 3 Sep 2018 20:19:31 GMT'}]",2018-09-05,"[['Zavala', 'Edith', ''], ['Franch', 'Xavier', ''], ['Marco', 'Jordi', '']]","['Adaptive Monitoring', 'Monitoring Reconfiguration', 'Monitor Customization', 'State of the Art', 'Systematic Mapping Study', 'Literature Review']" 14,1801.04971,Kathleen Gregory,"Kathleen Gregory, Helena Cousijn, Paul Groth, Andrea Scharnhorst, Sally Wyatt",Understanding Data Search as a Socio-technical Practice,"19 pages, 3 figures, 7 tables",Journal of Information Science. (2019). 0165551519837182,10.1177/0165551519837182,,cs.DL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Open research data are heralded as having the potential to increase effectiveness, productivity, and reproducibility in science, but little is known about the actual practices involved in data search. The socio-technical problem of locating data for reuse is often reduced to the technological dimension of designing data search systems. We combine a bibliometric study of the current academic discourse around data search with interviews with data seekers. In this article, we explore how adopting a contextual, socio-technical perspective can help to understand user practices and behavior and ultimately help to improve the design of data discovery systems. ","[{'version': 'v1', 'created': 'Mon, 15 Jan 2018 20:09:56 GMT'}, {'version': 'v2', 'created': 'Thu, 25 Jan 2018 09:42:25 GMT'}, {'version': 'v3', 'created': 'Mon, 18 Feb 2019 09:36:28 GMT'}]",2020-03-12,"[['Gregory', 'Kathleen', ''], ['Cousijn', 'Helena', ''], ['Groth', 'Paul', ''], ['Scharnhorst', 'Andrea', ''], ['Wyatt', 'Sally', '']]","['Data search', 'data reuse', 'data retrieval', 'information seeking', 'research data']" 15,2102.06740,Nicholas Baskerville,Nicholas P Baskerville and Diego Granziol and Jonathan P Keating,Appearance of Random Matrix Theory in Deep Learning,"33 pages, 14 figures",,10.1016/j.physa.2021.126742,,cs.LG math-ph math.MP stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We investigate the local spectral statistics of the loss surface Hessians of artificial neural networks, where we discover excellent agreement with Gaussian Orthogonal Ensemble statistics across several network architectures and datasets. These results shed new light on the applicability of Random Matrix Theory to modelling neural networks and suggest a previously unrecognised role for it in the study of loss surfaces in deep learning. Inspired by these observations, we propose a novel model for the true loss surfaces of neural networks, consistent with our observations, which allows for Hessian spectral densities with rank degeneracy and outliers, extensively observed in practice, and predicts a growing independence of loss gradients as a function of distance in weight-space. We further investigate the importance of the true loss surface in neural networks and find, in contrast to previous work, that the exponential hardness of locating the global minimum has practical consequences for achieving state of the art performance. ","[{'version': 'v1', 'created': 'Fri, 12 Feb 2021 19:49:19 GMT'}, {'version': 'v2', 'created': 'Fri, 5 Nov 2021 12:32:05 GMT'}, {'version': 'v3', 'created': 'Fri, 24 Dec 2021 11:22:13 GMT'}]",2021-12-28,"[['Baskerville', 'Nicholas P', ''], ['Granziol', 'Diego', ''], ['Keating', 'Jonathan P', '']]","['random matrix theory', 'deep learning', 'machine learning', 'neuralnetworks', 'local statistics', 'Wigner surmise']" 16,1812.05961,Thomas Rolinger,"Thomas B. Rolinger, Tyler A. Simon, Christopher D. Krieger",Parallel Sparse Tensor Decomposition in Chapel,"2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 5th Annual Chapel Implementers and Users Workshop (CHIUW 2018)",,10.1109/IPDPSW.2018.00143,,cs.DC cs.PF,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In big-data analytics, using tensor decomposition to extract patterns from large, sparse multivariate data is a popular technique. Many challenges exist for designing parallel, high performance tensor decomposition algorithms due to irregular data accesses and the growing size of tensors that are processed. There have been many efforts at implementing shared-memory algorithms for tensor decomposition, most of which have focused on the traditional C/C++ with OpenMP framework. However, Chapel is becoming an increasingly popular programing language due to its expressiveness and simplicity for writing scalable parallel programs. In this work, we port a state of the art C/OpenMP parallel sparse tensor decomposition tool, SPLATT, to Chapel. We present a performance study that investigates bottlenecks in our Chapel code and discusses approaches for improving its performance. Also, we discuss features in Chapel that would have been beneficial to our porting effort. We demonstrate that our Chapel code is competitive with the C/OpenMP code for both runtime and scalability, achieving 83%-96% performance of the original code and near linear scalability up to 32 cores. ","[{'version': 'v1', 'created': 'Fri, 14 Dec 2018 14:39:26 GMT'}]",2018-12-17,"[['Rolinger', 'Thomas B.', ''], ['Simon', 'Tyler A.', ''], ['Krieger', 'Christopher D.', '']]","['Chapel', 'OpenMP', 'sparse', 'tensor decomposition', 'performance study']" 17,1002.2440,Bernhard von Stengel,"Christoph Ambuehl, Bernd Gaertner, Bernhard von Stengel",Optimal Lower Bounds for Projective List Update Algorithms,"Version 3 same as version 2, but date in LaTeX \today macro replaced by March 8, 2012","ACM Transactions on Algorithms (TALG) Volume 9, Issue 4, September 2013, Article 31, 18 pages",10.1145/2500120,,cs.CC cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The list update problem is a classical online problem, with an optimal competitive ratio that is still open, known to be somewhere between 1.5 and 1.6. An algorithm with competitive ratio 1.6, the smallest known to date, is COMB, a randomized combination of BIT and the TIMESTAMP algorithm TS. This and almost all other list update algorithms, like MTF, are projective in the sense that they can be defined by looking only at any pair of list items at a time. Projectivity (also known as ""list factoring"") simplifies both the description of the algorithm and its analysis, and so far seems to be the only way to define a good online algorithm for lists of arbitrary length. In this paper we characterize all projective list update algorithms and show that their competitive ratio is never smaller than 1.6 in the partial cost model. Therefore, COMB is a best possible projective algorithm in this model. ","[{'version': 'v1', 'created': 'Thu, 11 Feb 2010 21:48:07 GMT'}, {'version': 'v2', 'created': 'Wed, 7 Mar 2012 08:21:54 GMT'}, {'version': 'v3', 'created': 'Thu, 8 Mar 2012 19:00:15 GMT'}]",2014-12-02,"[['Ambuehl', 'Christoph', ''], ['Gaertner', 'Bernd', ''], ['von Stengel', 'Bernhard', '']]","['linear lists', 'online algorithms', 'competitive analysis']" 18,1504.08213,Jean Louis Fendji Kedieng Ebongue,Jean Louis Ebongue Kedieng Fendji and Jean Michel Nlong,Rural Wireless Mesh Network: A Design Methodology,"9 pages, 2 figures, 3 tables","International Journal of Communications, Network and System Sciences, 8, 1-9",10.4236/ijcns.2015.81001,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Wireless Mesh Network is presented as an appealing solution for bridging the digital divide between developed and under-developed regions. But the planning and deployment of these networks are not just a technical matter, since the success depends on many other factors tied to the related region. Although we observe some deployments, to ensure usefulness and sustainability, there is still a need of concrete design process model and proper network planning approach for rural regions, especially in Sub-Saharan Africa. This paper presents a design methodology to provide network connectivity from a landline node in a rural region at very low cost. We propose a methodology composed of ten steps, starting by a deep analysis of the region in order to identify relevant constraints and useful applications to sustain local activities and communication. Approach for planning the physical architecture of the network is based on an indoor-outdoor deployment for reducing the overall cost of the network. ","[{'version': 'v1', 'created': 'Thu, 30 Apr 2015 13:20:50 GMT'}]",2015-05-01,"[['Fendji', 'Jean Louis Ebongue Kedieng', ''], ['Nlong', 'Jean Michel', '']]","['Design Methodology', 'Planning', 'Rural Regions', 'Wireless Mesh Network']" 19,1412.3639,Mostafa Zaman Chowdhury,"Mostafa Zaman Chowdhury, Yeong Min Jang, and Zygmunt J. Haas","Cost-Effective Frequency Planning for Capacity Enhancement of Femtocellular Networks",,"Wireless Personal Communications, vol. 60, no. 1, pp. 83-104, Sept. 2011",10.1007/s11277-011-0258-y,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Femtocellular networks will co-exist with macrocellular networks, mitigation of the interference between these two network types is a key challenge for successful integration of these two technologies. In particular, there are several interference mechanisms between the femtocellular and the macrocellular networks, and the effects of the resulting interference depend on the density of femtocells and the overlaid macrocells in a particular coverage area. While improper interference management can cause a significant reduction in the system capacity and can increase the outage probability, effective and efficient frequency allocation among femtocells and macrocells can result in a successful co-existence of these two technologies. Furthermore, highly dense femtocellular deployments the ultimate goal of the femtocellular technology will require significant degree of self-organization in lieu of manual configuration. In this paper, we present various femtocellular network deployment scenarios, and we propose a number of frequency-allocation schemes to mitigate the interference and to increases the spectral efficiency of the integrated network. These schemes include: shared frequency band, dedicated frequency band, sub-frequency band, static frequency-reuse, and dynamic frequency-reuse. ","[{'version': 'v1', 'created': 'Thu, 11 Dec 2014 13:04:25 GMT'}]",2014-12-12,"[['Chowdhury', 'Mostafa Zaman', ''], ['Jang', 'Yeong Min', ''], ['Haas', 'Zygmunt J.', '']]","['Femtocell', 'Femtocellular Network', 'Overlay Networks', 'Interference']" 20,1003.5440,Secretary Aircc Journal,"K.Ayyappan (1) and R. Kumar (2) ((1) Rajiv Gandhi College of Engineering and Technology, India, (2) SRM University, India)",QoS Based Capacity Enhancement for WCDMA Network with Coding Scheme,"10 Pages, VLSICS Journal","International Journal Of VLSI Design & Communication Systems 1.1 (2010) 10-19",10.5121/vlsic.2010.1102,,cs.NI,http://creativecommons.org/licenses/by-nc-sa/3.0/," The wide-band code division multiple access (WCDMA) based 3G and beyond cellular mobile wireless networks are expected to provide a diverse range of multimedia services to mobile users with guaranteed quality of service (QoS). To serve diverse quality of service requirements of these networks it necessitates new radio resource management strategies for effective utilization of network resources with coding schemes. Call admission control (CAC) is a significant component in wireless networks to guarantee quality of service requirements and also to enhance the network resilience. In this paper capacity enhancement for WCDMA network with convolutional coding scheme is discussed and compared with block code and without coding scheme to achieve a better balance between resource utilization and quality of service provisioning. The model of this network is valid for the real-time (RT) and non-real-time (NRT) services having different data rate. Simulation results demonstrate the effectiveness of the network using convolutional code in terms of capacity enhancement and QoS of the voice and video services. ","[{'version': 'v1', 'created': 'Mon, 29 Mar 2010 07:07:31 GMT'}]",2010-07-15,"[['Ayyappan', 'K.', ''], ['Kumar', 'R.', '']]","['Call admission control', 'Wide band code division multiple access', 'Wireless networks', 'Quality of service']" 21,1709.01304,Stefan Wagner,Stefan Wagner and Florian Deissenboeck,"Abstractness, specificity, and complexity in software design","8 pages, 3 figures","Proceedings of the 2nd International Workshop on The Role of Abstraction in Software Engineering (ROA '08), pages 35-42, ACM, 2008",10.1145/1370164.1370173,,cs.SE cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Abstraction is one of the fundamental concepts of software design. Consequently, the determination of an appropriate abstraction level for the multitude of artefacts that form a software system is an integral part of software engineering. However, the very nature of abstraction in software design and particularly its interrelation with equally important concepts like complexity, specificity or genericity are not fully understood today. As a step towards a better understanding of the trade-offs involved, this paper proposes a distinction of abstraction into two types that have different effects on the specificity and the complexity of artefacts. We discuss the roles of the two types of abstraction in software design and explain the interrelations between abstractness, specificity, and complexity. Furthermore, we illustrate the benefit of the proposed distinction with multiple examples and describe consequences of our findings for software design activities. ","[{'version': 'v1', 'created': 'Tue, 5 Sep 2017 09:38:06 GMT'}]",2017-09-06,"[['Wagner', 'Stefan', ''], ['Deissenboeck', 'Florian', '']]","['Abstractness', 'specificity', 'complexity', 'genericity']" 22,1807.00092,Ralf-Peter Mundani,"Ralf-Peter Mundani (1), J\'er\^ome Frisch (2), Vasco Varduhn (3), and Ernst Rank (1) ((1) Technische Universit\""at M\""unchen, Munich, Germany, (2) RWTH Aachen University, Aachen, Germany, (3) University of Minnesota, Minneapolis, MN, USA)","A sliding window technique for interactive high-performance computing scenarios","21 pages, 12 figures",Advances in Engineering Software 84 (2015) 21-30,10.1016/j.advengsoft.2015.02.003,,cs.CE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Interactive high-performance computing is doubtlessly beneficial for many computational science and engineering applications whenever simulation results should be visually processed in real time, i.e. during the computation process. Nevertheless, interactive HPC entails a lot of new challenges that have to be solved - one of them addressing the fast and efficient data transfer between a simulation back end and visualisation front end, as several gigabytes of data per second are nothing unusual for a simulation running on some (hundred) thousand cores. Here, a new approach based on a sliding window technique is introduced that copes with any bandwidth limitations and allows users to study both large and small scale effects of the simulation results in an interactive fashion. ","[{'version': 'v1', 'created': 'Sat, 30 Jun 2018 00:17:52 GMT'}]",2018-07-03,"[['Mundani', 'Ralf-Peter', ''], ['Frisch', 'Jérôme', ''], ['Varduhn', 'Vasco', ''], ['Rank', 'Ernst', '']]","['interactive HPC', 'sliding window', 'computational fluid dynamics']" 23,1304.3876,Daowen Qiu,"Shenggen Zheng, Daowen Qiu, Jozef Gruska","Power of the interactive proof systems with verifiers modeled by semi-quantum two-way finite automata","26 pages, 5 figures, some references have been added, and comments are welcome",Information and Computation 241(2015) 197-214,10.1016/j.ic.2015.02.003,,cs.CC cs.CR quant-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we explore the power of AM for the case that verifiers are {\em two-way finite automata with quantum and classical states} (2QCFA)--introduced by Ambainis and Watrous in 2002--and the communications are classical. It is of interest to consider AM with such ""semi-quantum"" verifiers because they use only limited quantum resources. Our main result is that such Quantum Arthur-Merlin proof systems (QAM(2QCFA)) with polynomial expected running time are more powerful than in the case verifiers are two-way probabilistic finite automata (AM(2PFA)) with polynomial expected running time. Moreover, we prove that there is a language which can be recognized by an exponential expected running time QAM(2QCFA), but can not be recognized by any AM(2PFA), and that the NP-complete language $L_{knapsack}$ can also be recognized by a QAM(2QCFA) working only on quantum pure states using unitary operators. ","[{'version': 'v1', 'created': 'Sun, 14 Apr 2013 04:59:44 GMT'}, {'version': 'v2', 'created': 'Tue, 7 May 2013 07:30:26 GMT'}, {'version': 'v3', 'created': 'Sat, 2 May 2015 15:55:26 GMT'}]",2015-05-05,"[['Zheng', 'Shenggen', ''], ['Qiu', 'Daowen', ''], ['Gruska', 'Jozef', '']]","['Quantum computing', 'quantum finite automata', 'quantum Arthur-Merlin proof systems', 'two-way finite automata with quantum', 'classical states']" 24,0802.2685,Maziar Nekovee,C. J. Rhodes and M. Nekovee,The Opportunistic Transmission of Wireless Worms between Mobile Devices,Submitted for publication,,10.1016/j.physa.2008.09.017,,cs.NI cond-mat.stat-mech cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The ubiquity of portable wireless-enabled computing and communications devices has stimulated the emergence of malicious codes (wireless worms) that are capable of spreading between spatially proximal devices. The potential exists for worms to be opportunistically transmitted between devices as they move around, so human mobility patterns will have an impact on epidemic spread. The scenario we address in this paper is proximity attacks from fleetingly in-contact wireless devices with short-range communication range, such as Bluetooth-enabled smart phones. An individual-based model of mobile devices is introduced and the effect of population characteristics and device behaviour on the outbreak dynamics is investigated. We show through extensive simulations that in the above scenario the resulting mass-action epidemic models remain applicable provided the contact rate is derived consistently from the underlying mobility model. The model gives useful analytical expressions against which more refined simulations of worm spread can be developed and tested. ","[{'version': 'v1', 'created': 'Tue, 19 Feb 2008 17:07:32 GMT'}]",2009-11-13,"[['Rhodes', 'C. J.', ''], ['Nekovee', 'M.', '']]","['epidemic model; kinetic theory', 'mass action', 'mobile computing', 'wireless worms']" 25,1608.02755,Jordi Pont-Tuset,"Kevis-Kokitsi Maninis and Jordi Pont-Tuset and Pablo Arbel\'aez and Luc Van Gool",Convolutional Oriented Boundaries,ECCV 2016 Camera Ready,,10.1007/978-3-319-46448-0_35,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state-of-the-art, and it generalizes very well to unseen categories and datasets. Particularly, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments on BSDS, PASCAL Context, PASCAL Segmentation, and MS-COCO, showing that COB provides state-of-the-art contours, region hierarchies, and object proposals in all datasets. ","[{'version': 'v1', 'created': 'Tue, 9 Aug 2016 10:37:52 GMT'}]",2016-11-17,"[['Maninis', 'Kevis-Kokitsi', ''], ['Pont-Tuset', 'Jordi', ''], ['Arbeláez', 'Pablo', ''], ['Van Gool', 'Luc', '']]","['Contour detection', 'contour orientation estimation', 'hierarchical image segmentation', 'object proposals']" 26,1005.2405,Ozan Candogan,"Ozan Candogan, Ishai Menache, Asuman Ozdaglar, Pablo A. Parrilo",Flows and Decompositions of Games: Harmonic and Potential Games,,"Mathematics of Operations Research, Vol. 36, No. 3, pp. 474-503, 2011",10.1287/moor.1110.0500,,cs.GT math.OC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we introduce a novel flow representation for finite games in strategic form. This representation allows us to develop a canonical direct sum decomposition of an arbitrary game into three components, which we refer to as the potential, harmonic and nonstrategic components. We analyze natural classes of games that are induced by this decomposition, and in particular, focus on games with no harmonic component and games with no potential component. We show that the first class corresponds to the well-known potential games. We refer to the second class of games as harmonic games, and study the structural and equilibrium properties of this new class of games. Intuitively, the potential component of a game captures interactions that can equivalently be represented as a common interest game, while the harmonic part represents the conflicts between the interests of the players. We make this intuition precise, by studying the properties of these two classes, and show that indeed they have quite distinct and remarkable characteristics. For instance, while finite potential games always have pure Nash equilibria, harmonic games generically never do. Moreover, we show that the nonstrategic component does not affect the equilibria of a game, but plays a fundamental role in their efficiency properties, thus decoupling the location of equilibria and their payoff-related properties. Exploiting the properties of the decomposition framework, we obtain explicit expressions for the projections of games onto the subspaces of potential and harmonic games. This enables an extension of the properties of potential and harmonic games to ""nearby"" games. We exemplify this point by showing that the set of approximate equilibria of an arbitrary game can be characterized through the equilibria of its projection onto the set of potential games. ","[{'version': 'v1', 'created': 'Thu, 13 May 2010 19:55:59 GMT'}, {'version': 'v2', 'created': 'Fri, 25 Jun 2010 03:22:21 GMT'}]",2015-03-17,"[['Candogan', 'Ozan', ''], ['Menache', 'Ishai', ''], ['Ozdaglar', 'Asuman', ''], ['Parrilo', 'Pablo A.', '']]","['decomposition of games', 'potential games', 'harmonic games', 'strategic equivalence']" 27,1709.08526,\'Alvaro L\'opez Garc\'ia,"\'Alvaro L\'opez Garc\'ia, Enol Fern\'andez-del-Castillo, Pablo Orviz Fern\'andez, Isabel Campos Plasencia, Jes\'us Marco de Lucas",Resource provisioning in Science Clouds: Requirements and challenges,,Software: Practice and Experience. 2017;1-13,10.1002/spe.2544,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Cloud computing has permeated into the information technology industry in the last few years, and it is emerging nowadays in scientific environments. Science user communities are demanding a broad range of computing power to satisfy the needs of high-performance applications, such as local clusters, high-performance computing systems, and computing grids. Different workloads are needed from different computational models, and the cloud is already considered as a promising paradigm. The scheduling and allocation of resources is always a challenging matter in any form of computation and clouds are not an exception. Science applications have unique features that differentiate their workloads, hence, their requirements have to be taken into consideration to be fulfilled when building a Science Cloud. This paper will discuss what are the main scheduling and resource allocation challenges for any Infrastructure as a Service provider supporting scientific applications. ","[{'version': 'v1', 'created': 'Mon, 25 Sep 2017 14:44:50 GMT'}]",2017-09-26,"[['García', 'Álvaro López', ''], ['Fernández-del-Castillo', 'Enol', ''], ['Fernández', 'Pablo Orviz', ''], ['Plasencia', 'Isabel Campos', ''], ['de Lucas', 'Jesús Marco', '']]","['Scientific Computing', 'Cloud Computing', 'Science Clouds', 'Cloud Challenges']" 28,1808.03965,Hongyang Gao,"Hongyang Gao, Zhengyang Wang, Shuiwang Ji",Large-Scale Learnable Graph Convolutional Networks,,"In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 1416-1424). ACM (2018)",10.1145/3219819.3219947,,cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Convolutional neural networks (CNNs) have achieved great success on grid-like data such as images, but face tremendous challenges in learning from more generic data such as graphs. In CNNs, the trainable local filters enable the automatic extraction of high-level features. The computation with filters requires a fixed number of ordered units in the receptive fields. However, the number of neighboring units is neither fixed nor are they ordered in generic graphs, thereby hindering the applications of convolutional operations. Here, we address these challenges by proposing the learnable graph convolutional layer (LGCL). LGCL automatically selects a fixed number of neighboring nodes for each feature based on value ranking in order to transform graph data into grid-like structures in 1-D format, thereby enabling the use of regular convolutional operations on generic graphs. To enable model training on large-scale graphs, we propose a sub-graph training method to reduce the excessive memory and computational resource requirements suffered by prior methods on graph convolutions. Our experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that our methods can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network datasets. Our results also indicate that the proposed methods using sub-graph training strategy are more efficient as compared to prior approaches. ","[{'version': 'v1', 'created': 'Sun, 12 Aug 2018 16:22:12 GMT'}]",2018-09-05,"[['Gao', 'Hongyang', ''], ['Wang', 'Zhengyang', ''], ['Ji', 'Shuiwang', '']]","['Deep learning', 'graph convolutional networks', 'graph mining', 'largescale learning']" 29,1807.02192,David Paulius,David Paulius and Yu Sun,A Survey of Knowledge Representation in Service Robotics,"Accepted for RAS Special Issue on Semantic Policy and Action Representations for Autonomous Robots - 22 Pages",,10.1016/j.robot.2019.03.005,,cs.RO cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Within the realm of service robotics, researchers have placed a great amount of effort into learning, understanding, and representing motions as manipulations for task execution by robots. The task of robot learning and problem-solving is very broad, as it integrates a variety of tasks such as object detection, activity recognition, task/motion planning, localization, knowledge representation and retrieval, and the intertwining of perception/vision and machine learning techniques. In this paper, we solely focus on knowledge representations and notably how knowledge is typically gathered, represented, and reproduced to solve problems as done by researchers in the past decades. In accordance with the definition of knowledge representations, we discuss the key distinction between such representations and useful learning models that have extensively been introduced and studied in recent years, such as machine learning, deep learning, probabilistic modelling, and semantic graphical structures. Along with an overview of such tools, we discuss the problems which have existed in robot learning and how they have been built and used as solutions, technologies or developments (if any) which have contributed to solving them. Finally, we discuss key principles that should be considered when designing an effective knowledge representation. ","[{'version': 'v1', 'created': 'Thu, 5 Jul 2018 22:18:08 GMT'}, {'version': 'v2', 'created': 'Tue, 5 Feb 2019 20:24:53 GMT'}, {'version': 'v3', 'created': 'Mon, 25 Mar 2019 00:39:17 GMT'}]",2019-05-02,"[['Paulius', 'David', ''], ['Sun', 'Yu', '']]","['Knowledge representation', 'Robot Learning', 'Domestic Robots', 'Task Planning', 'Service Robotics']" 30,1001.4341,Dariusz Dereniowski,Dariusz Dereniowski,Connected searching of weighted trees,,Theoretical Computer Science 412 (2011) 5700-5713,10.1016/j.tcs.2011.06.017,"Technical Report no 21/2009, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology",cs.DS cs.DM,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we consider the problem of connected edge searching of weighted trees. It is shown that there exists a polynomial-time algorithm for finding optimal connected search strategy for bounded degree trees with arbitrary weights on the edges and vertices of the tree. The problem is NP-complete for general node-weighted trees (the weight of each edge is 1). ","[{'version': 'v1', 'created': 'Mon, 25 Jan 2010 18:30:47 GMT'}]",2021-03-05,"[['Dereniowski', 'Dariusz', '']]","['connected searching', 'graph searching', 'search strategy']" 31,1311.6929,Jonathan Protzenko,Jonathan Protzenko,Illustrating the Mezzo programming language,,"1st French Singaporean Workshop on Formal Methods and Applications (FSFMA 2013)",10.4230/OASIcs.FSFMA.2013.68,,cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," When programmers want to prove strong program invariants, they are usually faced with a choice between using theorem provers and using traditional programming languages. The former requires them to provide program proofs, which, for many applications, is considered a heavy burden. The latter provides less guarantees and the programmer usually has to write run-time assertions to compensate for the lack of suitable invariants expressible in the type system. We introduce Mezzo, a programming language in the tradition of ML, in which the usual concept of a type is replaced by a more precise notion of a permission. Programs written in Mezzo usually enjoy stronger guarantees than programs written in pure ML. However, because Mezzo is based on a type system, the reasoning requires no user input. In this paper, we illustrate the key concepts of Mezzo, highlighting the static guarantees our language provides. ","[{'version': 'v1', 'created': 'Wed, 27 Nov 2013 10:57:10 GMT'}]",2013-11-28,"[['Protzenko', 'Jonathan', '']]","['Type system', 'Language design', 'ML', 'Permissions']" 32,1608.07323,Sherry Ruan,"Sherry Ruan, Jacob O. Wobbrock, Kenny Liou, Andrew Ng, James Landay","Comparing Speech and Keyboard Text Entry for Short Messages in Two Languages on Touchscreen Phones",23 pages,"Journal Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies archive Volume 1 Issue 4, December 2017",10.1145/3161187,,cs.HC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," With the ubiquity of mobile touchscreen devices like smartphones, two widely used text entry methods have emerged: small touch-based keyboards and speech recognition. Although speech recognition has been available on desktop computers for years, it has continued to improve at a rapid pace, and it is currently unknown how today's modern speech recognizers compare to state-of-the-art mobile touch keyboards, which also have improved considerably since their inception. To discover both methods' ""upper-bound performance,"" we evaluated them in English and Mandarin Chinese on an Apple iPhone 6 Plus in a laboratory setting. Our experiment was carried out using Baidu's Deep Speech 2, a deep learning-based speech recognition system, and the built-in Qwerty (English) or Pinyin (Mandarin) Apple iOS keyboards. We found that with speech recognition, the English input rate was 2.93 times faster (153 vs. 52 WPM), and the Mandarin Chinese input rate was 2.87 times faster (123 vs. 43 WPM) than the keyboard for short message transcription under laboratory conditions for both methods. Furthermore, although speech made fewer errors during entry (5.30% vs. 11.22% corrected error rate), it left slightly more errors in the final transcribed text (1.30% vs. 0.79% uncorrected error rate). Our results show that comparatively, under ideal conditions for both methods, upper-bound speech recognition performance has greatly improved compared to prior systems, and might see greater uptake in the future, although further study is required to quantify performance in non-laboratory settings for both methods. ","[{'version': 'v1', 'created': 'Thu, 25 Aug 2016 22:09:02 GMT'}, {'version': 'v2', 'created': 'Wed, 17 Jan 2018 02:14:37 GMT'}]",2018-01-18,"[['Ruan', 'Sherry', ''], ['Wobbrock', 'Jacob O.', ''], ['Liou', 'Kenny', ''], ['Ng', 'Andrew', ''], ['Landay', 'James', '']]","['Mobile phones', 'smartphones', 'text input', 'text entry', 'speech recognition', 'touch keyboards']" 33,1512.06532,Mohamed Lamine Lamali,"Mohamed Lamine Lamali, H\'elia Pouyllau, Dominique Barth (PRISM)","Path computation in multi-layer multi-domain networks: A language theoretic approach","Journal on Computer Communications, 2013",,10.1016/j.comcom.2012.11.009,,cs.DS cs.FL cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Multi-layer networks are networks in which several protocols may coexist at different layers. The Pseudo-Wire architecture provides encapsulation and de-capsulation functions of protocols over Packet-Switched Networks. In a multi-domain context, computing a path to support end-to-end services requires the consideration of encapsulation and decapsulation capabilities. It appears that graph models are not expressive enough to tackle this problem. In this paper, we propose a new model of heterogeneous networks using Automata Theory. A network is modeled as a Push-Down Automaton (PDA) which is able to capture the encapsulation and decapsulation capabilities, the PDA stack corresponding to the stack of encapsulated protocols. We provide polynomial algorithms that compute the shortest path either in hops or in the number of encapsulations and decapsulations along the inter-domain path, the latter reducing manual configurations and possible loops in the path. ","[{'version': 'v1', 'created': 'Mon, 21 Dec 2015 08:55:41 GMT'}]",2015-12-22,"[['Lamali', 'Mohamed Lamine', '', 'PRISM'], ['Pouyllau', 'Hélia', '', 'PRISM'], ['Barth', 'Dominique', '', 'PRISM']]","['Multi-layer networks', 'Pseudo-Wire', 'Push-Down Automata']" 34,1711.07786,Ale\v{s} Bizjak,"Nadia Creignou, Reinhard Pichler, Stefan Woltran",Do Hard SAT-Related Reasoning Tasks Become Easier in the Krom Fragment?,,"Logical Methods in Computer Science, Volume 14, Issue 4 (October 31, 2018) lmcs:4941",10.23638/LMCS-14(4:10)2018,,cs.LO,http://creativecommons.org/licenses/by/4.0/," Many reasoning problems are based on the problem of satisfiability (SAT). While SAT itself becomes easy when restricting the structure of the formulas in a certain way, the situation is more opaque for more involved decision problems. We consider here the CardMinSat problem which asks, given a propositional formula $\phi$ and an atom $x$, whether $x$ is true in some cardinality-minimal model of $\phi$. This problem is easy for the Horn fragment, but, as we will show in this paper, remains $\Theta_2$-complete (and thus $\mathrm{NP}$-hard) for the Krom fragment (which is given by formulas in CNF where clauses have at most two literals). We will make use of this fact to study the complexity of reasoning tasks in belief revision and logic-based abduction and show that, while in some cases the restriction to Krom formulas leads to a decrease of complexity, in others it does not. We thus also consider the CardMinSat problem with respect to additional restrictions to Krom formulas towards a better understanding of the tractability frontier of such problems. ","[{'version': 'v1', 'created': 'Tue, 21 Nov 2017 13:55:13 GMT'}, {'version': 'v2', 'created': 'Fri, 10 Aug 2018 15:02:19 GMT'}, {'version': 'v3', 'created': 'Mon, 29 Oct 2018 09:32:17 GMT'}]",2018-11-27,"[['Creignou', 'Nadia', ''], ['Pichler', 'Reinhard', ''], ['Woltran', 'Stefan', '']]","['Complexity', 'Satisfiability', 'Belief Revision', 'Abduction', 'Krom Formulas']" 35,1308.3384,Mauro Femminella,Mauro Femminella and Gianluca Reali,Consistency Analysis of Sensor Data Distribution,"IEEE IWCMC 2013, Cagliari, Italy, June 2013",,10.1109/IWCMC.2013.6583768,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we analyze the probability of consistency of sensor data distribution systems (SDDS), and determine suitable evaluation models. This problem is typically difficult, since a reliable model taking into account all parameters and processes which affect the system consistency is unavoidably very complex. The simplest candidate approach consists of modeling the state sojourn time, or holding time, as memoryless, and resorting to the well known solutions of Markovian processes. Nevertheless, it may happen that this approach does not fit with some working conditions. In particular, the correct modeling of the SDDS dynamics requires the introduction of a number of parameters, such as the packet transfer time or the packet loss probability, the value of which may determine the suitability of unsuitability of the Markovian model. Candidate alternative solutions include the Erlang phase-type approximation of nearly constant state holding time and a more refined model to account for overlapping events in semi-Markov processes. ","[{'version': 'v1', 'created': 'Thu, 15 Aug 2013 13:22:57 GMT'}]",2016-11-17,"[['Femminella', 'Mauro', ''], ['Reali', 'Gianluca', '']]","['information distribution systems', 'consistency', 'Markov processes', 'Semi-Markov processes', 'Erlang distribution']" 36,1811.01275,Andres Karjus,"Andres Karjus, Richard A. Blythe, Simon Kirby, Kenny Smith","Challenges in detecting evolutionary forces in language change using diachronic corpora",,"Glossa: a journal of general linguistics, 5(1) (2020), p.45",10.5334/gjgl.909,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Newberry et al. (Detecting evolutionary forces in language change, Nature 551, 2017) tackle an important but difficult problem in linguistics, the testing of selective theories of language change against a null model of drift. Having applied a test from population genetics (the Frequency Increment Test) to a number of relevant examples, they suggest stochasticity has a previously under-appreciated role in language evolution. We replicate their results and find that while the overall observation holds, results produced by this approach on individual time series can be sensitive to how the corpus is organized into temporal segments (binning). Furthermore, we use a large set of simulations in conjunction with binning to systematically explore the range of applicability of the Frequency Increment Test. We conclude that care should be exercised with interpreting results of tests like the Frequency Increment Test on individual series, given the researcher degrees of freedom available when applying the test to corpus data, and fundamental differences between genetic and linguistic data. Our findings have implications for selection testing and temporal binning in general, as well as demonstrating the usefulness of simulations for evaluating methods newly introduced to the field. ","[{'version': 'v1', 'created': 'Sat, 3 Nov 2018 20:02:17 GMT'}, {'version': 'v2', 'created': 'Wed, 13 Nov 2019 13:23:09 GMT'}]",2020-05-08,"[['Karjus', 'Andres', ''], ['Blythe', 'Richard A.', ''], ['Kirby', 'Simon', ''], ['Smith', 'Kenny', '']]","['language evolution', 'language change', 'selection', 'drift', 'corpus-based', 'temporalbinning']" 37,0903.0153,Ralph Kretschmer,"Patricio Galeas (1), Ralph Kretschmer (2), Bernd Freisleben (1) ((1) University of Marburg, Germany, (2) Kretschmer Software, Siegen, Germany)","Document Relevance Evaluation via Term Distribution Analysis Using Fourier Series Expansion","9 pages, submitted to proceedings of JCDL-2009","Proceedings of the 2009 Joint international Conference on Digital Libraries (Austin, TX, USA, June 15 - 19, 2009). JCDL '09. ACM, New York, NY, 277-284",10.1145/1555400.1555446,,cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In addition to the frequency of terms in a document collection, the distribution of terms plays an important role in determining the relevance of documents for a given search query. In this paper, term distribution analysis using Fourier series expansion as a novel approach for calculating an abstract representation of term positions in a document corpus is introduced. Based on this approach, two methods for improving the evaluation of document relevance are proposed: (a) a function-based ranking optimization representing a user defined document region, and (b) a query expansion technique based on overlapping the term distributions in the top-ranked documents. Experimental results demonstrate the effectiveness of the proposed approach in providing new possibilities for optimizing the retrieval process. ","[{'version': 'v1', 'created': 'Sun, 1 Mar 2009 17:08:17 GMT'}]",2009-07-18,"[['Galeas', 'Patricio', ''], ['Kretschmer', 'Ralph', ''], ['Freisleben', 'Bernd', '']]","['Ranked retrieval', 'Fourier series', 'content representation andindexing', 'term distribution', 'query expansion']" 38,1702.01636,Enzo Ferrante,Enzo Ferrante and Nikos Paragios,Slice-to-volume medical image registration: a survey,Accepted for publication in Medical Image Analysis,,10.1016/j.media.2017.04.010,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," During the last decades, the research community of medical imaging has witnessed continuous advances in image registration methods, which pushed the limits of the state-of-the-art and enabled the development of novel medical procedures. A particular type of image registration problem, known as slice-to-volume registration, played a fundamental role in areas like image guided surgeries and volumetric image reconstruction. However, to date, and despite the extensive literature available on this topic, no survey has been written to discuss this challenging problem. This paper introduces the first comprehensive survey of the literature about slice-to-volume registration, presenting a categorical study of the algorithms according to an ad-hoc taxonomy and analyzing advantages and disadvantages of every category. We draw some general conclusions from this analysis and present our perspectives on the future of the field. ","[{'version': 'v1', 'created': 'Mon, 6 Feb 2017 14:51:29 GMT'}, {'version': 'v2', 'created': 'Thu, 27 Apr 2017 14:49:15 GMT'}]",2017-05-02,"[['Ferrante', 'Enzo', ''], ['Paragios', 'Nikos', '']]","['Bibliographical review', 'slice-to-volume registration', 'medical image registration', 'medical image analysis']" 39,1407.6470,Gabriele D'Angelo,"Gabriele D'Angelo, Moreno Marzolla","New Trends in Parallel and Distributed Simulation: from Many-Cores to Cloud Computing","Simulation Modelling Practice and Theory (SIMPAT), Elsevier, vol. 49 (December 2014)",,10.1016/j.simpat.2014.06.007,,cs.DC cs.AR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Recent advances in computing architectures and networking are bringing parallel computing systems to the masses so increasing the number of potential users of these kinds of systems. In particular, two important technological evolutions are happening at the ends of the computing spectrum: at the ""small"" scale, processors now include an increasing number of independent execution units (cores), at the point that a mere CPU can be considered a parallel shared-memory computer; at the ""large"" scale, the Cloud Computing paradigm allows applications to scale by offering resources from a large pool on a pay-as-you-go model. Multi-core processors and Clouds both require applications to be suitably modified to take advantage of the features they provide. In this paper, we analyze the state of the art of parallel and distributed simulation techniques, and assess their applicability to multi-core architectures or Clouds. It turns out that most of the current approaches exhibit limitations in terms of usability and adaptivity which may hinder their application to these new computing architectures. We propose an adaptive simulation mechanism, based on the multi-agent system paradigm, to partially address some of those limitations. While it is unlikely that a single approach will work well on both settings above, we argue that the proposed adaptive mechanism has useful features which make it attractive both in a multi-core processor and in a Cloud system. These features include the ability to reduce communication costs by migrating simulation components, and the support for adding (or removing) nodes to the execution architecture at runtime. We will also show that, with the help of an additional support layer, parallel and distributed simulations can be executed on top of unreliable resources. ","[{'version': 'v1', 'created': 'Thu, 24 Jul 2014 07:05:38 GMT'}, {'version': 'v2', 'created': 'Tue, 4 Apr 2017 14:12:19 GMT'}]",2017-04-05,"[[""D'Angelo"", 'Gabriele', ''], ['Marzolla', 'Moreno', '']]","['Simulation', 'Parallel and Distributed Simulation', 'Cloud Computing', 'Adaptive Systems', 'Middleware', 'Agent-Based Simulation']" 40,1109.0090,Arup Pal,Arup Kumar Pal and Anup Sar,An Efficient Codebook Initialization Approach for LBG Algorithm,,,10.5121/ijcsea.2011.1407,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In VQ based image compression technique has three major steps namely (i) Codebook Design, (ii) VQ Encoding Process and (iii) VQ Decoding Process. The performance of VQ based image compression technique depends upon the constructed codebook. A widely used technique for VQ codebook design is the Linde-Buzo-Gray (LBG) algorithm. However the performance of the standard LBG algorithm is highly dependent on the choice of the initial codebook. In this paper, we have proposed a simple and very effective approach for codebook initialization for LBG algorithm. The simulation results show that the proposed scheme is computationally efficient and gives expected performance as compared to the standard LBG algorithm. ","[{'version': 'v1', 'created': 'Thu, 1 Sep 2011 04:47:08 GMT'}]",2011-09-02,"[['Pal', 'Arup Kumar', ''], ['Sar', 'Anup', '']]","['Codebook Generation', 'Image Compression', 'Image Pyramid', 'LBG algorithm', 'Vector Quantization (VQ)']" 41,1507.02084,Iago Landesa-V\'azquez,"Iago Landesa-V\'azquez, Jos\'e Luis Alba-Castro",Shedding Light on the Asymmetric Learning Capability of AdaBoost,,Pattern Recognition Letters 33 (2012) 247-255,10.1016/j.patrec.2011.10.022,,cs.LG cs.AI cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, we propose a different insight to analyze AdaBoost. This analysis reveals that, beyond some preconceptions, AdaBoost can be directly used as an asymmetric learning algorithm, preserving all its theoretical properties. A novel class-conditional description of AdaBoost, which models the actual asymmetric behavior of the algorithm, is presented. ","[{'version': 'v1', 'created': 'Wed, 8 Jul 2015 09:58:06 GMT'}]",2015-07-15,"[['Landesa-Vázquez', 'Iago', ''], ['Alba-Castro', 'José Luis', '']]","['AdaBoost', 'Asymmetry', 'Boosting', 'Classification', 'Cost']" 42,1403.2043,Ruhi Gupta,Ruhi Gupta,"Implementation of an efficient RBAC in Cloud Computing using .NET environment","6 pages, 5 figures, 1 flowchart, published By International Journal of Computer Trends and Technology(IJCTT)","Ruhi Gupta. ""Implementation of an Efficient RBAC Technique of Cloud Computing In .NET Environment in (IJCTT)V8(3):120125,February2014.ISSN:22312803.www.ijcttjournal.org.Published by Seventh Sense Research Group",10.14445/22312803/IJCTT-V8P122,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Cloud Computing is flourishing day by day and it will continue in developing phase until computers and internet era is in existence. While dealing with cloud computing, a number of security and traffic related issues are confronted. Load Balancing is one of the answers to these issues. RBAC deals with such an answer. The proposed technique involves the hybrid of FCFS with RBAC technique. RBAC will assign roles to the clients and clients with a particular role can only access the particular document. Hence identity management and access management are fully implemented using this technique. ","[{'version': 'v1', 'created': 'Sun, 9 Mar 2014 09:59:32 GMT'}]",2014-03-11,"[['Gupta', 'Ruhi', '']]","['ABAC', 'Cloud Computing', 'IBAC', 'FCFS', 'RBAC']" 43,1406.0306,Juergen Zechner,"Benjamin Marussig and J\""urgen Zechner and Gernot Beer and Thomas-Peter Fries","Fast Isogeometric Boundary Element Method based on Independent Field Approximation","32 pages, 27 figures","Computer Methods in Applied Mechanics and Engineering, Volume 284, 2015, Pages 458-488",10.1016/j.cma.2014.09.035,,cs.NA math.NA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," An isogeometric boundary element method for problems in elasticity is presented, which is based on an independent approximation for the geometry, traction and displacement field. This enables a flexible choice of refinement strategies, permits an efficient evaluation of geometry related information, a mixed collocation scheme which deals with discontinuous tractions along non-smooth boundaries and a significant reduction of the right hand side of the system of equations for common boundary conditions. All these benefits are achieved without any loss of accuracy compared to conventional isogeometric formulations. The system matrices are approximated by means of hierarchical matrices to reduce the computational complexity for large scale analysis. For the required geometrical bisection of the domain, a strategy for the evaluation of bounding boxes containing the supports of NURBS basis functions is presented. The versatility and accuracy of the proposed methodology is demonstrated by convergence studies showing optimal rates and real world examples in two and three dimensions. ","[{'version': 'v1', 'created': 'Mon, 2 Jun 2014 09:33:19 GMT'}, {'version': 'v2', 'created': 'Tue, 23 Sep 2014 14:19:44 GMT'}, {'version': 'v3', 'created': 'Tue, 3 Feb 2015 08:28:44 GMT'}]",2015-02-04,"[['Marussig', 'Benjamin', ''], ['Zechner', 'Jürgen', ''], ['Beer', 'Gernot', ''], ['Fries', 'Thomas-Peter', '']]","['Subparametric Formulation', 'Isogeometric Analysis', 'Hierarchical Matrices', 'Elasticity', 'NURBS', 'Convergence']" 44,1901.06880,Anne-Elisabeth Falq,"Anne-Elisabeth Falq, Pierre Fouilhoux, Safia Kedad-Sidhoum","Mixed integer formulations using natural variables for single machine scheduling around a common due date","32 pages, 10 figures",Discrete Applied Mathematics 290 (2021) 36-59,10.1016/j.dam.2020.08.033,,cs.DS cs.DM,http://creativecommons.org/licenses/by-nc-sa/4.0/," While almost all existing works which optimally solve just-in-time scheduling problems propose dedicated algorithmic approaches, we propose in this work mixed integer formulations. We consider a single machine scheduling problem that aims at minimizing the weighted sum of earliness tardiness penalties around a common due-date. Using natural variables, we provide one compact formulation for the unrestrictive case and, for the general case, a non-compact formulation based on non-overlapping inequalities. We show that the separation problem related to the latter formulation is solved polynomially. In this formulation, solutions are only encoded by extreme points. We establish a theoretical framework to show the validity of such a formulation using non-overlapping inequalities, which could be used for other scheduling problems. A Branch-and-Cut algorithm together with an experimental analysis are proposed to assess the practical relevance of this mixed integer programming based methods. ","[{'version': 'v1', 'created': 'Mon, 21 Jan 2019 11:17:10 GMT'}, {'version': 'v2', 'created': 'Fri, 12 Feb 2021 16:11:14 GMT'}]",2021-02-15,"[['Falq', 'Anne-Elisabeth', ''], ['Fouilhoux', 'Pierre', ''], ['Kedad-Sidhoum', 'Safia', '']]","['Just-in-time scheduling', 'Mixed integer programming formulation', 'polyhedral approaches']" 45,1407.1540,Philipp Mayr,"Dagmar Kern, Peter Mutschke, Philipp Mayr","Establishing an Online Access Panel for Interactive Information Retrieval Research","2 pages, 1 figure, 2014 IEEE/ACM Joint Conference on Digital Libraries (JCDL), London, 8th-12th September 2014",,10.1109/JCDL.2014.6970231,,cs.IR cs.DL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We propose an online access panel to support the evaluation process of Interactive Information Retrieval (IIR) systems - called IIRpanel. By maintaining an online access panel with users of IIR systems we assume that the recurring effort to recruit participants for web-based as well as for lab studies can be minimized. We target on using the online access panel not only for our own development processes but to open it for other interested researchers in the field of IIR. In this paper we present the concept of IIRpanel as well as first implementation details. ","[{'version': 'v1', 'created': 'Sun, 6 Jul 2014 20:20:13 GMT'}]",2016-11-18,"[['Kern', 'Dagmar', ''], ['Mutschke', 'Peter', ''], ['Mayr', 'Philipp', '']]","['Online access panel', 'interactive information retrieval', 'retrieval evaluation', 'participant recruiting support', 'user interface development', 'online research']" 46,1807.06443,Filip Zagorski,"Karol Gotfryd, Pawel Lorek, Filip Zagorski",RiffleScrambler - a memory-hard password storing function,Accepted to ESORICS 2018,,10.1007/978-3-319-98989-1_16,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We introduce RiffleScrambler: a new family of directed acyclic graphs and a corresponding data-independent memory hard function with password independent memory access. We prove its memory hardness in the random oracle model. RiffleScrambler is similar to Catena -- updates of hashes are determined by a graph (bit-reversal or double-butterfly graph in Catena). The advantage of the RiffleScrambler over Catena is that the underlying graphs are not predefined but are generated per salt, as in Balloon Hashing. Such an approach leads to higher immunity against practical parallel attacks. RiffleScrambler offers better efficiency than Balloon Hashing since the in-degree of the underlying graph is equal to 3 (and is much smaller than in Ballon Hashing). At the same time, because the underlying graph is an instance of a Superconcentrator, our construction achieves the same time-memory trade-offs. ","[{'version': 'v1', 'created': 'Tue, 17 Jul 2018 14:02:55 GMT'}]",2020-08-10,"[['Gotfryd', 'Karol', ''], ['Lorek', 'Pawel', ''], ['Zagorski', 'Filip', '']]","['Memory hardness', 'password storing', 'key derivation function', 'Markov chains', 'mixing time', 'Thorp shuffle']" 47,1610.08436,Alexandre de Siqueira,"Alexandre Fioravante de Siqueira, Fl\'avio Camargo Cabrera, Aylton Pagamisse, Aldo Eloizo Job","Estimating the concentration of gold nanoparticles incorporated on Natural Rubber membranes using Multi-Level Starlet Optimal Segmentation","22 pages, 8 figures",J Nanopart Res (2014) 16: 2809,10.1007/s11051-014-2809-0,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This study consolidates Multi-Level Starlet Segmentation (MLSS) and Multi-Level Starlet Optimal Segmentation (MLSOS), techniques for photomicrograph segmentation that use starlet wavelet detail levels to separate areas of interest in an input image. Several segmentation levels can be obtained using Multi-Level Starlet Segmentation; after that, Matthews correlation coefficient (MCC) is used to choose an optimal segmentation level, giving rise to Multi-Level Starlet Optimal Segmentation. In this paper, MLSOS is employed to estimate the concentration of gold nanoparticles with diameter around 47 nm, reducted on natural rubber membranes. These samples were used on the construction of SERS/SERRS substrates and in the study of natural rubber membranes with incorporated gold nanoparticles influence on Leishmania braziliensis physiology. Precision, recall and accuracy are used to evaluate the segmentation performance, and MLSOS presents accuracy greater than 88% for this application. ","[{'version': 'v1', 'created': 'Wed, 26 Oct 2016 17:49:49 GMT'}]",2017-06-14,"[['de Siqueira', 'Alexandre Fioravante', ''], ['Cabrera', 'Flávio Camargo', ''], ['Pagamisse', 'Aylton', ''], ['Job', 'Aldo Eloizo', '']]","['Computational Vision', 'Gold Nanoparticles', 'Image Processing', 'Multi-Level Starlet Segmentation', 'Natural Rubber', 'ScanningElectron Microscopy', 'Wavelets']" 48,1705.06575,Kazem Cheshmi,"Kazem Cheshmi, Shoaib Kamil, Michelle Mills Strout, Maryam Mehri Dehnavi","Sympiler: Transforming Sparse Matrix Codes by Decoupling Symbolic Analysis",12 pages,"in SC 2017, Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis",10.1145/3126908.3126936,,cs.PL,http://creativecommons.org/licenses/by-nc-sa/4.0/," Sympiler is a domain-specific code generator that optimizes sparse matrix computations by decoupling the symbolic analysis phase from the numerical manipulation stage in sparse codes. The computation patterns in sparse numerical methods are guided by the input sparsity structure and the sparse algorithm itself. In many real-world simulations, the sparsity pattern changes little or not at all. Sympiler takes advantage of these properties to symbolically analyze sparse codes at compile-time and to apply inspector-guided transformations that enable applying low-level transformations to sparse codes. As a result, the Sympiler-generated code outperforms highly-optimized matrix factorization codes from commonly-used specialized libraries, obtaining average speedups over Eigen and CHOLMOD of 3.8X and 1.5X respectively. ","[{'version': 'v1', 'created': 'Thu, 18 May 2017 13:16:14 GMT'}]",2018-01-08,"[['Cheshmi', 'Kazem', ''], ['Kamil', 'Shoaib', ''], ['Strout', 'Michelle Mills', ''], ['Dehnavi', 'Maryam Mehri', '']]","['Matrix computations', 'sparse methods', 'loop transformations', 'domainspecifc compilation']" 49,2003.00644,Jonni Virtema,"Miika Hannula, Juha Kontinen, Jan Van den Bussche and Jonni Virtema","Descriptive complexity of real computation and probabilistic independence logic",,"Proceedings of the 35th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2020. Association for Computing Machinery, New York, NY, USA, 550-563",10.1145/3373718.3394773,,cs.LO cs.CC math.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We introduce a novel variant of BSS machines called Separate Branching BSS machines (S-BSS in short) and develop a Fagin-type logical characterisation for languages decidable in non-deterministic polynomial time by S-BSS machines. We show that NP on S-BSS machines is strictly included in NP on BSS machines and that every NP language on S-BSS machines is a countable union of closed sets in the usual topology of R^n. Moreover, we establish that on Boolean inputs NP on S-BSS machines without real constants characterises a natural fragment of the complexity class existsR (a class of problems polynomial time reducible to the true existential theory of the reals) and hence lies between NP and PSPACE. Finally we apply our results to determine the data complexity of probabilistic independence logic. ","[{'version': 'v1', 'created': 'Mon, 2 Mar 2020 03:56:38 GMT'}, {'version': 'v2', 'created': 'Wed, 8 Jul 2020 03:56:36 GMT'}]",2020-07-09,"[['Hannula', 'Miika', ''], ['Kontinen', 'Juha', ''], ['Bussche', 'Jan Van den', ''], ['Virtema', 'Jonni', '']]","['Blum-Shub-Smale machines', 'descriptive complexity', 'team semantics', 'independence logic', 'real arithmetic']" 50,1808.08208,Nitish Nag,"Vaibhav Pandey, Nitish Nag, Ramesh Jain",Ubiquitous Event Mining to Enhance Personal Health,"Accepted to UBICOMP 2018, International Workshop on Integrating Physical Activity and Health Aspects in Everyday Mobility. UbiComp / ISWC'18 Adjunct, October 8-12, 2018, Singapore, Singapore",,10.1145/3267305.3267684,,cs.HC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Advances in user interfaces, pattern recognition, and ubiquitous computing continue to pave the way for better navigation towards our health goals. Quantitative methods which can guide us towards our personal health goals will help us optimize our daily life actions, and environmental exposures. Ubiquitous computing is essential for monitoring these factors and actuating timely interventions in all relevant circumstances. We need to combine the events recognized by different ubiquitous systems and derive actionable causal relationships from an event ledger. Understanding of user habits and health should be teleported between applications rather than these systems working in silos, allowing systems to find the optimal guidance medium for required interventions. We propose a method through which applications and devices can enhance the user experience by leveraging event relationships, leading the way to more relevant, useful, and, most importantly, pleasurable health guidance experience. ","[{'version': 'v1', 'created': 'Fri, 24 Aug 2018 16:55:10 GMT'}]",2018-08-27,"[['Pandey', 'Vaibhav', ''], ['Nag', 'Nitish', ''], ['Jain', 'Ramesh', '']]","['Event Mining', 'Pattern Recognition', 'Personal Health Navigation', 'User Interface']" 51,1008.1661,EPTCS,"Yo-Sub Han, Kai Salomaa",Nondeterministic State Complexity for Suffix-Free Regular Languages,"In Proceedings DCFS 2010, arXiv:1008.1270","EPTCS 31, 2010, pp. 189-196",10.4204/EPTCS.31.21,,cs.FL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We investigate the nondeterministic state complexity of basic operations for suffix-free regular languages. The nondeterministic state complexity of an operation is the number of states that are necessary and sufficient in the worst-case for a minimal nondeterministic finite-state automaton that accepts the language obtained from the operation. We consider basic operations (catenation, union, intersection, Kleene star, reversal and complementation) and establish matching upper and lower bounds for each operation. In the case of complementation the upper and lower bounds differ by an additive constant of two. ","[{'version': 'v1', 'created': 'Tue, 10 Aug 2010 08:34:08 GMT'}]",2010-08-11,"[['Han', 'Yo-Sub', ''], ['Salomaa', 'Kai', '']]","['nondeterministic state complexity', 'suffix-free regular languages', 'suffix codes']" 52,1611.07769,Michael Schaub,"Michael T. Schaub and Jean-Charles Delvenne and Martin Rosvall and Renaud Lambiotte",The many facets of community detection in complex networks,"8 Pages, 1 Figure","Schaub, M.T., Delvenne, JC., Rosvall, M. et al. Appl Netw Sci (2017) 2: 4",10.1007/s41109-017-0023-6,,cs.SI physics.data-an physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Community detection, the decomposition of a graph into essential building blocks, has been a core research topic in network science over the past years. Since a precise notion of what constitutes a community has remained evasive, community detection algorithms have often been compared on benchmark graphs with a particular form of assortative community structure and classified based on the mathematical techniques they employ. However, this comparison can be misleading because apparent similarities in their mathematical machinery can disguise different goals and reasons for why we want to employ community detection in the first place. Here we provide a focused review of these different motivations that underpin community detection. This problem-driven classification is useful in applied network science, where it is important to select an appropriate algorithm for the given purpose. Moreover, highlighting the different facets of community detection also delineates the many lines of research and points out open directions and avenues for future research. ","[{'version': 'v1', 'created': 'Wed, 23 Nov 2016 12:39:52 GMT'}, {'version': 'v2', 'created': 'Thu, 12 Jan 2017 23:54:00 GMT'}, {'version': 'v3', 'created': 'Wed, 15 Feb 2017 19:40:18 GMT'}]",2017-02-17,"[['Schaub', 'Michael T.', ''], ['Delvenne', 'Jean-Charles', ''], ['Rosvall', 'Martin', ''], ['Lambiotte', 'Renaud', '']]","['community detection', 'graph partitioning', 'Modularity; block models']" 53,1408.3231,Bernhard Rumpe,"Delf Block, S\""onke Heeren, Stefan K\""uhnel, Andr\'e Leschke, Bernhard Rumpe, Vladislavs Serebro","Simulations on Consumer Tests: A Perspective for Driver Assistance Systems","6 pages, 5 figure, Proceedings of International Workshop on Engineering Simulations for Cyber-Physical Systems (ES4CPS '14)",,10.1145/2559627.2559633,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This article discusses new challenges for series development regarding the vehicle safety that arise from the recently published AEB test protocol by the consumer-test-organisation EuroNCAP for driver assistance systems [6]. The tests from the test protocol are of great significance for an OEM that sells millions of cars each year, due to the fact that a positive rating of the vehicle-under-test (VUT) in safety relevant aspects is important for the reputation of a car manufacturer. The further intensification and aggravation of the test requirements for those systems is one of the challenges, that has to be mastered in order to continuously make significant contributions to safety for high-volume cars. Therefore, it is to be shown how a simulation approach may support the development process, especially with tolerance analysis. This article discusses the current stage of work, steps that are planned for the future and results that can be expected at the end of such an analysis. ","[{'version': 'v1', 'created': 'Thu, 14 Aug 2014 09:42:56 GMT'}]",2014-08-15,"[['Block', 'Delf', ''], ['Heeren', 'Sönke', ''], ['Kühnel', 'Stefan', ''], ['Leschke', 'André', ''], ['Rumpe', 'Bernhard', ''], ['Serebro', 'Vladislavs', '']]","['Advanced driver assistance systems', 'black-box testing', 'consumer tests', 'equivalence class partitioning', 'model-based testing', 'simulation']" 54,1807.00851,Konstantinos Psychas,"Konstantinos Psychas, Javad Ghaderi",On Non-Preemptive VM Scheduling in the Cloud,29 pages,"POMACS, Volume 1, Issue 2, December 2017, Article No. 35",10.1145/3154493,,cs.NI cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We study the problem of scheduling VMs (Virtual Machines) in a distributed server platform, motivated by cloud computing applications. The VMs arrive dynamically over time to the system, and require a certain amount of resources (e.g. memory, CPU, etc) for the duration of their service. To avoid costly preemptions, we consider non-preemptive scheduling: Each VM has to be assigned to a server which has enough residual capacity to accommodate it, and once a VM is assigned to a server, its service \textit{cannot} be disrupted (preempted). Prior approaches to this problem either have high complexity, require synchronization among the servers, or yield queue sizes/delays which are excessively large. We propose a non-preemptive scheduling algorithm that resolves these issues. In general, given an approximation algorithm to Knapsack with approximation ratio $r$, our scheduling algorithm can provide $r\beta$ fraction of the throughput region for $\beta < r$. In the special case of a greedy approximation algorithm to Knapsack, we further show that this condition can be relaxed to $\beta<1$. The parameters $\beta$ and $r$ can be tuned to provide a tradeoff between achievable throughput, delay, and computational complexity of the scheduling algorithm. Finally extensive simulation results using both synthetic and real traffic traces are presented to verify the performance of our algorithm. ","[{'version': 'v1', 'created': 'Mon, 2 Jul 2018 18:27:28 GMT'}]",2018-07-04,"[['Psychas', 'Konstantinos', ''], ['Ghaderi', 'Javad', '']]","['Scheduling Algorithms', 'Stability', 'Queues', 'Knapsack Problem', 'Cloud']" 55,0708.3879,Dmitri Krioukov,"Xenofontas Dimitropoulos, Dmitri Krioukov, Amin Vahdat, George Riley",Graph Annotations in Modeling Complex Network Topologies,,"ACM Transactions on Modeling and Computer Simulation (TOMACS), v.19, n.4, p.17, 2009",10.1145/1596519.1596522,,cs.NI cond-mat.dis-nn physics.data-an physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The coarsest approximation of the structure of a complex network, such as the Internet, is a simple undirected unweighted graph. This approximation, however, loses too much detail. In reality, objects represented by vertices and edges in such a graph possess some non-trivial internal structure that varies across and differentiates among distinct types of links or nodes. In this work, we abstract such additional information as network annotations. We introduce a network topology modeling framework that treats annotations as an extended correlation profile of a network. Assuming we have this profile measured for a given network, we present an algorithm to rescale it in order to construct networks of varying size that still reproduce the original measured annotation profile. Using this methodology, we accurately capture the network properties essential for realistic simulations of network applications and protocols, or any other simulations involving complex network topologies, including modeling and simulation of network evolution. We apply our approach to the Autonomous System (AS) topology of the Internet annotated with business relationships between ASs. This topology captures the large-scale structure of the Internet. In depth understanding of this structure and tools to model it are cornerstones of research on future Internet architectures and designs. We find that our techniques are able to accurately capture the structure of annotation correlations within this topology, thus reproducing a number of its important properties in synthetically-generated random graphs. ","[{'version': 'v1', 'created': 'Wed, 29 Aug 2007 03:23:56 GMT'}, {'version': 'v2', 'created': 'Fri, 31 Aug 2007 22:09:32 GMT'}, {'version': 'v3', 'created': 'Fri, 19 Sep 2008 02:44:12 GMT'}, {'version': 'v4', 'created': 'Mon, 2 Nov 2009 20:00:00 GMT'}]",2009-11-02,"[['Dimitropoulos', 'Xenofontas', ''], ['Krioukov', 'Dmitri', ''], ['Vahdat', 'Amin', ''], ['Riley', 'George', '']]","['Annotations', 'AS relationships', 'complex networks', 'topology']" 56,1607.06268,Matteo Sammartino,"Joshua Moerman and Matteo Sammartino and Alexandra Silva and Bartek Klin and Micha{\l} Szynwelski",Learning Nominal Automata,,,10.1145/3009837.3009879,,cs.FL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present an Angluin-style algorithm to learn nominal automata, which are acceptors of languages over infinite (structured) alphabets. The abstract approach we take allows us to seamlessly extend known variations of the algorithm to this new setting. In particular we can learn a subclass of nominal non-deterministic automata. An implementation using a recently developed Haskell library for nominal computation is provided for preliminary experiments. ","[{'version': 'v1', 'created': 'Thu, 21 Jul 2016 11:16:47 GMT'}, {'version': 'v2', 'created': 'Tue, 8 Nov 2016 00:54:36 GMT'}, {'version': 'v3', 'created': 'Sat, 15 Dec 2018 09:03:56 GMT'}]",2018-12-18,"[['Moerman', 'Joshua', ''], ['Sammartino', 'Matteo', ''], ['Silva', 'Alexandra', ''], ['Klin', 'Bartek', ''], ['Szynwelski', 'Michał', '']]","['Active Learning', '(Non)Deterministic Finite Automata', 'Nominal Automata', 'Functional Programming']" 57,1902.08597,Volodymyr Sokolov,"Mahyar Taj Dini, Volodymyr Sokolov",Internet of Things Security Problems,,"Modern Information Protection (ISSN: 2409-7292), no. 1, 2017",10.5281/zenodo.2528814,,cs.CR cs.NI,http://creativecommons.org/licenses/by/4.0/," The rapid development of ""smart"" devices leads to explosive growth of unprotected or partially protected home networks. These networks are easy prey for unauthorized access, the collection of personal information (including from surveillance cameras), interference in the operation of individual devices and the entire system as a whole. In addition, existing solutions for managing a smart house offer work in the cloud, which in turn reduces the availability of the system and simultaneously increases the risk of the unscrupulous use of personal information by the service provider (up to the sale of data to a third party). This article examines the existing access technologies, their weaknesses, and offers solutions to improve the overall security of the system with a local IoT gateway and virtual subnets. ","[{'version': 'v1', 'created': 'Fri, 22 Feb 2019 18:27:09 GMT'}]",2019-02-25,"[['Dini', 'Mahyar Taj', ''], ['Sokolov', 'Volodymyr', '']]","['Internet of things', 'data privacy', 'cloud', 'home dashboard']" 58,1809.01021,Fatemeh Hadaeghi,Fatemeh Hadaeghi and Herbert Jaeger,"Computing optimal discrete readout weights in reservoir computing is NP-hard",8 pages submitted to Neurocomputing,"Neurocomputing Volume 338, 21 April 2019, Pages 233-236",10.1016/j.neucom.2019.02.009,,cs.CC,http://creativecommons.org/licenses/by-nc-sa/4.0/," We show NP-hardness of a generalized quadratic programming problem, which we called Unconstrained N-ary Quadratic Programming (UNQP). This problem has recently become practically relevant in the context of novel memristor-based neuromorphic microchip designs, where solving the UNQP is a key operation for on-chip training of the neural network implemented on the chip. UNQP is the problem of finding a vector $\mathbf{v} \in S^N$ which minimizes $\mathbf{v}^T\,Q\,\mathbf{v} +\mathbf{v}^T \mathbf{c} $, where $S = \{s_1, \ldots, s_n\} \subset \mathbb{Z}$ is a given set of eligible parameters for $\mathbf{v}$, $Q \in \mathbb{Z}^{N \times N}$ is positive semi-definite, and $\mathbf{c} \in \mathbb{Z}^{N}$. In memristor-based neuromorphic hardware, $S$ is physically given by a finite (and small) number of possible memristor states. The proof of NP-hardness is by reduction from the Unconstrained Binary Quadratic Programming problem, which is a special case of UNQP where $S = \{0, 1\}$ and which is known to be NP-hard. ","[{'version': 'v1', 'created': 'Tue, 4 Sep 2018 14:30:02 GMT'}]",2019-08-27,"[['Hadaeghi', 'Fatemeh', ''], ['Jaeger', 'Herbert', '']]","['Complexity', 'Linear Regression', 'Neuromorphic Hardware', 'Reservoir Computing', 'Unconstrained Quadratic Programming', 'Unconventional Computing']" 59,1706.01118,Kevin Moran P,Kevin Moran,Enhancing Android Application Bug Reporting,"3 Pages, in Proceedings of 10th Joint Meeting of the European Software Engineering Conference and the 23rd ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE'15) Student Research Competition (SRC)","Proceedings of the 23rd ACM SIGSOFT Symposium on the Foundations of Software Engineering (SRC), 2015, pp. 1045-1047",10.1145/2786805.2807557,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The modern software development landscape has seen a shift in focus toward mobile applications as smartphones and tablets near ubiquitous adoption. Due to this trend, the complexity of these ""apps"" has been increasing, making development and maintenance challenging. Current bug tracking systems do not effectively facilitate the creation of bug reports with useful information that will directly lead to a bug's resolution. To address the need for an improved reporting system, we introduce a novel solution, called Fusion, that helps reporters auto-complete reproduction steps in bug reports for mobile apps by taking advantage of their GUI-centric nature. Fusion links information, that reporters provide, to program artifacts extracted through static and dynamic analysis performed beforehand. This allows our system to facilitate the reporting process for developers and testers, while generating more reproducible bug reports with immediately actionable information. ","[{'version': 'v1', 'created': 'Sun, 4 Jun 2017 17:57:47 GMT'}]",2017-06-06,"[['Moran', 'Kevin', '']]","['Bug reports', 'android', 'reproduction steps', 'auto-completion']" 60,1911.11934,Michel Kinsy,"Lake Bu, Mihailo Isakov, Michel A. Kinsy","A Secure and Robust Scheme for Sharing Confidential Information in IoT Systems",,"Ad Hoc Networks, vol. 92, 2019 - Special Issue on Security of IoT-enabled Infrastructures in Smart Cities",10.1016/j.adhoc.2018.09.007,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In Internet of Things (IoT) systems with security demands, there is often a need to distribute sensitive information (such as encryption keys, digital signatures, or login credentials, etc.) among the devices, so that it can be retrieved for confidential purposes at a later moment. However, this information cannot be entrusted to any one device, since the failure of that device or an attack on it will jeopardize the security of the entire network. Even if the information is divided among devices, there is still the danger that an attacker can compromise a group of devices and expose the sensitive information. In this work, we design and implement a secure and robust scheme to enable the distribution of sensitive information in IoT networks. The proposed approach has two important properties: (1) it uses Threshold Secret Sharing (TSS) to split the information into pieces distributed among all devices in the system - and so the information can only be retrieved collaboratively by groups of devices; and (2) it ensures the privacy and integrity of the information, even when attackers hijack a large number of devices and use them in concert - specifically, all the compromised devices can be identified, the confidentiality of information is kept, and authenticity of the secret can be guaranteed. ","[{'version': 'v1', 'created': 'Wed, 27 Nov 2019 03:40:51 GMT'}]",2019-11-28,"[['Bu', 'Lake', ''], ['Isakov', 'Mihailo', ''], ['Kinsy', 'Michel A.', '']]","['IoT', 'security', 'secret sharing', 'encryption', 'authentication', 'group testing', 'PUF']" 61,1207.3437,Massimiliano Vasile,Massimiliano Vasile,"Robust Mission Design Through Evidence Theory and Multi-Agent Collaborative Search",,"Annals of the New York Academy of Science, Volume 1065, New Trends in Astrodynamics and Applications pages 152-173, December 2005",10.1196/annals.1370.024,,cs.CE cs.NE cs.SY math.OC math.PR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, the preliminary design of a space mission is approached introducing uncertainties on the design parameters and formulating the resulting reliable design problem as a multiobjective optimization problem. Uncertainties are modelled through evidence theory and the belief, or credibility, in the successful achievement of mission goals is maximised along with the reliability of constraint satisfaction. The multiobjective optimisation problem is solved through a novel algorithm based on the collaboration of a population of agents in search for the set of highly reliable solutions. Two typical problems in mission analysis are used to illustrate the proposed methodology. ","[{'version': 'v1', 'created': 'Sat, 14 Jul 2012 16:17:52 GMT'}]",2015-06-05,"[['Vasile', 'Massimiliano', '']]","['multiobjective optimization', 'robust design', 'mission analysis']" 62,1311.7359,Tobias Kloos,"Tobias Kloos and Joachim St\""ockler","Zak transforms and Gabor frames of totally positive functions and exponential B-splines",,,10.1016/j.jat.2014.05.010,,cs.IT math.IT math.NA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We study totally positive (TP) functions of finite type and exponential B-splines as window functions for Gabor frames. We establish the connection of the Zak transform of these two classes of functions and prove that the Zak transforms have only one zero in their fundamental domain of quasi-periodicity. Our proof is based on the variation-diminishing property of shifts of exponential B-splines. For the exponential B-spline B_m of order m, we determine a large set of lattice parameters a,b>0 such that the Gabor family of time-frequency shifts is a frame for L^2(R). By the connection of its Zak transform to the Zak transform of TP functions of finite type, our result provides an alternative proof that TP functions of finite type provide Gabor frames for all lattice parameters with ab<1. For even two-sided exponentials and the related exponential B-spline of order 2, we find lower frame-bounds A, which show the asymptotically linear decay A (1-ab) as the density ab of the time-frequency lattice tends to the critical density ab=1. ","[{'version': 'v1', 'created': 'Thu, 28 Nov 2013 16:29:18 GMT'}]",2014-11-07,"[['Kloos', 'Tobias', ''], ['Stöckler', 'Joachim', '']]","['Gabor frame', 'total positivity', 'exponential B-spline', 'Zaktransform']" 63,1202.4530,K Munivara Prasad,"K.Munivara Prasad, A.Rama Mohan Reddy, V Jyothsna","IP Traceback for Flooding attacks on Internet Threat Monitors (ITM) Using Honeypots","International Journal of Network Security & Its Applications (IJNSA), Vol.4, No.1, January 2012. arXiv admin note: substantial text overlap with arXiv:1201.2481","International Journal of Network Security & Its Applications (IJNSA), Vol.4, No.1, January 2012",10.5121/ijnsa.2012.4102,,cs.NI,http://creativecommons.org/licenses/by/3.0/," The Internet Threat Monitoring (ITM) is an efficient monitoring system used globally to measure, detect, characterize and track threats such as denial of service (DoS) and distributed Denial of Service (DDoS) attacks and worms. . To block the monitoring system in the internet the attackers are targeted the ITM system. In this paper we address the flooding attack of DDoS against ITM monitors to exhaust the network resources, such as bandwidth, computing power, or operating system data structures by sending the malicious traffic. We propose an information-theoretic frame work that models the flooding attacks using Botnet on ITM. One possible way to counter DDoS attacks is to trace the attack sources and punish the perpetrators. we propose a novel traceback method for DDoS using Honeypots. IP tracing through honeypot is a single packet tracing method and is more efficient than commonly used packet marking techniques. ","[{'version': 'v1', 'created': 'Tue, 21 Feb 2012 05:37:18 GMT'}]",2012-02-22,"[['Prasad', 'K. Munivara', ''], ['Reddy', 'A. Rama Mohan', ''], ['Jyothsna', 'V', '']]","['Internet Threat Monitors (ITM)', 'DDoS', 'Flooding attack', 'IpTrcing', 'Botnet and Honeypot']" 64,1712.09592,Murat Ozbayoglu,"O.B. Sezer, M. Ozbayoglu, E. Dogdu","An Artificial Neural Network-based Stock Trading System Using Technical Analysis and Big Data Framework","ACM Southeast Conference, ACMSE 2017, Kennesaw State University, GA, U.S.A., 13-15 April, 2017","ACM Southeast Conference, ACMSE 2017, Kennesaw State University, GA, U.S.A., 13-15 April, 2017",10.1145/3077286.3077294,,cs.CE q-fin.TR stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, a neural network-based stock price prediction and trading system using technical analysis indicators is presented. The model developed first converts the financial time series data into a series of buy-sell-hold trigger signals using the most commonly preferred technical analysis indicators. Then, a Multilayer Perceptron (MLP) artificial neural network (ANN) model is trained in the learning stage on the daily stock prices between 1997 and 2007 for all of the Dow30 stocks. Apache Spark big data framework is used in the training stage. The trained model is then tested with data from 2007 to 2017. The results indicate that by choosing the most appropriate technical indicators, the neural network model can achieve comparable results against the Buy and Hold strategy in most of the cases. Furthermore, fine tuning the technical indicators and/or optimization strategy can enhance the overall trading performance. ","[{'version': 'v1', 'created': 'Wed, 27 Dec 2017 14:45:40 GMT'}]",2017-12-29,"[['Sezer', 'O. B.', ''], ['Ozbayoglu', 'M.', ''], ['Dogdu', 'E.', '']]","['Stock market', 'Artificial neural network', 'multi layer perceptron', 'algorithmic trading', 'technical analysis']" 65,1310.7448,YuLi Sun,"Yuli Sun, Jinxu Tao, Conggui Liu","An iterative algorithm for computed tomography image reconstruction from limited-angle projections","14 pages, 1 figure, 1 table",,10.1007/s12204-015-1608-9,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In application of tomography imaging, limited-angle problem is a quite practical and important issue. In this paper, an iterative reprojection-reconstruction (IRR) algorithm using a modified Papoulis-Gerchberg (PG) iterative scheme is developed for reconstruction from limited-angle projections which contain noise. The proposed algorithm has two iterative update processes, one is the extrapolation of unknown data, and the other is the modification of the known noisy observation data. And the algorithm introduces scaling factors to control the two processes, respectively. The convergence of the algorithm is guaranteed, and the method of choosing the scaling factors is given with energy constraints. The simulation result demonstrates our conclusions and indicates that the algorithm proposed in this paper can obviously improve the reconstruction quality. ","[{'version': 'v1', 'created': 'Mon, 16 Sep 2013 07:52:16 GMT'}]",2019-05-01,"[['Sun', 'Yuli', ''], ['Tao', 'Jinxu', ''], ['Liu', 'Conggui', '']]","['Computed tomography', 'Limited-angle reconstruction', 'Papoulis-Gerchberg']" 66,0803.3224,Michael Hahsler,Michael Hahsler,"A Model-Based Frequency Constraint for Mining Associations from Transaction Data",,"Michael Hahsler. A model-based frequency constraint for mining associations from transaction data. Data Mining and Knowledge Discovery, 13(2):137-166, September 2006",10.1007/s10618-005-0026-2,,cs.DB,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Mining frequent itemsets is a popular method for finding associated items in databases. For this method, support, the co-occurrence frequency of the items which form an association, is used as the primary indicator of the associations's significance. A single user-specified support threshold is used to decided if associations should be further investigated. Support has some known problems with rare items, favors shorter itemsets and sometimes produces misleading associations. In this paper we develop a novel model-based frequency constraint as an alternative to a single, user-specified minimum support. The constraint utilizes knowledge of the process generating transaction data by applying a simple stochastic mixture model (the NB model) which allows for transaction data's typically highly skewed item frequency distribution. A user-specified precision threshold is used together with the model to find local frequency thresholds for groups of itemsets. Based on the constraint we develop the notion of NB-frequent itemsets and adapt a mining algorithm to find all NB-frequent itemsets in a database. In experiments with publicly available transaction databases we show that the new constraint provides improvements over a single minimum support threshold and that the precision threshold is more robust and easier to set and interpret by the user. ","[{'version': 'v1', 'created': 'Fri, 21 Mar 2008 20:39:53 GMT'}]",2008-12-18,"[['Hahsler', 'Michael', '']]","['Data mining', 'associations', 'interest measures', 'mixture models', 'transaction data']" 67,1908.00763,Ninnart Fuengfusin,"Ninnart Fuengfusin, Hakaru Tamukoh",Network with Sub-Networks,,,10.2991/jrnal.k.201215.006,,cs.LG cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We introduce network with sub-networks, a neural network which its weight layers could be detached into sub-neural networks during inference. To develop weights and biases which could be inserted in both base and sub-neural networks, firstly, the parameters are copied from sub-model to base-model. Each model is forward-propagated separately. Gradients from a pair of networks are averaged and, used to update both networks. Our base model achieves the test-accuracy which is comparable to the regularly trained models, while the model maintains the ability to detach weight layers. ","[{'version': 'v1', 'created': 'Fri, 2 Aug 2019 09:04:28 GMT'}, {'version': 'v2', 'created': 'Tue, 3 Dec 2019 04:41:02 GMT'}]",2021-10-20,"[['Fuengfusin', 'Ninnart', ''], ['Tamukoh', 'Hakaru', '']]","['Model Compression', 'Neural Networks', 'Multilayer Perceptron', 'Supervised Learning']" 68,1808.00823,Jacob Kreindl,"Jacob Kreindl (1), Manuel Rigger (1), Hanspeter M\""ossenb\""ock (1) ((1) Johannes Kepler University Linz)",Debugging Native Extensions of Dynamic Languages,"7 pages, 7 figures, accepted at 15th International Conference on Managed Languages & Runtimes (ManLang'18)",,10.1145/3237009.3237017,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Many dynamic programming languages such as Ruby and Python enable developers to use so called native extensions, code implemented in typically statically compiled languages like C and C++. However, debuggers for these dynamic languages usually lack support for also debugging these native extensions. GraalVM can execute programs implemented in various dynamic programming languages and, by using the LLVM-IR interpreter Sulong, also their native extensions. We added support for source-level debugging to Sulong based on GraalVM's debugging framework by associating run-time debug information from the LLVM-IR level to the original program code. As a result, developers can now use GraalVM to debug source code written in multiple LLVM-based programming languages as well as programs implemented in various dynamic languages that invoke it in a common debugger front-end. ","[{'version': 'v1', 'created': 'Thu, 2 Aug 2018 14:11:31 GMT'}]",2018-08-03,"[['Kreindl', 'Jacob', '', 'Johannes Kepler University Linz'], ['Rigger', 'Manuel', '', 'Johannes Kepler University Linz'], ['Mössenböck', 'Hanspeter', '', 'Johannes Kepler University Linz']]","['Sulong', 'GraalVM', 'Truffle', 'LLVM', 'Debugging', 'Native Extensions']" 69,1903.03018,Ian McQuillan,Oscar H. Ibarra and Ian McQuillan,"On the Density of Languages Accepted by Turing Machines and Other Machine Models",,"Journal of Automata, Languages and Combinatorics, 23, 189-199, 2018",10.25596/jalc-2018-189,,cs.FL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A language is dense if the set of all infixes (or subwords) of the language is the set of all words. Here, it is shown that it is decidable whether the language accepted by a nondeterministic Turing machine with a one-way read-only input and a reversal-bounded read/write worktape (the read/write head changes direction at most some fixed number of times) is dense. From this, it is implied that it is also decidable for one-way reversal-bounded queue automata, one-way reversal-bounded stack automata, and one-way reversal-bounded $k$-flip pushdown automata (machines that can ""flip"" their pushdowns up to $k$ times). However, it is undecidable for deterministic Turing machines with two 1-reversal-bounded worktapes (even when the two tapes are restricted to operate as 1-reversal-bounded pushdown stacks). ","[{'version': 'v1', 'created': 'Thu, 7 Mar 2019 16:10:24 GMT'}]",2019-03-08,"[['Ibarra', 'Oscar H.', ''], ['McQuillan', 'Ian', '']]","['density', 'Turing machines', 'store languages', 'pushdowns', 'queues', 'stacks']" 70,1803.06657,Can Pu,"Can Pu, Runzi Song, Radim Tylecek, Nanbo Li, Robert B Fisher","Sdf-GAN: Semi-supervised Depth Fusion with Multi-scale Adversarial Networks","This is our draft and accepted by the journal Remote Sensing. There is a little difference between the title on Arxiv and that on Remote Sensing. Two small corrections have been made in ""Performance on Kitti2015 Dataset"" in this latest version (which is slightly different from the previous version in Remote Sensing)","Remote Sens. 2019, 11, 487",10.3390/rs11050487,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods and there is no standard method for fusion of different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at different receptive fields and scales. Assuming a Markov Random Field for the refined disparity map produces better estimates of the true disparity distribution. Both fully supervised and semi-supervised versions of the algorithm are proposed. The approach includes a more robust loss function to inpaint invalid disparity values and requires much less labeled data to train in the semi-supervised learning mode. The algorithm can be generalized to fuse depths from different kinds of depth sources. Experiments explored different fusion opportunities: stereo-monocular fusion, stereo-ToF fusion and stereo-stereo fusion. The experiments show the superiority of the proposed algorithm compared with the most recent algorithms on public synthetic datasets (Scene Flow, SYNTH3, our synthetic garden dataset) and real datasets (Kitti2015 dataset and Trimbot2020 Garden dataset). ","[{'version': 'v1', 'created': 'Sun, 18 Mar 2018 13:17:16 GMT'}, {'version': 'v2', 'created': 'Sat, 2 Mar 2019 12:38:12 GMT'}, {'version': 'v3', 'created': 'Mon, 6 May 2019 15:48:51 GMT'}]",2019-05-07,"[['Pu', 'Can', ''], ['Song', 'Runzi', ''], ['Tylecek', 'Radim', ''], ['Li', 'Nanbo', ''], ['Fisher', 'Robert B', '']]","['Depth fusion', 'Disparity fusion', 'Stereo Vision', 'Monocular Vision', 'Time of Flight']" 71,1003.3569,Secretary Aircc Journal,"Hung-Chin Jang (National Chengchi University, Taiwan)","Applications of Geometric Algorithms to Reduce Interference in Wireless Mesh Network","24 Pages, JGraph-Hoc Journal 2010","International journal on applications of graph theory in wireless ad hoc networks and sensor networks 2.1 (2010) 62-85",10.5121/jgraphhoc.2010.2106,,cs.NI,http://creativecommons.org/licenses/by-nc-sa/3.0/," In wireless mesh networks such as WLAN (IEEE 802.11s) or WMAN (IEEE 802.11), each node should help to relay packets of neighboring nodes toward gateway using multi-hop routing mechanisms. Wireless mesh networks usually intensively deploy mesh nodes to deal with the problem of dead spot communication. However, the higher density of nodes deployed, the higher radio interference occurred. This causes significant degradation of system performance. In this paper, we first convert network problems into geometry problems in graph theory, and then solve the interference problem by geometric algorithms. We first define line intersection in a graph to reflect radio interference problem in a wireless mesh network. We then use plan sweep algorithm to find intersection lines, if any; employ Voronoi diagram algorithm to delimit the regions among nodes; use Delaunay Triangulation algorithm to reconstruct the graph in order to minimize the interference among nodes. Finally, we use standard deviation to prune off those longer links (higher interference links) to have a further enhancement. The proposed hybrid solution is proved to be able to significantly reduce interference in a wireless mesh network in O(n log n) time complexity. ","[{'version': 'v1', 'created': 'Thu, 18 Mar 2010 12:13:38 GMT'}]",2010-07-15,"[['Jang', 'Hung-Chin', '', 'National Chengchi University, Taiwan']]","['Wireless Mesh Network', 'Interference Reduction', 'Voronoi Diagram', 'Delaunay Triangulation Algorithm']" 72,1204.4202,Richard Preen,Richard J. Preen and Larry Bull,Fuzzy Dynamical Genetic Programming in XCSF,2 page GECCO 2011 poster paper,"In Proceedings of the 13th annual conference companion on genetic and evolutionary computation, GECCO '11, pp. 167-168. ACM, 2011",10.1145/2001858.2001952,,cs.AI cs.LG cs.NE cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A number of representation schemes have been presented for use within Learning Classifier Systems, ranging from binary encodings to Neural Networks, and more recently Dynamical Genetic Programming (DGP). This paper presents results from an investigation into using a fuzzy DGP representation within the XCSF Learning Classifier System. In particular, asynchronous Fuzzy Logic Networks are used to represent the traditional condition-action production system rules. It is shown possible to use self-adaptive, open-ended evolution to design an ensemble of such fuzzy dynamical systems within XCSF to solve several well-known continuous-valued test problems. ","[{'version': 'v1', 'created': 'Wed, 18 Apr 2012 20:40:18 GMT'}]",2013-04-29,"[['Preen', 'Richard J.', ''], ['Bull', 'Larry', '']]","['Fuzzy Logic Networks', 'Learning Classifier Systems', 'Reinforcement Learning', 'Self-Adaptation', 'XCSF']" 73,1609.02234,Le Guan,"Le Guan and Jun Xu and Shuai Wang and Xinyu Xing and Lin Lin and Heqing Huang and Peng Liu and Wenke Lee","From Physical to Cyber: Escalating Protection for Personalized Auto Insurance",Appeared in Sensys 2016,,10.1145/2994551.2994573,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Nowadays, auto insurance companies set personalized insurance rate based on data gathered directly from their customers' cars. In this paper, we show such a personalized insurance mechanism -- wildly adopted by many auto insurance companies -- is vulnerable to exploit. In particular, we demonstrate that an adversary can leverage off-the-shelf hardware to manipulate the data to the device that collects drivers' habits for insurance rate customization and obtain a fraudulent insurance discount. In response to this type of attack, we also propose a defense mechanism that escalates the protection for insurers' data collection. The main idea of this mechanism is to augment the insurer's data collection device with the ability to gather unforgeable data acquired from the physical world, and then leverage these data to identify manipulated data points. Our defense mechanism leveraged a statistical model built on unmanipulated data and is robust to manipulation methods that are not foreseen previously. We have implemented this defense mechanism as a proof-of-concept prototype and tested its effectiveness in the real world. Our evaluation shows that our defense mechanism exhibits a false positive rate of 0.032 and a false negative rate of 0.013. ","[{'version': 'v1', 'created': 'Thu, 8 Sep 2016 00:34:00 GMT'}, {'version': 'v2', 'created': 'Tue, 23 May 2017 14:58:27 GMT'}]",2017-05-24,"[['Guan', 'Le', ''], ['Xu', 'Jun', ''], ['Wang', 'Shuai', ''], ['Xing', 'Xinyu', ''], ['Lin', 'Lin', ''], ['Huang', 'Heqing', ''], ['Liu', 'Peng', ''], ['Lee', 'Wenke', '']]","['Telematics Device', 'Fraud Detection', 'Mixtures of RegressionModels']" 74,0704.1267,Laurence Likforman,"Laurence Likforman-Sulem, Abderrazak Zahour, Bruno Taconet",Text Line Segmentation of Historical Documents: a Survey,"25 pages, submitted version, To appear in International Journal on Document Analysis and Recognition, On line version available at http://www.springerlink.com/content/k2813176280456k3/","Vol. 9, no 2-4, April 2007, pp. 123-138",10.1007/s10032-006-0023-z,,cs.CV,," There is a huge amount of historical documents in libraries and in various National Archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines),automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade, and dedicated to documents of historical interest. ","[{'version': 'v1', 'created': 'Tue, 10 Apr 2007 16:26:42 GMT'}]",2007-05-23,"[['Likforman-Sulem', 'Laurence', ''], ['Zahour', 'Abderrazak', ''], ['Taconet', 'Bruno', '']]","['segmentation', 'handwriting', 'text lines', 'Historical documents', 'survey']" 75,1804.06816,Christoph Meier,"Christoph Meier, Reimar Weissbach, Johannes Weinberg, Wolfgang A. Wall, A. John Hart","Modeling and Characterization of Cohesion in Fine Metal Powders with a Focus on Additive Manufacturing Process Simulations",,,10.1016/j.powtec.2018.11.072,,cs.CE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The cohesive interactions between fine metal powder particles crucially influence their flow behavior, which is in turn important to many powder-based manufacturing processes including emerging methods for powder-based metal additive manufacturing (AM). The present work proposes a novel modeling and characterization approach for micron-scale metal powders, with a special focus on characteristics of importance to powder-bed AM. The model is based on the discrete element method (DEM), and the considered particle-to-particle and particle-to-wall interactions involve frictional contact, rolling resistance and cohesive forces. Special emphasis lies on the modeling of cohesion. The proposed adhesion force law is defined by the pull-off force resulting from the surface energy of powder particles in combination with a van-der-Waals force curve regularization. The model is applied to predict the angle of repose (AOR) of exemplary spherical Ti-6Al-4V powders, and the surface energy value underlying the adhesion force law is calibrated by fitting the corresponding angle of repose values from numerical and experimental funnel tests. To the best of the authors' knowledge, this is the first work providing an experimental estimate for the effective surface energy of the considered class of metal powders. By this approach, an effective surface energy of $0.1mJ/m^2$ is found for the investigated Ti-6Al-4V powder. This value is considerably lower than typical experimental values for flat metal contact surfaces in the range of $30-50 mJ/m^2$, indicating the crucial influence of factors such as surface roughness and chemical surface contamination on fine metal powders. More importantly, the present study demonstrates that a neglect of the related cohesive forces leads to a drastical underestimation of the AOR and, consequently, to an insufficient representation of the bulk powder behavior. ","[{'version': 'v1', 'created': 'Wed, 18 Apr 2018 17:08:47 GMT'}, {'version': 'v2', 'created': 'Fri, 20 Apr 2018 08:44:27 GMT'}, {'version': 'v3', 'created': 'Fri, 25 May 2018 07:30:16 GMT'}]",2019-05-08,"[['Meier', 'Christoph', ''], ['Weissbach', 'Reimar', ''], ['Weinberg', 'Johannes', ''], ['Wall', 'Wolfgang A.', ''], ['Hart', 'A. John', '']]","['Cohesion', 'Surface Energy', 'Fine Metal Powders', 'Additive Manufacturing', 'Discrete Element Method', 'Modeling and Characterization']" 76,1612.04598,Stefan Wagner,Sebastian Winter and Stefan Wagner and Florian Deissenboeck,A Comprehensive Model of Usability,"18 pages, 3 figures","Engineering Interactive Systems: EIS 2007 Joint Working Conferences, EHCI 2007, DSV-IS 2007, HCSE 2007. Springer, 2008",10.1007/978-3-540-92698-6_7,,cs.HC cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Usability is a key quality attribute of successful software systems. Unfortunately, there is no common understanding of the factors influencing usability and their interrelations. Hence, the lack of a comprehensive basis for designing, analyzing, and improving user interfaces. This paper proposes a 2-dimensional model of usability that associates system properties with the activities carried out by the user. By separating activities and properties, sound quality criteria can be identified, thus facilitating statements concerning their interdependencies. This model is based on a tested quality meta-model that fosters preciseness and completeness. A case study demonstrates the manner by which such a model aids in revealing contradictions and omissions in existing usability standards. Furthermore, the model serves as a central and structured knowledge base for the entire quality assurance process, e.g. the automatic generation of guideline documents. ","[{'version': 'v1', 'created': 'Wed, 14 Dec 2016 12:17:54 GMT'}]",2016-12-15,"[['Winter', 'Sebastian', ''], ['Wagner', 'Stefan', ''], ['Deissenboeck', 'Florian', '']]","['usability', 'quality models', 'quality assessment']" 77,1603.08323,{\L}ukasz Olech Piotr,"Micha{\l} Spytkowski, {\L}ukasz P. Olech, Halina Kwa\'snicka",Hierarchy of Groups Evaluation Using Different F-score Variants,"Presented on ACIIDS2016 conference https://aciids.pwr.edu.pl/. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-662-49381-6_63","ACIIDS 2016, Da Nang, Vietnam, March 14-16, 2016, pp. 654 (Springer Berlin Heidelberg)",10.1007/978-3-662-49381-6_63,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The paper presents a cursory examination of clustering, focusing on a rarely explored field of hierarchy of clusters. Based on this, a short discussion of clustering quality measures is presented and the F-score measure is examined more deeply. As there are no attempts to assess the quality for hierarchies of clusters, three variants of the F-Score based index are presented: classic, hierarchical and partial order. The partial order index is the authors' approach to the subject. Conducted experiments show the properties of the considered measures. In conclusions, the strong and weak sides of each variant are presented. ","[{'version': 'v1', 'created': 'Mon, 28 Mar 2016 06:38:56 GMT'}]",2016-03-29,"[['Spytkowski', 'Michał', ''], ['Olech', 'Łukasz P.', ''], ['Kwaśnicka', 'Halina', '']]","['clustering quality measures', 'F-score', 'hierarchies of clusters']" 78,1802.05594,Beno\^it Girard,"Lise Aubin, Mehdi Khamassi (ISIR), Beno\^it Girard (ISIR)","Prioritized Sweeping Neural DynaQ with Multiple Predecessors, and Hippocampal Replays","Living Machines 2018 (Paris, France)",,10.1007/978-3-319-95972-6_4,,cs.AI cs.NE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," During sleep and awake rest, the hippocampus replays sequences of place cells that have been activated during prior experiences. These have been interpreted as a memory consolidation process, but recent results suggest a possible interpretation in terms of reinforcement learning. The Dyna reinforcement learning algorithms use off-line replays to improve learning. Under limited replay budget, a prioritized sweeping approach, which requires a model of the transitions to the predecessors, can be used to improve performance. We investigate whether such algorithms can explain the experimentally observed replays. We propose a neural network version of prioritized sweeping Q-learning, for which we developed a growing multiple expert algorithm, able to cope with multiple predecessors. The resulting architecture is able to improve the learning of simulated agents confronted to a navigation task. We predict that, in animals, learning the world model should occur during rest periods, and that the corresponding replays should be shuffled. ","[{'version': 'v1', 'created': 'Thu, 15 Feb 2018 15:15:19 GMT'}, {'version': 'v2', 'created': 'Mon, 13 Aug 2018 12:27:55 GMT'}]",2018-08-14,"[['Aubin', 'Lise', '', 'ISIR'], ['Khamassi', 'Mehdi', '', 'ISIR'], ['Girard', 'Benoît', '', 'ISIR']]","['Reinforcement Learning', 'Replays', 'DynaQ', 'Prioritized Sweeping', 'Neural Networks', 'Hippocampus', 'Navigation']" 79,1311.6264,"J\""urgen M\""unch","Frank Elberzhager, Alla Rosbach, J\""urgen M\""unch, Robert Eschbach","Inspection and Test Process Integration Based on Explicit Test Prioritization Strategies","12 pages. The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-642-27213-4_12","Proceedings of the Software Quality Days (SWQD), pages 181-192, Vienna, Austria, January 17-19 2012",10.1007/978-3-642-27213-4_12,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Today's software quality assurance techniques are often applied in isolation. Consequently, synergies resulting from systematically integrating different quality assurance activities are often not exploited. Such combinations promise benefits, such as a reduction in quality assurance effort or higher defect detection rates. The integration of inspection and testing, for instance, can be used to guide testing activities. For example, testing activities can be focused on defect-prone parts based upon inspection results. Existing approaches for predicting defect-prone parts do not make systematic use of the results from inspections. This article gives an overview of an integrated inspection and testing approach, and presents a preliminary case study aiming at verifying a study design for evaluating the approach. First results from this preliminary case study indicate that synergies resulting from the integration of inspection and testing might exist, and show a trend that testing activities could be guided based on inspection results. ","[{'version': 'v1', 'created': 'Mon, 25 Nov 2013 11:14:07 GMT'}]",2013-11-26,"[['Elberzhager', 'Frank', ''], ['Rosbach', 'Alla', ''], ['Münch', 'Jürgen', ''], ['Eschbach', 'Robert', '']]","['software inspections', 'testing', 'quality assurance', 'integration', 'focusing', 'synergy effects', 'case study', 'study design']" 80,1512.05141,Evangelos Spyrou,Evangelos D. Spyrou and Dimitrios K. Mitrakos,"Approximating Nash Equilibrium Uniqueness of Power Control In Practical WSNs","16 pages, 10 figures, Game theory, Wireless Sensor Networks, International Journal of Computer Networks & Communications (IJCNC) Vol.7, No.6, November 2015",,10.5121/ijcnc.2015.7604,,cs.NI cs.GT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Transmission power has a major impact on link and communication reliability and network lifetime in Wireless Sensor Networks. We study power control in a multi-hop Wireless Sensor Network where nodes' communication interfere with each other. Our objective is to determine each node's transmission power level that will reduce the communication interference and keep energy consumption to a minimum. We propose a potential game approach to obtain the unique equilibrium of the network transmission power allocation. The unique equilibrium is located in a continuous domain. However, radio transceivers accept only discrete values for transmission power level setting. We study the viability and performance of mapping the continuous solution from the potential game to the discrete domain required by the radio. We demonstrate the success of our approach through TOSSIM simulation when nodes use the Collection Tree Protocol for routing the data. Also, we show results of our method from the Indriya testbed. We compare it with the case where the motes use Collection Tree Protocol with the maximum transmission power. ","[{'version': 'v1', 'created': 'Wed, 16 Dec 2015 12:08:00 GMT'}]",2015-12-17,"[['Spyrou', 'Evangelos D.', ''], ['Mitrakos', 'Dimitrios K.', '']]","['Transmission Power', 'Packet Reception Ratio (PRR)', 'Game Theory', 'Distributed Optimisation', 'Potential Game 1']" 81,1407.2961,Youness Aliyari Ghassabeh,Youness Aliyari Ghassabeh,"On the Convergence of the Mean Shift Algorithm in the One-Dimensional Space","13 pages, 10 figures, Published in Pattern Recognition Letters","Pattern Recognition Letters, 2013, vol. 34(12)",10.1016/j.patrec.2013.05.004,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The mean shift algorithm is a non-parametric and iterative technique that has been used for finding modes of an estimated probability density function. It has been successfully employed in many applications in specific areas of machine vision, pattern recognition, and image processing. Although the mean shift algorithm has been used in many applications, a rigorous proof of its convergence is still missing in the literature. In this paper we address the convergence of the mean shift algorithm in the one-dimensional space and prove that the sequence generated by the mean shift algorithm is a monotone and convergent sequence. ","[{'version': 'v1', 'created': 'Thu, 10 Jul 2014 20:55:25 GMT'}]",2014-07-14,"[['Ghassabeh', 'Youness Aliyari', '']]","['Mean Shift Algorithm', 'Mode Estimate Sequence', 'Monotone Sequence', 'Kernel Function', 'Convex function', 'Convergence']" 82,1207.1818,Evangelos Karapanos,"R\'uben Gouveia, Evangelos Niforatos, Evangelos Karapanos","Footprint Tracker: reviewing lifelogs and reconstructing daily experiences",,"Gouveia, R., Niforatos, E., Karapanos, E. (2012) Footprint Tracker: reviewing lifelogs and reconstructing daily experiences, In adjunct proceedings of ACM conference on Designing Interactive Systems, DIS'12",,,cs.HC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," With the increasing emphasis on how mobile technologies are experienced in everyday life, researchers are increasingly emphasizing the use of in-situ methods such as Experience Sampling and Day Reconstruction. In our line of research we explore the concept of Technology-Assisted Reconstruction, in which passively logged behavior data assist in the later reconstruction of daily experiences. In this paper we introduce Footprint tracker, a web application that supports participants in reviewing lifelogs and reconstructing their daily experiences. We focus on three kinds of data: visual (as captured through Microsoft's sensecam), location, and context (i.e., SMS and calls received and made). We describe how Footprint Tracker supports the user in reviewing these lifelogs and outline a field study that attempts to inquire into whether and how this data support reconstruction from memory. ","[{'version': 'v1', 'created': 'Sat, 7 Jul 2012 18:52:06 GMT'}]",2012-07-10,"[['Gouveia', 'Rúben', ''], ['Niforatos', 'Evangelos', ''], ['Karapanos', 'Evangelos', '']]","['Experience sampling', 'Day Reconstruction', 'life-logging']" 83,1810.07762,Jianguo Chen,"Jianguo Chen, Kenli Li, Huigui Rong, Kashif Bilal, Nan Yang, Keqin Li","A Disease Diagnosis and Treatment Recommendation System Based on Big Data Mining and Cloud Computing",,"Information Sciences, 2018, 435:124-149",10.1016/j.ins.2018.01.001,,cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," It is crucial to provide compatible treatment schemes for a disease according to various symptoms at different stages. However, most classification methods might be ineffective in accurately classifying a disease that holds the characteristics of multiple treatment stages, various symptoms, and multi-pathogenesis. Moreover, there are limited exchanges and cooperative actions in disease diagnoses and treatments between different departments and hospitals. Thus, when new diseases occur with atypical symptoms, inexperienced doctors might have difficulty in identifying them promptly and accurately. Therefore, to maximize the utilization of the advanced medical technology of developed hospitals and the rich medical knowledge of experienced doctors, a Disease Diagnosis and Treatment Recommendation System (DDTRS) is proposed in this paper. First, to effectively identify disease symptoms more accurately, a Density-Peaked Clustering Analysis (DPCA) algorithm is introduced for disease-symptom clustering. In addition, association analyses on Disease-Diagnosis (D-D) rules and Disease-Treatment (D-T) rules are conducted by the Apriori algorithm separately. The appropriate diagnosis and treatment schemes are recommended for patients and inexperienced doctors, even if they are in a limited therapeutic environment. Moreover, to reach the goals of high performance and low latency response, we implement a parallel solution for DDTRS using the Apache Spark cloud platform. Extensive experimental results demonstrate that the proposed DDTRS realizes disease-symptom clustering effectively and derives disease treatment recommendations intelligently and accurately. ","[{'version': 'v1', 'created': 'Wed, 17 Oct 2018 20:07:08 GMT'}]",2019-11-26,"[['Chen', 'Jianguo', ''], ['Li', 'Kenli', ''], ['Rong', 'Huigui', ''], ['Bilal', 'Kashif', ''], ['Yang', 'Nan', ''], ['Li', 'Keqin', '']]","['Big data mining', 'Cloud computing', 'Disease diagnosis and treatment', 'Recommendation system']" 84,1908.01501,Davide Taibi,"Steve Counsell, Mahir Arzoky, Giuseppe Destefanis, Davide Taibi","On the Relationship Between Coupling and Refactoring: An Empirical Viewpoint",,"ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM). Brazil. 2019",,,cs.SE,http://creativecommons.org/licenses/by-sa/4.0/," [Background] Refactoring has matured over the past twenty years to become part of a developer's toolkit. However, many fundamental research questions still remain largely unexplored. [Aim] The goal of this paper is to investigate the highest and lowest quartile of refactoring-based data using two coupling metrics - the Coupling between Objects metric and the more recent Conceptual Coupling between Classes metric to answer this question. Can refactoring trends and patterns be identified based on the level of class coupling? [Method] In this paper, we analyze over six thousand refactoring operations drawn from releases of three open-source systems to address one such question. [Results] Results showed no meaningful difference in the types of refactoring applied across either lower or upper quartile of coupling for both metrics; refactorings usually associated with coupling removal were actually more numerous in the lower quartile in some cases. A lack of inheritance-related refactorings across all systems was also noted. [Conclusions] The emerging message (and a perplexing one) is that developers seem to be largely indifferent to classes with high coupling when it comes to refactoring types - they treat classes with relatively low coupling in almost the same way. ","[{'version': 'v1', 'created': 'Mon, 5 Aug 2019 07:54:31 GMT'}]",2019-08-06,"[['Counsell', 'Steve', ''], ['Arzoky', 'Mahir', ''], ['Destefanis', 'Giuseppe', ''], ['Taibi', 'Davide', '']]","['Refactoring', 'coupling', 'metrics', 'empirical']" 85,2102.12523,Chunjong Park,"Chunjong Park, Morelle Arian, Xin Liu, Leon Sasson, Jeffrey Kahn, Shwetak Patel, Alex Mariakakis, Tim Althoff","Online Mobile App Usage as an Indicator of Sleep Behavior and Job Performance",,,10.1145/3442381.3450093,,cs.HC cs.CY q-bio.NC,http://creativecommons.org/licenses/by/4.0/," Sleep is critical to human function, mediating factors like memory, mood, energy, and alertness; therefore, it is commonly conjectured that a good night's sleep is important for job performance. However, both real-world sleep behavior and job performance are hard to measure at scale. In this work, we show that people's everyday interactions with online mobile apps can reveal insights into their job performance in real-world contexts. We present an observational study in which we objectively tracked the sleep behavior and job performance of salespeople (N = 15) and athletes (N = 19) for 18 months, using a mattress sensor and online mobile app. We first demonstrate that cumulative sleep measures are correlated with job performance metrics, showing that an hour of daily sleep loss for a week was associated with a 9.0% and 9.5% reduction in performance of salespeople and athletes, respectively. We then examine the utility of online app interaction time as a passively collectible and scalable performance indicator. We show that app interaction time is correlated with the performance of the athletes, but not the salespeople. To support that our app-based performance indicator captures meaningful variation in psychomotor function and is robust against potential confounds, we conducted a second study to evaluate the relationship between sleep behavior and app interaction time in a cohort of 274 participants. Using a generalized additive model to control for per-participant random effects, we demonstrate that participants who lost one hour of daily sleep for a week exhibited 5.0% slower app interaction times. We also find that app interaction time exhibits meaningful chronobiologically consistent correlations with sleep history, time awake, and circadian rhythms. Our findings reveal an opportunity for online app developers to generate new insights regarding cognition and productivity. ","[{'version': 'v1', 'created': 'Wed, 24 Feb 2021 19:30:39 GMT'}]",2021-02-26,"[['Park', 'Chunjong', ''], ['Arian', 'Morelle', ''], ['Liu', 'Xin', ''], ['Sasson', 'Leon', ''], ['Kahn', 'Jeffrey', ''], ['Patel', 'Shwetak', ''], ['Mariakakis', 'Alex', ''], ['Althoff', 'Tim', '']]","['mobile app interaction', 'interaction time', 'sleep tracking', 'sleep behavior', 'job performance']" 86,1802.05831,Rozhin Eskandarpour,"Rozhin Eskandarpour, Amin Khodaei",Component Outage Estimation based on Support Vector Machine,,"Power & Energy Society General Meeting, 2017 IEEE",10.1109/PESGM.2017.8274276,,cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Predicting power system component outages in response to an imminent hurricane plays a major role in preevent planning and post-event recovery of the power system. An exact prediction of components states, however, is a challenging task and cannot be easily performed. In this paper, a Support Vector Machine (SVM) based method is proposed to help estimate the components states in response to anticipated path and intensity of an imminent hurricane. Components states are categorized into three classes of damaged, operational, and uncertain. The damaged components along with the components in uncertain class are then considered in multiple contingency scenarios of a proposed Event-driven Security-Constrained Unit Commitment (E-SCUC), which considers the simultaneous outage of multiple components under an N-m-u reliability criterion. Experimental results on the IEEE 118-bus test system show the merits and the effectiveness of the proposed SVM classifier and the E-SCUC model in improving power system resilience in response to extreme events. ","[{'version': 'v1', 'created': 'Fri, 16 Feb 2018 04:07:33 GMT'}]",2018-02-19,"[['Eskandarpour', 'Rozhin', ''], ['Khodaei', 'Amin', '']]","['Support vector machines', 'extreme events', 'power system resilience', 'resource scheduling', 'security-constrained unit commitment']" 87,1309.7979,Susan Stepney,"Clare Horsman, Susan Stepney, Rob C. Wagner, Viv Kendon",When does a physical system compute?,"22 pages, 10 figures. v3 as accepted by Proc.Roy.Soc.A","Proc. R. Soc. A 2014 470, 20140182",10.1098/rspa.2014.0182,,cs.ET physics.hist-ph quant-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper we introduce a formal framework that can be used to determine whether or not a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, drawing the comparison with the use of mathematical models to represent physical objects in experimental science. This powerful formulation allows a precise description of the similarities between experiments, computation, simulation, and technology, leading to our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions that must be satisfied in order for computation to be occurring, and illustrate these with a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We define the critical notion of a 'computational entity', and show the role this plays in defining when computing is taking place in physical systems. ","[{'version': 'v1', 'created': 'Mon, 30 Sep 2013 19:37:43 GMT'}, {'version': 'v2', 'created': 'Fri, 7 Mar 2014 09:30:44 GMT'}, {'version': 'v3', 'created': 'Mon, 16 Jun 2014 08:40:16 GMT'}]",2014-07-11,"[['Horsman', 'Clare', ''], ['Stepney', 'Susan', ''], ['Wagner', 'Rob C.', ''], ['Kendon', 'Viv', '']]","['computation', 'physical computation', 'computer']" 88,2206.02663,Yang Li,"Yang Li, Yu Shen, Huaijun Jiang, Wentao Zhang, Zhi Yang, Ce Zhang and Bin Cui",TransBO: Hyperparameter Optimization via Two-Phase Transfer Learning,9 pages and 2 extra pages of appendix,"Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022)",10.1145/3534678.3539255,,cs.LG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," With the extensive applications of machine learning models, automatic hyperparameter optimization (HPO) has become increasingly important. Motivated by the tuning behaviors of human experts, it is intuitive to leverage auxiliary knowledge from past HPO tasks to accelerate the current HPO task. In this paper, we propose TransBO, a novel two-phase transfer learning framework for HPO, which can deal with the complementary nature among source tasks and dynamics during knowledge aggregation issues simultaneously. This framework extracts and aggregates source and target knowledge jointly and adaptively, where the weights can be learned in a principled manner. The extensive experiments, including static and dynamic transfer learning settings and neural architecture search, demonstrate the superiority of TransBO over the state-of-the-arts. ","[{'version': 'v1', 'created': 'Mon, 6 Jun 2022 15:00:33 GMT'}]",2022-06-07,"[['Li', 'Yang', ''], ['Shen', 'Yu', ''], ['Jiang', 'Huaijun', ''], ['Zhang', 'Wentao', ''], ['Yang', 'Zhi', ''], ['Zhang', 'Ce', ''], ['Cui', 'Bin', '']]","['hyperparameter optimization', 'black-box optimization', 'bayesianoptimization', 'transfer learning']" 89,1912.02024,Annalisa Franco,"Annalisa Franco, Antonio Magnani and Dario Maio",Template co-updating in multi-modal human activity recognition systems,,,10.1145/3341105.3374085,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Multi-modal systems are quite common in the context of human activity recognition; widely used RGB-D sensors (Kinect is the most prominent example) give access to parallel data streams, typically RGB images, depth data, skeleton information. The richness of multimodal information has been largely exploited in many works in the literature, while an analysis of their effectiveness for incremental template updating has not been investigated so far. This paper is aimed at defining a general framework for unsupervised template updating in multi-modal systems, where the different data sources can provide complementary information, increasing the effectiveness of the updating procedure and reducing at the same time the probability of incorrect template modifications. ","[{'version': 'v1', 'created': 'Wed, 4 Dec 2019 14:39:25 GMT'}]",2019-12-05,"[['Franco', 'Annalisa', ''], ['Magnani', 'Antonio', ''], ['Maio', 'Dario', '']]","['Template co-updating', 'Human activity recognition', 'Kinect ® sensor']" 90,1404.3614,Jaroslav Vond\v{r}ejc,"Jaroslav Vond\v{r}ejc, Jan Zeman, Ivo Marek","Guaranteed upper-lower bounds on homogenized properties by FFT-based Galerkin method","37 pages, 20 figures","Computer Methods in Applied Mechanics and Engineering, 297, pp. 258-291, 2015",10.1016/j.cma.2015.09.003,,cs.NA math.NA physics.comp-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Guaranteed upper-lower bounds on homogenized coefficients, arising from the periodic cell problem, are calculated in a scalar elliptic setting. Our approach builds on the recent variational reformulation of the Moulinec-Suquet (1994) Fast Fourier Transform (FFT) homogenization scheme by Vond\v{r}ejc et al. (2014), which is based on the conforming Galerkin approximation with trigonometric polynomials. Upper-lower bounds are obtained by adjusting the primal-dual finite element framework developed independently by Dvo\v{r}\'{a}k (1993) and Wieckowski (1995) to the FFT-based Galerkin setting. We show that the discretization procedure differs for odd and non-odd number of grid points. Thanks to the Helmholtz decomposition inherited from the continuous formulation, the duality structure is fully preserved for the odd discretizations. In the latter case, a more complex primal-dual structure is observed due to presence of the trigonometric polynomials associated with the Nyquist frequencies. These theoretical findings are confirmed with numerical examples. To conclude, the main advantage of the FFT-based approach over conventional finite-element schemes is that the primal and the dual problems are treated on the same basis, and this property can be extended beyond the scalar elliptic setting. ","[{'version': 'v1', 'created': 'Mon, 14 Apr 2014 15:13:41 GMT'}, {'version': 'v2', 'created': 'Thu, 20 Nov 2014 10:58:02 GMT'}, {'version': 'v3', 'created': 'Fri, 17 Apr 2015 15:29:22 GMT'}]",2015-11-06,"[['Vondřejc', 'Jaroslav', ''], ['Zeman', 'Jan', ''], ['Marek', 'Ivo', '']]","['Upper-lower bounds', 'Numerical homogenization', 'Galerkin approximation', 'Trigonometricpolynomials', 'Fast Fourier Transform']" 91,1304.7346,Imran Sarwar Bajwa Dr.,"Imran Sarwar Bajwa, Behzad Bordbar, Mark Lee",SBVR vs OCL: A Comparative Analysis of Standards,"14th IEEE International Multitopic Conference (INMIC 2011), pp.261-266, Karachi, Pakistan",,10.1109/INMIC.2011.6151485,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In software modelling, the designers have to produce UML visual models with software constraints. Similarly, in business modelling, designers have to model business processes using business constraints (business rules). Constraints are the key components in the skeleton of business or software models. A designer has to write constraints to semantically compliment business models or UML models and finally implementing the constraints into business processes or source code. Business constraints/rules can be written using SBVR (Semantics of Business Vocabulary and Rules) while OCL (Object Constraint Language) is the well-known medium for writing software constraints. SBVR and OCL are two significant standards from OMG. Both standards are principally different as SBVR is typically used in business domains and OCL is employed to compliment software models. However, we have identified a few similarities in both standards that are interesting to study. In this paper, we have performed a comparative analysis of both standards as we are looking for a mechanism for automatic transformation of SBVR to OCL. The major emphasis of the study is to highlight principal features of SBVR and OCL such as similarities, differences and key parameters on which these both standards can work together. ","[{'version': 'v1', 'created': 'Sat, 27 Apr 2013 08:17:53 GMT'}]",2013-05-07,"[['Bajwa', 'Imran Sarwar', ''], ['Bordbar', 'Behzad', ''], ['Lee', 'Mark', '']]","['SBVR', 'OCL', 'MDA', 'UML']" 92,2102.11773,Mark Vella,Mark Vella and Christian Colombo,SpotCheck: On-Device Anomaly Detection for Android,,"SIN 2020: 13th International Conference on Security of Information and Networks, Merkez, Turkey, November 2020",10.1145/3433174.3433591,,cs.CR,http://creativecommons.org/licenses/by-nc-nd/4.0/," In recent years the PC has been replaced by mobile devices for many security sensitive operations, both from a privacy and a financial standpoint. While security mechanisms are deployed at various levels, these are frequently put under strain by previously unseen malware. An additional protection layer capable of novelty detection is therefore needed. In this work we propose SpotCheck, an anomaly detector intended to run on Android devices. It samples app executions and submits suspicious apps to more thorough processing by malware sandboxes. We compare Kernel Principal Component Analysis (KPCA) and Variational Autoencoders (VAE) on app execution representations based on the well-known system call traces, as well as a novel approach based on memory dumps. Results show that when using VAE, SpotCheck attains a level of effectiveness comparable to what has been previously achieved for network anomaly detection. Interestingly this is also true for the memory dump approach, relinquishing the need for continuous app monitoring. ","[{'version': 'v1', 'created': 'Tue, 23 Feb 2021 16:09:35 GMT'}, {'version': 'v2', 'created': 'Thu, 25 Feb 2021 06:53:27 GMT'}]",2021-02-26,"[['Vella', 'Mark', ''], ['Colombo', 'Christian', '']]","['Android malware', 'anomaly detection', 'memory dump analysis', 'kernel PCA', 'variational autoencoders']" 93,1803.06615,Qiang Hao,"Ewan Wright, Qiang Hao, Khaled Rasheed, Yan Liu","Feature Selection of Post-Graduation Income of College Students in the United States","14 pages, 6 tables, 3 figures","SBP-BRiMS 2018: Social, Cultural, and Behavioral Modeling, pp 38-45",10.1007/978-3-319-93372-6_4,,cs.CY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This study investigated the most important attributes of the 6-year post-graduation income of college graduates who used financial aid during their time at college in the United States. The latest data released by the United States Department of Education was used. Specifically, 1,429 cohorts of graduates from three years (2001, 2003, and 2005) were included in the data analysis. Three attribute selection methods, including filter methods, forward selection, and Genetic Algorithm, were applied to the attribute selection from 30 relevant attributes. Five groups of machine learning algorithms were applied to the dataset for classification using the best selected attribute subsets. Based on our findings, we discuss the role of neighborhood professional degree attainment, parental income, SAT scores, and family college education in post-graduation incomes and the implications for social stratification. ","[{'version': 'v1', 'created': 'Sun, 18 Mar 2018 07:06:19 GMT'}, {'version': 'v2', 'created': 'Mon, 28 May 2018 06:37:48 GMT'}]",2018-07-06,"[['Wright', 'Ewan', ''], ['Hao', 'Qiang', ''], ['Rasheed', 'Khaled', ''], ['Liu', 'Yan', '']]","['Attribute selection', 'feature selection', 'post-graduation income classification', 'post-graduation income prediction', 'social stratification']" 94,1709.04864,Eduardo Aguilar,"Eduardo Aguilar, Marc Bola\~nos, Petia Radeva",Food Recognition using Fusion of Classifiers based on CNNs,,ICIAP 10485 (2017) 213-224,10.1007/978-3-319-68548-9_20,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," With the arrival of convolutional neural networks, the complex problem of food recognition has experienced an important improvement in recent years. The best results have been obtained using methods based on very deep convolutional neural networks, which show that the deeper the model,the better the classification accuracy will be obtain. However, very deep neural networks may suffer from the overfitting problem. In this paper, we propose a combination of multiple classifiers based on different convolutional models that complement each other and thus, achieve an improvement in performance. The evaluation of our approach is done on two public datasets: Food-101 as a dataset with a wide variety of fine-grained dishes, and Food-11 as a dataset of high-level food categories, where our approach outperforms the independent CNN models. ","[{'version': 'v1', 'created': 'Thu, 14 Sep 2017 16:35:40 GMT'}]",2018-01-23,"[['Aguilar', 'Eduardo', ''], ['Bolaños', 'Marc', ''], ['Radeva', 'Petia', '']]","['Food Recognition', 'Fusion Classifiers', 'CNN']" 95,1008.1657,EPTCS,"Xiaoxue Piao, Kai Salomaa",Operational State Complexity of Deterministic Unranked Tree Automata,"In Proceedings DCFS 2010, arXiv:1008.1270","EPTCS 31, 2010, pp. 149-158",10.4204/EPTCS.31.17,,cs.FL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1) ( (m+1)2^n-2^(n-1) )-1 vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states. ","[{'version': 'v1', 'created': 'Tue, 10 Aug 2010 08:33:49 GMT'}]",2010-08-11,"[['Piao', 'Xiaoxue', ''], ['Salomaa', 'Kai', '']]","['operational state complexity', 'tree automata', 'unranked trees', 'tree operations']" 96,1805.10978,Marco di Biase,"Marco di Biase, Magiel Bruntink, Arie van Deursen, Alberto Bacchelli","The effects of change decomposition on code review -- a controlled experiment",,,10.7717/peerj-cs.193,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Background: Code review is a cognitively demanding and time-consuming process. Previous qualitative studies hinted at how decomposing change sets into multiple yet internally coherent ones would improve the reviewing process. So far, literature provided no quantitative analysis of this hypothesis. Aims: (1) Quantitatively measure the effects of change decomposition on the outcome of code review (in terms of number of found defects, wrongly reported issues, suggested improvements, time, and understanding); (2) Qualitatively analyze how subjects approach the review and navigate the code, building knowledge and addressing existing issues, in large vs. decomposed changes. Method: Controlled experiment using the pull-based development model involving 28 software developers among professionals and graduate students. Results: Change decomposition leads to fewer wrongly reported issues, influences how subjects approach and conduct the review activity (by increasing context-seeking), yet impacts neither understanding the change rationale nor the number of found defects. Conclusions: Change decomposition reduces the noise for subsequent data analyses but also significantly supports the tasks of the developers in charge of reviewing the changes. As such, commits belonging to different concepts should be separated, adopting this as a best practice in software engineering. ","[{'version': 'v1', 'created': 'Mon, 28 May 2018 15:34:45 GMT'}, {'version': 'v2', 'created': 'Sat, 18 Jan 2020 16:06:31 GMT'}]",2020-01-22,"[['di Biase', 'Marco', ''], ['Bruntink', 'Magiel', ''], ['van Deursen', 'Arie', ''], ['Bacchelli', 'Alberto', '']]","['Code review', 'Controlled experiment', 'Change decomposition', 'Pull-baseddevelopment model']" 97,1309.3676,Nicolae Cleju,Nicolae Cleju,"Optimized projections for compressed sensing via rank-constrained nearest correlation matrix","25 pages, 13 figures, to appear in Applied and Computational Harmonic Analysis",,10.1016/j.acha.2013.08.005,,cs.IT cs.LG math.IT stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Optimizing the acquisition matrix is useful for compressed sensing of signals that are sparse in overcomplete dictionaries, because the acquisition matrix can be adapted to the particular correlations of the dictionary atoms. In this paper a novel formulation of the optimization problem is proposed, in the form of a rank-constrained nearest correlation matrix problem. Furthermore, improvements for three existing optimization algorithms are introduced, which are shown to be particular instances of the proposed formulation. Simulation results show notable improvements and superior robustness in sparse signal recovery. ","[{'version': 'v1', 'created': 'Sat, 14 Sep 2013 15:08:48 GMT'}]",2013-09-17,"[['Cleju', 'Nicolae', '']]","['acquisition', 'compressed sensing', 'nearest correlation matrix', 'optimization']" 98,1408.3110,Surender Kumar,"Surender Kumar, Manish Prateek, N.J. Ahuja, Bharat Bhushan","MEECDA: Multihop Energy Efficient Clustering and Data Aggregation Protocol for HWSN","8 pages, 11 figures. available at http://ijcaonline.org/2014. arXiv admin note: substantial text overlap with arXiv:1408.2914",,10.5120/15383-4047,,cs.NI,http://creativecommons.org/licenses/by-nc-sa/3.0/," Wireless sensor network consists of large number of inexpensive tiny sensors which are connected with low power wireless communications. Most of the routing and data dissemination protocols of WSN assume a homogeneous network architecture, in which all sensors have the same capabilities in terms of battery power, communication, sensing, storage, and processing. However the continued advances in miniaturization of processors and low-power communications have enabled the development of a wide variety of nodes. When more than one type of node is integrated into a WSN, it is called heterogeneous. Multihop short distance communication is an important scheme to reduce the energy consumption in a sensor network because nodes are densely deployed in a WSN. In this paper M-EECDA (Multihop Energy Efficient Clustering & Data Aggregation Protocol for Heterogeneous WSN) is proposed and analyzed. The protocol combines the idea of multihop communications and clustering for achieving the best performance in terms of network life and energy consumption. M-EECDA introduces a sleep state and three tier architecture for some cluster heads to save energy of the network. M-EECDA consists of three types of sensor nodes: normal, advance and super. To become cluster head in a round normal nodes use residual energy based scheme. Advance and super nodes further act as relay node to reduce the transmission load of a normal node cluster head when they are not cluster heads in a round. ","[{'version': 'v1', 'created': 'Wed, 13 Aug 2014 05:44:44 GMT'}]",2015-06-22,"[['Kumar', 'Surender', ''], ['Prateek', 'Manish', ''], ['Ahuja', 'N. J.', ''], ['Bhushan', 'Bharat', '']]","['Cluster', 'Energy Efficiency', 'Multihop', 'Initial Energy', 'Residual Energy', 'Wireless Sensor Network']" 99,1302.6339,EPTCS,"Maribel Fern\'andez, Ian Mackie, Matthew Walker",Bigraphical Nets,"In Proceedings TERMGRAPH 2013, arXiv:1302.5997","EPTCS 110, 2013, pp. 74-81",10.4204/EPTCS.110.8,,cs.LO cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Interaction nets are a graphical model of computation, which has been used to define efficient evaluators for functional calculi, and specifically lambda calculi with patterns. However, the flat structure of interaction nets forces pattern matching and functional behaviour to be encoded at the same level, losing some potential parallelism. In this paper, we introduce bigraphical nets, or binets for short, as a generalisation of interaction nets using ideas from bigraphs and port graphs, and we present a formal notation and operational semantics for binets. We illustrate their expressive power by examples of applications. ","[{'version': 'v1', 'created': 'Tue, 26 Feb 2013 06:50:45 GMT'}]",2013-02-27,"[['Fernández', 'Maribel', ''], ['Mackie', 'Ian', ''], ['Walker', 'Matthew', '']]","['Interaction Net', 'Port Graph', 'Bigraph', 'Rewriting Calculus']" 100,1405.0786,Vishal Anand,"Vishal Anand, Ramani S","Fault Localization in a Software Project using Back-Tracking Principles of Matrix Dependency","5 pages, 8 figures, ""Published with International Journal of Engineering Trends and Technology (IJETT)""","Vishal Anand , Ramani S Article: ""Fault Localization in a Software Project using Back-Tracking Principles of Matrix Dependency"". International Journal of Engineering Trends and Technology (IJETT) V10(11):545-549, April 2014. ISSN:2231-5381",10.14445/22315381/IJETT-V10P308,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Fault identification and testing has always been the most specific concern in the field of software development. To identify and testify the bug we should be aware of the source of the failure or any unwanted issue. In this paper, we are trying to extract the location of failure and trying to cope up with the bug. Using directed graph, we tried to obtain the dependency of multiple activities in live environment to trace the origin of fault. Software development comes up with series of activities and we tried to show the dependency of multiple activities on each other. Critical activities are considered as they cause abnormal functioning of the whole system. The paper discuss about the priorities of activities of dependency of software failure on the critical activities. Matrix representation of activities as part of the software is chosen to determine root of the failure using concept of dependency. It can vary with the topography of network and software environment. When faults occur, the possible symptoms will be reflected in the dependency matrix with high probability in fault itself. Thus, independent faults are located in the main diagonal of dependency matrix. ","[{'version': 'v1', 'created': 'Mon, 5 May 2014 06:22:08 GMT'}]",2014-05-06,"[['Anand', 'Vishal', ''], ['S', 'Ramani', '']]","['Software', 'Dependency', 'Matrix', 'Bug', 'Critical Activity', 'Modules']" 101,1210.2282,Miguel Areias,Miguel Areias and Ricardo Rocha,Towards Multi-Threaded Local Tabling Using a Common Table Space,To appear in Theory and Practice of Logic Programming,"Theory and Practice of Logic Programming, Volume 12, Special Issue 4-5, 2012, pp 427-443",10.1017/S1471068412000117,,cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Multi-threading is currently supported by several well-known Prolog systems providing a highly portable solution for applications that can benefit from concurrency. When multi-threading is combined with tabling, we can exploit the power of higher procedural control and declarative semantics. However, despite the availability of both threads and tabling in some Prolog systems, the implementation of these two features implies complex ties to each other and to the underlying engine. Until now, XSB was the only Prolog system combining multi-threading with tabling. In XSB, tables may be either private or shared between threads. While thread-private tables are easier to implement, shared tables have all the associated issues of locking, synchronization and potential deadlocks. In this paper, we propose an alternative view to XSB's approach. In our proposal, each thread views its tables as private but, at the engine level, we use a common table space where tables are shared among all threads. We present three designs for our common table space approach: No-Sharing (NS) (similar to XSB's private tables), Subgoal-Sharing (SS) and Full-Sharing (FS). The primary goal of this work was to reduce the memory usage for the table space but, our experimental results, using the YapTab tabling system with a local evaluation strategy, show that we can also achieve significant reductions on running time. ","[{'version': 'v1', 'created': 'Mon, 8 Oct 2012 14:00:07 GMT'}, {'version': 'v2', 'created': 'Tue, 9 Oct 2012 22:12:00 GMT'}]",2012-10-11,"[['Areias', 'Miguel', ''], ['Rocha', 'Ricardo', '']]","['Tabling', 'Multi-Threading', 'Implementation']" 102,2102.10777,Tejas Khare,"Tejas Khare, Vaibhav Bahel and Anuradha C. Phadke",PCB-Fire: Automated Classification and Fault Detection in PCB,"6 Pages, 9 Figures, Conference","Proceeding Reference - 978-0-7381-4335-4/20/$31.00 \c{opyright}2020 IEEE",10.1109/MPCIT51588.2020.9350324,,cs.CV eess.IV,http://creativecommons.org/licenses/by-nc-nd/4.0/," Printed Circuit Boards are the foundation for the functioning of any electronic device, and therefore are an essential component for various industries such as automobile, communication, computation, etc. However, one of the challenges faced by the PCB manufacturers in the process of manufacturing of the PCBs is the faulty placement of its components including missing components. In the present scenario the infrastructure required to ensure adequate quality of the PCB requires a lot of time and effort. The authors present a novel solution for detecting missing components and classifying them in a resourceful manner. The presented algorithm focuses on pixel theory and object detection, which has been used in combination to optimize the results from the given dataset. ","[{'version': 'v1', 'created': 'Mon, 22 Feb 2021 05:19:22 GMT'}]",2021-02-23,"[['Khare', 'Tejas', ''], ['Bahel', 'Vaibhav', ''], ['Phadke', 'Anuradha C.', '']]","['Convolutional Neural Networks', 'Object Detection', 'Image processing', 'Automatic Optical Inspection (AOI)', 'YOLOv3']" 103,1708.00777,Zheng Li,"Zheng Li, Selome Tesfatsion, Saeed Bastani, Ahmed Ali-Eldin, Erik Elmroth, Maria Kihl, Rajiv Ranjan","A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates",in press,"IEEE Transactions on Sustainable Computing, 2017",10.1109/TSUSC.2017.2722822,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Given the complexity and heterogeneity in Cloud computing scenarios, the modeling approach has widely been employed to investigate and analyze the energy consumption of Cloud applications, by abstracting real-world objects and processes that are difficult to observe or understand directly. It is clear that the abstraction sacrifices, and usually does not need, the complete reflection of the reality to be modeled. Consequently, current energy consumption models vary in terms of purposes, assumptions, application characteristics and environmental conditions, with possible overlaps between different research works. Therefore, it would be necessary and valuable to reveal the state-of-the-art of the existing modeling efforts, so as to weave different models together to facilitate comprehending and further investigating application energy consumption in the Cloud domain. By systematically selecting, assessing and synthesizing 76 relevant studies, we rationalized and organized over 30 energy consumption models with unified notations. To help investigate the existing models and facilitate future modeling work, we deconstructed the runtime execution and deployment environment of Cloud applications, and identified 18 environmental factors and 12 workload factors that would be influential on the energy consumption. In particular, there are complicated trade-offs and even debates when dealing with the combinational impacts of multiple factors. ","[{'version': 'v1', 'created': 'Wed, 2 Aug 2017 14:45:47 GMT'}]",2017-08-03,"[['Li', 'Zheng', ''], ['Tesfatsion', 'Selome', ''], ['Bastani', 'Saeed', ''], ['Ali-Eldin', 'Ahmed', ''], ['Elmroth', 'Erik', ''], ['Kihl', 'Maria', ''], ['Ranjan', 'Rajiv', '']]","['Application energy consumption', 'Cloud computing', 'energyconsumption modeling', 'energy-related factors', 'systematic literature review']" 104,1804.05935,Qian Feng,Qian Feng and Sing Kiong Nguang,"Dissipative delay range analysis of coupled differential-difference delay systems with distributed delays",,"Systems and Control Letters, 2018, 116, 56 - 65",10.1016/j.sysconle.2018.04.008,,cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper proposes methods to handle the problem of delay range stability analysis for a linear coupled differential-difference system (CDDS) with distributed delays subject to dissipative constraints. The model of linear CDDS contains many models of linear delay systems as special cases. A novel Liapunov-Krasovskii functional with non-constant matrix parameters, which are related to the delay value polynomially, is applied to derive stability conditions. By constructing this new functional, sufficient conditions in terms of robust linear matrix inequalities (LMIs) can be derived, which guarantee range stability of a linear CDDS subject to dissipative constraints. To solve the resulting robust LMIs numerically, we apply the technique of sum of squares programming together with matrix relaxations without introducing any potential conservatism to the original robust LMIs. Furthermore, the proposed methods can be extended to solve delay margin estimation problems for a linear CDDS subject to prescribed dissipative constraints. Finally, numerical examples are presented to demonstrate the effectiveness of the proposed methodologies. ","[{'version': 'v1', 'created': 'Mon, 16 Apr 2018 20:54:58 GMT'}, {'version': 'v2', 'created': 'Mon, 14 May 2018 13:42:42 GMT'}, {'version': 'v3', 'created': 'Wed, 20 Jun 2018 02:12:05 GMT'}]",2018-06-21,"[['Feng', 'Qian', ''], ['Nguang', 'Sing Kiong', '']]","['Coupled Differential-Difference Systems', 'Range stability', 'Dissipativity', 'Sum of Squareprogramming']" 105,1711.05296,Thomas Pasquier,"Thomas Pasquier, Xueyuan Han, Mark Goldstein, Thomas Moyer, David Eyers, Margo Seltzer and Jean Bacon",Practical Whole-System Provenance Capture,"15 pages, 7 figures",SoCC '17 Proceedings of the 2017 Symposium on Cloud Computing,10.1145/3127479.3129249,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Data provenance describes how data came to be in its present form. It includes data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While there have been several prior whole-system provenance systems that captured a comprehensive, systemic and ubiquitous record of a system's behavior, none have been widely adopted. They either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcoming by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations; performing fault/intrusion detection; and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications. ","[{'version': 'v1', 'created': 'Tue, 14 Nov 2017 19:46:16 GMT'}]",2017-11-16,"[['Pasquier', 'Thomas', ''], ['Han', 'Xueyuan', ''], ['Goldstein', 'Mark', ''], ['Moyer', 'Thomas', ''], ['Eyers', 'David', ''], ['Seltzer', 'Margo', ''], ['Bacon', 'Jean', '']]","['Data Provenance', 'Whole-system provenance', 'Linux Kernel']" 106,1603.09434,Ibrahim AlShourbaji H,"Ibrahim AlShourbaji, Samaher Al-Janabi and Ahmed Patel",Document Selection in a Distributed Search Engine Architecture,"8 pages, 6 figures in Middle-East Journal of Scientific Research, IDOSI Publications, 2015",,10.5829/idosi.mejsr.2015.23.07.22398,,cs.IR cs.DB,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Distributed Search Engine Architecture (DSEA) hosts numerous independent topic-specific search engines and selects a subset of the databases to search within the architecture. The objective of this approach is to reduce the amount of space needed to perform a search by querying only a subset of the total data available. In order to manipulate data across many databases, it is most efficient to identify a smaller subset of databases that would be most likely to return the data of specific interest that can then be examined in greater detail. The selection index has been most commonly used as a method for choosing the most applicable databases as it captures broad information about each database and its indexed documents. Employing this type of database allows the researcher to find information more quickly, not only with less cost, but it also minimizes the potential for biases. This paper investigates the effectiveness of different databases selected within the framework and scope of the distributed search engine architecture. The purpose of the study is to improve the quality of distributed information retrieval. ","[{'version': 'v1', 'created': 'Thu, 31 Mar 2016 01:19:21 GMT'}]",2016-04-01,"[['AlShourbaji', 'Ibrahim', ''], ['Al-Janabi', 'Samaher', ''], ['Patel', 'Ahmed', '']]","['web search', 'distributed search engine', 'document selection', 'information retrieval', 'Collection Retrival Inference network 1']" 107,1204.2225,Sandhyarani Ramancha,"Ramancha Sandhyarani, Bodakuntla Rajkumar and Jayadev Gyani",Construction of Community Web Directories based on Web usage Data,"8 pages,5 figures","Advanced Computing: An International Journal (ACIJ), Vol.3, No.2, March 2012",10.5121/acij.2012.3205,,cs.OH,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper support the concept of a community Web directory, as a Web directory that is constructed according to the needs and interests of particular user communities. Furthermore, it presents the complete method for the construction of such directories by using web usage data. User community models take the form of thematic hierarchies and are constructed by employing clustering approach. We applied our methodology to the ODP directory and also to an artificial Web directory, which was generated by clustering Web pages that appear in the access log of an Internet Service Provider. For the discovery of the community models, we introduced a new criterion that combines a priori thematic informativeness of the Web directory categories with the level of interest observed in the usage data. In this context, we introduced and evaluated new clustering method. We have tested the methodology using access log files which are collected from the proxy servers of an Internet Service Provider and provided results that indicates the usability of the community Web directories. The proposed clustering methodology is evaluated both on a specialized artificial and a community Web directory, indicating its value to the user of the web. ","[{'version': 'v1', 'created': 'Tue, 10 Apr 2012 17:28:39 GMT'}]",2012-04-11,"[['Sandhyarani', 'Ramancha', ''], ['Rajkumar', 'Bodakuntla', ''], ['Gyani', 'Jayadev', '']]","['web directory', 'user communities', 'Internet service Provider', 'clustering', 'Open Directory Project…']" 108,2105.09448,Tirtharaj Dash,"Gunjan Chhablani, Abheesht Sharma, Harshit Pandey, Tirtharaj Dash","Superpixel-based Knowledge Infusion in Deep Neural Networks for Image Classification","ACM Proc. format: 5 pages; Accepted at ACM SE'22, April 18-20, 2022, Virtual Event, USA","Proceedings of the 2022 ACM Southeast Conference, April 2022, Pages 243-247",10.1145/3476883.3520216,,cs.CV cs.LG eess.IV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Superpixels are higher-order perceptual groups of pixels in an image, often carrying much more information than the raw pixels. There is an inherent relational structure to the relationship among different superpixels of an image such as adjacent superpixels are neighbours of each other. Our interest here is to treat these relative positions of various superpixels as relational information of an image. This relational information can convey higher-order spatial information about the image, such as the relationship between superpixels representing two eyes in an image of a cat. That is, two eyes are placed adjacent to each other in a straight line or the mouth is below the nose. Our motive in this paper is to assist computer vision models, specifically those based on Deep Neural Networks (DNNs), by incorporating this higher-order information from superpixels. We construct a hybrid model that leverages (a) Convolutional Neural Network (CNN) to deal with spatial information in an image and (b) Graph Neural Network (GNN) to deal with relational superpixel information in the image. The proposed model is learned using a generic hybrid loss function. Our experiments are extensive, and we evaluate the predictive performance of our proposed hybrid vision model on seven different image classification datasets from a variety of domains such as digit and object recognition, biometrics, medical imaging. The results demonstrate that the relational superpixel information processed by a GNN can improve the performance of a standard CNN-based vision system. ","[{'version': 'v1', 'created': 'Thu, 20 May 2021 01:25:42 GMT'}, {'version': 'v2', 'created': 'Wed, 23 Feb 2022 09:22:22 GMT'}]",2022-05-23,"[['Chhablani', 'Gunjan', ''], ['Sharma', 'Abheesht', ''], ['Pandey', 'Harshit', ''], ['Dash', 'Tirtharaj', '']]","['Knowledge-Infused Learning', 'Graph Neural Networks', 'Convolutional Neural Networks', 'Superpixels', 'SLIC']" 109,1301.4313,Pierre Lairez,"Alin Bostan (INRIA Saclay - Ile de France), Pierre Lairez (INRIA Saclay - Ile de France), Bruno Salvy (Inria Grenoble Rh\^one-Alpes / LIP Laboratoire de l'Informatique du Parall\'elisme)","Creative telescoping for rational functions using the Griffiths-Dwork method",,"Proceedings of ISSAC 2013, ACM, pp 93-100",10.1145/2465506.2465935,,cs.SC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Creative telescoping algorithms compute linear differential equations satisfied by multiple integrals with parameters. We describe a precise and elementary algorithmic version of the Griffiths-Dwork method for the creative telescoping of rational functions. This leads to bounds on the order and degree of the coefficients of the differential equation, and to the first complexity result which is simply exponential in the number of variables. One of the important features of the algorithm is that it does not need to compute certificates. The approach is vindicated by a prototype implementation. ","[{'version': 'v1', 'created': 'Fri, 18 Jan 2013 07:40:01 GMT'}, {'version': 'v2', 'created': 'Sun, 21 Apr 2013 06:22:26 GMT'}]",2014-05-09,"[['Bostan', 'Alin', '', 'INRIA Saclay - Ile de France'], ['Lairez', 'Pierre', '', 'INRIA\n Saclay - Ile de France'], ['Salvy', 'Bruno', '', ""Inria Grenoble Rhône-Alpes / LIP\n Laboratoire de l'Informatique du Parallélisme""]]","['Integration', 'creative telescoping', 'algorithms', 'complexity', 'Picard-Fuchs equation', 'Griffiths–Dwork method']" 110,1709.03793,Shubham Dokania,"Shubham Dokania, Sunyam Bagga, Rohit Sharma","Opportunistic Self Organizing Migrating Algorithm for Real-Time Dynamic Traveling Salesman Problem","6 pages, published in CISS 2017",,10.1109/CISS.2017.7926065,,cs.NE,http://creativecommons.org/licenses/by-nc-sa/4.0/," Self Organizing Migrating Algorithm (SOMA) is a meta-heuristic algorithm based on the self-organizing behavior of individuals in a simulated social environment. SOMA performs iterative computations on a population of potential solutions in the given search space to obtain an optimal solution. In this paper, an Opportunistic Self Organizing Migrating Algorithm (OSOMA) has been proposed that introduces a novel strategy to generate perturbations effectively. This strategy allows the individual to span across more possible solutions and thus, is able to produce better solutions. A comprehensive analysis of OSOMA on multi-dimensional unconstrained benchmark test functions is performed. OSOMA is then applied to solve real-time Dynamic Traveling Salesman Problem (DTSP). The problem of real-time DTSP has been stipulated and simulated using real-time data from Google Maps with a varying cost-metric between any two cities. Although DTSP is a very common and intuitive model in the real world, its presence in literature is still very limited. OSOMA performs exceptionally well on the problems mentioned above. To substantiate this claim, the performance of OSOMA is compared with SOMA, Differential Evolution and Particle Swarm Optimization. ","[{'version': 'v1', 'created': 'Tue, 12 Sep 2017 11:47:07 GMT'}]",2017-09-13,"[['Dokania', 'Shubham', ''], ['Bagga', 'Sunyam', ''], ['Sharma', 'Rohit', '']]","['Dynamic Traveling Salesman Problem', 'Evolutionary Algorithms', 'Optimization', 'Self Organizing Migrating Algorithm']" 111,1210.2877,Dimitris Arabadjis,"Constantin Papaodysseus, Dimitris Arabadjis, Michalis Exarhos, Panayiotis Rousopoulos, Solomon Zannos, Michail Panagopoulos and Lena Papazoglou-Manioudaki","Efficient Solution to the 3D Problem of Automatic Wall Paintings Reassembly",,"Mathematics & Computers with Applications, vol. 64, pp. 2712-2734, 2012",10.1016/j.bbr.2011.03.031,,cs.CV math.DG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper introduces a new approach for the automated reconstruction - reassembly of fragmented objects having one surface near to plane, on the basis of the 3D representation of their constituent fragments. The whole process starts by 3D scanning of the available fragments. The obtained representations are properly processed so that they can be tested for possible matches. Next, four novel criteria are introduced, that lead to the determination of pairs of matching fragments. These criteria have been chosen so as the whole process imitates the instinctive reassembling method dedicated scholars apply. The first criterion exploits the volume of the gap between two properly placed fragments. The second one considers the fragments' overlapping in each possible matching position. Criteria 3,4 employ principles from calculus of variations to obtain bounds for the area and the mean curvature of the contact surfaces and the length of contact curves, which must hold if the two fragments match. The method has been applied, with great success, both in the reconstruction of objects artificially broken by the authors and, most importantly, in the virtual reassembling of parts of wall paintings belonging to the Mycenaic civilization (c. 1300 B.C.), excavated in a highly fragmented condition in Tyrins, Greece. ","[{'version': 'v1', 'created': 'Wed, 10 Oct 2012 11:41:12 GMT'}]",2012-10-11,"[['Papaodysseus', 'Constantin', ''], ['Arabadjis', 'Dimitris', ''], ['Exarhos', 'Michalis', ''], ['Rousopoulos', 'Panayiotis', ''], ['Zannos', 'Solomon', ''], ['Panagopoulos', 'Michail', ''], ['Papazoglou-Manioudaki', 'Lena', '']]","['fragmented objects reassembly', 'wall paintings reconstruction', 'pattern']" 112,1907.12653,Timo Koch,"Timo Koch, Rainer Helmig, Martin Schneider","A new and consistent well model for one-phase flow in anisotropic porous media using a distributed source model","28 pages, 12 figures",,10.1016/j.jcp.2020.109369,,cs.CE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A new well model for one-phase flow in anisotropic porous media is introduced, where the mass exchange between well and a porous medium is modeled by spatially distributed source terms over a small neighborhood region. To this end, we first present a compact derivation of the exact analytical solution for an arbitrarily oriented, infinite well cylinder in an infinite porous medium with anisotropic permeability tensor in R3 , for constant well pressure and a given injection rate, using a conformal map. The analytical solution motivates the choice of a kernel function to distribute the sources. The presented model is independent from the discretization method and the choice of computational grids. In numerical experiments, the new well model is shown to be consistent and robust with respect to rotation of the well axis, rotation of the permeability tensor, and different anisotropy ratios. Finally, a comparison with a Peaceman-type well model suggests that the new scheme leads to an increased accuracy for injection (and production) rates for arbitrarily-oriented pressure-controlled wells. ","[{'version': 'v1', 'created': 'Fri, 26 Jul 2019 12:46:32 GMT'}]",2020-03-23,"[['Koch', 'Timo', ''], ['Helmig', 'Rainer', ''], ['Schneider', 'Martin', '']]","['well model', '1d-3d', 'mixed-dimension', 'anisotropic', 'analytic solution', 'Peaceman']" 113,1607.01249,Paul Springer,"Paul Springer, Aravind Sankaran, Paolo Bientinesi",TTC: A Tensor Transposition Compiler for Multiple Architectures,,,10.1145/2935323.2935328,,cs.MS cs.PF,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We consider the problem of transposing tensors of arbitrary dimension and describe TTC, an open source domain-specific parallel compiler. TTC generates optimized parallel C++/CUDA C code that achieves a significant fraction of the system's peak memory bandwidth. TTC exhibits high performance across multiple architectures, including modern AVX-based systems (e.g.,~Intel Haswell, AMD Steamroller), Intel's Knights Corner as well as different CUDA-based GPUs such as NVIDIA's Kepler and Maxwell architectures. We report speedups of TTC over a meaningful baseline implementation generated by external C++ compilers; the results suggest that a domain-specific compiler can outperform its general purpose counterpart significantly: For instance, comparing with Intel's latest C++ compiler on the Haswell and Knights Corner architecture, TTC yields speedups of up to $8\times$ and $32\times$, respectively. We also showcase TTC's support for multiple leading dimensions, making it a suitable candidate for the generation of performance-critical packing functions that are at the core of the ubiquitous BLAS 3 routines. ","[{'version': 'v1', 'created': 'Tue, 5 Jul 2016 13:53:57 GMT'}]",2016-07-06,"[['Springer', 'Paul', ''], ['Sankaran', 'Aravind', ''], ['Bientinesi', 'Paolo', '']]","['domain-specific compiler', 'multidimensional transpositions', 'high-performance computing', 'SIMD', 'tensors']" 114,1309.5502,Washington Alves de Oliveira,"Washington Alves de Oliveira, Antonio Carlos Moretti and Ednei Felix Reis","The multi-vehicle covering tour problem: building routes for urban patrolling","28 pages, 8 figures, 7 tables, Brazilian Operations Research Society; Printed version ISSN 0101-7438 / Online version ISSN 1678-5142",Pesquisa Operacional (2015) 35(3): 617-644,10.1590/0101-7438.2015.035.03.0617,,cs.AI cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we study a particular aspect of the urban community policing: routine patrol route planning. We seek routes that guarantee visibility, as this has a sizable impact on the community perceived safety, allowing quick emergency responses and providing surveillance of selected sites (e.g., hospitals, schools). The planning is restricted to the availability of vehicles and strives to achieve balanced routes. We study an adaptation of the model for the multi-vehicle covering tour problem, in which a set of locations must be visited, whereas another subset must be close enough to the planned routes. It constitutes an NP-complete integer programming problem. Suboptimal solutions are obtained with several heuristics, some adapted from the literature and others developed by us. We solve some adapted instances from TSPLIB and an instance with real data, the former being compared with results from literature, and latter being compared with empirical data. ","[{'version': 'v1', 'created': 'Sat, 21 Sep 2013 17:17:46 GMT'}, {'version': 'v2', 'created': 'Wed, 14 Sep 2016 23:16:49 GMT'}]",2016-09-16,"[['de Oliveira', 'Washington Alves', ''], ['Moretti', 'Antonio Carlos', ''], ['Reis', 'Ednei Felix', '']]","['Vehicle routing', 'Covering tour problem', 'Heuristics', 'Urban patrolling']" 115,1302.1756,Anastasios Kavoukis,"Anastasios Kavoukis, Salem Aljareh","Efficient time synchronized one-time password scheme to provide secure wake-up authentication on wireless sensor networks","International Journal Of Advanced Smart Sensor Network Systems (IJASSN), Vol 3, No.1, January 2013 http://airccse.org/journal/ijassn/papers/3113ijassn01.pdf",,10.5121/ijassn.2013.3101,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we propose Time Synchronized One-Time-Password scheme to provide secure wake up authentication. The main constraint of wireless sensor networks is their limited power resource that prevents us from using radio transmission over the network to transfer the passwords. On the other hand computation power consumption is insignificant when compared to the costs associated with the power needed for transmitting the right set of keys. In addition to prevent adversaries from reading and following the timeline of the network, we propose to encrypt the tokens using symmetric encryption to prevent replay attacks. ","[{'version': 'v1', 'created': 'Thu, 7 Feb 2013 14:24:51 GMT'}]",2013-02-08,"[['Kavoukis', 'Anastasios', ''], ['Aljareh', 'Salem', '']]","['Wake-up', 'Security', 'Wireless Sensor Networks', 'One-Time Password']" 116,1401.4282,"J\""urgen M\""unch","Mart\'in Soto, Alexis Ocampo, J\""urgen M\""unch","The Secret Life of a Process Description: A Look into the Evolution of a Large Process Model","12 pages. The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-540-79588-9_23","Making Globally Distributed Software Development a Success Story, volume 5007 of Lecture Notes in Computer Science, pages 257-268, Springer Berlin Heidelberg, 2008",10.1007/978-3-540-79588-9_23,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Software process models must change continuously in order to remain consistent over time with the reality they represent, as well as relevant to the task they are intended for. Performing these changes in a sound and disci- plined fashion requires software process model evolution to be understood and controlled. The current situation can be characterized by a lack of understanding of software process model evolution and, in consequence, by a lack of systematic support for evolving software process models in organizations. This paper presents an analysis of the evolution of a large software process standard, namely, the process standard for the German Federal Government (V-Modell(R) XT). The analysis was performed with the Evolyzer tool suite, and is based on the complete history of over 600 versions that have been created during the development and maintenance of the standard. The analysis reveals similarities and differences between process evolution and empirical findings in the area of software system evolution. These findings provide hints on how to better manage process model evolution in the future. ","[{'version': 'v1', 'created': 'Fri, 17 Jan 2014 09:33:31 GMT'}]",2014-01-20,"[['Soto', 'Martín', ''], ['Ocampo', 'Alexis', ''], ['Münch', 'Jürgen', '']]","['process modeling', 'process model change', 'process model evolution', 'model comparison', 'V-Modell® XT']" 117,1912.06466,Savva Ignatyev,"Vage Egiazarian, Savva Ignatyev, Alexey Artemov, Oleg Voynov, Andrey Kravchenko, Youyi Zheng, Luiz Velho, Evgeny Burnaev","Latent-Space Laplacian Pyramids for Adversarial Representation Learning with 3D Point Clouds",,,10.5220/0009102604210428,,cs.CV eess.IV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Constructing high-quality generative models for 3D shapes is a fundamental task in computer vision with diverse applications in geometry processing, engineering, and design. Despite the recent progress in deep generative modelling, synthesis of finely detailed 3D surfaces, such as high-resolution point clouds, from scratch has not been achieved with existing approaches. In this work, we propose to employ the latent-space Laplacian pyramid representation within a hierarchical generative model for 3D point clouds. We combine the recently proposed latent-space GAN and Laplacian GAN architectures to form a multi-scale model capable of generating 3D point clouds at increasing levels of detail. Our evaluation demonstrates that our model outperforms the existing generative models for 3D point clouds. ","[{'version': 'v1', 'created': 'Fri, 13 Dec 2019 13:32:28 GMT'}]",2021-02-08,"[['Egiazarian', 'Vage', ''], ['Ignatyev', 'Savva', ''], ['Artemov', 'Alexey', ''], ['Voynov', 'Oleg', ''], ['Kravchenko', 'Andrey', ''], ['Zheng', 'Youyi', ''], ['Velho', 'Luiz', ''], ['Burnaev', 'Evgeny', '']]","['Deep learning', '3D point clouds', 'generative adversarial networks', 'multi-scale 3D modelling', 'Laplacian pyramid']" 118,1805.10109,Sylvie Huet,"Sylvie Huet, Guillaume Deffuant, Armelle Nugier, Michel Streith, Serge Guimond",Resisting hostility generated by terror: An agent-based study,,,10.1371/journal.pone.0209907,,cs.MA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We aim to study through an agent-based model the cultural conditions leading to a decrease or an increase of discrimination between groups after a major cultural threat such as a terrorist attack. We propose an agent-based model of cultural dynamics inspired from the social psychological theories. An agent has a cultural identity comprised of the most acceptable positions about each of the different cultural worldviews corresponding to the main cultural groups of the considered society and a margin of acceptance around each of these most acceptable positions. An agent forms an attitude about another agent depending on the similarity between their cultural identities. When a terrorist attack is perpetrated in the name of an extreme cultural identity, the negatively perceived agents from this extreme cultural identity modify their margins of acceptance in order to differentiate themselves more from the threatening cultural identity. We generated a set of populations with cultural identities compatible with data given by a survey on groups' attitudes among a large sample representative of the population of France; we then simulated the reaction of these agents facing a threat. For most populations, the average attitude toward agents with the same preferred worldview as the terrorists becomes more negative; however, when the population shows some cultural properties, we noticed the opposite effect as the average attitude of the population becomes less negative. This particular context requires that the agents sharing the same preferred worldview with the terrorists strongly differentiate themselves from the terrorists' extreme cultural identity and that the other agents be aware of these changes. ","[{'version': 'v1', 'created': 'Fri, 25 May 2018 12:39:27 GMT'}]",2019-03-06,"[['Huet', 'Sylvie', ''], ['Deffuant', 'Guillaume', ''], ['Nugier', 'Armelle', ''], ['Streith', 'Michel', ''], ['Guimond', 'Serge', '']]","['Intergroup hostility', 'culture dynamics', 'Terror Management Theory', 'self-opinion']" 119,1111.5485,Willy Picard,Willy Picard,Membership(s) and compliance(s) with class-based graphs,"7 pages, 4 figures",Information Processing Letters. 112 (2012) 849-855,10.1016/j.ipl.2012.08.005,,cs.SI physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Besides the need for a better understanding of networks, there is a need for prescriptive models and tools to specify requirements concerning networks and their associated graph representations. We propose class-based graphs as a means to specify requirements concerning object-based graphs. Various variants of membership are proposed as special relations between class-based and object-based graphs at the local level, while various variants of compliance are proposed at the global level. ","[{'version': 'v1', 'created': 'Wed, 23 Nov 2011 13:15:11 GMT'}, {'version': 'v2', 'created': 'Thu, 30 Aug 2012 10:24:03 GMT'}]",2012-08-31,"[['Picard', 'Willy', '']]","['data structures', 'object-based graph', 'class-based graph', 'class membership', 'compliance']" 120,2105.11277,Andr\'e Vict\'oria Matias,"Andr\'e Vict\'oria Matias, Jo\~ao Gustavo Atkinson Amorim, Luiz Antonio Buschetto Macarini, Allan Cerentini, Alexandre Sherlley Casimiro Onofre, Fabiana Botelho de Miranda Onofre, Felipe Perozzo Dalto\'e, Marcelo Ricardo Stemmer, Aldo von Wangenheim","What is the State of the Art of Computer Vision-Assisted Cytology? A Systematic Literature Review",,,10.1016/j.compmedimag.2021.101934,,cs.CV eess.IV,http://creativecommons.org/licenses/by-nc-nd/4.0/," Cytology is a low-cost and non-invasive diagnostic procedure employed to support the diagnosis of a broad range of pathologies. Computer Vision technologies, by automatically generating quantitative and objective descriptions of examinations' contents, can help minimize the chances of misdiagnoses and shorten the time required for analysis. To identify the state-of-art of computer vision techniques currently applied to cytology, we conducted a Systematic Literature Review. We analyzed papers published in the last 5 years. The initial search was executed in September 2020 and resulted in 431 articles. After applying the inclusion/exclusion criteria, 157 papers remained, which we analyzed to build a picture of the tendencies and problems present in this research area, highlighting the computer vision methods, staining techniques, evaluation metrics, and the availability of the used datasets and computer code. As a result, we identified that the most used methods in the analyzed works are deep learning-based (70 papers), while fewer works employ classic computer vision only (101 papers). The most recurrent metric used for classification and object detection was the accuracy (33 papers and 5 papers), while for segmentation it was the Dice Similarity Coefficient (38 papers). Regarding staining techniques, Papanicolaou was the most employed one (130 papers), followed by H&E (20 papers) and Feulgen (5 papers). Twelve of the datasets used in the papers are publicly available, with the DTU/Herlev dataset being the most used one. We conclude that there still is a lack of high-quality datasets for many types of stains and most of the works are not mature enough to be applied in a daily clinical diagnostic routine. We also identified a growing tendency towards adopting deep learning-based approaches as the methods of choice. ","[{'version': 'v1', 'created': 'Mon, 24 May 2021 13:50:45 GMT'}]",2021-05-28,"[['Matias', 'André Victória', ''], ['Amorim', 'João Gustavo Atkinson', ''], ['Macarini', 'Luiz Antonio Buschetto', ''], ['Cerentini', 'Allan', ''], ['Onofre', 'Alexandre Sherlley Casimiro', ''], ['Onofre', 'Fabiana Botelho de Miranda', ''], ['Daltoé', 'Felipe Perozzo', ''], ['Stemmer', 'Marcelo Ricardo', ''], ['von Wangenheim', 'Aldo', '']]","['Cytology', 'Segmentation', 'Classification', 'Deep Learning', 'Computer Vision']" 121,1407.1103,Hanbaek Lyu,Hanbaek Lyu,Synchronization of finite-state pulse-coupled oscillators,"23 pages, 17 figures, To appear in Physica D: Nonlinear Phenomena",,10.1016/j.physd.2015.03.007,,cs.SY math.CO math.DS math.OC nlin.CG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We propose a novel generalized cellular automaton(GCA) model for discrete-time pulse-coupled oscillators and study the emergence of synchrony. Given a finite simple graph and an integer $n\ge 3$, each vertex is an identical oscillator of period $n$ with the following weak coupling along the edges: each oscillator inhibits its phase update if it has at least one neighboring oscillator at a particular ""blinking"" state and if its state is ahead of this blinking state. We obtain conditions on initial configurations and on network topologies for which states of all vertices eventually synchronize. We show that our GCA model synchronizes arbitrary initial configurations on paths, trees, and with random perturbation, any connected graph. In particular, our main result is the following local-global principle for tree networks: for $n\in \{3,4,5,6\}$, any $n$-periodic network on a tree synchronizes arbitrary initial configuration if and only if the maximum degree of the tree is less than the period $n$. ","[{'version': 'v1', 'created': 'Fri, 4 Jul 2014 01:02:45 GMT'}, {'version': 'v2', 'created': 'Sat, 12 Jul 2014 18:16:35 GMT'}, {'version': 'v3', 'created': 'Mon, 30 Mar 2015 02:00:17 GMT'}]",2018-01-25,"[['Lyu', 'Hanbaek', '']]","['Synchronization', 'pulse-coupled oscillators', 'generalized cellular automata', 'digital clocksynchronization', 'self-stabilization', 'path', 'tree', 'absorbing chain']" 122,1609.08141,Trung Kien Vu,"Trung Kien Vu, Sungoh Kwon","On-Demand Routing Algorithm with Mobility Prediction in the Mobile Ad-hoc Networks","Preprint submitted to Computer Networks, 10 pages, 15 figues. arXiv admin note: text overlap with arXiv:1604.03330",,10.1016/j.comnet.2005.11.008,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, we propose an ad-hoc on-demand distance vector routing algorithm for mobile ad-hoc networks taking into account node mobility. Changeable topology of such mobile ad-hoc networks provokes overhead messages in order to search available routes and maintain found routes. The overhead messages impede data delivery from sources to destination and deteriorate network performance. To overcome such a challenge, our proposed algorithm estimates link duration based neighboring node mobility and chooses the most reliable route. The proposed algorithm also applies the estimate for route maintenance to lessen the number of overhead messages. Via simulations, the proposed algorithm is verified in various mobile environments. In the low mobility environment, by reducing route maintenance messages, the proposed algorithm significantly improves network performance such as packet data rate and end-to-end delay. In the high mobility environment, the reduction of route discovery message enhances network performance since the proposed algorithm provides more reliable routes. ","[{'version': 'v1', 'created': 'Mon, 26 Sep 2016 19:54:39 GMT'}]",2017-06-30,"[['Vu', 'Trung Kien', ''], ['Kwon', 'Sungoh', '']]","['Mobility Prediction', 'Longest and Stable Route', 'Link Duration Probability', 'End-to-End Delay', 'Low Latency']" 123,1003.5435,Secretary Aircc Journal,"Kilari Veera Swamy (1), B.Chandra Mohan (2), Y.V.Bhaskar Reddy (3) and S.Srinivas Kumar (4) ((1) QISCET, Ongole, India, (2) BEC, Bapatla, India, (3) QISCET, Ongole, India and (4) JNTU, Kakinada, India)",Image Compression and Watermarking scheme using Scalar Quantization,"11 Pages, IJNGN Journal 2010",International Journal of Next-Generation Networks 2.1 (2010) 37-47,10.5121/ijngn.2010.2104,,cs.CV cs.MM,http://creativecommons.org/licenses/by-nc-sa/3.0/," This paper presents a new compression technique and image watermarking algorithm based on Contourlet Transform (CT). For image compression, an energy based quantization is used. Scalar quantization is explored for image watermarking. Double filter bank structure is used in CT. The Laplacian Pyramid (LP) is used to capture the point discontinuities, and then followed by a Directional Filter Bank (DFB) to link point discontinuities. The coefficients of down sampled low pass version of LP decomposed image are re-ordered in a pre-determined manner and prediction algorithm is used to reduce entropy (bits/pixel). In addition, the coefficients of CT are quantized based on the energy in the particular band. The superiority of proposed algorithm to JPEG is observed in terms of reduced blocking artifacts. The results are also compared with wavelet transform (WT). Superiority of CT to WT is observed when the image contains more contours. The watermark image is embedded in the low pass image of contourlet decomposition. The watermark can be extracted with minimum error. In terms of PSNR, the visual quality of the watermarked image is exceptional. The proposed algorithm is robust to many image attacks and suitable for copyright protection applications. ","[{'version': 'v1', 'created': 'Mon, 29 Mar 2010 06:51:17 GMT'}]",2010-07-15,"[['Swamy', 'Kilari Veera', ''], ['Mohan', 'B. Chandra', ''], ['Reddy', 'Y. V. Bhaskar', ''], ['Kumar', 'S. Srinivas', '']]","['Contourlet Transform', 'Directional Filter Bank', 'Laplacian Pyramid', 'Topological re-ordering', 'Quantization']" 124,1801.00680,Caelan Garrett,"Caelan Reed Garrett, Tom\'as Lozano-P\'erez, and Leslie Pack Kaelbling",Sampling-Based Methods for Factored Task and Motion Planning,,"The International Journal of Robotics Research (IJRR), 2018",10.1177/0278364918802962,,cs.RO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper presents a general-purpose formulation of a large class of discrete-time planning problems, with hybrid state and control-spaces, as factored transition systems. Factoring allows state transitions to be described as the intersection of several constraints each affecting a subset of the state and control variables. Robotic manipulation problems with many movable objects involve constraints that only affect several variables at a time and therefore exhibit large amounts of factoring. We develop a theoretical framework for solving factored transition systems with sampling-based algorithms. The framework characterizes conditions on the submanifold in which solutions lie, leading to a characterization of robust feasibility that incorporates dimensionality-reducing constraints. It then connects those conditions to corresponding conditional samplers that can be composed to produce values on this submanifold. We present two domain-independent, probabilistically complete planning algorithms that take, as input, a set of conditional samplers. We demonstrate the empirical efficiency of these algorithms on a set of challenging task and motion planning problems involving picking, placing, and pushing. ","[{'version': 'v1', 'created': 'Tue, 2 Jan 2018 15:15:35 GMT'}, {'version': 'v2', 'created': 'Thu, 3 May 2018 14:03:33 GMT'}, {'version': 'v3', 'created': 'Tue, 12 Feb 2019 18:40:09 GMT'}]",2019-02-13,"[['Garrett', 'Caelan Reed', ''], ['Lozano-Pérez', 'Tomás', ''], ['Kaelbling', 'Leslie Pack', '']]","['task and motion planning', 'manipulation planning', 'AI reasoning']" 125,1509.03208,AbdelRahim Elmadany,"Abdelrahim A Elmadany, Sherif M Abdou and Mervat Gheith",Towards Understanding Egyptian Arabic Dialogues,arXiv admin note: substantial text overlap with arXiv:1505.03081,"International Journal of Computer Applications 120(220, PP 7-12, June 2015",10.5120/21390-4427,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Labelling of user's utterances to understanding his attends which called Dialogue Act (DA) classification, it is considered the key player for dialogue language understanding layer in automatic dialogue systems. In this paper, we proposed a novel approach to user's utterances labeling for Egyptian spontaneous dialogues and Instant Messages using Machine Learning (ML) approach without relying on any special lexicons, cues, or rules. Due to the lack of Egyptian dialect dialogue corpus, the system evaluated by multi-genre corpus includes 4725 utterances for three domains, which are collected and annotated manually from Egyptian call-centers. The system achieves F1 scores of 70. 36% overall domains. ","[{'version': 'v1', 'created': 'Tue, 14 Jul 2015 02:47:40 GMT'}]",2015-09-11,"[['Elmadany', 'Abdelrahim A', ''], ['Abdou', 'Sherif M', ''], ['Gheith', 'Mervat', '']]","['Dialogue Act Classification', 'Arabic Dialogue Understanding', 'Egyptian Arabic Dialect', 'Arabic Instant Messages']" 126,1811.09982,Battista Biggio,"Battista Biggio, Ignazio Pillai, Samuel Rota Bul\`o, Davide Ariu, Marcello Pelillo, Fabio Roli",Is Data Clustering in Adversarial Settings Secure?,,"Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security, AISec '13, pages 87-98, New York, NY, USA, 2013. ACM",,,cs.LG cs.CR cs.CV stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Clustering algorithms have been increasingly adopted in security applications to spot dangerous or illicit activities. However, they have not been originally devised to deal with deliberate attack attempts that may aim to subvert the clustering process itself. Whether clustering can be safely adopted in such settings remains thus questionable. In this work we propose a general framework that allows one to identify potential attacks against clustering algorithms, and to evaluate their impact, by making specific assumptions on the adversary's goal, knowledge of the attacked system, and capabilities of manipulating the input data. We show that an attacker may significantly poison the whole clustering process by adding a relatively small percentage of attack samples to the input data, and that some attack samples may be obfuscated to be hidden within some existing clusters. We present a case study on single-linkage hierarchical clustering, and report experiments on clustering of malware samples and handwritten digits. ","[{'version': 'v1', 'created': 'Sun, 25 Nov 2018 10:21:59 GMT'}]",2018-11-27,"[['Biggio', 'Battista', ''], ['Pillai', 'Ignazio', ''], ['Bulò', 'Samuel Rota', ''], ['Ariu', 'Davide', ''], ['Pelillo', 'Marcello', ''], ['Roli', 'Fabio', '']]","['Adversarial learning', 'Unsupervised Learning', 'Clustering', 'Security Evaluation', 'Computer Security', 'Malware Detection']" 127,1502.05742,Ahmadreza Baghaie,"Ahmadreza Baghaie, Roshan M. D'souza, Zeyun Yu","Application of Independent Component Analysis Techniques in Speckle Noise Reduction of Retinal OCT Images",,,10.1016/j.ijleo.2016.03.078,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Optical Coherence Tomography (OCT) is an emerging technique in the field of biomedical imaging, with applications in ophthalmology, dermatology, coronary imaging etc. OCT images usually suffer from a granular pattern, called speckle noise, which restricts the process of interpretation. Therefore the need for speckle noise reduction techniques is of high importance. To the best of our knowledge, use of Independent Component Analysis (ICA) techniques has never been explored for speckle reduction of OCT images. Here, a comparative study of several ICA techniques (InfoMax, JADE, FastICA and SOBI) is provided for noise reduction of retinal OCT images. Having multiple B-scans of the same location, the eye movements are compensated using a rigid registration technique. Then, different ICA techniques are applied to the aggregated set of B-scans for extracting the noise-free image. Signal-to-Noise-Ratio (SNR), Contrast-to-Noise-Ratio (CNR) and Equivalent-Number-of-Looks (ENL), as well as analysis on the computational complexity of the methods, are considered as metrics for comparison. The results show that use of ICA can be beneficial, especially in case of having fewer number of B-scans. ","[{'version': 'v1', 'created': 'Thu, 19 Feb 2015 22:49:37 GMT'}, {'version': 'v2', 'created': 'Mon, 15 Jun 2015 00:33:56 GMT'}, {'version': 'v3', 'created': 'Tue, 28 Jul 2015 15:31:04 GMT'}]",2016-05-25,"[['Baghaie', 'Ahmadreza', ''], [""D'souza"", 'Roshan M.', ''], ['Yu', 'Zeyun', '']]","['Independent Component Analysis', 'Speckle Reduction', 'OpticalCoherence Tomography (OCT)']" 128,1003.4146,Michael Bommarito II,"Michael J. Bommarito II, Daniel Martin Katz",A Mathematical Approach to the Study of the United States Code,"5 pages, 6 figures, 2 tables.",,10.1016/j.physa.2010.05.057,,cs.IR cs.CY cs.DL physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The United States Code (Code) is a document containing over 22 million words that represents a large and important source of Federal statutory law. Scholars and policy advocates often discuss the direction and magnitude of changes in various aspects of the Code. However, few have mathematically formalized the notions behind these discussions or directly measured the resulting representations. This paper addresses the current state of the literature in two ways. First, we formalize a representation of the United States Code as the union of a hierarchical network and a citation network over vertices containing the language of the Code. This representation reflects the fact that the Code is a hierarchically organized document containing language and explicit citations between provisions. Second, we use this formalization to measure aspects of the Code as codified in October 2008, November 2009, and March 2010. These measurements allow for a characterization of the actual changes in the Code over time. Our findings indicate that in the recent past, the Code has grown in its amount of structure, interdependence, and language. ","[{'version': 'v1', 'created': 'Mon, 22 Mar 2010 12:41:01 GMT'}]",2015-05-18,"[['Bommarito', 'Michael J.', 'II'], ['Katz', 'Daniel Martin', '']]","['United States Code', 'hierarchical network', 'citation network', 'language', 'computational legal studies']" 129,1003.5439,Secretary Aircc Journal,"Ratul Kr. Baruah (Tezpur University, India)",Design of A Low Power Low Voltage CMOS Opamp,"8 Pages, VLSICS Journal","International Journal Of VLSI Design & Communication Systems 1.1 (2010) 1-8",10.5121/vlsic.2010.1101,,cs.OH,http://creativecommons.org/licenses/by-nc-sa/3.0/," In this paper a CMOS operational amplifier is presented which operates at 2V power supply and 1microA input bias current at 0.8 micron technology using non conventional mode of operation of MOS transistors and whose input is depended on bias current. The unique behaviour of the MOS transistors in subthreshold region not only allows a designer to work at low input bias current but also at low voltage. While operating the device at weak inversion results low power dissipation but dynamic range is degraded. Optimum balance between power dissipation and dynamic range results when the MOS transistors are operated at moderate inversion. Power is again minimised by the application of input dependant bias current using feedback loops in the input transistors of the differential pair with two current substractors. In comparison with the reported low power low voltage opamps at 0.8 micron technology, this opamp has very low standby power consumption with a high driving capability and operates at low voltage. The opamp is fairly small (0.0084 mm 2) and slew rate is more than other low power low voltage opamps reported at 0.8 um technology [1,2]. Vittoz at al [3] reported that slew rate can be improved by adaptive biasing technique and power dissipation can be reduced by operating the device in weak inversion. Though lower power dissipation is achieved the area required by the circuit is very large and speed is too small. So, operating the device in moderate inversion is a good solution. Also operating the device in subthreshold region not only allows lower power dissipation but also a lower voltage operation is achieved. ","[{'version': 'v1', 'created': 'Mon, 29 Mar 2010 07:03:46 GMT'}]",2010-07-15,"[['Baruah', 'Ratul Kr.', '', 'Tezpur University, India']]","['Opamp', 'Adaptive biasing', 'Low power', 'Low voltage', 'Current Substractor']" 130,0903.2174,Amir Leshem,Amir Leshem and Ephi Zehavi,"Game theory and the frequency selective interference channel - A tutorial",,"IEEE Signal Processing Magazine. Special issue on applications of game theory in signal processing and communications. Volume 26, Issue 4, pages 28-40. Sep. 2009",10.1109/MSP.2009.933372,,cs.IT cs.GT math.IT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper provides a tutorial overview of game theoretic techniques used for communication over frequency selective interference channels. We discuss both competitive and cooperative techniques. Keywords: Game theory, competitive games, cooperative games, Nash Equilibrium, Nash bargaining solution, Generalized Nash games, Spectrum optimization, distributed coordination, interference channel, multiple access channel, iterative water-filling. ","[{'version': 'v1', 'created': 'Thu, 12 Mar 2009 14:42:13 GMT'}]",2010-08-10,"[['Leshem', 'Amir', ''], ['Zehavi', 'Ephi', '']]","['Game theory', 'competitive games', 'cooperative games', 'Nash Equilibrium', 'Nash bargaining']" 131,1912.05897,Runhua Xu,"Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar and Heiko Ludwig","HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning","12 pages, AISec 2019",,10.1145/3338501.3357371,,cs.CR cs.LG,http://creativecommons.org/licenses/by/4.0/," Federated learning has emerged as a promising approach for collaborative and privacy-preserving learning. Participants in a federated learning process cooperatively train a model by exchanging model parameters instead of the actual training data, which they might want to keep private. However, parameter interaction and the resulting model still might disclose information about the training data used. To address these privacy concerns, several approaches have been proposed based on differential privacy and secure multiparty computation (SMC), among others. They often result in large communication overhead and slow training time. In this paper, we propose HybridAlpha, an approach for privacy-preserving federated learning employing an SMC protocol based on functional encryption. This protocol is simple, efficient and resilient to participants dropping out. We evaluate our approach regarding the training time and data volume exchanged using a federated learning process to train a CNN on the MNIST data set. Evaluation against existing crypto-based SMC solutions shows that HybridAlpha can reduce the training time by 68% and data transfer volume by 92% on average while providing the same model performance and privacy guarantees as the existing solutions. ","[{'version': 'v1', 'created': 'Thu, 12 Dec 2019 12:37:39 GMT'}]",2019-12-13,"[['Xu', 'Runhua', ''], ['Baracaldo', 'Nathalie', ''], ['Zhou', 'Yi', ''], ['Anwar', 'Ali', ''], ['Ludwig', 'Heiko', '']]","['Federated learning', 'privacy', 'functional encryption', 'neural networks']" 132,1206.0375,Hector Zenil,Hector Zenil and James A.R. Marshall,Some Computational Aspects of Essential Properties of Evolution and Life,"Invited contribution to the ACM Ubiquity Symposium on Evolutionary Computation","ACM Ubiquity, Symposium on Evolutionary Computation, 2012",,,cs.CC cs.IT math.IT nlin.AO nlin.PS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," While evolution has inspired algorithmic methods of heuristic optimisation, little has been done in the way of using concepts of computation to advance our understanding of salient aspects of biological phenomena. We argue that under reasonable assumptions, interesting conclusions can be drawn that are of relevance to behavioural evolution. We will focus on two important features of life--robustness and fitness--which, we will argue, are related to algorithmic probability and to the thermodynamics of computation, disciplines that may be capable of modelling key features of living organisms, and which can be used in formulating new algorithms of evolutionary computation. ","[{'version': 'v1', 'created': 'Sat, 2 Jun 2012 13:00:11 GMT'}]",2012-06-05,"[['Zenil', 'Hector', ''], ['Marshall', 'James A. R.', '']]","['algorithmic probability', 'information theory', 'thermodynamics of computation', 'biological robustness', 'evolutionary computation', 'fitness']" 133,1701.00180,Hamid Hamraz,"Hamid Hamraz, Marco A. Contreras, and Jun Zhang","A scalable approach for tree segmentation within small-footprint airborne LiDAR data","The replacement version is exactly the same and only the journal biblio information and the DOI of the published version was added",Computers and Geosciences 102 (pp. 139-147): Elsevier (2017),10.1016/j.cageo.2017.02.017,,cs.DC cs.CE,http://creativecommons.org/licenses/by/4.0/," This paper presents a distributed approach that scales up to segment tree crowns within a LiDAR point cloud representing an arbitrarily large forested area. The approach uses a single-processor tree segmentation algorithm as a building block in order to process the data delivered in the shape of tiles in parallel. The distributed processing is performed in a master-slave manner, in which the master maintains the global map of the tiles and coordinates the slaves that segment tree crowns within and across the boundaries of the tiles. A minimal bias was introduced to the number of detected trees because of trees lying across the tile boundaries, which was quantified and adjusted for. Theoretical and experimental analyses of the runtime of the approach revealed a near linear speedup. The estimated number of trees categorized by crown class and the associated error margins as well as the height distribution of the detected trees aligned well with field estimations, verifying that the distributed approach works correctly. The approach enables providing information of individual tree locations and point cloud segments for a forest-level area in a timely manner, which can be used to create detailed remotely sensed forest inventories. Although the approach was presented for tree segmentation within LiDAR point clouds, the idea can also be generalized to scale up processing other big spatial datasets. Highlights: - A scalable distributed approach for tree segmentation was developed and theoretically analyzed. - ~2 million trees in a 7440 ha forest was segmented in 2.5 hours using 192 cores. - 2% false positive trees were identified as a result of the distributed run. - The approach can be used to scale up processing other big spatial data ","[{'version': 'v1', 'created': 'Sun, 1 Jan 2017 00:10:42 GMT'}, {'version': 'v2', 'created': 'Sun, 19 Mar 2017 21:13:31 GMT'}]",2017-03-21,"[['Hamraz', 'Hamid', ''], ['Contreras', 'Marco A.', ''], ['Zhang', 'Jun', '']]","['distributed computing', 'big spatial data', 'remote sensing', 'remote forest inventory', 'individual tree']" 134,1705.05787,Luiz Gustavo Hafemann,"Luiz G. Hafemann, Robert Sabourin, Luiz S. Oliveira","Learning Features for Offline Handwritten Signature Verification using Deep Convolutional Neural Networks",,,10.1016/j.patcog.2017.05.012,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Verifying the identity of a person using handwritten signatures is challenging in the presence of skilled forgeries, where a forger has access to a person's signature and deliberately attempt to imitate it. In offline (static) signature verification, the dynamic information of the signature writing process is lost, and it is difficult to design good feature extractors that can distinguish genuine signatures and skilled forgeries. This reflects in a relatively poor performance, with verification errors around 7% in the best systems in the literature. To address both the difficulty of obtaining good features, as well as improve system performance, we propose learning the representations from signature images, in a Writer-Independent format, using Convolutional Neural Networks. In particular, we propose a novel formulation of the problem that includes knowledge of skilled forgeries from a subset of users in the feature learning process, that aims to capture visual cues that distinguish genuine signatures and forgeries regardless of the user. Extensive experiments were conducted on four datasets: GPDS, MCYT, CEDAR and Brazilian PUC-PR datasets. On GPDS-160, we obtained a large improvement in state-of-the-art performance, achieving 1.72% Equal Error Rate, compared to 6.97% in the literature. We also verified that the features generalize beyond the GPDS dataset, surpassing the state-of-the-art performance in the other datasets, without requiring the representation to be fine-tuned to each particular dataset. ","[{'version': 'v1', 'created': 'Tue, 16 May 2017 16:08:09 GMT'}]",2017-05-17,"[['Hafemann', 'Luiz G.', ''], ['Sabourin', 'Robert', ''], ['Oliveira', 'Luiz S.', '']]","['Signature Verification', 'Convolutional Neural Networks', 'Feature Learning', 'Deep']" 135,1807.05848,Dmitry Lande,"Andrei Snarskii, Dmyto Lande, Dmyto Manko","K-method of calculating the mutual influence of nodes in a directed weight complex networks","16 pages, 2 appendix",,10.1016/j.physa.2019.04.135,,cs.SI physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A new characteristic of paired nodes in a directed weight complex network is considered. A method (named as K-method) of the characteristics calculation for complex networks is proposed. The method is based on transforming the initial network with the subsequent application of the Kirchhoff rules. The scope of the method for sparse complex networks is proposed. The nodes of these complex networks are concepts of the real world, and the connections have a cause-effect character of the so-called ""cognitive maps"". Two new characteristics of concept nodes having a semantic interpretation are proposed, namely ""pressure"" and ""influence"" taking into account the influence of all nodes on each other. ","[{'version': 'v1', 'created': 'Mon, 16 Jul 2018 13:30:43 GMT'}]",2019-06-26,"[['Snarskii', 'Andrei', ''], ['Lande', 'Dmyto', ''], ['Manko', 'Dmyto', '']]","['complex networks', 'K-method', 'mutual influence', 'nodes ranking']" 136,1411.2406,Vladimir Khlevnoy,"Vladimir A. Khlevnoy, Andrey A. Shchurov",A Formal Approach to Distributed System Security Test Generation,"7 pages, 6 figures, 3 tables, Published with International Journal of Computer Trends and Technology (IJCTT). arXiv admin note: text overlap with arXiv:1410.1747","International Journal of Computer Trends and Technology (IJCTT) V16(3), 2014 pg 121-127",10.14445/22312803/IJCTT-V16P130,,cs.CR cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Deployment of distributed systems sets high requirements for procedures for the security testing of these systems. This work introduces: (1) a list of typical threats based on standards and actual practices; (2) an extended six-layered model for test generation mission on the basis of technical specifications and end-user requirements. Based on the list of typical threats and the multilayer model, we describe a formal approach to the automated design and generation of security mechanisms checklists for complex distributed systems. ","[{'version': 'v1', 'created': 'Mon, 10 Nov 2014 13:01:19 GMT'}]",2014-11-11,"[['Khlevnoy', 'Vladimir A.', ''], ['Shchurov', 'Andrey A.', '']]","['distributed systems', 'security testing', 'formal']" 137,1511.07846,Leonidas Fegaras,Leonidas Fegaras,Incremental Query Processing on Big Data Streams,Extended version of a paper submitted to a journal,,10.1109/TKDE.2016.2601103,,cs.DB cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper addresses online query processing for large-scale, incremental data analysis on a distributed stream processing engine (DSPE). Our goal is to convert any SQL-like query to an incremental DSPE program automatically. In contrast to other approaches, we derive incremental programs that return accurate results, not approximate answers. This is accomplished by retaining a minimal state during the query evaluation lifetime and by using incremental evaluation techniques to return an accurate snapshot answer at each time interval that depends on the current state and the latest batches of data. Our methods can handle many forms of queries on nested data collections, including iterative and nested queries, group-by with aggregation, and equi-joins. Finally, we report on a prototype implementation of our framework, called MRQL Streaming, running on top of Spark and we experimentally validate the effectiveness of our methods. ","[{'version': 'v1', 'created': 'Tue, 24 Nov 2015 19:55:09 GMT'}, {'version': 'v2', 'created': 'Sun, 17 Jan 2016 22:59:08 GMT'}, {'version': 'v3', 'created': 'Sun, 6 Mar 2016 19:21:25 GMT'}]",2016-08-23,"[['Fegaras', 'Leonidas', '']]","['Incremental Data Processing', 'DistributedStream Processing', 'Big Data', 'MRQL', 'Spark']" 138,1701.05751,Arnaud Martin,"Siwar Jendoubi (LARODEC, DRUID, CERT), Arnaud Martin (IRISA, UR1, DRUID), Ludovic Li\'etard (IRISA), Ben Hend (CERT), Ben Boutheina (LARODEC)",Two Evidential Data Based Models for Influence Maximization in Twitter,"Knowledge-Based Systems, Elsevier, 2017",,10.1016/j.knosys.2017.01.014,,cs.SI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Influence maximization is the problem of selecting a set of influential users in the social network. Those users could adopt the product and trigger a large cascade of adoptions through the "" word of mouth "" effect. In this paper, we propose two evidential influence maximization models for Twitter social network. The proposed approach uses the theory of belief functions to estimate users influence. Furthermore, the proposed influence estimation measure fuses many influence aspects in Twitter, like the importance of the user in the network structure and the popularity of user's tweets (messages). In our experiments, we compare the proposed solutions to existing ones and we show the performance of our models. ","[{'version': 'v1', 'created': 'Fri, 20 Jan 2017 10:39:13 GMT'}]",2017-01-23,"[['Jendoubi', 'Siwar', '', 'LARODEC, DRUID, CERT'], ['Martin', 'Arnaud', '', 'IRISA, UR1,\n DRUID'], ['Liétard', 'Ludovic', '', 'IRISA'], ['Hend', 'Ben', '', 'CERT'], ['Boutheina', 'Ben', '', 'LARODEC']]","['Influence maximization', 'Theory of belief functions', 'Twitter socialnetwork', 'Influence measure']" 139,2105.04294,Tonatiuh Hern\'andez-Del-Toro M.Sc.,"Tonatiuh Hern\'andez-Del-Toro, Carlos A. Reyes-Garc\'ia, Luis Villase\~nor-Pineda","Toward asynchronous EEG-based BCI: Detecting imagined words segments in continuous EEG signals","10 pages, 14 figures","Biomedical Signal Processing and Control. Volume 65 (2021), 102351",10.1016/j.bspc.2020.102351,,cs.HC cs.LG eess.SP,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," An asynchronous Brain--Computer Interface (BCI) based on imagined speech is a tool that allows to control an external device or to emit a message at the moment the user desires to by decoding EEG signals of imagined speech. In order to correctly implement these types of BCI, we must be able to detect from a continuous signal, when the subject starts to imagine words. In this work, five methods of feature extraction based on wavelet decomposition, empirical mode decomposition, frequency energies, fractal dimension and chaos theory features are presented to solve the task of detecting imagined words segments from continuous EEG signals as a preliminary study for a latter implementation of an asynchronous BCI based on imagined speech. These methods are tested in three datasets using four different classifiers and the higher F1 scores obtained are 0.73, 0.79, and 0.68 for each dataset, respectively. This results are promising to build a system that automatizes the segmentation of imagined words segments for latter classification. ","[{'version': 'v1', 'created': 'Tue, 13 Apr 2021 00:13:42 GMT'}]",2021-05-11,"[['Hernández-Del-Toro', 'Tonatiuh', ''], ['Reyes-García', 'Carlos A.', ''], ['Villaseñor-Pineda', 'Luis', '']]","['Imagined speech', 'Asynchronous BCI', 'Signal processing']" 140,1109.0397,Gabriele D'Angelo,"Moreno Marzolla, Stefano Ferretti, Gabriele D'Angelo",Auction-Based Resource Allocation in Digital Ecosystems,"Proceedings of the 6th International Conference on MOBILe Wireless MiddleWARE, Operating Systems, and Applications (MobilWare 2013). Bologna, Italy, November 11-12, 2013",,10.1109/Mobilware.2013.16,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The proliferation of portable devices (PDAs, smartphones, digital multimedia players, and so forth) allows mobile users to carry around a pool of computing, storage and communication resources. Sharing these resources with other users (""Digital Organisms"" -- DOs) opens the door to novel interesting scenarios, where people trade resources to allow the execution, anytime and anywhere, of applications that require a mix of capabilities. In this paper we present a fully distributed approach for resource sharing among multiple devices owned by different mobile users. Our scheme enables DOs to trade computing/networking facilities through an auction-based mechanism, without the need of a central control. We use a set of numerical experiments to compare our approach with an optimal (centralized) allocation strategy that, given the set of resource demands and offers, maximizes the number of matches. Results confirm the effectiveness of our approach since it produces a fair allocation of resources with low computational cost, providing DOs with the means to form an altruistic digital ecosystem. ","[{'version': 'v1', 'created': 'Fri, 2 Sep 2011 10:08:38 GMT'}, {'version': 'v2', 'created': 'Wed, 30 Jul 2014 12:22:44 GMT'}]",2014-07-31,"[['Marzolla', 'Moreno', ''], ['Ferretti', 'Stefano', ''], [""D'Angelo"", 'Gabriele', '']]","['Resource Allocation', 'Optimization', 'Peer-to-Peer Systems', 'Ad-hoc Networks']" 141,2006.06867,Onur Varol,"Mohsen Sayyadiharikandeh, Onur Varol, Kai-Cheng Yang, Alessandro Flammini, Filippo Menczer",Detection of Novel Social Bots by Ensembles of Specialized Classifiers,"8 pages, 10 figures, Accepted to CIKM'20","Proc. 29th ACM International Conference on Information and Knowledge Management (CIKM), pages 2725-2732, 2020",10.1145/3340531.3412698,,cs.SI cs.IR cs.LG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion. While researchers have developed sophisticated methods to detect abuse, novel bots with diverse behaviors evade detection. We show that different types of bots are characterized by different behavioral features. As a result, supervised learning techniques suffer severe performance deterioration when attempting to detect behaviors not observed in the training data. Moreover, tuning these models to recognize novel bots requires retraining with a significant amount of new annotations, which are expensive to obtain. To address these issues, we propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule. The ensemble of specialized classifiers (ESC) can better generalize, leading to an average improvement of 56\% in F1 score for unseen accounts across datasets. Furthermore, novel bot behaviors are learned with fewer labeled examples during retraining. We deployed ESC in the newest version of Botometer, a popular tool to detect social bots in the wild, with a cross-validation AUC of 0.99. ","[{'version': 'v1', 'created': 'Thu, 11 Jun 2020 22:59:59 GMT'}, {'version': 'v2', 'created': 'Fri, 14 Aug 2020 20:04:21 GMT'}]",2020-11-30,"[['Sayyadiharikandeh', 'Mohsen', ''], ['Varol', 'Onur', ''], ['Yang', 'Kai-Cheng', ''], ['Flammini', 'Alessandro', ''], ['Menczer', 'Filippo', '']]","['Social media', 'social bots', 'machine learning', 'cross-domain', 'recall']" 142,1901.10367,Tatyana Ivanova,Philippe Balbiani and Tatyana Ivanova,"Representation theorems for extended contact algebras based on equivalence relations",,,10.1007/s11225-020-09923-0,,cs.LO math.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The aim of this paper is to give new representation theorems for extended contact algebras. These representation theorems are based on equivalence relations. ","[{'version': 'v1', 'created': 'Tue, 29 Jan 2019 16:34:19 GMT'}]",2020-09-22,"[['Balbiani', 'Philippe', ''], ['Ivanova', 'Tatyana', '']]","['Regular closed subsets', 'Contact algebras', 'Extended contact algebras', 'Topological representation', 'Relational representation']" 143,1810.10464,Meisam Mohammady memoh,"Meisam Mohammady, Lingyu Wang, Yuan Hong, Habib Louafi, Makan Pourzandi, Mourad Debbabi",Preserving Both Privacy and Utility in Network Trace Anonymization,,,10.1145/3243734.3243809,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," As network security monitoring grows more sophisticated, there is an increasing need for outsourcing such tasks to third-party analysts. However, organizations are usually reluctant to share their network traces due to privacy concerns over sensitive information, e.g., network and system configuration, which may potentially be exploited for attacks. In cases where data owners are convinced to share their network traces, the data are typically subjected to certain anonymization techniques, e.g., CryptoPAn, which replaces real IP addresses with prefix-preserving pseudonyms. However, most such techniques either are vulnerable to adversaries with prior knowledge about some network flows in the traces, or require heavy data sanitization or perturbation, both of which may result in a significant loss of data utility. In this paper, we aim to preserve both privacy and utility through shifting the trade-off from between privacy and utility to between privacy and computational cost. The key idea is for the analysts to generate and analyze multiple anonymized views of the original network traces; those views are designed to be sufficiently indistinguishable even to adversaries armed with prior knowledge, which preserves the privacy, whereas one of the views will yield true analysis results privately retrieved by the data owner, which preserves the utility. We present the general approach and instantiate it based on CryptoPAn. We formally analyze the privacy of our solution and experimentally evaluate it using real network traces provided by a major ISP. The results show that our approach can significantly reduce the level of information leakage (e.g., less than 1\% of the information leaked by CryptoPAn) with comparable utility. ","[{'version': 'v1', 'created': 'Wed, 24 Oct 2018 15:54:26 GMT'}]",2018-10-25,"[['Mohammady', 'Meisam', ''], ['Wang', 'Lingyu', ''], ['Hong', 'Yuan', ''], ['Louafi', 'Habib', ''], ['Pourzandi', 'Makan', ''], ['Debbabi', 'Mourad', '']]","['Network trace anonymization', 'prefix-preserving anonymization', 'CryptoPAn', 'semantic attacks']" 144,1905.09433,Tongwen Huang,"Tongwen Huang, Zhiqi Zhang, Junlin Zhang","FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction","8 pages,5 figures","ACM Conference on Recommender Systems (RecSys '19), September 16--20, 2019, Copenhagen, Denmark",10.1145/3298689.3347043,,cs.LG cs.AI stat.ML,http://creativecommons.org/licenses/by-nc-sa/4.0/," Advertising and feed ranking are essential to many Internet companies such as Facebook and Sina Weibo. Among many real-world advertising and feed ranking systems, click through rate (CTR) prediction plays a central role. There are many proposed models in this field such as logistic regression, tree based models, factorization machine based models and deep learning based CTR models. However, many current works calculate the feature interactions in a simple way such as Hadamard product and inner product and they care less about the importance of features. In this paper, a new model named FiBiNET as an abbreviation for Feature Importance and Bilinear feature Interaction NETwork is proposed to dynamically learn the feature importance and fine-grained feature interactions. On the one hand, the FiBiNET can dynamically learn the importance of features via the Squeeze-Excitation network (SENET) mechanism; on the other hand, it is able to effectively learn the feature interactions via bilinear function. We conduct extensive experiments on two real-world datasets and show that our shallow model outperforms other shallow models such as factorization machine(FM) and field-aware factorization machine(FFM). In order to improve performance further, we combine a classical deep neural network(DNN) component with the shallow model to be a deep model. The deep FiBiNET consistently outperforms the other state-of-the-art deep models such as DeepFM and extreme deep factorization machine(XdeepFM). ","[{'version': 'v1', 'created': 'Thu, 23 May 2019 02:10:17 GMT'}]",2019-11-13,"[['Huang', 'Tongwen', ''], ['Zhang', 'Zhiqi', ''], ['Zhang', 'Junlin', '']]","['Display Advertising', 'CTR Prediction', 'Factorization Machines', 'SqueezeExcitation network', 'Neural Network', 'Bilinear Function']" 145,1312.4794,Thabet Slimani,Thabet Slimani,Semantic Annotation: The Mainstay of Semantic Web,"8 pages, 3 figures","International Journal of Computer Applications Technology and Research, Volume 2, Issue 6, 763-770, 2013",10.7753/IJCATR0206.1025,,cs.DL cs.AI cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Given that semantic Web realization is based on the critical mass of metadata accessibility and the representation of data with formal knowledge, it needs to generate metadata that is specific, easy to understand and well-defined. However, semantic annotation of the web documents is the successful way to make the Semantic Web vision a reality. This paper introduces the Semantic Web and its vision (stack layers) with regard to some concept definitions that helps the understanding of semantic annotation. Additionally, this paper introduces the semantic annotation categories, tools, domains and models. ","[{'version': 'v1', 'created': 'Tue, 17 Dec 2013 14:12:51 GMT'}]",2013-12-18,"[['Slimani', 'Thabet', '']]","['semantic annotation', 'Semantic Web', 'Ontologies 1']" 146,0811.0133,Deepyaman Maiti,"Mithun Chakraborty, Deepyaman Maiti, Amit Konar, Ramadoss Janarthanan","A Study of the Grunwald-Letnikov Definition for Minimizing the Effects of Random Noise on Fractional Order Differential Equations","4th IEEE International Conference on Information and Automation for Sustainability, 2008",,10.1109/ICIAFS.2008.4783931,,cs.OH,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Of the many definitions for fractional order differintegral, the Grunwald-Letnikov definition is arguably the most important one. The necessity of this definition for the description and analysis of fractional order systems cannot be overstated. Unfortunately, the Fractional Order Differential Equation (FODE) describing such a systems, in its original form, highly sensitive to the effects of random noise components inevitable in a natural environment. Thus direct application of the definition in a real-life problem can yield erroneous results. In this article, we perform an in-depth mathematical analysis the Grunwald-Letnikov definition in depth and, as far as we know, we are the first to do so. Based on our analysis, we present a transformation scheme which will allow us to accurately analyze generalized fractional order systems in presence of significant quantities of random errors. Finally, by a simple experiment, we demonstrate the high degree of robustness to noise offered by the said transformation and thus validate our scheme. ","[{'version': 'v1', 'created': 'Sun, 2 Nov 2008 06:16:49 GMT'}]",2016-11-15,"[['Chakraborty', 'Mithun', ''], ['Maiti', 'Deepyaman', ''], ['Konar', 'Amit', ''], ['Janarthanan', 'Ramadoss', '']]","['Fractional calculus', 'fractional order differential Equation', 'Grunwald-Letnikov definition', 'random noise']" 147,1503.00173,Jonathan Mei,Jonathan Mei and Jos\'e M. F. Moura,Signal Processing on Graphs: Causal Modeling of Unstructured Data,,"IEEE Transactions on Signal Processing, vol. 65, no. 8, pp. 2077-2092, April 15, 2017",10.1109/TSP.2016.2634543,,cs.IT math.IT stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Many applications collect a large number of time series, for example, the financial data of companies quoted in a stock exchange, the health care data of all patients that visit the emergency room of a hospital, or the temperature sequences continuously measured by weather stations across the US. These data are often referred to as unstructured. A first task in its analytics is to derive a low dimensional representation, a graph or discrete manifold, that describes well the interrelations among the time series and their intrarelations across time. This paper presents a computationally tractable algorithm for estimating this graph that structures the data. The resulting graph is directed and weighted, possibly capturing causal relations, not just reciprocal correlations as in many existing approaches in the literature. A convergence analysis is carried out. The algorithm is demonstrated on random graph datasets and real network time series datasets, and its performance is compared to that of related methods. The adjacency matrices estimated with the new method are close to the true graph in the simulated data and consistent with prior physical knowledge in the real dataset tested. ","[{'version': 'v1', 'created': 'Sat, 28 Feb 2015 20:28:05 GMT'}, {'version': 'v2', 'created': 'Thu, 14 Apr 2016 20:58:45 GMT'}, {'version': 'v3', 'created': 'Tue, 13 Sep 2016 13:19:02 GMT'}, {'version': 'v4', 'created': 'Mon, 31 Oct 2016 22:05:33 GMT'}, {'version': 'v5', 'created': 'Wed, 30 Nov 2016 19:12:41 GMT'}, {'version': 'v6', 'created': 'Wed, 8 Feb 2017 15:49:58 GMT'}]",2017-02-09,"[['Mei', 'Jonathan', ''], ['Moura', 'José M. F.', '']]","['Graph Signal Processing', 'Graph Structure', 'Adjacency Matrix', 'Network', 'Time Series', 'Big Data', 'Causal']" 148,1911.11473,Dat Quoc Nguyen,"Dat Quoc Nguyen, Dai Quoc Nguyen, Son Bao Pham, The Duy Bui","A Fast Template-based Approach to Automatically Identify Primary Text Content of a Web Page","In Proceedings of the 2009 International Conference on Knowledge and Systems Engineering (KSE 2009)",,10.1109/KSE.2009.39,,cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Search engines have become an indispensable tool for browsing information on the Internet. The user, however, is often annoyed by redundant results from irrelevant Web pages. One reason is because search engines also look at non-informative blocks of Web pages such as advertisement, navigation links, etc. In this paper, we propose a fast algorithm called FastContentExtractor to automatically detect main content blocks in a Web page by improving the ContentExtractor algorithm. By automatically identifying and storing templates representing the structure of content blocks in a website, content blocks of a new Web page from the Website can be extracted quickly. The hierarchical order of the output blocks is also maintained which guarantees that the extracted content blocks are in the same order as the original ones. ","[{'version': 'v1', 'created': 'Tue, 26 Nov 2019 11:49:16 GMT'}]",2019-11-27,"[['Nguyen', 'Dat Quoc', ''], ['Nguyen', 'Dai Quoc', ''], ['Pham', 'Son Bao', ''], ['Bui', 'The Duy', '']]","['data mining', 'template detection', 'web mining']" 149,1711.08521,Ibrahim Aljarah,"Wadi' Hijawi, Hossam Faris, Ja'far Alqatawna, Ibrahim Aljarah, Ala' M. Al-Zoubi, and Maria Habib",EMFET: E-mail Features Extraction Tool,,,10.13140/RG.2.2.32995.45603,,cs.IR cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," EMFET is an open source and flexible tool that can be used to extract a large number of features from any email corpus with emails saved in EML format. The extracted features can be categorized into three main groups: header features, payload (body) features, and attachment features. The purpose of the tool is to help practitioners and researchers to build datasets that can be used for training machine learning models for spam detection. So far, 140 features can be extracted using EMFET. EMFET is extensible and easy to use. The source code of EMFET is publicly available at GitHub (https://github.com/WadeaHijjawi/EmailFeaturesExtraction) ","[{'version': 'v1', 'created': 'Wed, 22 Nov 2017 22:24:20 GMT'}]",2017-11-28,"[['Hijawi', ""Wadi'"", ''], ['Faris', 'Hossam', ''], ['Alqatawna', ""Ja'far"", ''], ['Aljarah', 'Ibrahim', ''], ['Al-Zoubi', ""Ala' M."", ''], ['Habib', 'Maria', '']]","['Spam Detection', 'Feature Extraction Tool', 'Spam Features', 'Data Mining', 'Machine learning']" 150,1612.06093,Min Jiang,"Min Jiang, Zhongqiang Huang, Liming Qiu, Wenzhen Huang and Gary G. Yen",Transfer Learning based Dynamic Multiobjective Optimization Algorithms,,,10.1109/TEVC.2017.2771451,,cs.NE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is the optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing the ""experiences"" to construct a prediction model via statistical machine learning approaches. However most of the existing methods ignore the non-independent and identically distributed nature of data used to construct the prediction model. In this paper, we propose an algorithmic framework, called Tr-DMOEA, which integrates transfer learning and population-based evolutionary algorithm for solving the DMOPs. This approach takes the transfer learning method as a tool to help reuse the past experience for speeding up the evolutionary process, and at the same time, any population based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this, we incorporate the proposed approach into the development of three well-known algorithms, nondominated sorting genetic algorithm II (NSGA-II), multiobjective particle swarm optimization (MOPSO), and the regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA), and then employ twelve benchmark functions to test these algorithms as well as compare with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed method through exploiting machine learning technology. ","[{'version': 'v1', 'created': 'Mon, 19 Dec 2016 09:49:28 GMT'}, {'version': 'v2', 'created': 'Sat, 18 Nov 2017 13:04:02 GMT'}]",2017-11-21,"[['Jiang', 'Min', ''], ['Huang', 'Zhongqiang', ''], ['Qiu', 'Liming', ''], ['Huang', 'Wenzhen', ''], ['Yen', 'Gary G.', '']]","['Dynamic multi-objective optimization', 'Domain adaption', 'Dimensionality reduction', 'Transfer learning', 'Evolutionary Algorithm']" 151,2102.08162,"Nicolas Pr\""ollochs","Kirill Solovev, Nicolas Pr\""ollochs",Integrating Floor Plans into Hedonic Models for Rent Price Appraisal,,,10.1145/3442381.3449967,,cs.LG econ.GN q-fin.EC stat.AP,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Online real estate platforms have become significant marketplaces facilitating users' search for an apartment or a house. Yet it remains challenging to accurately appraise a property's value. Prior works have primarily studied real estate valuation based on hedonic price models that take structured data into account while accompanying unstructured data is typically ignored. In this study, we investigate to what extent an automated visual analysis of apartment floor plans on online real estate platforms can enhance hedonic rent price appraisal. We propose a tailored two-staged deep learning approach to learn price-relevant designs of floor plans from historical price data. Subsequently, we integrate the floor plan predictions into hedonic rent price models that account for both structural and locational characteristics of an apartment. Our empirical analysis based on a unique dataset of 9174 real estate listings suggests that current hedonic models underutilize the available data. We find that (1) the visual design of floor plans has significant explanatory power regarding rent prices - even after controlling for structural and locational apartment characteristics, and (2) harnessing floor plans results in an up to 10.56% lower out-of-sample prediction error. We further find that floor plans yield a particularly high gain in prediction performance for older and smaller apartments. Altogether, our empirical findings contribute to the existing research body by establishing the link between the visual design of floor plans and real estate prices. Moreover, our approach has important implications for online real estate platforms, which can use our findings to enhance user experience in their real estate listings. ","[{'version': 'v1', 'created': 'Tue, 16 Feb 2021 14:05:33 GMT'}]",2021-02-17,"[['Solovev', 'Kirill', ''], ['Pröllochs', 'Nicolas', '']]","['Online real estate platforms', 'hedonic price models', 'floor plans', 'visual analytics', 'image sentiment']" 152,2105.09297,Rongyu Cao Dr.,Rongyu Cao and Yixuan Cao and Ganbin Zhou and Ping Luo,"Extracting Variable-Depth Logical Document Hierarchy from Long Documents: Method, Evaluation, and Application","23 pages, 10 figures, Journal of computer science and technology","Journal of computer science and technology, 2021",10.1007/s11390-021-1076-7,,cs.IR cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, we study the problem of extracting variable-depth ""logical document hierarchy"" from long documents, namely organizing the recognized ""physical document objects"" into hierarchical structures. The discovery of logical document hierarchy is the vital step to support many downstream applications. However, long documents, containing hundreds or even thousands of pages and variable-depth hierarchy, challenge the existing methods. To address these challenges, we develop a framework, namely Hierarchy Extraction from Long Document (HELD), where we ""sequentially"" insert each physical object at the proper on of the current tree. Determining whether each possible position is proper or not can be formulated as a binary classification problem. To further improve its effectiveness and efficiency, we study the design variants in HELD, including traversal orders of the insertion positions, heading extraction explicitly or implicitly, tolerance to insertion errors in predecessor steps, and so on. The empirical experiments based on thousands of long documents from Chinese, English financial market and English scientific publication show that the HELD model with the ""root-to-leaf"" traversal order and explicit heading extraction is the best choice to achieve the tradeoff between effectiveness and efficiency with the accuracy of 0.9726, 0.7291 and 0.9578 in Chinese financial, English financial and arXiv datasets, respectively. Finally, we show that logical document hierarchy can be employed to significantly improve the performance of the downstream passage retrieval task. In summary, we conduct a systematic study on this task in terms of methods, evaluations, and applications. ","[{'version': 'v1', 'created': 'Fri, 14 May 2021 06:26:22 GMT'}]",2021-05-21,"[['Cao', 'Rongyu', ''], ['Cao', 'Yixuan', ''], ['Zhou', 'Ganbin', ''], ['Luo', 'Ping', '']]","['logical document hierarchy', 'long documents', 'passage retrieval']" 153,1305.4077,Riadh Bouslimi,"Abir Messaoudi, Riadh Bouslimi, Jalel Akaichi",Indexing Medical Images based on Collaborative Experts Reports,"9 pages, 8 figures. International Journal of Computer Applications, May 2013",,10.5120/11955-7787,,cs.CV cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A patient is often willing to quickly get, from his physician, reliable analysis and concise explanation according to provided linked medical images. The fact of making choices individually by the patient's physician may lead to malpractices and consequently generates unforeseeable damages. The Institute of Medicine of the National Sciences Academy(IMNAS) in USA published a study estimating that up to 98,000 hospital deathseach year can be attributed to medical malpractice [1]. Moreover, physician, in charge of medical image analysis, might be unavailable at the right time, which may complicate the patient's state. The goal of this paper is to provide to physicians and patients, a social network that permits to foster cooperation and to overcome the problem of unavailability of doctors on site any time. Therefore, patients can submit their medical images to be diagnosed and commented by several experts instantly. Consequently, the need to process opinions and to extract information automatically from the proposed social network became a necessity due to the huge number of comments expressing specialist's reviews. For this reason, we propose a kind of comments' summary keywords-based method which extracts the major current terms and relevant words existing on physicians' annotations. The extracted keywords will present a new and robust method for image indexation. In fact, significant extracted terms will be used later to index images in order to facilitate their discovery for any appropriate use. To overcome this challenge, we propose our Terminology Extraction of Annotation (TEA) mixed approach which focuses on algorithms mainly based on statistical methods and on external semantic resources. ","[{'version': 'v1', 'created': 'Fri, 17 May 2013 13:43:57 GMT'}, {'version': 'v2', 'created': 'Fri, 5 Jul 2013 18:01:35 GMT'}]",2015-06-16,"[['Messaoudi', 'Abir', ''], ['Bouslimi', 'Riadh', ''], ['Akaichi', 'Jalel', '']]","['Medical social network', 'Social network analysis', 'indexation', 'mixed approach', 'relevant words extraction', 'text mining', 'Medicalimages']" 154,1608.08831,Guillaume Noyel,"Guillaume Noyel (IPRI), Michel Jourlin (IPRI, LHC)","Spatio-Colour Aspl\""und 's Metric and Logarithmic Image Processing for Colour Images (LIPC)",,"C\'esar Beltr\'an-Casta\~n\'on, Ingela Nystr\""om, Fazel Famili CIARP2016 - XXI IberoAmerican Congress on Pattern Recognition, Nov 2016, Lima, Peru. Springer, 10125 2017, pp.36-43, 2016, Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 21st Iberoamerican Congress, CIARP 2016, Lima, Peru, November 8--11, 2016, Proceedings",10.1007/978-3-319-52277-7_5,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Aspl\""und 's metric, which is useful for pattern matching, consists in a double-sided probing, i.e. the over-graph and the sub-graph of a function are probed jointly. This paper extends the Aspl\""und 's metric we previously defined for colour and multivariate images using a marginal approach (i.e. component by component) to the first spatio-colour Aspl\""und 's metric based on the vectorial colour LIP model (LIPC). LIPC is a non-linear model with operations between colour images which are consistent with the human visual system. The defined colour metric is insensitive to lighting variations and a variant which is robust to noise is used for colour pattern matching. ","[{'version': 'v1', 'created': 'Wed, 31 Aug 2016 12:49:12 GMT'}, {'version': 'v2', 'created': 'Mon, 27 Feb 2017 16:08:29 GMT'}]",2017-02-28,"[['Noyel', 'Guillaume', '', 'IPRI'], ['Jourlin', 'Michel', '', 'IPRI, LHC']]","['Aspl¨und’s metric', 'spatio-colour metric', 'colour LogarithmicImage Processing', 'double-sided probing', 'colour pattern recognition']" 155,1310.8097,"Hans-Peter Schr\""ocker","Hans-Peter Schr\""ocker, Matthias J. Weber",Guaranteed Collision Detection With Toleranced Motions,Accepted for publication in Computer Aided Geometric Design,"Comput. Aided Geom. Design, 31(7-8):602-612, 2014",10.1016/j.cagd.2014.08.001,,cs.CG cs.RO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a method for guaranteed collision detection with toleranced motions. The basic idea is to consider the motion as a curve in the 12-dimensional space of affine displacements, endowed with an object-oriented Euclidean metric, and cover it with balls. The associated orbits of points, lines, planes and polygons have particularly simple shapes that lend themselves well to exact and fast collision queries. We present formulas for elementary collision tests with these orbit shapes and we suggest an algorithm, based on motion subdivision and computation of bounding balls, that can give a no-collision guarantee. It allows a robust and efficient implementation and parallelization. At hand of several examples we explore the asymptotic behavior of the algorithm and compare different implementation strategies. ","[{'version': 'v1', 'created': 'Wed, 30 Oct 2013 10:41:32 GMT'}, {'version': 'v2', 'created': 'Sat, 14 Jun 2014 07:16:00 GMT'}, {'version': 'v3', 'created': 'Thu, 14 Aug 2014 19:01:23 GMT'}]",2018-07-31,"[['Schröcker', 'Hans-Peter', ''], ['Weber', 'Matthias J.', '']]","['Toleranced motion', 'collision detection', 'bounding ball', 'bounding volumes']" 156,1512.07143,Marc Bola\~nos,"Mariella Dimiccoli and Marc Bola\~nos and Estefania Talavera and Maedeh Aghaei and Stavri G. Nikolov and Petia Radeva","SR-Clustering: Semantic Regularized Clustering for Egocentric Photo Streams Segmentation","23 pages, 10 figures, 2 tables. In Press in Computer Vision and Image Understanding Journal",,10.1016/j.cviu.2016.10.005,,cs.AI cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," While wearable cameras are becoming increasingly popular, locating relevant information in large unstructured collections of egocentric images is still a tedious and time consuming processes. This paper addresses the problem of organizing egocentric photo streams acquired by a wearable camera into semantically meaningful segments. First, contextual and semantic information is extracted for each image by employing a Convolutional Neural Networks approach. Later, by integrating language processing, a vocabulary of concepts is defined in a semantic space. Finally, by exploiting the temporal coherence in photo streams, images which share contextual and semantic attributes are grouped together. The resulting temporal segmentation is particularly suited for further analysis, ranging from activity and event recognition to semantic indexing and summarization. Experiments over egocentric sets of nearly 17,000 images, show that the proposed approach outperforms state-of-the-art methods. ","[{'version': 'v1', 'created': 'Tue, 22 Dec 2015 16:13:54 GMT'}, {'version': 'v2', 'created': 'Mon, 17 Oct 2016 09:40:11 GMT'}]",2016-11-03,"[['Dimiccoli', 'Mariella', ''], ['Bolaños', 'Marc', ''], ['Talavera', 'Estefania', ''], ['Aghaei', 'Maedeh', ''], ['Nikolov', 'Stavri G.', ''], ['Radeva', 'Petia', '']]","['temporal segmentation', 'egocentric vision', 'photo streams clustering']" 157,1809.02266,Yucheng Fu,"Yucheng Fu, Yang Liu","BubGAN: Bubble Generative Adversarial Networks for Synthesizing Realistic Bubbly Flow Images","20 pages, 15 figures",,10.1016/j.ces.2019.04.004,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Bubble segmentation and size detection algorithms have been developed in recent years for their high efficiency and accuracy in measuring bubbly two-phase flows. In this work, we proposed an architecture called bubble generative adversarial networks (BubGAN) for the generation of realistic synthetic images which could be further used as training or benchmarking data for the development of advanced image processing algorithms. The BubGAN is trained initially on a labeled bubble dataset consisting of ten thousand images. By learning the distribution of these bubbles, the BubGAN can generate more realistic bubbles compared to the conventional models used in the literature. The trained BubGAN is conditioned on bubble feature parameters and has full control of bubble properties in terms of aspect ratio, rotation angle, circularity and edge ratio. A million bubble dataset is pre-generated using the trained BubGAN. One can then assemble realistic bubbly flow images using this dataset and associated image processing tool. These images contain detailed bubble information, therefore do not require additional manual labeling. This is more useful compared with the conventional GAN which generates images without labeling information. The tool could be used to provide benchmarking and training data for existing image processing algorithms and to guide the future development of bubble detecting algorithms. ","[{'version': 'v1', 'created': 'Fri, 7 Sep 2018 01:19:59 GMT'}]",2019-11-22,"[['Fu', 'Yucheng', ''], ['Liu', 'Yang', '']]","['Realistic bubble synthesis', 'object counting', 'bubble segmentation', 'generative adversarial networks', 'image processing']" 158,1307.3295,Tareq Alhmiedat,"Tareq Alhmiedat, Amer O. Abu Salem, and Anas Abu Taleb","An imporved decentralized approach for tracking multiple mobile targets through ZigBee WSNs",16 pages,,10.5121/ijwmn.2013.5305,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Target localization and tracking problems in WSNs have received considerable attention recently, driven by the requirement to achieve high localization accuracy, with the minimum cost possible. In WSN based tracking applications, it is critical to know the current location of any sensor node with the minimum energy consumed. This paper focuses on the energy consumption issue in terms of communication between nodes whenever the localization information is transmitted to a sink node. Tracking through WSNs can be categorized into centralized and decentralized systems. Decentralized systems offer low power consumption when deployed to track a small number of mobile targets compared to the centralized tracking systems. However, in several applications, it is essential to position a large number of mobile targets. In such applications, decentralized systems offer high power consumption, since the location of each mobile target is required to be transmitted to a sink node, and this increases the power consumption for the whole WSN. In this paper, we propose a power efficient decentralized approach for tracking a large number of mobile targets while offering reasonable localization accuracy through ZigBee network. ","[{'version': 'v1', 'created': 'Thu, 11 Jul 2013 23:39:46 GMT'}]",2013-07-15,"[['Alhmiedat', 'Tareq', ''], ['Salem', 'Amer O. Abu', ''], ['Taleb', 'Anas Abu', '']]","['Localization', 'Tracking', 'Decentralized', 'Wireless Sensor Networks', 'ZigBee']" 159,1612.01837,Chengqing Li,"Qiuye Gan, Simin Yu, Chengqing Li, Jinhu L\""u, Zhuosheng Lin, Ping Chen","Design and ARM-embedded implementation of a chaotic map-based multicast scheme for multiuser speech wireless communication","22 pages, 14 figures in International Journal of Circuit Theory and Applications, 2017",,10.1002/cta.2300,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper proposes a chaotic map-based multicast scheme for multiuser speech wireless communication and implements it in an ARM platform. The scheme compresses the digital audio signal decoded by a sound card and then encrypts it with a three-level chaotic encryption scheme. First, the position of every bit of the compressed data is permuted randomly with a pseudo-random number sequence (PRNS) generated by a 6-D chaotic map. Then, the obtained data are further permuted in the level of byte with a PRNS generated by a 7-D chaotic map. Finally, it is operated with a multiround chaotic stream cipher. The whole system owns the following merits: the redundancy in the original audio file is reduced effectively and the corresponding unicity distance is increased; the balancing point between a high security level of the system and real-time conduction speed is achieved well. In the ARM implementation, the framework of communication of multicast-multiuser in a subnet and the Internet Group Manage Protocol is adopted to obtain the function of communication between one client and other ones. Comprehensive test results were provided to show the feasibility and security performance of the whole system. ","[{'version': 'v1', 'created': 'Tue, 6 Dec 2016 14:55:29 GMT'}]",2016-12-07,"[['Gan', 'Qiuye', ''], ['Yu', 'Simin', ''], ['Li', 'Chengqing', ''], ['Lü', 'Jinhu', ''], ['Lin', 'Zhuosheng', ''], ['Chen', 'Ping', '']]","['ARM-embedded implementation', 'multicast-multiuser', 'chaotic map', 'securecommunication', 'speech', 'WIFI']" 160,1606.07583,Biljana Risteska Stojkoska Dr,"Biljana Stojkoska, Danco Davcev and Vladimir Trajkovik","N-queens-based algorithm for moving object detection in distributed wireless sensor networks",6 pages,"Proceedings of the ITI 2008 30th Int. Conf. on Information Technology Interfaces, June 23-26, 2008, Cavtat, Croatia, pp.899-904",10.1109/ITI.2008.4588530,,cs.MM cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The main constraint of wireless sensor networks (WSN) in enabling wireless image communication is the high energy requirement, which may exceed even the future capabilities of battery technologies. In this paper we have shown that this bottleneck can be overcome by developing local in-network image processing algorithm that offers optimal energy consumption. Our algorithm is very suitable for intruder detection applications. Each node is responsible for processing the image captured by the video sensor, which consists of NxN blocks. If an intruder is detected in the monitoring region, the node will transmit the image for further processing. Otherwise, the node takes no action. Results provided from our experiments show that our algorithm is better than the traditional moving object detection techniques by a factor of (N/2) in terms of energy savings. ","[{'version': 'v1', 'created': 'Fri, 24 Jun 2016 07:18:42 GMT'}]",2016-07-01,"[['Stojkoska', 'Biljana', ''], ['Davcev', 'Danco', ''], ['Trajkovik', 'Vladimir', '']]","['Wireless Sensor Networks', 'Multimedia', 'Image Processing', 'Motion Detection']" 161,1405.7868,Priya Bajaj,Priya Bajaj and Supriya Raheja,A Vague Improved Markov Model Approach for Web Page Prediction,"8 pages, 4 figures, 1 table, International Journal of Computer Science & Engineering Survey (IJCSES) Vol.5, No.2, April 2014",,10.5121/ijcses.2014.5205,,cs.IR cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Today most of the information in all areas is available over the web. It increases the web utilization as well as attracts the interest of researchers to improve the effectiveness of web access and web utilization. As the number of web clients gets increased, the bandwidth sharing is performed that decreases the web access efficiency. Web page prefetching improves the effectiveness of web access by availing the next required web page before the user demand. It is an intelligent predictive mining that analyze the user web access history and predict the next page. In this work, vague improved markov model is presented to perform the prediction. In this work, vague rules are suggested to perform the pruning at different levels of markov model. Once the prediction table is generated, the association mining will be implemented to identify the most effective next page. In this paper, an integrated model is suggested to improve the prediction accuracy and effectiveness. ","[{'version': 'v1', 'created': 'Thu, 8 May 2014 07:52:20 GMT'}]",2014-06-02,"[['Bajaj', 'Priya', ''], ['Raheja', 'Supriya', '']]","['Vague Rule', 'Markov Model', 'predictive', 'Web Usage Mining 1']" 162,1604.06707,"G\""unter Rote","Felix Herter and G\""unter Rote",Loopless Gray Code Enumeration and the Tower of Bucharest,"16 pages plus 8 pages of appendix with Python programs, 6 figures. A Python script to extract the program code from the LaTeX file is attached to the sources","Theoretical Computer Science 748 (2018), 40-54",10.1016/j.tcs.2017.11.017,,cs.DM cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We give new algorithms for generating all n-tuples over an alphabet of m letters, changing only one letter at a time (Gray codes). These algorithms are based on the connection with variations of the Towers of Hanoi game. Our algorithms are loopless, in the sense that the next change can be determined in a constant number of steps, and they can be implemented in hardware. We also give another family of loopless algorithms that is based on the idea of working ahead and saving the work in a buffer. ","[{'version': 'v1', 'created': 'Fri, 22 Apr 2016 15:19:24 GMT'}]",2018-11-26,"[['Herter', 'Felix', ''], ['Rote', 'Günter', '']]","['Tower of Hanoi', 'Gray code', 'enumeration', 'loopless generation']" 163,1505.03795,Houssam Abdul-Rahman,Houssam Abdul-Rahman and Nikolai Chernov,Fast and numerically stable circle fit,16 pages,"Journal of Mathematical Imaging and Vision June 2014, Volume 49, Issue 2, pp 289-295",10.1007/s10851-013-0461-4,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We develop a new algorithm for fitting circles that does not have drawbacks commonly found in existing circle fits. Our fit achieves ultimate accuracy (to machine precision), avoids divergence, and is numerically stable even when fitting circles get arbitrary large. Lastly, our algorithm takes less than 10 iterations to converge, on average. ","[{'version': 'v1', 'created': 'Thu, 14 May 2015 16:43:07 GMT'}]",2022-10-13,"[['Abdul-Rahman', 'Houssam', ''], ['Chernov', 'Nikolai', '']]","['fitting circles', 'geometric fit', 'Levenberg-Marquardt', 'Gauss-Newton']" 164,1610.08833,Erik Schnetter,"Anshu Dubey, Ann Almgren, John Bell, Martin Berzins, Steve Brandt, Greg Bryan, Phillip Colella, Daniel Graves, Michael Lijewski, Frank L\""offler, Brian O'Shea, Erik Schnetter, Brian Van Straalen, Klaus Weide","A Survey of High Level Frameworks in Block-Structured Adaptive Mesh Refinement Packages",,"Journal of Parallel and Distributed Computing, Volume 74, Issue 12, December 2014, Pages 3217-3227",10.1016/j.jpdc.2014.07.001,,cs.DC astro-ph.HE gr-qc,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Over the last decade block-structured adaptive mesh refinement (SAMR) has found increasing use in large, publicly available codes and frameworks. SAMR frameworks have evolved along different paths. Some have stayed focused on specific domain areas, others have pursued a more general functionality, providing the building blocks for a larger variety of applications. In this survey paper we examine a representative set of SAMR packages and SAMR-based codes that have been in existence for half a decade or more, have a reasonably sized and active user base outside of their home institutions, and are publicly available. The set consists of a mix of SAMR packages and application codes that cover a broad range of scientific domains. We look at their high-level frameworks, and their approach to dealing with the advent of radical changes in hardware architecture. The codes included in this survey are BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah. ","[{'version': 'v1', 'created': 'Thu, 27 Oct 2016 15:23:34 GMT'}]",2016-10-28,"[['Dubey', 'Anshu', ''], ['Almgren', 'Ann', ''], ['Bell', 'John', ''], ['Berzins', 'Martin', ''], ['Brandt', 'Steve', ''], ['Bryan', 'Greg', ''], ['Colella', 'Phillip', ''], ['Graves', 'Daniel', ''], ['Lijewski', 'Michael', ''], ['Löffler', 'Frank', ''], [""O'Shea"", 'Brian', ''], ['Schnetter', 'Erik', ''], ['Van Straalen', 'Brian', ''], ['Weide', 'Klaus', '']]","['SAMR', 'BoxLib', 'Chombo', 'FLASH', 'Cactus', 'Enzo', 'Uintah']" 165,1402.1246,Rubia R,"Ms.Rubia.R, Mr.SivanArulSelvan","A Survey on Mobile Data Gathering in Wireless Sensor Networks - Bounded Relay","4 pages, 1 figure, ""Published with International Journal of Engineering Trends and Technology (IJETT)""","IJETT, 7(5),205-208,2014 published by seventh sense research group",10.14445/22315381/IJETT-V7P247,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Most of the wireless sensor networks consist of static sensors, which can be deployed in a wide environment for monitoring applications. While transmitting the data from source to static sink, the amount of energy consumption of the sensor node is high. It results in reduced lifetime of the network.Some of the WSN architectures have been proposed based on Mobile Elements. There is large number of approaches to resolve the above problem. It is found those two approaches, namely Single Hop Data Gathering problem (SHDGP) and mobile Data Gathering, which is used to increase the lifetime of the network. Single Hop Data Gathering Problem is used to achieve the uniform energy consumption. The mobile Data Gathering algorithm is used to find the minimal set of points in the sensor network, which serves as data gathering points for mobile network. Even after so many decades of research, there are some unresolved problems like non uniform energy consumption, increased latency, which needs to be resolved. ","[{'version': 'v1', 'created': 'Thu, 6 Feb 2014 05:11:35 GMT'}]",2014-02-07,"[['R', 'Ms. Rubia.', ''], ['SivanArulSelvan', 'Mr.', '']]","['Mobile Collector', 'SenCar', 'Polling Point', 'Neighbour set', 'Candidate polling point', 'mobile Data Gathering', 'SDMA']" 166,0704.3890,Valmir Barbosa,"Rodolfo M. Pussente, Valmir C. Barbosa","An algorithm for clock synchronization with the gradient property in sensor networks",,"Journal of Parallel and Distributed Computing 69 (2009), 261-265",10.1016/j.jpdc.2008.11.001,,cs.DC,," We introduce a distributed algorithm for clock synchronization in sensor networks. Our algorithm assumes that nodes in the network only know their immediate neighborhoods and an upper bound on the network's diameter. Clock-synchronization messages are only sent as part of the communication, assumed reasonably frequent, that already takes place among nodes. The algorithm has the gradient property of [2], achieving an O(1) worst-case skew between the logical clocks of neighbors. As in the case of [3,8], the algorithm's actions are such that no constant lower bound exists on the rate at which logical clocks progress in time, and for this reason the lower bound of [2,5] that forbids constant skew between neighbors does not apply. ","[{'version': 'v1', 'created': 'Mon, 30 Apr 2007 19:59:14 GMT'}]",2009-02-05,"[['Pussente', 'Rodolfo M.', ''], ['Barbosa', 'Valmir C.', '']]","['Distributed computing', 'Sensor networks', 'Clock synchronization', 'Gradient property in clock synchronization']" 167,1509.04064,Michael Castronovo,"Michael Castronovo, Damien Ernst, Adrien Couetoux, Raphael Fonteneau",Benchmarking for Bayesian Reinforcement Learning,37 pages,,10.1371/journal.pone.0157088,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment while using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but even though a few toy examples exist in the literature, there are still no extensive or rigorous benchmarks to compare them. The paper addresses this problem, and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed. ","[{'version': 'v1', 'created': 'Mon, 14 Sep 2015 12:47:52 GMT'}]",2016-09-28,"[['Castronovo', 'Michael', ''], ['Ernst', 'Damien', ''], ['Couetoux', 'Adrien', ''], ['Fonteneau', 'Raphael', '']]","['Bayesian Reinforcement Learning', 'Benchmarking', 'BBRL library', 'OfflineLearning', 'Reinforcement Learning']" 168,1507.02139,Giuseppe Carbone Dr.,"Giuseppe Carbone, Ilaria Giannoccaro",Model of human collective decision-making in complex environments,"12 pages, 8 figues in European Physical Journal B, 2015","European Physical Journal B, 88 (12), 339, 2015",10.1140/epjb/e2015-60609-0,,cs.MA cs.AI nlin.AO physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A continuous-time Markov process is proposed to analyze how a group of humans solves a complex task, consisting in the search of the optimal set of decisions on a fitness landscape. Individuals change their opinions driven by two different forces: (i) the self-interest, which pushes them to increase their own fitness values, and (ii) the social interactions, which push individuals to reduce the diversity of their opinions in order to reach consensus. Results show that the performance of the group is strongly affected by the strength of social interactions and by the level of knowledge of the individuals. Increasing the strength of social interactions improves the performance of the team. However, too strong social interactions slow down the search of the optimal solution and worsen the performance of the group. In particular, we find that the threshold value of the social interaction strength, which leads to the emergence of a superior intelligence of the group, is just the critical threshold at which the consensus among the members sets in. We also prove that a moderate level of knowledge is already enough to guarantee high performance of the group in making decisions. ","[{'version': 'v1', 'created': 'Wed, 8 Jul 2015 13:14:16 GMT'}, {'version': 'v2', 'created': 'Fri, 30 Oct 2015 15:06:52 GMT'}]",2015-12-21,"[['Carbone', 'Giuseppe', ''], ['Giannoccaro', 'Ilaria', '']]","['Decision making', 'social interactions', 'complexity', 'Markov chains']" 169,1511.09295,Savvas Zannettou,"Savvas Zannettou, Michael Sirivianos, Fragkiskos Papadopoulos",Exploiting Path Diversity in Datacenters using MPTCP-aware SDN,"8 pages, 7 figures, ISCC 2016, Messina, Italy","ISCC 2016, p.564-571",10.1109/ISCC.2016.7543794,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Recently, Multipath TCP (MPTCP) has been proposed as an alternative transport approach for datacenter networks. MPTCP provides the ability to split a flow into multiple paths thus providing better performance and resilience to failures. Usually, MPTCP is combined with flow-based Equal-Cost Multi-Path Routing (ECMP), which uses random hashing to split the MPTCP subflows over different paths. However, random hashing can be suboptimal as distinct subflows may end up using the same paths, while other available paths remain unutilized. In this paper, we explore an MPTCP-aware SDN controller that facilitates an alternative routing mechanism for the MPTCP subflows. The controller uses packet inspection to provide deterministic subflow assignment to paths. Using the controller, we show that MPTCP can deliver significantly improved performance when connections are not limited by the access links of hosts. To lessen the effect of throughput limitation due to access links, we also investigate the usage of multiple interfaces at the hosts. We demonstrate, using our modification of the MPTCP Linux Kernel, that using multiple subflows per pair of IP addresses can yield improved performance in multi-interface settings. ","[{'version': 'v1', 'created': 'Mon, 30 Nov 2015 13:19:48 GMT'}, {'version': 'v2', 'created': 'Mon, 29 Aug 2016 08:53:38 GMT'}]",2016-08-31,"[['Zannettou', 'Savvas', ''], ['Sirivianos', 'Michael', ''], ['Papadopoulos', 'Fragkiskos', '']]","['Datacenters', 'Multipath-TCP', 'MPTCP-aware SDN']" 170,1912.04465,Yudong Jiang,"Yudong Jiang, Kaixu Cui, Leilei Chen, Canjin Wang, Changliang Xu",SoccerDB: A Large-Scale Database for Comprehensive Video Understanding,accepted by MM2020 sports workshop,,10.1145/3422844.3423051,,cs.CV,http://creativecommons.org/licenses/by-nc-sa/4.0/," Soccer videos can serve as a perfect research object for video understanding because soccer games are played under well-defined rules while complex and intriguing enough for researchers to study. In this paper, we propose a new soccer video database named SoccerDB, comprising 171,191 video segments from 346 high-quality soccer games. The database contains 702,096 bounding boxes, 37,709 essential event labels with time boundary and 17,115 highlight annotations for object detection, action recognition, temporal action localization, and highlight detection tasks. To our knowledge, it is the largest database for comprehensive sports video understanding on various aspects. We further survey a collection of strong baselines on SoccerDB, which have demonstrated state-of-the-art performances on independent tasks. Our evaluation suggests that we can benefit significantly when jointly considering the inner correlations among those tasks. We believe the release of SoccerDB will tremendously advance researches around comprehensive video understanding. {\itshape Our dataset and code published on https://github.com/newsdata/SoccerDB.} ","[{'version': 'v1', 'created': 'Tue, 10 Dec 2019 02:57:28 GMT'}, {'version': 'v2', 'created': 'Fri, 13 Dec 2019 05:51:47 GMT'}, {'version': 'v3', 'created': 'Tue, 23 Jun 2020 03:16:38 GMT'}, {'version': 'v4', 'created': 'Tue, 8 Sep 2020 13:27:22 GMT'}]",2020-09-09,"[['Jiang', 'Yudong', ''], ['Cui', 'Kaixu', ''], ['Chen', 'Leilei', ''], ['Wang', 'Canjin', ''], ['Xu', 'Changliang', '']]","['object detection', 'action recognition', 'temporal action localization', 'highlight detection']" 171,2105.10464,David Cerezo S\'anchez,David Cerezo S\'anchez,Pravuil: Global Consensus for a United World,,"FinTech 2022, 1(4), 325-344",10.3390/fintech1040025,,cs.CR cs.DC econ.GN q-fin.EC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Pravuil is a robust, secure, and scalable consensus protocol for a permissionless blockchain suitable for deployment in an adversarial environment such as the Internet. Pravuil circumvents previous shortcomings of other blockchains: - Bitcoin's limited adoption problem: as transaction demand grows, payment confirmation times grow much lower than other PoW blockchains - higher transaction security at a lower cost - more decentralisation than other permissionless blockchains - impossibility of full decentralisation and the blockchain scalability trilemma: decentralisation, scalability, and security can be achieved simultaneously - Sybil-resistance for free implementing the social optimum - Pravuil goes beyond the economic limits of Bitcoin or other PoW/PoS blockchains, leading to a more valuable and stable crypto-currency ","[{'version': 'v1', 'created': 'Fri, 21 May 2021 17:02:14 GMT'}]",2022-11-02,"[['Sánchez', 'David Cerezo', '']]","['consensus', 'permissionless', 'permissioned', 'scalability', 'zeroknowledge', 'mutual attestation', 'zk-PoI']" 172,1507.04576,Maedeh Aghaei,Maedeh Aghaei and Mariella Dimiccoli and Petia Radeva,Multi-Face Tracking by Extended Bag-of-Tracklets in Egocentric Videos,"27 pages, 18 figures, submitted to computer vision and image understanding journal",,10.1016/j.cviu.2016.02.013,YCVIU2393,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest. The first step towards detection of social events is to track the appearance of multiple persons involved in it. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric videos acquired through a wearable camera. This kind of photo-stream imposes additional challenges to the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution, abrupt changes in the field of view, in illumination condition and in the target location are highly frequent. To overcome such difficulties, we propose a multi-face tracking method that generates a set of tracklets through finding correspondences along the whole sequence for each detected face and takes advantage of the tracklets redundancy to deal with unreliable ones. Similar tracklets are grouped into the so called extended bag-of-tracklets (eBoT), which is aimed to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT, where the occurred occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state of the art methods, demonstrating its effectiveness and robustness. ","[{'version': 'v1', 'created': 'Thu, 16 Jul 2015 13:51:47 GMT'}, {'version': 'v2', 'created': 'Wed, 13 Jan 2016 12:26:09 GMT'}]",2017-01-24,"[['Aghaei', 'Maedeh', ''], ['Dimiccoli', 'Mariella', ''], ['Radeva', 'Petia', '']]","['Egocentric vision', 'face tracking', 'low frame rate video analysis']" 173,2105.04273,Junaid Ali,"Junaid Ali, Muhammad Bilal Zafar, Adish Singla, Krishna P. Gummadi",Loss-Aversively Fair Classification,"8 pages, Accepted at AIES 2019","In AAAI/ACM Conference on AI, Ethics, and Society (AIES 2019), January 27-28 2019 Honolulu, HI, USA",10.1145/3461702.3462630,,cs.LG cs.CY,http://creativecommons.org/licenses/by/4.0/," The use of algorithmic (learning-based) decision making in scenarios that affect human lives has motivated a number of recent studies to investigate such decision making systems for potential unfairness, such as discrimination against subjects based on their sensitive features like gender or race. However, when judging the fairness of a newly designed decision making system, these studies have overlooked an important influence on people's perceptions of fairness, which is how the new algorithm changes the status quo, i.e., decisions of the existing decision making system. Motivated by extensive literature in behavioral economics and behavioral psychology (prospect theory), we propose a notion of fair updates that we refer to as loss-averse updates. Loss-averse updates constrain the updates to yield improved (more beneficial) outcomes to subjects compared to the status quo. We propose tractable proxy measures that would allow this notion to be incorporated in the training of a variety of linear and non-linear classifiers. We show how our proxy measures can be combined with existing measures for training nondiscriminatory classifiers. Our evaluation using synthetic and real-world datasets demonstrates that the proposed proxy measures are effective for their desired tasks. ","[{'version': 'v1', 'created': 'Mon, 10 May 2021 11:19:27 GMT'}]",2021-05-11,"[['Ali', 'Junaid', ''], ['Zafar', 'Muhammad Bilal', ''], ['Singla', 'Adish', ''], ['Gummadi', 'Krishna P.', '']]","['Algorithmic Fairness', 'Fair Updates', 'Fairness in Machine Learning', 'Loss-averse Fairness']" 174,1405.2362,Yan Fang,"Yan Fang, Matthew J. Cotter, Donald M. Chiarulli, Steven P. Levitan",Image Segmentation Using Frequency Locking of Coupled Oscillators,"7 pages, 14 figures, the 51th Design Automation Conference 2014, Work in Progress Poster Session",,10.1109/CNNA.2014.6888657,,cs.CV q-bio.NC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Synchronization of coupled oscillators is observed at multiple levels of neural systems, and has been shown to play an important function in visual perception. We propose a computing system based on locally coupled oscillator networks for image segmentation. The system can serve as the preprocessing front-end of an image processing pipeline where the common frequencies of clusters of oscillators reflect the segmentation results. To demonstrate the feasibility of our design, the system is simulated and tested on a human face image dataset and its performance is compared with traditional intensity threshold based algorithms. Our system shows both better performance and higher noise tolerance than traditional methods. ","[{'version': 'v1', 'created': 'Fri, 9 May 2014 21:53:05 GMT'}]",2014-09-24,"[['Fang', 'Yan', ''], ['Cotter', 'Matthew J.', ''], ['Chiarulli', 'Donald M.', ''], ['Levitan', 'Steven P.', '']]","['Oscillator', 'Computer Vision', 'Image Segmentation']" 175,1902.09779,Tanweer Alam,Tanweer Alam,Blockchain and its Role in the Internet of Things (IoT),7 Pages,"International Journal of Scientific Research in Computer Science, Engineering and Information Technology, pp. 151-157, 2019",10.32628/CSEIT195137,,cs.NI,http://creativecommons.org/licenses/by/4.0/," Blockchain (BC) in the Internet of Things (IoT) is a novel technology that acts with decentralized, distributed, public and real-time ledger to store transactions among IoT nodes. A blockchain is a series of blocks, each block is linked to its previous blocks. Every block has the cryptographic hash code, previous block hash, and its data. The transactions in BC are the basic units that are used to transfer data between IoT nodes. The IoT nodes are different kind of physical but smart devices with embedded sensors, actuators, programs and able to communicate with other IoT nodes. The role of BC in IoT is to provide a procedure to process secured records of data through IoT nodes. BC is a secured technology that can be used publicly and openly. IoT requires this kind of technology to allow secure communication among IoT nodes in heterogeneous environment. The transactions in BC could be traced and explored through anyone who are authenticated to communicate within the IoT. The BC in IoT may help to improve the communication security. In this paper, I explored this approach, its opportunities and challenges. ","[{'version': 'v1', 'created': 'Tue, 26 Feb 2019 07:48:10 GMT'}, {'version': 'v2', 'created': 'Fri, 5 Jun 2020 14:26:48 GMT'}]",2020-06-08,"[['Alam', 'Tanweer', '']]","['Blockchain', 'Internet of Things (IoT)', 'Cryptography', 'Security', 'Communication']" 176,1804.06025,Vahid Rasouli Disfani,"Changfu Li, Vahid R. Disfani, Zachary K. Pecenak, Saeed Mohajeryami, Jan Kleissl",Optimal OLTC Voltage Control Scheme to Enable High Solar Penetrations,,"Li, Changfu, Vahid R. Disfani, Zachary K. Pecenak, Saeed Mohajeryami, and Jan Kleissl. ""Optimal OLTC voltage control scheme to enable high solar penetrations."" Electric Power Systems Research 160 (2018): 318-326",10.1016/j.epsr.2018.02.016,,cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," High solar Photovoltaic (PV) penetration on distribution systems can cause over-voltage problems. To this end, an Optimal Tap Control (OTC) method is proposed to regulate On-Load Tap Changers (OLTCs) by minimizing the maximum deviation of the voltage profile from 1~p.u. on the entire feeder. A secondary objective is to reduce the number of tap operations (TOs), which is implemented for the optimization horizon based on voltage forecasts derived from high resolution PV generation forecasts. A linearization technique is applied to make the optimization problem convex and able to be solved at operational timescales. Simulations on a PC show the solution time for one time step is only 1.1~s for a large feeder with 4 OLTCs and 1623 buses. OTC results are compared against existing methods through simulations on two feeders in the Californian network. OTC is firstly compared against an advanced rule-based Voltage Level Control (VLC) method. OTC and VLC achieve the same reduction of voltage violations, but unlike VLC, OTC is capable of coordinating multiple OLTCs. Scalability to multiple OLTCs is therefore demonstrated against a basic conventional rule-based control method called Autonomous Tap Control (ATC). Comparing to ATC, the test feeder under control of OTC can accommodate around 67\% more PV without over-voltage issues. Though a side effect of OTC is an increase in tap operations, the secondary objective functionally balances operations between all OLTCs such that impacts on their lifetime and maintenance are minimized. ","[{'version': 'v1', 'created': 'Tue, 17 Apr 2018 03:13:08 GMT'}]",2018-04-18,"[['Li', 'Changfu', ''], ['Disfani', 'Vahid R.', ''], ['Pecenak', 'Zachary K.', ''], ['Mohajeryami', 'Saeed', ''], ['Kleissl', 'Jan', '']]","['Optimal voltage control', 'distribution system', 'convexoptimization', 'photovoltaic systems', 'tap changer']" 177,1312.4077,Srinjoy Ganguly Mr.,"Arpita Chakraborty, Srinjoy Ganguly, Mrinal Kanti Naskar and Anupam Karmakar","A Trust Based Congestion Aware Hybrid Ant Colony Optimization Algorithm for Energy Efficient Routing in Wireless Sensor Networks (TC-ACO)","6 pages, 5 figures and 2 tables (Conference Paper)","Proceedings of the IEEE International Conference on Advanced Computing (ICoAC)-2013, pp.XX-XX,Chennai, India, 18 - 20 December (2013)",10.1109/ICoAC.2013.6921940,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Congestion is a problem of paramount importance in resource constrained Wireless Sensor Networks, especially for large networks, where the traffic loads exceed the available capacity of the resources. Sensor nodes are prone to failure and the misbehavior of these faulty nodes creates further congestion. The resulting effect is a degradation in network performance, additional computation and increased energy consumption, which in turn decreases network lifetime. Hence, the data packet routing algorithm should consider congestion as one of the parameters, in addition to the role of the faulty nodes and not merely energy efficient protocols. Unfortunately most of the researchers have tried to make the routing schemes energy efficient without considering congestion factor and the effect of the faulty nodes. In this paper we have proposed a congestion aware, energy efficient, routing approach that utilizes Ant Colony Optimization algorithm, in which faulty nodes are isolated by means of the concept of trust. The merits of the proposed scheme are verified through simulations where they are compared with other protocols. ","[{'version': 'v1', 'created': 'Sat, 14 Dec 2013 18:41:22 GMT'}]",2016-11-18,"[['Chakraborty', 'Arpita', ''], ['Ganguly', 'Srinjoy', ''], ['Naskar', 'Mrinal Kanti', ''], ['Karmakar', 'Anupam', '']]","['Wireless Sensor Networks', 'Congestion', 'Trust', 'Energy Efficient Routing', 'Ant Colony Optimization']" 178,1609.02603,Ali Sedighimanesh,"Ali Sedighimanesh, Mohammad Sedighimanesh, Javad Baqeri","Cutting down energy usage in wireless sensor networks using Duty Cycle technique and multi-hop routing","19 page, 7 Figures","International Journal of Wireless & Mobile Networks. 2016;8(4):23-41",10.5121/ijwmn.2016.8402,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A wireless sensor network is composed of many sensor nodes that have been given out in a specific zone and each of them had an ability of collecting information from the environment and sending collected data to the sink. The most significant issues in wireless sensor networks despite the recent progress is the trouble of the severe limitations of energy resources. Since that in different applications of sensor nets we could throw a static or mobile sink then all aspects of such networks should be planned with an awareness of energy. One of the most significant topics related to these networks is routing. One of the most widely used and efficient methods of routing is a hierarchy (based on clustering) method. In The present study with the objective of cutting down energy consumption and persistence of network coverage we have offered a novel algorithm based on clustering algorithms and multi hop routing. To achieve this goal first we layer the network environment based on the size of the network. We will identify the optimal number of cluster heads and every cluster head based on the mechanism of topology control will start to accept members. Likewise we set the first layer as gate layer and subsequently identifying the gate nodes we would turn away half of the sensors and then stop using energy and the remaining nodes in this layer will join the gate nodes because they hold a critical part in bettering the functioning of the system. Cluster heads off following layers send the information to cluster heads in the above layer until sent data will be sent to gate nodes and finally will be sent to sink. We have tested the proposed algorithm in two situations 1) when the sink is off and 2) when a sink is on and simulation data shows that proposed algorithm has better performance in terms of the life span of a network than LEACH and E LEACH protocols. ","[{'version': 'v1', 'created': 'Thu, 8 Sep 2016 21:41:06 GMT'}]",2016-09-12,"[['Sedighimanesh', 'Ali', ''], ['Sedighimanesh', 'Mohammad', ''], ['Baqeri', 'Javad', '']]","['Wireless sensor networks', 'Lifetime', 'Hierarchical clustering', 'Hierarchical Routing', 'Cluster Topology']" 179,1302.1882,Dragan Vidakovic Novak,"Dragan Vidakovic, Olivera Nikolic and Dusko Parezanovic",Acceleration detection of large (probably) prime numbers,"8 pages, 6 figures","International Journal of UbiComp (IJU), Vol.4, No.1, January 2013",10.5121/iju.2013.4101,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In order to avoid unnecessary applications of Miller-Rabin algorithm to the number in question, we resort to trial division by a few initial prime numbers, since such a division take less time. How far we should go with such a division is the that we are trying to answer in this paper?For the theory of the matter is fully resolved. However, that in practice we do not have much use. Therefore, we present a solution that is probably irrelevant to theorists, but it is very useful to people who have spent many nights to produce large (probably) prime numbers using its own software. ","[{'version': 'v1', 'created': 'Thu, 7 Feb 2013 21:19:21 GMT'}]",2013-02-11,"[['Vidakovic', 'Dragan', ''], ['Nikolic', 'Olivera', ''], ['Parezanovic', 'Dusko', '']]","['Kryptography', 'Digital Signatures', 'RSA', 'Miller-Rabin', '(Large', 'probably) Prime Numbers']" 180,1902.00703,Giovanni Iacca Dr.,Stefano Fioravanzo and Giovanni Iacca,Evaluating MAP-Elites on Constrained Optimization Problems,,,10.1145/3319619.3321939,,cs.NE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Constrained optimization problems are often characterized by multiple constraints that, in the practice, must be satisfied with different tolerance levels. While some constraints are hard and as such must be satisfied with zero-tolerance, others may be soft, such that non-zero violations are acceptable. Here, we evaluate the applicability of MAP-Elites to ""illuminate"" constrained search spaces by mapping them into feature spaces where each feature corresponds to a different constraint. On the one hand, MAP-Elites implicitly preserves diversity, thus allowing a good exploration of the search space. On the other hand, it provides an effective visualization that facilitates a better understanding of how constraint violations correlate with the objective function. We demonstrate the feasibility of this approach on a large set of benchmark problems, in various dimensionalities, and with different algorithmic configurations. As expected, numerical results show that a basic version of MAP-Elites cannot compete on all problems (especially those with equality constraints) with state-of-the-art algorithms that use gradient information or advanced constraint handling techniques. Nevertheless, it has a higher potential at finding constraint violations vs. objectives trade-offs and providing new problem information. As such, it could be used in the future as an effective building-block for designing new constrained optimization algorithms. ","[{'version': 'v1', 'created': 'Sat, 2 Feb 2019 11:59:29 GMT'}, {'version': 'v2', 'created': 'Tue, 5 Feb 2019 08:04:27 GMT'}, {'version': 'v3', 'created': 'Thu, 4 Apr 2019 14:53:05 GMT'}, {'version': 'v4', 'created': 'Fri, 5 Apr 2019 07:09:37 GMT'}]",2020-12-21,"[['Fioravanzo', 'Stefano', ''], ['Iacca', 'Giovanni', '']]","['Constrained Optimization', 'Evolutionary Computation', 'MAP-Elites']" 181,2105.06524,Hongpeng Guo,"Hongpeng Guo, Shuochao Yao, Zhe Yang, Qian Zhou, Klara Nahrstedt","CrossRoI: Cross-camera Region of Interest Optimization for Efficient Real Time Video Analytics at Scale",accepted in 12th ACM Multimedia Systems Conference (MMsys 21'),,10.1145/3458305.3463381,,cs.DC cs.CV cs.MM cs.NI,http://creativecommons.org/licenses/by/4.0/," Video cameras are pervasively deployed in city scale for public good or community safety (i.e. traffic monitoring or suspected person tracking). However, analyzing large scale video feeds in real time is data intensive and poses severe challenges to network and computation systems today. We present CrossRoI, a resource-efficient system that enables real time video analytics at scale via harnessing the videos content associations and redundancy across a fleet of cameras. CrossRoI exploits the intrinsic physical correlations of cross-camera viewing fields to drastically reduce the communication and computation costs. CrossRoI removes the repentant appearances of same objects in multiple cameras without harming comprehensive coverage of the scene. CrossRoI operates in two phases - an offline phase to establish cross-camera correlations, and an efficient online phase for real time video inference. Experiments on real-world video feeds show that CrossRoI achieves 42% - 65% reduction for network overhead and 25% - 34% reduction for response delay in real time video analytics applications with more than 99% query accuracy, when compared to baseline methods. If integrated with SotA frame filtering systems, the performance gains of CrossRoI reach 50% - 80% (network overhead) and 33% - 61% (end-to-end delay). ","[{'version': 'v1', 'created': 'Thu, 13 May 2021 19:29:14 GMT'}]",2021-05-17,"[['Guo', 'Hongpeng', ''], ['Yao', 'Shuochao', ''], ['Yang', 'Zhe', ''], ['Zhou', 'Qian', ''], ['Nahrstedt', 'Klara', '']]","['video analytics', 'video streaming', 'convolutional neural networks']" 182,1802.08013,Daniel Tanneberg,"Daniel Tanneberg, Jan Peters, Elmar Rueckert","Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks",accepted in Neural Networks,"Volume 109, January 2019, Pages 67-80",10.1016/j.neunet.2018.10.005,,cs.AI cs.LG cs.RO stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Autonomous robots need to interact with unknown, unstructured and changing environments, constantly facing novel challenges. Therefore, continuous online adaptation for lifelong-learning and the need of sample-efficient mechanisms to adapt to changes in the environment, the constraints, the tasks, or the robot itself are crucial. In this work, we propose a novel framework for probabilistic online motion planning with online adaptation based on a bio-inspired stochastic recurrent neural network. By using learning signals which mimic the intrinsic motivation signalcognitive dissonance in addition with a mental replay strategy to intensify experiences, the stochastic recurrent network can learn from few physical interactions and adapts to novel environments in seconds. We evaluate our online planning and adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is shown by learning unknown workspace constraints sample-efficiently from few physical interactions while following given way points. ","[{'version': 'v1', 'created': 'Thu, 22 Feb 2018 12:41:06 GMT'}, {'version': 'v2', 'created': 'Tue, 23 Oct 2018 08:36:19 GMT'}]",2018-11-09,"[['Tanneberg', 'Daniel', ''], ['Peters', 'Jan', ''], ['Rueckert', 'Elmar', '']]","['Intrinsic Motivation', 'Online Learning', 'Experience Replay', 'Autonomous Robots', 'Spiking RecurrentNetworks', 'Neural Sampling']" 183,1310.2127,Shanmugapriyaa S,"S. Shanmugapriyaa, K. S. Kuppusamy, G. Aghila",BloSEn: Blog Search Engine Based On Post Concept Clustering,12 pages,"International Journal of Ambient Systems and Applications (IJASA) Vol.1, No.3, September 2013",10.5121/ijasa.2013.1302,,cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper focuses on building a blog search engine which doesn't focus only on keyword search but includes extended search capabilities. It also incorporates the blog-post concept clustering which is based on the category extracted from the blog post semantic content analysis. The proposed approach is titled as ""BloSen (Blog Search Engine)"". It involves in extracting the posts from blogs and parsing them to extract the blog elements and store them as fields in a document format. Inverted index is being built on the fields of the documents. Search is induced on the index and requested query is processed based on the documents so far made from blog posts. It currently focuses on Blogger and Wordpress hosted blogs since both these hosting services are the most popular ones in the blogosphere. The proposed BloSen model is experimented with a prototype implementation and the results of the experiments with the user's relevance cumulative metric value of 95.44% confirms the efficiency of the proposed model. ","[{'version': 'v1', 'created': 'Tue, 8 Oct 2013 13:16:30 GMT'}]",2013-10-09,"[['Shanmugapriyaa', 'S.', ''], ['Kuppusamy', 'K. S.', ''], ['Aghila', 'G.', '']]","['Blogs', 'Crawler', 'Document Parser', 'Apache Lucene', 'Inverted Index', 'Clustering']" 184,1709.04404,Huan Li,"Yujia Jin, Huan Li, Zhongzhi Zhang","Maximum matchings and minimum dominating sets in Apollonian networks and extended Tower of Hanoi graphs",,,10.1016/j.tcs.2017.08.024,,cs.DM math.CO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The Apollonian networks display the remarkable power-law and small-world properties as observed in most realistic networked systems. Their dual graphs are extended Tower of Hanoi graphs, which are obtained from the Tower of Hanoi graphs by adding a special vertex linked to all its three extreme vertices. In this paper, we study analytically maximum matchings and minimum dominating sets in Apollonian networks and their dual graph- s, both of which have found vast applications in various fields, e.g. structural controllability of complex networks. For both networks, we determine their matching number, domination number, the number of maximum matchings, as well as the number of minimum dominating sets. ","[{'version': 'v1', 'created': 'Wed, 13 Sep 2017 16:17:00 GMT'}]",2017-09-14,"[['Jin', 'Yujia', ''], ['Li', 'Huan', ''], ['Zhang', 'Zhongzhi', '']]","['Maximum matching', 'Minimum dominating set', 'Apolloniannetwork', 'Tower of Hanoi graph', 'Matching number', 'Domination number', 'Complex network']" 185,1911.11543,Shruti Jadon,"Tanvi Sahay, Ankita Mehta, Shruti Jadon",Schema Matching using Machine Learning,"7 pages, 2 figures, 2 tables",,10.1109/SPIN48934.2020.9071272,,cs.DB cs.AI cs.IR,http://creativecommons.org/licenses/by-nc-sa/4.0/," Schema Matching is a method of finding attributes that are either similar to each other linguistically or represent the same information. In this project, we take a hybrid approach at solving this problem by making use of both the provided data and the schema name to perform one to one schema matching and introduce the creation of a global dictionary to achieve one to many schema matching. We experiment with two methods of one to one matching and compare both based on their F-scores, precision, and recall. We also compare our method with the ones previously suggested and highlight differences between them. ","[{'version': 'v1', 'created': 'Sun, 24 Nov 2019 02:40:09 GMT'}]",2020-04-22,"[['Sahay', 'Tanvi', ''], ['Mehta', 'Ankita', ''], ['Jadon', 'Shruti', '']]","['Schema Matching', 'Machine Learning', 'SOM', 'EditDistance', 'One to Many Matching', 'One to One Matching']" 186,1608.08104,Fred Ngol\`e,"F. M. Ngol\`e Mboula, J.-L. Starck, K. Okumura, J. Amiaux, P. Hudelot",Constraint matrix factorization for space variant PSFs field restoration,33 pages,,10.1088/0266-5611/32/12/124001,,cs.CV astro-ph.IM,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Context: in large-scale spatial surveys, the Point Spread Function (PSF) varies across the instrument field of view (FOV). Local measurements of the PSFs are given by the isolated stars images. Yet, these estimates may not be directly usable for post-processings because of the observational noise and potentially the aliasing. Aims: given a set of aliased and noisy stars images from a telescope, we want to estimate well-resolved and noise-free PSFs at the observed stars positions, in particular, exploiting the spatial correlation of the PSFs across the FOV. Contributions: we introduce RCA (Resolved Components Analysis) which is a noise-robust dimension reduction and super-resolution method based on matrix factorization. We propose an original way of using the PSFs spatial correlation in the restoration process through sparsity. The introduced formalism can be applied to correlated data sets with respect to any euclidean parametric space. Results: we tested our method on simulated monochromatic PSFs of Euclid telescope (launch planned for 2020). The proposed method outperforms existing PSFs restoration and dimension reduction methods. We show that a coupled sparsity constraint on individual PSFs and their spatial distribution yields a significant improvement on both the restored PSFs shapes and the PSFs subspace identification, in presence of aliasing. Perspectives: RCA can be naturally extended to account for the wavelength dependency of the PSFs. ","[{'version': 'v1', 'created': 'Mon, 29 Aug 2016 15:30:25 GMT'}, {'version': 'v2', 'created': 'Tue, 30 Aug 2016 12:40:23 GMT'}, {'version': 'v3', 'created': 'Wed, 31 Aug 2016 07:10:41 GMT'}]",2016-11-04,"[['Mboula', 'F. M. Ngolè', ''], ['Starck', 'J. -L.', ''], ['Okumura', 'K.', ''], ['Amiaux', 'J.', ''], ['Hudelot', 'P.', '']]","['Dimension reduction', 'Spatial analysis', 'Super-resolution', 'Matrix factorization', 'Sparsity']" 187,1501.07800,Elias Rudberg,Emanuel H. Rubensson and Elias Rudberg,"Locality-aware parallel block-sparse matrix-matrix multiplication using the Chunks and Tasks programming model","35 pages, 14 figures",Parallel Comput. 57 (2016) 87-106,10.1016/j.parco.2016.06.005,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a method for parallel block-sparse matrix-matrix multiplication on distributed memory clusters. By using a quadtree matrix representation, data locality is exploited without prior information about the matrix sparsity pattern. A distributed quadtree matrix representation is straightforward to implement due to our recent development of the Chunks and Tasks programming model [Parallel Comput. 40, 328 (2014)]. The quadtree representation combined with the Chunks and Tasks model leads to favorable weak and strong scaling of the communication cost with the number of processes, as shown both theoretically and in numerical experiments. Matrices are represented by sparse quadtrees of chunk objects. The leaves in the hierarchy are block-sparse submatrices. Sparsity is dynamically detected by the matrix library and may occur at any level in the hierarchy and/or within the submatrix leaves. In case graphics processing units (GPUs) are available, both CPUs and GPUs are used for leaf-level multiplication work, thus making use of the full computing capacity of each node. The performance is evaluated for matrices with different sparsity structures, including examples from electronic structure calculations. Compared to methods that do not exploit data locality, our locality-aware approach reduces communication significantly, achieving essentially constant communication per node in weak scaling tests. ","[{'version': 'v1', 'created': 'Fri, 30 Jan 2015 15:15:22 GMT'}, {'version': 'v2', 'created': 'Sat, 7 Mar 2015 10:19:33 GMT'}, {'version': 'v3', 'created': 'Fri, 18 Sep 2015 14:16:35 GMT'}, {'version': 'v4', 'created': 'Mon, 27 Jun 2016 13:41:09 GMT'}]",2016-07-12,"[['Rubensson', 'Emanuel H.', ''], ['Rudberg', 'Elias', '']]","['parallel computing', 'sparse matrix-matrix multiplication', 'scalablealgorithms', 'large-scale computing', 'graphics processing units']" 188,1810.08040,Radom\'ir Hala\v{s},"Radom\'ir Hala\v{s}, Radko Mesiar, Jozef P\'ocs","Description of sup- and inf-preserving aggregation functions via families of clusters in data tables",24 pages,,10.1016/j.ins.2017.02.060,,cs.LO cs.AI math.RA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Connection between the theory of aggregation functions and formal concept analysis is discussed and studied, thus filling a gap in the literature by building a bridge between these two theories, one of them living in the world of data fusion, the second one in the area of data mining. We show how Galois connections can be used to describe an important class of aggregation functions preserving suprema, and, by duality, to describe aggregation functions preserving infima. Our discovered method gives an elegant and complete description of these classes. Also possible applications of our results within certain biclustering fuzzy FCA-based methods are discussed. ","[{'version': 'v1', 'created': 'Tue, 9 Oct 2018 07:51:48 GMT'}]",2018-10-19,"[['Halaš', 'Radomír', ''], ['Mesiar', 'Radko', ''], ['Pócs', 'Jozef', '']]","['sup-preserving aggregation function', 'bounded lattice', 'Galoisconnection']" 189,1512.05667,Emil Je\v{r}\'abek,Emil Je\v{r}\'abek,Proof complexity of intuitionistic implicational formulas,"47 pages, 1 figure; to appear in Annals of Pure and Applied Logic","Annals of Pure and Applied Logic 168 (2017), no. 1, pp. 150--190",10.1016/j.apal.2016.09.003,,cs.LO math.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We study implicational formulas in the context of proof complexity of intuitionistic propositional logic (IPC). On the one hand, we give an efficient transformation of tautologies to implicational tautologies that preserves the lengths of intuitionistic extended Frege (EF) or substitution Frege (SF) proofs up to a polynomial. On the other hand, EF proofs in the implicational fragment of IPC polynomially simulate full intuitionistic logic for implicational tautologies. The results also apply to other fragments of other superintuitionistic logics under certain conditions. In particular, the exponential lower bounds on the length of intuitionistic EF proofs by Hrube\v{s} \cite{hru:lbint}, generalized to exponential separation between EF and SF systems in superintuitionistic logics of unbounded branching by Je\v{r}\'abek \cite{ej:sfef}, can be realized by implicational tautologies. ","[{'version': 'v1', 'created': 'Thu, 17 Dec 2015 16:50:32 GMT'}, {'version': 'v2', 'created': 'Mon, 4 Jan 2016 18:06:04 GMT'}, {'version': 'v3', 'created': 'Sun, 18 Sep 2016 12:30:16 GMT'}]",2016-10-27,"[['Jeřábek', 'Emil', '']]","['proof complexity', 'intuitionistic logic', 'implicational fragment']" 190,1711.07023,Yannick Forster,"Yannick Forster, Edith Heiter, Gert Smolka",Verification of PCP-Related Computational Reductions in Coq,,,10.1007/978-3-319-94821-8_15,,cs.LO cs.FL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We formally verify several computational reductions concerning the Post correspondence problem (PCP) using the proof assistant Coq. Our verifications include a reduction of a string rewriting problem generalising the halting problem for Turing machines to PCP, and reductions of PCP to the intersection problem and the palindrome problem for context-free grammars. Interestingly, rigorous correctness proofs for some of the reductions are missing in the literature. ","[{'version': 'v1', 'created': 'Sun, 19 Nov 2017 14:15:45 GMT'}, {'version': 'v2', 'created': 'Wed, 18 Jul 2018 15:19:29 GMT'}]",2022-12-09,"[['Forster', 'Yannick', ''], ['Heiter', 'Edith', ''], ['Smolka', 'Gert', '']]","['Post Correspondence Problem', 'String Rewriting', 'Contextfree Grammars', 'Computational Reductions', 'Undecidability', 'Coq']" 191,1703.08738,Long Zhao,"Long Zhao, Fangda Han, Xi Peng, Xun Zhang, Mubbasir Kapadia, Vladimir Pavlovic, Dimitris N. Metaxas","Cartoonish sketch-based face editing in videos using identity deformation transfer","In Computers & Graphics, 2019. (12 pages, 10 figures)",,10.1016/j.cag.2019.01.004,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We address the problem of using hand-drawn sketches to create exaggerated deformations to faces in videos, such as enlarging the shape or modifying the position of eyes or mouth. This task is formulated as a 3D face model reconstruction and deformation problem. We first recover the facial identity and expressions from the video by fitting a face morphable model for each frame. At the same time, user's editing intention is recognized from input sketches as a set of facial modifications. Then a novel identity deformation algorithm is proposed to transfer these facial deformations from 2D space to the 3D facial identity directly while preserving the facial expressions. After an optional stage for further refining the 3D face model, these changes are propagated to the whole video with the modified identity. Both the user study and experimental results demonstrate that our sketching framework can help users effectively edit facial identities in videos, while high consistency and fidelity are ensured at the same time. ","[{'version': 'v1', 'created': 'Sat, 25 Mar 2017 20:33:45 GMT'}, {'version': 'v2', 'created': 'Thu, 31 May 2018 10:36:08 GMT'}, {'version': 'v3', 'created': 'Sat, 26 Jan 2019 04:05:45 GMT'}]",2019-01-29,"[['Zhao', 'Long', ''], ['Han', 'Fangda', ''], ['Peng', 'Xi', ''], ['Zhang', 'Xun', ''], ['Kapadia', 'Mubbasir', ''], ['Pavlovic', 'Vladimir', ''], ['Metaxas', 'Dimitris N.', '']]","['Video editing', 'Sketch-basedmodeling', 'Shape deformation', 'Deformation transfer', '3D morphable model']" 192,1604.07751,Rafal Kotynski,"David Pastor-Calle, Anna Pastuszczak, Michal Mikolajczyk and Rafal Kotynski",Compressive phase-only filtering at extreme compression rates,,"Opt. Commun. vol. 383, pp. 446-452, (2017)",10.1016/j.optcom.2016.09.024,,cs.CV physics.optics,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We introduce an efficient method for the reconstruction of the correlation between a compressively measured image and a phase-only filter. The proposed method is based on two properties of phase-only filtering: such filtering is a unitary circulant transform, and the correlation plane it produces is usually sparse. Thanks to these properties, phase-only filters are perfectly compatible with the framework of compressive sensing. Moreover, the lasso-based recovery algorithm is very fast when phase-only filtering is used as the compression matrix. The proposed method can be seen as a generalisation of the correlation-based pattern recognition technique, which is hereby applied directly to non-adaptively acquired compressed data. At the time of measurement, any prior knowledge of the target object for which the data will be scanned is not required. We show that images measured at extremely high compression rates may still contain sufficient information for target classification and localization, even if the compression rate is high enough, that visual recognition of the target in the reconstructed image is no longer possible. The method has been applied by us to highly undersampled measurements obtained from a single-pixel camera, with sampling based on randomly chosen Walsh-Hadamard patterns. ","[{'version': 'v1', 'created': 'Tue, 26 Apr 2016 16:49:58 GMT'}, {'version': 'v2', 'created': 'Wed, 25 May 2016 13:45:38 GMT'}, {'version': 'v3', 'created': 'Fri, 3 Jun 2016 09:46:50 GMT'}, {'version': 'v4', 'created': 'Fri, 22 Jul 2016 12:33:18 GMT'}, {'version': 'v5', 'created': 'Thu, 29 Sep 2016 06:43:39 GMT'}]",2016-09-30,"[['Pastor-Calle', 'David', ''], ['Pastuszczak', 'Anna', ''], ['Mikolajczyk', 'Michal', ''], ['Kotynski', 'Rafal', '']]","['Computational imaging', 'phase-only filter', 'smashed filter', 'single-pixel camera', 'patternrecognition']" 193,1902.01477,Nikolaj Tatti,Nikolaj Tatti,Faster way to agony: Discovering hierarchies in directed graphs,,,10.1007/978-3-662-44845-8_11,,cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Many real-world phenomena exhibit strong hierarchical structure. Consequently, in many real-world directed social networks vertices do not play equal role. Instead, vertices form a hierarchy such that the edges appear mainly from upper levels to lower levels. Discovering hierarchies from such graphs is a challenging problem that has gained attention. Formally, given a directed graph, we want to partition vertices into levels such that ideally there are only edges from upper levels to lower levels. From computational point of view, the ideal case is when the underlying directed graph is acyclic. In such case, we can partition the vertices into a hierarchy such that there are only edges from upper levels to lower edges. In practice, graphs are rarely acyclic, hence we need to penalize the edges that violate the hierarchy. One practical approach is agony, where each violating edge is penalized based on the severity of the violation. The fastest algorithm for computing agony requires $O(nm^2)$ time. In the paper we present an algorithm for computing agony that has better theoretical bound, namely $O(m^2)$. We also show that in practice the obtained bound is pessimistic and that we can use our algorithm to compute agony for large datasets. Moreover, our algorithm can be used as any-time algorithm. ","[{'version': 'v1', 'created': 'Mon, 4 Feb 2019 22:12:16 GMT'}]",2019-02-06,"[['Tatti', 'Nikolaj', '']]","['Graph mining', 'agony', 'hierarchy discovery', 'primal-dual', 'maximum eulerian subgraph']" 194,1404.0442,Kevin Carlberg,Kevin Carlberg,Adaptive $h$-refinement for reduced-order models,"submitted to the International Journal for Numerical Methods in Engineering, Special Issue on Model Reduction","International Journal for Numerical Methods in Engineering, Vol. 102, No. 5, p.1192-1210 (2014)",10.1002/nme.4800,,cs.NA math.NA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This work presents a method to adaptively refine reduced-order models \emph{a posteriori} without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive $h$-refinement: it enriches the reduced-basis space online by `splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive $k$-means clustering of the state variables using snapshot data. The method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Further, it enables the reduced-order model to satisfy \emph{any prescribed error tolerance} regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis. ","[{'version': 'v1', 'created': 'Wed, 2 Apr 2014 03:29:43 GMT'}, {'version': 'v2', 'created': 'Thu, 3 Apr 2014 04:12:34 GMT'}, {'version': 'v3', 'created': 'Fri, 18 Jul 2014 01:09:10 GMT'}]",2015-04-16,"[['Carlberg', 'Kevin', '']]","['adaptive refinement', 'h-refinement', 'model reduction', 'dual-weighted residual', 'adjoint errorestimation', 'clustering']" 195,1904.13086,Joachim Meyer,"Nir Douer, Joachim Meyer","Theoretical, Measured and Subjective Responsibility in Aided Decision Making",,"ACM Transactions on Intelligent Interactive Systems, 11(1), Article 5 (2021)",10.1145/3453938,,cs.HC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," When humans interact with intelligent systems, their causal responsibility for outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in the interaction with intelligent systems. In two laboratory experiments, participants performed a classification task. They were aided by classification systems with different capabilities. We compared the predicted theoretical responsibility values to the actual measured responsibility participants took on and to their subjective rankings of responsibility. The model predictions were strongly correlated with both measured and subjective responsibility. A bias existed only when participants with poor classification capabilities relied less-than-optimally on a system that had superior classification capabilities and assumed higher-than-optimal responsibility. The study implies that when humans interact with advanced intelligent systems, with capabilities that greatly exceed their own, their comparative causal responsibility will be small, even if formally the human is assigned major roles. Simply putting a human into the loop does not assure that the human will meaningfully contribute to the outcomes. The results demonstrate the descriptive value of the ResQu model to predict behavior and perceptions of responsibility by considering the characteristics of the human, the intelligent system, the environment and some systematic behavioral biases. The ResQu model is a new quantitative method that can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems. ","[{'version': 'v1', 'created': 'Tue, 30 Apr 2019 07:37:33 GMT'}, {'version': 'v2', 'created': 'Tue, 15 Oct 2019 07:18:58 GMT'}, {'version': 'v3', 'created': 'Wed, 29 Apr 2020 15:28:35 GMT'}]",2022-05-20,"[['Douer', 'Nir', ''], ['Meyer', 'Joachim', '']]","['Artificial intelligence (AI)', 'Human-automation interaction', 'decision making', 'responsibility', 'cognitive engineering', 'autonomous systems', 'alert systems']" 196,1407.1972,Vinoth Kumar,"S. Rajaram, A. Babu Karuppiah, K. Vinoth Kumar",Secure Routing Path Using Trust Values for Wireless Sensor Networks,"10 pages, 4 figures, International Journal on Cryptography and Information Security (IJCIS)",http://airccse.org/journal/ijcis/current2014.html,10.5121/ijcis.2014.4203,,cs.CR cs.NI,http://creativecommons.org/licenses/by/3.0/," Traditional cryptography-based security mechanisms such as authentication and authorization are not effective against insider attacks like wormhole, sinkhole, selective forwarding attacks, etc., Trust based approaches have been widely used to counter insider attacks in wireless sensor networks. It provides a quantitative way to evaluate the trustworthiness of sensor nodes. An untrustworthy node can wreak considerable damage and adversely affect the quality and reliability of data. Therefore, analysing the trust level of a node is important. In this paper we focused about indirect trust mechanism, in which each node monitors the forwarding behavior of its neighbors in order to detect any node that behaves selfishly and does not forward the packets it receives. For this, we used a link state routing protocol based indirect trusts which forms the shortest route and finds the best trustworthy route among them by comparing the values of all the calculated route trusts as for each route present in the network. And finally, we compare our work with similar routing protocols and show its advantages over them. ","[{'version': 'v1', 'created': 'Tue, 8 Jul 2014 06:50:27 GMT'}]",2014-07-09,"[['Rajaram', 'S.', ''], ['Karuppiah', 'A. Babu', ''], ['Kumar', 'K. Vinoth', '']]","['Wireless Sensor Networks (WSNs)', 'Routing', 'Benevolent Node', 'Malicious Node', 'Trust Management']" 197,1707.05943,Yao-Lung Leo Fang,Yao-Lung L. Fang,FDTD: solving 1+1D delay PDE in parallel,"Introduced two parallelization approaches along with other improvements in the presentation. Code open sourced at https://github.com/leofang/FDTD. To appear in Computer Physics Communications","Computer Physics Communications 235, 422 (2019)",10.1016/j.cpc.2018.08.018,,cs.MS cs.NA math.NA physics.comp-ph quant-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a proof of concept for solving a 1+1D complex-valued, delay partial differential equation (PDE) that emerges in the study of waveguide quantum electrodynamics (QED) by adapting the finite-difference time-domain (FDTD) method. The delay term is spatially non-local, rendering conventional approaches such as the method of lines inapplicable. We show that by properly designing the grid and by supplying the (partial) exact solution as the boundary condition, the delay PDE can be numerically solved. In addition, we demonstrate that while the delay imposes strong data dependency, multi-thread parallelization can nevertheless be applied to such a problem. Our code provides a numerically exact solution to the time-dependent multi-photon scattering problem in waveguide QED. ","[{'version': 'v1', 'created': 'Wed, 19 Jul 2017 06:06:13 GMT'}, {'version': 'v2', 'created': 'Tue, 4 Sep 2018 02:51:39 GMT'}]",2018-11-19,"[['Fang', 'Yao-Lung L.', '']]","['Waveguide QED', 'Delay PDE', 'FDTD', 'Non-Markovianity']" 198,1606.06204,Richard Barnes,Richard Barnes,"Parallel Priority-Flood Depression Filling For Trillion Cell Digital Elevation Models On Desktops Or Clusters","21 pages, 4 tables, 8 figures","Computers and Geosciences, Volume 96, November 2016, pp. 56-68",10.1016/j.cageo.2016.07.001,,cs.DC cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Algorithms for extracting hydrologic features and properties from digital elevation models (DEMs) are challenged by large datasets, which often cannot fit within a computer's RAM. Depression filling is an important preconditioning step to many of these algorithms. Here, I present a new, linearly-scaling algorithm which parallelizes the Priority-Flood depression-filling algorithm by subdividing a DEM into tiles. Using a single-producer, multi-consumer design, the new algorithm works equally well on one core, multiple cores, or multiple machines and can take advantage of large memories or cope with small ones. Unlike previous algorithms, the new algorithm guarantees a fixed number of memory access and communication events per subdivision of the DEM. In comparison testing, this results in the new algorithm running generally faster while using fewer resources than previous algorithms. For moderately sized tiles, the algorithm exhibits ~60% strong and weak scaling efficiencies up to 48 cores, and linear time scaling across datasets ranging over three orders of magnitude. The largest dataset on which I run the algorithm has 2 trillion (2*10^12) cells. With 48 cores, processing required 4.8 hours wall-time (9.3 compute-days). This test is three orders of magnitude larger than any previously performed in the literature. Complete, well-commented source code and correctness tests are available for download from a repository. ","[{'version': 'v1', 'created': 'Mon, 20 Jun 2016 16:52:12 GMT'}, {'version': 'v2', 'created': 'Mon, 15 Aug 2016 22:35:43 GMT'}]",2016-08-17,"[['Barnes', 'Richard', '']]","['parallel computing', 'hydrology', 'geographic information system (GIS)', 'pit filling', 'sink removal']" 199,1702.08903,Michael Lampis,"R\'emy Belmonte, Michael Lampis, Valia Mitsou",Defective Coloring on Classes of Perfect Graphs,,"Discrete Mathematics & Theoretical Computer Science, vol. 24, no. 1, Discrete Algorithms (January 20, 2022) dmtcs:8918",10.46298/dmtcs.4926,,cs.DS math.CO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In Defective Coloring we are given a graph $G$ and two integers $\chi_d$, $\Delta^*$ and are asked if we can $\chi_d$-color $G$ so that the maximum degree induced by any color class is at most $\Delta^*$. We show that this natural generalization of Coloring is much harder on several basic graph classes. In particular, we show that it is NP-hard on split graphs, even when one of the two parameters $\chi_d$, $\Delta^*$ is set to the smallest possible fixed value that does not trivialize the problem ($\chi_d = 2$ or $\Delta^* = 1$). Together with a simple treewidth-based DP algorithm this completely determines the complexity of the problem also on chordal graphs. We then consider the case of cographs and show that, somewhat surprisingly, Defective Coloring turns out to be one of the few natural problems which are NP-hard on this class. We complement this negative result by showing that Defective Coloring is in P for cographs if either $\chi_d$ or $\Delta^*$ is fixed; that it is in P for trivially perfect graphs; and that it admits a sub-exponential time algorithm for cographs when both $\chi_d$ and $\Delta^*$ are unbounded. ","[{'version': 'v1', 'created': 'Tue, 28 Feb 2017 18:47:57 GMT'}, {'version': 'v2', 'created': 'Fri, 26 Oct 2018 10:18:24 GMT'}, {'version': 'v3', 'created': 'Tue, 17 Nov 2020 21:06:23 GMT'}, {'version': 'v4', 'created': 'Wed, 5 Jan 2022 10:29:32 GMT'}]",2022-03-14,"[['Belmonte', 'Rémy', ''], ['Lampis', 'Michael', ''], ['Mitsou', 'Valia', '']]","['Defective Coloring', 'Split Graphs', 'Cographs']" 200,1907.12316,Eug\'enio Ribeiro,"Eug\'enio Ribeiro, Ricardo Ribeiro, and David Martins de Matos",Hierarchical Multi-Label Dialog Act Recognition on Spanish Data,"21 pages, 4 figures, 17 tables, translated version of the article published in Linguam\'atica 11(1)",Linguam\'atica 11(1) (2019) 17-40,10.21814/lm.11.1.278,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Dialog acts reveal the intention behind the uttered words. Thus, their automatic recognition is important for a dialog system trying to understand its conversational partner. The study presented in this article approaches that task on the DIHANA corpus, whose three-level dialog act annotation scheme poses problems which have not been explored in recent studies. In addition to the hierarchical problem, the two lower levels pose multi-label classification problems. Furthermore, each level in the hierarchy refers to a different aspect concerning the intention of the speaker both in terms of the structure of the dialog and the task. Also, since its dialogs are in Spanish, it allows us to assess whether the state-of-the-art approaches on English data generalize to a different language. More specifically, we compare the performance of different segment representation approaches focusing on both sequences and patterns of words and assess the importance of the dialog history and the relations between the multiple levels of the hierarchy. Concerning the single-label classification problem posed by the top level, we show that the conclusions drawn on English data also hold on Spanish data. Furthermore, we show that the approaches can be adapted to multi-label scenarios. Finally, by hierarchically combining the best classifiers for each level, we achieve the best results reported for this corpus. ","[{'version': 'v1', 'created': 'Mon, 29 Jul 2019 10:12:18 GMT'}]",2019-07-30,"[['Ribeiro', 'Eugénio', ''], ['Ribeiro', 'Ricardo', ''], ['de Matos', 'David Martins', '']]","['Dialog Act Recognition', 'Hierarchical Classification', 'Multi-Label Classification', 'DIHANA Corpus']" 201,1307.2997,Padmavathi S,"S. Padmavathi, Manojna K.S.S, S. Sphoorthy Reddy, D. Meenakshy","Conversion of Braille to Text in English, Hindi and Tamil Languages","14 pages, 20 figures, 4 tables","International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol.3, No.3, June 2013",10.5121/ijcsea.2013.3303,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The Braille system has been used by the visually impaired for reading and writing. Due to limited availability of the Braille text books an efficient usage of the books becomes a necessity. This paper proposes a method to convert a scanned Braille document to text which can be read out to many through the computer. The Braille documents are pre processed to enhance the dots and reduce the noise. The Braille cells are segmented and the dots from each cell is extracted and converted in to a number sequence. These are mapped to the appropriate alphabets of the language. The converted text is spoken out through a speech synthesizer. The paper also provides a mechanism to type the Braille characters through the number pad of the keyboard. The typed Braille character is mapped to the alphabet and spoken out. The Braille cell has a standard representation but the mapping differs for each language. In this paper mapping of English, Hindi and Tamil are considered. ","[{'version': 'v1', 'created': 'Thu, 11 Jul 2013 07:24:16 GMT'}]",2013-07-12,"[['Padmavathi', 'S.', ''], ['S', 'Manojna K. S.', ''], ['Reddy', 'S. Sphoorthy', ''], ['Meenakshy', 'D.', '']]","['Braille Conversion', 'Projection Profile', 'Tamil Braille conversion', 'Hindi Braille conversion', 'ImageSegmentation']" 202,1511.06568,Tom\'a\v{s} Dvo\v{r}\'ak,Tom\'a\v{s} Dvo\v{r}\'ak,Matchings of quadratic size extend to long cycles in hypercubes,,"Discrete Mathematics & Theoretical Computer Science, Vol. 18 no. 3, Graph Theory (September 1, 2016) dmtcs:2012",10.46298/dmtcs.1336,,cs.DM,http://creativecommons.org/licenses/by/4.0/," Ruskey and Savage in 1993 asked whether every matching in a hypercube can be extended to a Hamiltonian cycle. A positive answer is known for perfect matchings, but the general case has been resolved only for matchings of linear size. In this paper we show that there is a quadratic function $q(n)$ such that every matching in the $n$-dimensional hypercube of size at most $q(n)$ may be extended to a cycle which covers at least $\frac34$ of the vertices. ","[{'version': 'v1', 'created': 'Fri, 20 Nov 2015 12:03:20 GMT'}, {'version': 'v2', 'created': 'Sat, 23 Jan 2016 21:47:45 GMT'}, {'version': 'v3', 'created': 'Mon, 29 Aug 2016 14:04:51 GMT'}]",2021-10-04,"[['Dvořák', 'Tomáš', '']]","['Gray code', 'Hamiltonian cycle', 'hypercube', 'long cycle', 'matching', 'Ruskey and Savage problem']" 203,1811.10161,Adil Erzin I,Adil Erzin and Natalya Lagutkina,FPTAS for barrier covering problem with equal circles in 2D,,,10.1007/s11590-020-01650-8,,cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, we consider a problem of covering a straight line segment by equal circles that are initially arbitrarily placed on a plane by moving their centers on a segment or on a straight line containing a segment so that the segment is completely covered, the neighboring circles in the cover are touching each other and the total length of the paths traveled by circles is minimal. The complexity status of the problem is not known. We propose a $O(n^{2+c}/\varepsilon^2)$--time FPTAS for this problem, where $n$ is the number of circles and $c>0$ is arbitrarily small real. ","[{'version': 'v1', 'created': 'Mon, 26 Nov 2018 03:24:42 GMT'}, {'version': 'v2', 'created': 'Tue, 23 Apr 2019 04:29:10 GMT'}]",2021-01-05,"[['Erzin', 'Adil', ''], ['Lagutkina', 'Natalya', '']]","['barrier coverage', 'mobile sensors', 'FPTAS']" 204,1705.00097,Alejandro D\'iaz-Caro,Alejandro D\'iaz-Caro,"A lambda calculus for density matrices with classical and probabilistic controls","This version includes a 11-pages appendix with proofs, and a small fix in the definition of property P(b,A)",LNCS 10695:448-467 (APLAS 2017),10.1007/978-3-319-71237-6_22,,cs.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we present two flavors of a quantum extension to the lambda calculus. The first one, $\lambda_\rho$, follows the approach of classical control/quantum data, where the quantum data is represented by density matrices. We provide an interpretation for programs as density matrices and functions upon them. The second one, $\lambda_\rho^\circ$, take advantage of the density matrices presentation in order to follow the mixed trace of programs in a kind of generalised density matrix. Such a control can be seen as a weaker form of the quantum control and data approach. ","[{'version': 'v1', 'created': 'Fri, 28 Apr 2017 23:22:16 GMT'}, {'version': 'v2', 'created': 'Fri, 16 Jun 2017 21:08:02 GMT'}, {'version': 'v3', 'created': 'Sun, 20 Aug 2017 15:28:03 GMT'}, {'version': 'v4', 'created': 'Mon, 20 Nov 2017 15:58:50 GMT'}]",2017-11-21,"[['Díaz-Caro', 'Alejandro', '']]","['lambda calculus', 'quantum computing', 'density matrices', 'classical control']" 205,1109.0660,Albert Fannjiang,Albert Fannjiang and Wenjing Liao,Mismatch and resolution in compressive imaging,Figure 5 revised,,10.1117/12.892434,,cs.IT math.IT math.NA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Highly coherent sensing matrices arise in discretization of continuum problems such as radar and medical imaging when the grid spacing is below the Rayleigh threshold as well as in using highly coherent, redundant dictionaries as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have provably performance guarantee of reconstructing sparse, widely separated objects {\em independent} of the redundancy and have a sparsity constraint and computational cost similar to OMP's. Numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices. ","[{'version': 'v1', 'created': 'Sat, 3 Sep 2011 23:58:02 GMT'}, {'version': 'v2', 'created': 'Fri, 16 Sep 2011 22:23:52 GMT'}]",2015-05-30,"[['Fannjiang', 'Albert', ''], ['Liao', 'Wenjing', '']]","['Model mismatch', 'compressed sensing', 'coherence band', 'gridding error', 'redundant dictionary']" 206,2105.09461,Ayman Al-Kababji,"Ayman Al-Kababji, Abbes Amira, Faycal Bensaali, Abdulah Jarouf, Lisan Shidqi, Hamza Djelouat",An IoT-Based Framework for Remote Fall Monitoring,"30 Pages, 9 figures, 9 tables. This is a the Accepted Manuscript version of the article published in Biomedical Signal Processing and Control (URL: https://doi.org/10.1016/j.bspc.2021.102532)",,10.1016/j.bspc.2021.102532,,cs.NI cs.LG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Fall detection is a serious healthcare issue that needs to be solved. Falling without quick medical intervention would lower the chances of survival for the elderly, especially if living alone. Hence, the need is there for developing fall detection algorithms with high accuracy. This paper presents a novel IoT-based system for fall detection that includes a sensing device transmitting data to a mobile application through a cloud-connected gateway device. Then, the focus is shifted to the algorithmic aspect where multiple features are extracted from 3-axis accelerometer data taken from existing datasets. The results emphasize on the significance of Continuous Wavelet Transform (CWT) as an influential feature for determining falls. CWT, Signal Energy (SE), Signal Magnitude Area (SMA), and Signal Vector Magnitude (SVM) features have shown promising classification results using K-Nearest Neighbors (KNN) and E-Nearest Neighbors (ENN). For all performance metrics (accuracy, recall, precision, specificity, and F1 Score), the achieved results are higher than 95% for a dataset of small size, while more than 98.47% score is achieved in the aforementioned criteria over the UniMiB-SHAR dataset by the same algorithms, where the classification time for a single test record is extremely efficient and is real-time ","[{'version': 'v1', 'created': 'Wed, 10 Mar 2021 22:37:19 GMT'}]",2021-05-21,"[['Al-Kababji', 'Ayman', ''], ['Amira', 'Abbes', ''], ['Bensaali', 'Faycal', ''], ['Jarouf', 'Abdulah', ''], ['Shidqi', 'Lisan', ''], ['Djelouat', 'Hamza', '']]","['Wearable sensing device', '3-axis accelerometer', 'Feature extraction algorithm selection', 'CWT', 'Mobile']" 207,1901.06988,Daniele Ravi,"Daniele Rav\`i, Agnieszka Barbara Szczotka, Stephen P Pereira, Tom Vercauteren","Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy",Accepted for publication on Medical Image Analysis journal,,10.1016/j.media.2019.01.011,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In recent years, endomicroscopy has become increasingly used for diagnostic purposes and interventional guidance. It can provide intraoperative aids for real-time tissue characterization and can help to perform visual investigations aimed for example to discover epithelial cancers. Due to physical constraints on the acquisition process, endomicroscopy images, still today have a low number of informative pixels which hampers their quality. Post-processing techniques, such as Super-Resolution (SR), are a potential solution to increase the quality of these images. SR techniques are often supervised, requiring aligned pairs of low-resolution (LR) and high-resolution (HR) images patches to train a model. However, in our domain, the lack of HR images hinders the collection of such pairs and makes supervised training unsuitable. For this reason, we propose an unsupervised SR framework based on an adversarial deep neural network with a physically-inspired cycle consistency, designed to impose some acquisition properties on the super-resolved images. Our framework can exploit HR images, regardless of the domain where they are coming from, to transfer the quality of the HR images to the initial LR images. This property can be particularly useful in all situations where pairs of LR/HR are not available during the training. Our quantitative analysis, validated using a database of 238 endomicroscopy video sequences from 143 patients, shows the ability of the pipeline to produce convincing super-resolved images. A Mean Opinion Score (MOS) study also confirms this quantitative image quality assessment. ","[{'version': 'v1', 'created': 'Mon, 21 Jan 2019 16:23:32 GMT'}, {'version': 'v2', 'created': 'Wed, 6 Feb 2019 18:31:03 GMT'}]",2019-02-07,"[['Ravì', 'Daniele', ''], ['Szczotka', 'Agnieszka Barbara', ''], ['Pereira', 'Stephen P', ''], ['Vercauteren', 'Tom', '']]","['Deep learning', 'Probe-based confocal laser endomicroscopy', 'Unsupervised Super-resolution', 'Cycle consistency', 'Adversarial training']" 208,1201.4342,Tobias Buer,Tobias Buer and Herbert Kopfer,"A Pareto-metaheuristic for a bi-objective winner determination problem in a combinatorial reverse auction","Accepted for publication in Computers & Operations Research, available online, Computers & Operations Research, 2013","Computers & Operations Research 41 (2014), 208-220",10.1016/j.cor.2013.04.004,,cs.GT cs.AI math.OC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The bi-objective winner determination problem (2WDP-SC) of a combinatorial procurement auction for transport contracts is characterized by a set B of bundle bids, with each bundle bid b in B consisting of a bidding carrier c_b, a bid price p_b, and a set tau_b transport contracts which is a subset of the set T of tendered transport contracts. Additionally, the transport quality q_{t,c_b} is given which is expected to be realized when a transport contract t is executed by a carrier c_b. The task of the auctioneer is to find a set X of winning bids (X subset B), such that each transport contract is part of at least one winning bid, the total procurement costs are minimized, and the total transport quality is maximized. This article presents a metaheuristic approach for the 2WDP-SC which integrates the greedy randomized adaptive search procedure with a two-stage candidate component selection procedure, large neighborhood search, and self-adaptive parameter setting in order to find a competitive set of non-dominated solutions. The heuristic outperforms all existing approaches. For seven small benchmark instances, the heuristic is the sole approach that finds all Pareto-optimal solutions. For 28 out of 30 large instances, none of the existing approaches is able to compute a solution that dominates a solution found by the proposed heuristic. ","[{'version': 'v1', 'created': 'Fri, 20 Jan 2012 17:09:22 GMT'}, {'version': 'v2', 'created': 'Mon, 22 Apr 2013 12:25:42 GMT'}]",2014-06-10,"[['Buer', 'Tobias', ''], ['Kopfer', 'Herbert', '']]","['Pareto optimization', 'multi-criteria winner determination', 'combinatorial auction', 'GRASP', 'ALNS']" 209,1706.01171,Rao Muhammad Anwer,"Rao Muhammad Anwer, Fahad Shahbaz Khan, Joost van de Weijer, Matthieu Molinier, Jorma Laaksonen","Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification",To appear in ISPRS Journal of Photogrammetry and Remote Sensing,,10.1016/j.isprsjprs.2018.01.023,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The d facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Binary Patterns encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Our final combination outperforms the state-of-the-art without employing fine-tuning or ensemble of RGB network architectures. ","[{'version': 'v1', 'created': 'Mon, 5 Jun 2017 00:53:06 GMT'}, {'version': 'v2', 'created': 'Mon, 26 Mar 2018 10:27:27 GMT'}]",2018-03-28,"[['Anwer', 'Rao Muhammad', ''], ['Khan', 'Fahad Shahbaz', ''], ['van de Weijer', 'Joost', ''], ['Molinier', 'Matthieu', ''], ['Laaksonen', 'Jorma', '']]","['Remote sensing', 'Deep learning', 'Scene classification', 'Local Binary Patterns', 'Texture analysis']" 210,1709.08521,Omar Al-Harbi,Omar Al-Harbi,"Using objective words in the reviews to improve the colloquial arabic sentiment analysis","14 pages, 1 figure, International Journal on Natural Language Computing (IJNLC)","International Journal on Natural Language Computing (IJNLC) Vol. 6, No.3, June 2017",10.5121/ijnlc,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," One of the main difficulties in sentiment analysis of the Arabic language is the presence of the colloquialism. In this paper, we examine the effect of using objective words in conjunction with sentimental words on sentiment classification for the colloquial Arabic reviews, specifically Jordanian colloquial reviews. The reviews often include both sentimental and objective words, however, the most existing sentiment analysis models ignore the objective words as they are considered useless. In this work, we created two lexicons: the first includes the colloquial sentimental words and compound phrases, while the other contains the objective words associated with values of sentiment tendency based on a particular estimation method. We used these lexicons to extract sentiment features that would be training input to the Support Vector Machines (SVM) to classify the sentiment polarity of the reviews. The reviews dataset have been collected manually from JEERAN website. The results of the experiments show that the proposed approach improves the polarity classification in comparison to two baseline models, with accuracy 95.6%. ","[{'version': 'v1', 'created': 'Mon, 25 Sep 2017 14:40:28 GMT'}]",2017-09-26,"[['Al-Harbi', 'Omar', '']]","['Arabic sentiment analysis', 'opinion mining', 'colloquial Arabic language', 'colloquial Jordanian reviews']" 211,1612.05005,Ingmar Steiner,"Alexander Hewer, Stefanie Wuhrer, Ingmar Steiner, Korin Richmond","A Multilinear Tongue Model Derived from Speech Related MRI Data of the Human Vocal Tract",,Computer Speech & Language 51 (2018) 68-92,10.1016/j.csl.2018.02.001,,cs.CV,http://creativecommons.org/licenses/by/4.0/," We present a multilinear statistical model of the human tongue that captures anatomical and tongue pose related shape variations separately. The model is derived from 3D magnetic resonance imaging data of 11 speakers sustaining speech related vocal tract configurations. The extraction is performed by using a minimally supervised method that uses as basis an image segmentation approach and a template fitting technique. Furthermore, it uses image denoising to deal with possibly corrupt data, palate surface information reconstruction to handle palatal tongue contacts, and a bootstrap strategy to refine the obtained shapes. Our evaluation concludes that limiting the degrees of freedom for the anatomical and speech related variations to 5 and 4, respectively, produces a model that can reliably register unknown data while avoiding overfitting effects. Furthermore, we show that it can be used to generate a plausible tongue animation by tracking sparse motion capture data. ","[{'version': 'v1', 'created': 'Thu, 15 Dec 2016 10:31:40 GMT'}, {'version': 'v2', 'created': 'Mon, 3 Apr 2017 08:51:42 GMT'}, {'version': 'v3', 'created': 'Tue, 12 Dec 2017 16:00:02 GMT'}, {'version': 'v4', 'created': 'Fri, 13 Apr 2018 09:27:33 GMT'}, {'version': 'v5', 'created': 'Tue, 17 Apr 2018 08:16:54 GMT'}]",2018-04-18,"[['Hewer', 'Alexander', ''], ['Wuhrer', 'Stefanie', ''], ['Steiner', 'Ingmar', ''], ['Richmond', 'Korin', '']]","['tongue', 'vocal tract', 'MRI', 'statistical model', 'shape analysis']" 212,1706.03170,Amirhossein Tavanaei,Amirhossein Tavanaei and Anthony Maida,"Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals",,"Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10639",10.1007/978-3-319-70136-3_95,,cs.NE cs.SD,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Spiking neural networks (SNNs) enable power-efficient implementations due to their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN that uses unsupervised learning to extract discriminative features from speech signals, which can subsequently be used in a classifier. The architecture consists of a spiking convolutional/pooling layer followed by a fully connected spiking layer for feature discovery. The convolutional layer of leaky, integrate-and-fire (LIF) neurons represents primary acoustic features. The fully connected layer is equipped with a probabilistic spike-timing-dependent plasticity learning rule. This layer represents the discriminative features through probabilistic, LIF neurons. To assess the discriminative power of the learned features, they are used in a hidden Markov model (HMM) for spoken digit recognition. The experimental results show performance above 96% that compares favorably with popular statistical feature extraction methods. Our results provide a novel demonstration of unsupervised feature acquisition in an SNN. ","[{'version': 'v1', 'created': 'Sat, 10 Jun 2017 02:14:42 GMT'}]",2017-11-23,"[['Tavanaei', 'Amirhossein', ''], ['Maida', 'Anthony', '']]","['Bio-inspired multi-layer framework', 'spiking network', 'speech recognition', 'unsupervised feature extraction']" 213,1211.4218,Natalia Melnikova,"N. B. Melnikova, V. V. Krzhizhanovskaya, P. M. A. Sloot","Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic Calibration of Diffusivities Based on Live Sensor Data",,"Journal of Hydrology 496 (2013), pp. 154-165",10.1016/j.jhydrol.2013.05.031,,cs.CE physics.geo-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The paper describes concept and implementation details of integrating a finite element module for dike stability analysis Virtual Dike into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for simulation of porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module, to be compared with simulated pore pressure dynamics. Implementation of the module has been performed for a real-world test case - an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic diffusivities calibration for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of diffusivities. ","[{'version': 'v1', 'created': 'Sun, 18 Nov 2012 13:05:54 GMT'}, {'version': 'v2', 'created': 'Wed, 21 Nov 2012 12:40:27 GMT'}]",2014-01-30,"[['Melnikova', 'N. B.', ''], ['Krzhizhanovskaya', 'V. V.', ''], ['Sloot', 'P. M. A.', '']]","['dike stability', 'porous flow', 'diffusivity calibration', 'sensitivity analysis', 'live sensor data']" 214,1803.01686,Yuanhang Su,"Yuanhang Su, C.-C. Jay Kuo","On Extended Long Short-term Memory and Dependent Bidirectional Recurrent Neural Network",github repo: https://github.com/yuanhangsu/ELSTM-DBRNN,Neurocomputing 356 (2019): 151-161,10.1016/j.neucom.2019.04.044,,cs.LG cs.CL cs.NE stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this work, we first analyze the memory behavior in three recurrent neural networks (RNN) cells; namely, the simple RNN (SRN), the long short-term memory (LSTM) and the gated recurrent unit (GRU), where the memory is defined as a function that maps previous elements in a sequence to the current output. Our study shows that all three of them suffer rapid memory decay. Then, to alleviate this effect, we introduce trainable scaling factors that act like an attention mechanism to adjust memory decay adaptively. The new design is called the extended LSTM (ELSTM). Finally, to design a system that is robust to previous erroneous predictions, we propose a dependent bidirectional recurrent neural network (DBRNN). Extensive experiments are conducted on different language tasks to demonstrate the superiority of the proposed ELSTM and DBRNN solutions. The ELTSM has achieved up to 30% increase in the labeled attachment score (LAS) as compared to LSTM and GRU in the dependency parsing (DP) task. Our models also outperform other state-of-the-art models such as bi-attention and convolutional sequence to sequence (convseq2seq) by close to 10% in the LAS. The code is released as an open source (https://github.com/yuanhangsu/ELSTM-DBRNN) ","[{'version': 'v1', 'created': 'Tue, 27 Feb 2018 02:47:13 GMT'}, {'version': 'v2', 'created': 'Sun, 16 Sep 2018 05:43:49 GMT'}, {'version': 'v3', 'created': 'Sun, 3 Mar 2019 04:30:02 GMT'}, {'version': 'v4', 'created': 'Tue, 14 May 2019 23:26:31 GMT'}, {'version': 'v5', 'created': 'Sun, 17 Nov 2019 21:39:02 GMT'}]",2019-11-19,"[['Su', 'Yuanhang', ''], ['Kuo', 'C. -C. Jay', '']]","['recurrent neural networks', 'long short-term memory', 'gated']" 215,1501.05613,Arnaud Martin,"Jungyeul Park (IRISA), Mouna Chebbah (IRISA), Siwar Jendoubi (IRISA), Arnaud Martin (IRISA)",Second-Order Belief Hidden Markov Models,,"Belief 2014, Sep 2014, Oxford, United Kingdom. pp.284 - 293",10.1007/978-3-319-11191-9_31,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Hidden Markov Models (HMMs) are learning methods for pattern recognition. The probabilistic HMMs have been one of the most used techniques based on the Bayesian model. First-order probabilistic HMMs were adapted to the theory of belief functions such that Bayesian probabilities were replaced with mass functions. In this paper, we present a second-order Hidden Markov Model using belief functions. Previous works in belief HMMs have been focused on the first-order HMMs. We extend them to the second-order model. ","[{'version': 'v1', 'created': 'Thu, 22 Jan 2015 19:56:34 GMT'}]",2015-01-23,"[['Park', 'Jungyeul', '', 'IRISA'], ['Chebbah', 'Mouna', '', 'IRISA'], ['Jendoubi', 'Siwar', '', 'IRISA'], ['Martin', 'Arnaud', '', 'IRISA']]","['Belief functions', 'Dempster-Shafer theory', 'first-order belief HMM', 'second-order belief HMM', 'probabilistic HMM']" 216,1403.7783,Mohammed Javed,"Mohammed Javed, P. Nagabhushan, B.B. Chaudhuri","Extraction of Line Word Character Segments Directly from Run Length Compressed Printed Text Documents","IEEE Proceedings in National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG 2013)",,10.1109/NCVPRIPG.2013.6776195,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Segmentation of a text-document into lines, words and characters, which is considered to be the crucial pre-processing stage in Optical Character Recognition (OCR) is traditionally carried out on uncompressed documents, although most of the documents in real life are available in compressed form, for the reasons such as transmission and storage efficiency. However, this implies that the compressed image should be decompressed, which indents additional computing resources. This limitation has motivated us to take up research in document image analysis using compressed documents. In this paper, we think in a new way to carry out segmentation at line, word and character level in run-length compressed printed-text-documents. We extract the horizontal projection profile curve from the compressed file and using the local minima points perform line segmentation. However, tracing vertical information which leads to tracking words-characters in a run-length compressed file is not very straight forward. Therefore, we propose a novel technique for carrying out simultaneous word and character segmentation by popping out column runs from each row in an intelligent sequence. The proposed algorithms have been validated with 1101 text-lines, 1409 words and 7582 characters from a data-set of 35 noise and skew free compressed documents of Bengali, Kannada and English Scripts. ","[{'version': 'v1', 'created': 'Sun, 30 Mar 2014 16:48:31 GMT'}]",2014-04-01,"[['Javed', 'Mohammed', ''], ['Nagabhushan', 'P.', ''], ['Chaudhuri', 'B. B.', '']]","['Compressed document segmentation', 'run-length compression', 'Line word', 'character segmentation']" 217,1806.01539,Christine Michel,"Elena Codreanu (GRePS, SICAL, WSE), Christine Michel (SICAL), Marc-Eric Bobillier-Chaumond (GRePS), Olivier Vigneau (WSE)","L'acceptation et l'appropriation des ENT (Espaces Num{\'e}riques de Travail) par les enseignants du primaire",in french,"2017, 24 (1), 39p. http://sticef.org",10.23709/sticef.24.1.1,,cs.CY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This article presents an evaluation of the conditions of use of a VWE (Virtual Work Environment) by primary school teachers. To this end, we conducted two studies and used activity theory as theoretical framework. Our first study aims to assess real practices carried out with the VWE and analyzed publications' content in order to understand how users appropriate the tool. The second study describes how teachers perceive the role of the VWE in the evolution of their working prac-tices (maintaining, transforming and restricting the existent practices). These stud-ies indicate that technological appropriation is achieved through instructional and communicational uses. The acceptance of this VWE is due to its ease of use and interface adequacy to teachers and young children. ","[{'version': 'v1', 'created': 'Tue, 5 Jun 2018 08:09:57 GMT'}]",2018-06-06,"[['Codreanu', 'Elena', '', 'GRePS, SICAL, WSE'], ['Michel', 'Christine', '', 'SICAL'], ['Bobillier-Chaumond', 'Marc-Eric', '', 'GRePS'], ['Vigneau', 'Olivier', '', 'WSE']]","['• Virtual Work Environment', 'Practices', 'Uses', 'Primary Education']" 218,1611.04529,Helena Sofia Rodrigues,Helena Sofia Rodrigues and Manuel Jos\'e Fonseca,"Can information be spread as a virus? Viral Marketing as epidemiological model","Please cite this paper as: Rodrigues, Helena Sofia and Fonseca, Manuel Jos\'e (2016) . Can information be spread as a virus? Viral Marketing as epidemiological model, Mathematical Methods in the Applied Sciences, 39: 4780--4786. arXiv admin note: substantial text overlap with arXiv:1507.06986","Mathematical Methods in the Applied Sciences,39: 4780--4786, 2016",10.1002/mma.3783,,cs.SI physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In epidemiology, an epidemic is defined as the spread of an infectious disease to a large number of people in a given population within a short period of time. In the marketing context, a message is viral when it is broadly sent and received by the target market through person-to-person transmission. This specific marketing communication strategy is commonly referred as viral marketing. Due to this similarity between an epidemic and the viral marketing process and because the understanding of the critical factors to this communications strategy effectiveness remain largely unknown, the mathematical models in epidemiology are presented in this marketing specific field. In this paper, an epidemiological model SIR (Susceptible- Infected-Recovered) to study the effects of a viral marketing strategy is presented. It is made a comparison between the disease parameters and the marketing application, and Matlab simulations are performed. Finally, some conclusions are carried out and their marketing implications are exposed: interactions across the parameters suggest some recommendations to marketers, as the profitability of the investment or the need to improve the targeting criteria of the communications campaigns. ","[{'version': 'v1', 'created': 'Tue, 8 Nov 2016 15:30:00 GMT'}]",2016-11-15,"[['Rodrigues', 'Helena Sofia', ''], ['Fonseca', 'Manuel José', '']]","['viral marketing', 'word-of-mouth', 'epidemiological model', 'numerical simulations', 'infectivity', 'recovery rate', 'seed population']" 219,1905.02951,Mouhamed Abdulla Ph.D.,Mouhamed Abdulla and Zohreh Motamedi and Amjed Majeed,"Redesigning Telecommunication Engineering Courses with CDIO geared for Polytechnic Education","Proc. of the 10th Conference on Canadian Engineering Education Association (CEEA'19)",,10.24908/PCEEA.VI0.13855,,cs.CY cs.IT math.IT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Whether in chemical, civil, mechanical, electrical, or their related engineering subdisciplines, remaining up-to-date in the subject matter is crucial. However, due to the pace of technological evolution, information and communications technology (ICT) fields of study are impacted with much higher consequences. Meanwhile, the curricula of higher educational institutes are struggling to catch up to this reality. In order to remain competitive, engineering schools ought to offer ICT related courses that are at once modern, relevant and ultimately beneficial for the employability of their graduates. In this spirit, we were recently mandated by our engineering school to develop and design telecommunication courses with great emphasis on (i) technological modernity, and (ii) experiential learning. To accomplish these objectives, we utilized the conceive, design, implement and operate (CDIO) framework, a modern engineering education initiative of which Sheridan is a member. In this article, we chronicle the steps we took to streamline and modernize the curriculum by outlining an effective methodology for course design and development with CDIO. We then provide examples of course update and design using the proposed methodology and highlight the lessons learned from this systematic curriculum development endeavor. ","[{'version': 'v1', 'created': 'Wed, 8 May 2019 08:24:15 GMT'}]",2020-06-08,"[['Abdulla', 'Mouhamed', ''], ['Motamedi', 'Zohreh', ''], ['Majeed', 'Amjed', '']]","['Engineering Education', 'Engineering Design', 'Course Development', 'Applied Learning', 'CDIO']" 220,1901.10804,Samad Noeiaghdam,Samad Noeiaghdam,"Numerical approximation of modified non-linear SIR model of computer viruses",,Vol 1 No 1 (2019): Contemporary Mathematics,10.37256/cm.11201959.34-48,,cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, the non-linear modified epidemiological model of computer viruses is illustrated. For this aim, two semi-analytical methods, the differential transform method (DTM) and the Laplace-Adomian decomposition method (LADM) are applied. The numerical results are estimated for different values of iterations and compared to the results of the LADM and the homotopy analysis transform method (HATM). Also, graphs of residual errors and phase portraits of approximate solutions for $n=5,10,15$ are demonstrated. The numerical approximations show the performance of the LADM in comparison to the LADM and the HATM. ","[{'version': 'v1', 'created': 'Tue, 15 Jan 2019 07:12:32 GMT'}]",2020-01-07,"[['Noeiaghdam', 'Samad', '']]","['Non-linear Susceptible-Infected-Recovered model', 'Differential transform method', 'Laplace transformations', 'Adomian decomposition method']" 221,1607.08038,Konstantin Yakovlev S,"Aleksandr I. Panov, Konstantin Yakovlev","Behavior and path planning for the coalition of cognitive robots in smart relocation tasks","As submitted to the 4th International Conference on Robot Intelligence Technology and Applications (RiTA-2015), Bucheon, Korea, December 14-16, 2015",,10.1007/978-3-319-31293-4_1,,cs.AI cs.RO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we outline the approach of solving special type of navigation tasks for robotic systems, when a coalition of robots (agents) acts in the 2D environment, which can be modified by the actions, and share the same goal location. The latter is originally unreachable for some members of the coalition, but the common task still can be accomplished as the agents can assist each other (e.g. by modifying the environment). We call such tasks smart relocation tasks (as the can not be solved by pure path planning methods) and study spatial and behavior interaction of robots while solving them. We use cognitive approach and introduce semiotic knowledge representation - sign world model which underlines behavioral planning methodology. Planning is viewed as a recursive search process in the hierarchical state-space induced by sings with path planning signs reside on the lowest level. Reaching this level triggers path planning which is accomplished by state of the art grid-based planners focused on producing smooth paths (e.g. LIAN) and thus indirectly guarantying feasibility of that paths against agent's dynamic constraints. ","[{'version': 'v1', 'created': 'Wed, 27 Jul 2016 11:12:02 GMT'}]",2016-07-28,"[['Panov', 'Aleksandr I.', ''], ['Yakovlev', 'Konstantin', '']]","['behavior planning', 'task planning', 'coalition', 'path planning', 'sign world model', 'semiotic model', 'knowledge representation', 'LIAN']" 222,1803.02124,David Robb,"Helen Hastie, Francisco J. Chiyah Garcia, David A. Robb, Pedro Patron and Atanas Laskov",MIRIAM: A Multimodal Chat-Based Interface for Autonomous Systems,"2 pages, ICMI'17, 19th ACM International Conference on Multimodal Interaction, November 13-17 2017, Glasgow, UK",,10.1145/3136755.3143022,,cs.AI cs.HC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present MIRIAM (Multimodal Intelligent inteRactIon for Autonomous systeMs), a multimodal interface to support situation awareness of autonomous vehicles through chat-based interaction. The user is able to chat about the vehicle's plan, objectives, previous activities and mission progress. The system is mixed initiative in that it pro-actively sends messages about key events, such as fault warnings. We will demonstrate MIRIAM using SeeByte's SeeTrack command and control interface and Neptune autonomy simulator. ","[{'version': 'v1', 'created': 'Tue, 6 Mar 2018 11:33:04 GMT'}]",2018-03-07,"[['Hastie', 'Helen', ''], ['Garcia', 'Francisco J. Chiyah', ''], ['Robb', 'David A.', ''], ['Patron', 'Pedro', ''], ['Laskov', 'Atanas', '']]","['Multimodal output', 'natural language generation', 'autonomous systems']" 223,1407.2190,Shahid Alam,Shahid Alam,Is Fortran Still Relevant? Comparing Fortran with Java and C++,,"International Journal of Software Engineering & Application, pages 25-45, Volume 5, No 3, 2014",10.5121/ijsea.2014.5303,,cs.PL cs.SE,http://creativecommons.org/licenses/by-nc-sa/3.0/," This paper presents a comparative study to evaluate and compare Fortran with the two most popular programming languages Java and C++. Fortran has gone through major and minor extensions in the years 2003 and 2008. (1) How much have these extensions made Fortran comparable to Java and C++? (2) What are the differences and similarities, in supporting features like: Templates, object constructors and destructors, abstract data types and dynamic binding? These are the main questions we are trying to answer in this study. An object-oriented ray tracing application is implemented in these three languages to compare them. By using only one program we ensured there was only one set of requirements thus making the comparison homogeneous. Based on our literature survey this is the first study carried out to compare these languages by applying software metrics to the ray tracing application and comparing these results with the similarities and differences found in practice. We motivate the language implementers and compiler developers, by providing binary analysis and profiling of the application, to improve Fortran object handling and processing, and hence making it more prolific and general. This study facilitates and encourages the reader to further explore, study and use these languages more effectively and productively, especially Fortran. ","[{'version': 'v1', 'created': 'Wed, 11 Jun 2014 16:40:55 GMT'}]",2014-07-09,"[['Alam', 'Shahid', '']]","['Object-oriented programming languages', 'Comparing Languages', 'Fortran', 'Java', 'C++', 'Software Metrics']" 224,2207.10767,Prabhat Agarwal,"Prabhat Agarwal, Manisha Srivastava, Vishwakarma Singh, Charles Rosenberg",Modeling User Behavior With Interaction Networks for Spam Detection,"6 pages, 2 figures, accepted to SIGIR 2022","In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2022), pp. 2437-2442",10.1145/3477495.3531875,,cs.LG cs.IR cs.SI,http://creativecommons.org/licenses/by/4.0/," Spam is a serious problem plaguing web-scale digital platforms which facilitate user content creation and distribution. It compromises platform's integrity, performance of services like recommendation and search, and overall business. Spammers engage in a variety of abusive and evasive behavior which are distinct from non-spammers. Users' complex behavior can be well represented by a heterogeneous graph rich with node and edge attributes. Learning to identify spammers in such a graph for a web-scale platform is challenging because of its structural complexity and size. In this paper, we propose SEINE (Spam DEtection using Interaction NEtworks), a spam detection model over a novel graph framework. Our graph simultaneously captures rich users' details and behavior and enables learning on a billion-scale graph. Our model considers neighborhood along with edge types and attributes, allowing it to capture a wide range of spammers. SEINE, trained on a real dataset of tens of millions of nodes and billions of edges, achieves a high performance of 80% recall with 1% false positive rate. SEINE achieves comparable performance to the state-of-the-art techniques on a public dataset while being pragmatic to be used in a large-scale production system. ","[{'version': 'v1', 'created': 'Thu, 21 Jul 2022 21:34:56 GMT'}]",2022-07-25,"[['Agarwal', 'Prabhat', ''], ['Srivastava', 'Manisha', ''], ['Singh', 'Vishwakarma', ''], ['Rosenberg', 'Charles', '']]","['Heterogeneous Graph Neural Networks', 'Spam', 'Machine Learning']" 225,1912.05170,"G\""orkem Algan","G\""orkem Algan, Ilkay Ulusoy","Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey",,,10.1016/j.knosys.2021.106771,,cs.LG cs.CV stat.ML,http://creativecommons.org/licenses/by-nc-nd/4.0/," Image classification systems recently made a giant leap with the advancement of deep neural networks. However, these systems require an excessive amount of labeled data to be adequately trained. Gathering a correctly annotated dataset is not always feasible due to several factors, such as the expensiveness of the labeling process or difficulty of correctly classifying data, even for the experts. Because of these practical challenges, label noise is a common problem in real-world datasets, and numerous methods to train deep neural networks with label noise are proposed in the literature. Although deep neural networks are known to be relatively robust to label noise, their tendency to overfit data makes them vulnerable to memorizing even random noise. Therefore, it is crucial to consider the existence of label noise and develop counter algorithms to fade away its adverse effects to train deep neural networks efficiently. Even though an extensive survey of machine learning techniques under label noise exists, the literature lacks a comprehensive survey of methodologies centered explicitly around deep learning in the presence of noisy labels. This paper aims to present these algorithms while categorizing them into one of the two subgroups: noise model based and noise model free methods. Algorithms in the first group aim to estimate the noise structure and use this information to avoid the adverse effects of noisy labels. Differently, methods in the second group try to come up with inherently noise robust algorithms by using approaches like robust losses, regularizers or other learning paradigms. ","[{'version': 'v1', 'created': 'Wed, 11 Dec 2019 08:26:57 GMT'}, {'version': 'v2', 'created': 'Fri, 5 Jun 2020 09:56:01 GMT'}, {'version': 'v3', 'created': 'Mon, 11 Jan 2021 08:50:51 GMT'}]",2021-01-19,"[['Algan', 'Görkem', ''], ['Ulusoy', 'Ilkay', '']]","['deep learning', 'label noise', 'classification with noise', 'noise robust', 'noise tolerant']" 226,1007.5165,Secretary Iju,"R. Shankar (1), Timothy Rajkumar.K (2) and P.Dananjayan (2) ((1) Sri Manakula Vinayagar Engineering College and (2) Pondicherry Engineering College, India)","Security Enhancement With Optimal QOS Using EAP-AKA In Hybrid Coupled 3G-WLAN Convergence Network","12 pages, 5 figures",International Journal Of UbiComp 1.3 (2010) 31-42,10.5121/iju.2010.1303,,cs.NI,http://creativecommons.org/licenses/by-nc-sa/3.0/," The third generation partnership project (3GPP) has addressed the feasibility of interworking and specified the interworking architecture and security architecture for third generation (3G)-wireless local area network (WLAN), it is developing, system architecture evolution (SAE)/ long term evolution (LTE) architecture, for the next generation mobile communication system. To provide a secure 3G-WLAN interworking in the SAE/LTE architecture, Extensible authentication protocol-authentication and key agreement (EAP-AKA) is used. However, EAP-AKA have several vulnerabilities. Therefore, this paper not only analyses the threats and attacks in 3G-WLAN interworking but also proposes a new authentication and key agreement protocol based on EAP-AKA. The proposed protocol combines elliptic curve Diffie-Hellman (ECDH) with symmetric key cryptosystem to overcome the vulnerabilities. The proposed protocol is used in hybrid coupled 3G-WLAN convergence network to analyse its efficiency in terms of QoS metrics, the results obtained using OPNET 14.5 shows that the proposed protocol outperforms existing interworking protocols both in security and QoS. ","[{'version': 'v1', 'created': 'Thu, 29 Jul 2010 09:39:23 GMT'}]",2010-07-30,"[['Shankar', 'R.', ''], ['K', 'Timothy Rajkumar.', ''], ['Dananjayan', 'P.', '']]","['3G-WLAN', 'Convergence Network', 'EAP-AKA', 'Security', 'QoS']" 227,1210.2640,Eric Eaton,"Eric Eaton, Marie desJardins, Sara Jacob","Multi-view constrained clustering with an incomplete mapping between views",,"Knowledge and Information Systems 38(1): 231-257, 2014",10.1007/s10115-012-0577-7,,cs.LG cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Multi-view learning algorithms typically assume a complete bipartite mapping between the different views in order to exchange information during the learning process. However, many applications provide only a partial mapping between the views, creating a challenge for current methods. To address this problem, we propose a multi-view algorithm based on constrained clustering that can operate with an incomplete mapping. Given a set of pairwise constraints in each view, our approach propagates these constraints using a local similarity measure to those instances that can be mapped to the other views, allowing the propagated constraints to be transferred across views via the partial mapping. It uses co-EM to iteratively estimate the propagation within each view based on the current clustering model, transfer the constraints across views, and then update the clustering model. By alternating the learning process between views, this approach produces a unified clustering model that is consistent with all views. We show that this approach significantly improves clustering performance over several other methods for transferring constraints and allows multi-view clustering to be reliably applied when given a limited mapping between the views. Our evaluation reveals that the propagated constraints have high precision with respect to the true clusters in the data, explaining their benefit to clustering performance in both single- and multi-view learning scenarios. ","[{'version': 'v1', 'created': 'Tue, 9 Oct 2012 15:25:01 GMT'}]",2014-11-03,"[['Eaton', 'Eric', ''], ['desJardins', 'Marie', ''], ['Jacob', 'Sara', '']]","['constrained clustering', 'multi-view learning', 'semi-supervised learning']" 228,1202.5820,Zi-Ke Zhang Mr.,"Zi-Ke Zhang, Tao Zhou, Yi-Cheng Zhang",Tag-Aware Recommender Systems: A State-of-the-art Survey,"19 pages, 3 figures",Journal of Computer Science and Technology 26 (2011) 767,10.1007/s11390-011-0176-1,,cs.IR cs.SI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In the past decade, Social Tagging Systems have attracted increasing attention from both physical and computer science communities. Besides the underlying structure and dynamics of tagging systems, many efforts have been addressed to unify tagging information to reveal user behaviors and preferences, extract the latent semantic relations among items, make recommendations, and so on. Specifically, this article summarizes recent progress about tag-aware recommender systems, emphasizing on the contributions from three mainstream perspectives and approaches: network-based methods, tensor-based methods, and the topic-based methods. Finally, we outline some other tag-related works and future challenges of tag-aware recommendation algorithms. ","[{'version': 'v1', 'created': 'Mon, 27 Feb 2012 03:37:14 GMT'}]",2012-02-28,"[['Zhang', 'Zi-Ke', ''], ['Zhou', 'Tao', ''], ['Zhang', 'Yi-Cheng', '']]","['social tagging systems', 'tag-aware recommendation', 'network-based', 'tensor-based', 'topicbased methods']" 229,1803.06259,"Alexander Spr\""owitz","Alexander Spr\""owitz, Alexandre Tuleu, Mostafa Ajallooeian, Massimo Vespignani, Rico Moeckel, Peter Eckert, Michiel D'Haene, Jonas Degrave, Arne Nordmann, Benjamin Schrauwen, Jochen Steil, and Auke Jan Ijspeert","Oncilla robot: a versatile open-source quadruped research robot with compliant pantograph legs",,Front. Robot. AI 5:67 (2018),10.3389/frobt.2018.00067,,cs.RO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present Oncilla robot, a novel mobile, quadruped legged locomotion machine. This large-cat sized, 5.1 robot is one of a kind of a recent, bioinspired legged robot class designed with the capability of model-free locomotion control. Animal legged locomotion in rough terrain is clearly shaped by sensor feedback systems. Results with Oncilla robot show that agile and versatile locomotion is possible without sensory signals to some extend, and tracking becomes robust when feedback control is added (Ajaoolleian 2015). By incorporating mechanical and control blueprints inspired from animals, and by observing the resulting robot locomotion characteristics, we aim to understand the contribution of individual components. Legged robots have a wide mechanical and control design parameter space, and a unique potential as research tools to investigate principles of biomechanics and legged locomotion control. But the hardware and controller design can be a steep initial hurdle for academic research. To facilitate the easy start and development of legged robots, Oncilla-robot's blueprints are available through open-source. [...] ","[{'version': 'v1', 'created': 'Fri, 16 Mar 2018 14:59:41 GMT'}, {'version': 'v2', 'created': 'Sat, 16 Jun 2018 07:43:16 GMT'}]",2018-09-11,"[['Spröwitz', 'Alexander', ''], ['Tuleu', 'Alexandre', ''], ['Ajallooeian', 'Mostafa', ''], ['Vespignani', 'Massimo', ''], ['Moeckel', 'Rico', ''], ['Eckert', 'Peter', ''], [""D'Haene"", 'Michiel', ''], ['Degrave', 'Jonas', ''], ['Nordmann', 'Arne', ''], ['Schrauwen', 'Benjamin', ''], ['Steil', 'Jochen', ''], ['Ijspeert', 'Auke Jan', '']]","['quadruped', 'robot', 'pantograph', 'open-source', 'multiple gaits', 'open-loop', 'pattern generator', 'turning']" 230,1711.08336,Eli (Omid) David,"Eli David, Nathan S. Netanyahu","DeepSign: Deep Learning for Automatic Malware Signature Generation and Classification",,"International Joint Conference on Neural Networks (IJCNN), pages 1-8, Killarney, Ireland, July 2015",10.1109/IJCNN.2015.7280815,,cs.CR cs.LG cs.NE stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper presents a novel deep learning based method for automatic malware signature generation and classification. The method uses a deep belief network (DBN), implemented with a deep stack of denoising autoencoders, generating an invariant compact representation of the malware behavior. While conventional signature and token based methods for malware detection do not detect a majority of new variants for existing malware, the results presented in this paper show that signatures generated by the DBN allow for an accurate classification of new malware variants. Using a dataset containing hundreds of variants for several major malware families, our method achieves 98.6% classification accuracy using the signatures generated by the DBN. The presented method is completely agnostic to the type of malware behavior that is logged (e.g., API calls and their parameters, registry entries, websites and ports accessed, etc.), and can use any raw input from a sandbox to successfully train the deep neural network which is used to generate malware signatures. ","[{'version': 'v1', 'created': 'Tue, 21 Nov 2017 07:22:58 GMT'}, {'version': 'v2', 'created': 'Thu, 23 Nov 2017 16:27:18 GMT'}]",2017-11-27,"[['David', 'Eli', ''], ['Netanyahu', 'Nathan S.', '']]","['Deep Learning', 'Deep Belief Network', 'Autoencoders', 'Malware', 'Automatic Signature Generation']" 231,1402.2409,Christoph Koutschan,"Shaoshi Chen, Manuel Kauers, Christoph Koutschan",A Generalized Apagodu-Zeilberger Algorithm,,"Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), pages 107-114, 2014. ACM, New York, USA, ISBN 978-1-4503-2501-1",10.1145/2608628.2608641,,cs.SC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The Apagodu-Zeilberger algorithm can be used for computing annihilating operators for definite sums over hypergeometric terms, or for definite integrals over hyperexponential functions. In this paper, we propose a generalization of this algorithm which is applicable to arbitrary $\partial$-finite functions. In analogy to the hypergeometric case, we introduce the notion of proper $\partial$-finite functions. We show that the algorithm always succeeds for these functions, and we give a tight a priori bound for the order of the output operator. ","[{'version': 'v1', 'created': 'Tue, 11 Feb 2014 09:35:59 GMT'}, {'version': 'v2', 'created': 'Tue, 22 Apr 2014 15:03:50 GMT'}, {'version': 'v3', 'created': 'Sat, 2 Aug 2014 19:32:33 GMT'}]",2014-08-05,"[['Chen', 'Shaoshi', ''], ['Kauers', 'Manuel', ''], ['Koutschan', 'Christoph', '']]","['Symbolic summation', 'symbolic integration', '∂-finite function', 'holonomic function', 'Ore algebra', 'creative telescoping']" 232,1810.08218,Luciano A. Romero Calla,"Luciano A. Romero Calla and Lizeth J. Fuentes Perez and Anselmo A. Montenegro","A minimalistic approach for fast computation of geodesic distances on triangular meshes",Preprint submitted to Computers & Graphics,,10.1016/j.cag.2019.08.014,,cs.CG,http://creativecommons.org/licenses/by-nc-sa/4.0/," The computation of geodesic distances is an important research topic in Geometry Processing and 3D Shape Analysis as it is a basic component of many methods used in these areas. In this work, we present a minimalistic parallel algorithm based on front propagation to compute approximate geodesic distances on meshes. Our method is practical and simple to implement and does not require any heavy pre-processing. The convergence of our algorithm depends on the number of discrete level sets around the source points from which distance information propagates. To appropriately implement our method on GPUs taking into account memory coalescence problems, we take advantage of a graph representation based on a breadth-first search traversal that works harmoniously with our parallel front propagation approach. We report experiments that show how our method scales with the size of the problem. We compare the mean error and processing time obtained by our method with such measures computed using other methods. Our method produces results in competitive times with almost the same accuracy, especially for large meshes. We also demonstrate its use for solving two classical geometry processing problems: the regular sampling problem and the Voronoi tessellation on meshes. ","[{'version': 'v1', 'created': 'Thu, 18 Oct 2018 18:01:13 GMT'}, {'version': 'v2', 'created': 'Tue, 16 Apr 2019 16:33:01 GMT'}, {'version': 'v3', 'created': 'Sun, 23 Jun 2019 02:42:30 GMT'}, {'version': 'v4', 'created': 'Wed, 31 Jul 2019 08:06:00 GMT'}, {'version': 'v5', 'created': 'Fri, 23 Aug 2019 21:38:42 GMT'}]",2019-09-24,"[['Calla', 'Luciano A. Romero', ''], ['Perez', 'Lizeth J. Fuentes', ''], ['Montenegro', 'Anselmo A.', '']]","['Geodesic distance', 'Fast marching', 'Triangular meshes', 'Parallel programming', 'Breadth-first search']" 233,1705.09797,Abdulhakeem Mohammed,Feodor F. Dragan and Abdulhakeem Mohammed,Slimness of graphs,,"Discrete Mathematics & Theoretical Computer Science, Vol. 21 no. 3 , Graph Theory (March 4, 2019) dmtcs:5245",10.23638/DMTCS-21-3-10,,cs.DM math.CO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Slimness of a graph measures the local deviation of its metric from a tree metric. In a graph $G=(V,E)$, a geodesic triangle $\bigtriangleup(x,y,z)$ with $x, y, z\in V$ is the union $P(x,y) \cup P(x,z) \cup P(y,z)$ of three shortest paths connecting these vertices. A geodesic triangle $\bigtriangleup(x,y,z)$ is called $\delta$-slim if for any vertex $u\in V$ on any side $P(x,y)$ the distance from $u$ to $P(x,z) \cup P(y,z)$ is at most $\delta$, i.e. each path is contained in the union of the $\delta$-neighborhoods of two others. A graph $G$ is called $\delta$-slim, if all geodesic triangles in $G$ are $\delta$-slim. The smallest value $\delta$ for which $G$ is $\delta$-slim is called the slimness of $G$. In this paper, using the layering partition technique, we obtain sharp bounds on slimness of such families of graphs as (1) graphs with cluster-diameter $\Delta(G)$ of a layering partition of $G$, (2) graphs with tree-length $\lambda$, (3) graphs with tree-breadth $\rho$, (4) $k$-chordal graphs, AT-free graphs and HHD-free graphs. Additionally, we show that the slimness of every 4-chordal graph is at most 2 and characterize those 4-chordal graphs for which the slimness of every of its induced subgraph is at most 1. ","[{'version': 'v1', 'created': 'Sat, 27 May 2017 09:44:44 GMT'}, {'version': 'v2', 'created': 'Wed, 7 Feb 2018 17:33:05 GMT'}, {'version': 'v3', 'created': 'Wed, 14 Feb 2018 00:17:47 GMT'}, {'version': 'v4', 'created': 'Thu, 28 Feb 2019 20:49:25 GMT'}]",2019-11-20,"[['Dragan', 'Feodor F.', ''], ['Mohammed', 'Abdulhakeem', '']]","['Metric tree-like structures', 'Slimness', 'Hyperbolicity', 'Layering Partition', 'Tree-length', 'Chordality']" 234,1409.2352,Bernhard Rumpe,"Shahar Maoz, Jan Oliver Ringert, Bernhard Rumpe",ADDiff: Semantic Differencing for Activity Diagrams,"11 pages, 9 figures","Proc. Euro. Soft. Eng. Conf. and SIGSOFT Symp. on the Foundations of Soft. Eng. (ESEC/FSE'11), pp. 179-189, ACM, 2011",,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Activity diagrams (ADs) have recently become widely used in the modeling of workflows, business processes, and web-services, where they serve various purposes, from documentation, requirement definitions, and test case specifications, to simulation and code generation. As models, programs, and systems evolve over time, understanding changes and their impact is an important challenge, which has attracted much research efforts in recent years. In this paper we present addiff, a semantic differencing operator for ADs. Unlike most existing approaches to model comparison, which compare the concrete or the abstract syntax of two given diagrams and output a list of syntactical changes or edit operations, addiff considers the Semantics of the diagrams at hand and outputs a set of diff witnesses, each of which is an execution trace that is possible in the first AD and is not possible in the second. We motivate the use of addiff, formally define it, and show two algorithms to compute it, a concrete forward-search algorithm and a symbolic xpoint algorithm, implemented using BDDs and integrated into the Eclipse IDE. Empirical results and examples demonstrate the feasibility and unique contribution of addiff to the state-of-the-art in version comparison and evolution analysis. ","[{'version': 'v1', 'created': 'Mon, 8 Sep 2014 14:14:56 GMT'}]",2014-09-09,"[['Maoz', 'Shahar', ''], ['Ringert', 'Jan Oliver', ''], ['Rumpe', 'Bernhard', '']]","['software evolution', 'activity diagrams', 'differencing']" 235,1701.01061,Samuel Weiser,Samuel Weiser and Mario Werner,SGXIO: Generic Trusted I/O Path for Intel SGX,To appear in CODASPY'16,,10.1145/3029806.3029822,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Application security traditionally strongly relies upon security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising security of applications as well. To overcome this dependency, Intel introduced SGX, which allows to protect application code against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices. This work presents SGXIO, a generic trusted path architecture for SGX, allowing user applications to run securely on top of an untrusted OS, while at the same time supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO surpasses traditional use cases in cloud computing and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and likewise. It is compatible to unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community to which SGX is readily available. ","[{'version': 'v1', 'created': 'Wed, 4 Jan 2017 16:17:23 GMT'}]",2017-01-05,"[['Weiser', 'Samuel', ''], ['Werner', 'Mario', '']]","['Trusted Path', 'SGX', 'Software Guard Extensions', 'SecureExecution', 'Hypervisor']" 236,1908.03505,David Semedo,"Gon\c{c}alo Marcelino, David Semedo, Andr\'e Mour\~ao, Saverio Blasi, Marta Mrak, Jo\~ao Magalh\~aes",A Benchmark of Visual Storytelling in Social Media,To appear in ACM ICMR 2019,,10.1145/3323873.3325047,,cs.MM cs.SI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Media editors in the newsroom are constantly pressed to provide a ""like-being there"" coverage of live events. Social media provides a disorganised collection of images and videos that media professionals need to grasp before publishing their latest news updated. Automated news visual storyline editing with social media content can be very challenging, as it not only entails the task of finding the right content but also making sure that news content evolves coherently over time. To tackle these issues, this paper proposes a benchmark for assessing social media visual storylines. The SocialStories benchmark, comprised by total of 40 curated stories covering sports and cultural events, provides the experimental setup and introduces novel quantitative metrics to perform a rigorous evaluation of visual storytelling with social media data. ","[{'version': 'v1', 'created': 'Fri, 9 Aug 2019 15:51:33 GMT'}]",2019-08-12,"[['Marcelino', 'Gonçalo', ''], ['Semedo', 'David', ''], ['Mourão', 'André', ''], ['Blasi', 'Saverio', ''], ['Mrak', 'Marta', ''], ['Magalhães', 'João', '']]","['Storytelling', 'social media', 'benchmark']" 237,1608.07934,Hadi Zare,Hadi Zare and Mojtaba Niazi,Relevant based structure learning for feature selection,"29 pages, 11 figures",Eng. Appl. Artif. Intel. 55 (2016) 93-102,10.1016/j.engappai.2016.06.001,,cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Feature selection is an important task in many problems occurring in pattern recognition, bioinformatics, machine learning and data mining applications. The feature selection approach enables us to reduce the computation burden and the falling accuracy effect of dealing with huge number of features in typical learning problems. There is a variety of techniques for feature selection in supervised learning problems based on different selection metrics. In this paper, we propose a novel unified framework for feature selection built on the graphical models and information theoretic tools. The proposed approach exploits the structure learning among features to select more relevant and less redundant features to the predictive modeling problem according to a primary novel likelihood based criterion. In line with the selection of the optimal subset of features through the proposed method, it provides us the Bayesian network classifier without the additional cost of model training on the selected subset of features. The optimal properties of our method are established through empirical studies and computational complexity analysis. Furthermore the proposed approach is evaluated on a bunch of benchmark datasets based on the well-known classification algorithms. Extensive experiments confirm the significant improvement of the proposed approach compared to the earlier works. ","[{'version': 'v1', 'created': 'Mon, 29 Aug 2016 07:21:20 GMT'}]",2016-08-30,"[['Zare', 'Hadi', ''], ['Niazi', 'Mojtaba', '']]","['Feature selection', 'Supervised learning', 'Relevant features', 'Mutual information', 'Structure learning', 'Graphical models']" 238,1911.12982,Xuewen Shi,"Xuewen Shi, Heyan Huang, Ping Jian, Yuhang Guo, Xiaochi Wei, Yi-Kun Tang",Neural Chinese Word Segmentation as Sequence to Sequence Translation,"In proceedings of SMP 2017 (Chinese National Conference on Social Media Processing)",,10.1007/978-981-10-6805-8_8,,cs.CL,http://creativecommons.org/licenses/by/4.0/," Recently, Chinese word segmentation (CWS) methods using neural networks have made impressive progress. Most of them regard the CWS as a sequence labeling problem which construct models based on local features rather than considering global information of input sequence. In this paper, we cast the CWS as a sequence translation problem and propose a novel sequence-to-sequence CWS model with an attention-based encoder-decoder framework. The model captures the global information from the input and directly outputs the segmented sequence. It can also tackle other NLP tasks with CWS jointly in an end-to-end mode. Experiments on Weibo, PKU and MSRA benchmark datasets show that our approach has achieved competitive performances compared with state-of-the-art methods. Meanwhile, we successfully applied our proposed model to jointly learning CWS and Chinese spelling correction, which demonstrates its applicability of multi-task fusion. ","[{'version': 'v1', 'created': 'Fri, 29 Nov 2019 07:22:01 GMT'}]",2019-12-02,"[['Shi', 'Xuewen', ''], ['Huang', 'Heyan', ''], ['Jian', 'Ping', ''], ['Guo', 'Yuhang', ''], ['Wei', 'Xiaochi', ''], ['Tang', 'Yi-Kun', '']]","['Chinese word segmentation', 'sequence-to-sequence', 'Chinesespelling correction', 'natural language processing']" 239,1212.5440,Zungeru Adamu Murtala,A. M. Zungeru,Development of an Anti-collision Model for Vehicles,"14 pages, 14 figures, Journal paper","International Journal of Embedded Systems and Applications (IJESA), vol. 2(4), pp. 21-34, 2012",10.5121/ijesa.2012.2402,,cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The Anti Collision device is a detection device meant to be incorporated into cars for the purpose of safety. As opposed to the anti collision devices present in the market today, this system is not designed to control the vehicle. Instead, it serves as an alert in the face of imminent collision. The device is intended to find a way to implement a minimum spacing for cars in traffic in an affordable way. It would also achieve safety for the passengers of a moving car. The device is made up of an infrared transmitter and receiver. Also incorporated into it is an audio visual alarm to work in with the receiver and effectively alert the driver and/or the passengers. To achieve this design, 555 timers coupled both as astable and monostable circuits were used along with a 38 KHz Square Pulse generator. The device works by sending out streams of infrared radiation and when these rays are seen by the other equipped vehicle, both are meant to take the necessary precaution to avert a collision. The device would still sound an alarm even though it is not receiving infrared beams from the oncoming vehicle. This is due to reflection of its own infrared beams. At the end of the design and testing process, overall system was implemented with a constructed work, tested working and perfectly functional. ","[{'version': 'v1', 'created': 'Fri, 21 Dec 2012 13:58:27 GMT'}]",2012-12-24,"[['Zungeru', 'A. M.', '']]","['Embedded Systems', 'Control System', 'Vehicle Automation', 'Anti-Collision', 'Electronic Circuit Design']" 240,1612.06454,Henrique Morimitsu,"Henrique Morimitsu, Isabelle Bloch and Roberto M. Cesar-Jr","Exploring Structure for Long-Term Tracking of Multiple Objects in Sports Videos","This version corresponds to the preprint of the paper accepted for CVIU",,10.1016/j.cviu.2016.12.003,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, we propose a novel approach for exploiting structural relations to track multiple objects that may undergo long-term occlusion and abrupt motion. We use a model-free approach that relies only on annotations given in the first frame of the video to track all the objects online, i.e. without knowledge from future frames. We initialize a probabilistic Attributed Relational Graph (ARG) from the first frame, which is incrementally updated along the video. Instead of using the structural information only to evaluate the scene, the proposed approach considers it to generate new tracking hypotheses. In this way, our method is capable of generating relevant object candidates that are used to improve or recover the track of lost objects. The proposed method is evaluated on several videos of table tennis, volleyball, and on the ACASVA dataset. The results show that our approach is very robust, flexible and able to outperform other state-of-the-art methods in sports videos that present structural patterns. ","[{'version': 'v1', 'created': 'Mon, 19 Dec 2016 23:14:26 GMT'}]",2016-12-21,"[['Morimitsu', 'Henrique', ''], ['Bloch', 'Isabelle', ''], ['Cesar-Jr', 'Roberto M.', '']]","['Multi-object tracking', 'Structural information', 'Particle filter', 'Graph']" 241,1810.08678,Zhenpeng Zhou,"Zhenpeng Zhou, Steven Kearnes, Li Li, Richard N. Zare, and Patrick Riley",Optimization of Molecules via Deep Reinforcement Learning,,,10.1038/s41598-019-47148-x,,cs.LG cs.AI stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a framework, which we call Molecule Deep $Q$-Networks (MolDQN), for molecule optimization by combining domain knowledge of chemistry and state-of-the-art reinforcement learning techniques (double $Q$-learning and randomized value functions). We directly define modifications on molecules, thereby ensuring 100\% chemical validity. Further, we operate without pre-training on any dataset to avoid possible bias from the choice of that set. Inspired by problems faced during medicinal chemistry lead optimization, we extend our model with multi-objective reinforcement learning, which maximizes drug-likeness while maintaining similarity to the original molecule. We further show the path through chemical space to achieve optimization for a molecule to understand how the model works. ","[{'version': 'v1', 'created': 'Fri, 19 Oct 2018 20:23:44 GMT'}, {'version': 'v2', 'created': 'Tue, 23 Oct 2018 05:28:46 GMT'}, {'version': 'v3', 'created': 'Fri, 1 Mar 2019 01:46:11 GMT'}]",2020-06-22,"[['Zhou', 'Zhenpeng', ''], ['Kearnes', 'Steven', ''], ['Li', 'Li', ''], ['Zare', 'Richard N.', ''], ['Riley', 'Patrick', '']]","['Molecule Optimization', 'Reinforcement Learning', 'Learning from']" 242,1211.4840,Mohamed Farag,Mohamed Farag,"Multicore Dynamic Kernel Modules Attachment Technique for Kernel Performance Enhancement","13 pages, International Journal of Computer Science & Information Technology (IJCSIT) Vol 4, No 4, August 2012",,10.5121/ijcsit.2012.4405,,cs.OS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Traditional monolithic kernels dominated kernel structures for long time along with small sized kernels,few hardware companies and limited kernel functionalities. Monolithic kernel structure was not applicable when the number of hardware companies increased and kernel services consumed by different users for many purposes. One of the biggest disadvantages of the monolithic kernels is the inflexibility due to the need to include all the available modules in kernel compilation causing high time consuming. Lately, new kernel structure was introduced through multicore operating systems. Unfortunately, many multicore operating systems such as barrelfish and FOS are experimental. This paper aims to simulate the performance of multicore hybrid kernels through dynamic kernel module customized attachment/ deattachment for multicore machines. In addition, this paper proposes a new technique for loading dynamic kernel modules based on the user needs and machine capabilities. ","[{'version': 'v1', 'created': 'Tue, 20 Nov 2012 19:38:00 GMT'}]",2012-11-21,"[['Farag', 'Mohamed', '']]","['Multicore', 'Kernel', 'Dynamic Module', 'UNIX', 'Linux']" 243,1503.08294,Giacomo Parigi,"Giacomo Parigi, Angelo Stramieri, Danilo Pau, Marco Piastra","A Multi-signal Variant for the GPU-based Parallelization of Growing Self-Organizing Networks",17 pages,"Informatics in Control, Automation and Robotics - 9th International Conference, ICINCO 2012 Rome, Italy, July 28-31, 2012 Revised Selected Papers. Part I, pp. 83-100",10.1007/978-3-319-03500-0_6,,cs.DC cs.NE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Among the many possible approaches for the parallelization of self-organizing networks, and in particular of growing self-organizing networks, perhaps the most common one is producing an optimized, parallel implementation of the standard sequential algorithms reported in the literature. In this paper we explore an alternative approach, based on a new algorithm variant specifically designed to match the features of the large-scale, fine-grained parallelism of GPUs, in which multiple input signals are processed at once. Comparative tests have been performed, using both parallel and sequential implementations of the new algorithm variant, in particular for a growing self-organizing network that reconstructs surfaces from point clouds. The experimental results show that this approach allows harnessing in a more effective way the intrinsic parallelism that the self-organizing networks algorithms seem intuitively to suggest, obtaining better performances even with networks of smaller size. ","[{'version': 'v1', 'created': 'Sat, 28 Mar 2015 10:51:55 GMT'}]",2015-03-31,"[['Parigi', 'Giacomo', ''], ['Stramieri', 'Angelo', ''], ['Pau', 'Danilo', ''], ['Piastra', 'Marco', '']]","['Growing self-organizing networks', 'graphics processing unit', 'parallelism', 'surface reconstruction', 'topology preservation']" 244,1405.7349,Bing Wang,"Bing Wang, Yao-hua Meng, Xiao-hong Yu","Radial basis function process neural network training based on generalized frechet distance and GA-SA hybrid strategy","9 pages, 4 figures,14 references","Computer Science & Engineering: An International Journal (CSEIJ), Vol. 3, No. 6, December 2013:1-9",10.5121/cseij.2013.3601,,cs.NE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," For learning problem of Radial Basis Function Process Neural Network (RBF-PNN), an optimization training method based on GA combined with SA is proposed in this paper. Through building generalized Fr\'echet distance to measure similarity between time-varying function samples, the learning problem of radial basis centre functions and connection weights is converted into the training on corresponding discrete sequence coefficients. Network training objective function is constructed according to the least square error criterion, and global optimization solving of network parameters is implemented in feasible solution space by use of global optimization feature of GA and probabilistic jumping property of SA . The experiment results illustrate that the training algorithm improves the network training efficiency and stability. ","[{'version': 'v1', 'created': 'Thu, 9 Jan 2014 10:24:00 GMT'}]",2014-05-29,"[['Wang', 'Bing', ''], ['Meng', 'Yao-hua', ''], ['Yu', 'Xiao-hong', '']]","['Radial Basis Function Process Neural Network', 'Training Algorithm', 'Generalized Fréchet Distance', 'GA-SAHybrid Optimization']" 245,1901.10197,Hiteshwar Azad,"Hiteshwar Kumar Azad, Akshay Deepak",A new approach for query expansion using Wikipedia and WordNet,"20 pages, 17 figures. arXiv admin note: text overlap with arXiv:1708.00247","Information Sciences, 2019",10.1016/j.ins.2019.04.019,,cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Query expansion (QE) is a well-known technique used to enhance the effectiveness of information retrieval. QE reformulates the initial query by adding similar terms that help in retrieving more relevant results. Several approaches have been proposed in literature producing quite favorable results, but they are not evenly favorable for all types of queries (individual and phrase queries). One of the main reasons for this is the use of the same kind of data sources and weighting scheme while expanding both the individual and the phrase query terms. As a result, the holistic relationship among the query terms is not well captured or scored. To address this issue, we have presented a new approach for QE using Wikipedia and WordNet as data sources. Specifically, Wikipedia gives rich expansion terms for phrase terms, while WordNet does the same for individual terms. We have also proposed novel weighting schemes for expansion terms: in-link score (for terms extracted from Wikipedia) and a tf-idf based scheme (for terms extracted from WordNet). In the proposed Wikipedia-WordNet-based QE technique (WWQE), we weigh the expansion terms twice: first, they are scored by the weighting scheme individually, and then, the weighting scheme scores the selected expansion terms concerning the entire query using correlation score. The proposed approach gains improvements of 24% on the MAP score and 48% on the GMAP score over unexpanded queries on the FIRE dataset. Experimental results achieve a significant improvement over individual expansion and other related state-of-the-art approaches. We also analyzed the effect on retrieval effectiveness of the proposed technique by varying the number of expansion terms. ","[{'version': 'v1', 'created': 'Tue, 29 Jan 2019 10:01:22 GMT'}, {'version': 'v2', 'created': 'Thu, 20 Jun 2019 10:15:19 GMT'}]",2019-06-21,"[['Azad', 'Hiteshwar Kumar', ''], ['Deepak', 'Akshay', '']]","['Query Expansion', 'Information Retrieval', 'WordNet', 'Wikipedia']" 246,2206.02511,Yang Li,"Yang Li, Yu Shen, Huaijun Jiang, Tianyi Bai, Wentao Zhang, Ce Zhang and Bin Cui",Transfer Learning based Search Space Design for Hyperparameter Tuning,9 pages and 2 extra pages for appendix,"Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022)",10.1145/3534678.3539369,,cs.LG cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The tuning of hyperparameters becomes increasingly important as machine learning (ML) models have been extensively applied in data mining applications. Among various approaches, Bayesian optimization (BO) is a successful methodology to tune hyper-parameters automatically. While traditional methods optimize each tuning task in isolation, there has been recent interest in speeding up BO by transferring knowledge across previous tasks. In this work, we introduce an automatic method to design the BO search space with the aid of tuning history from past tasks. This simple yet effective approach can be used to endow many existing BO methods with transfer learning capabilities. In addition, it enjoys the three advantages: universality, generality, and safeness. The extensive experiments show that our approach considerably boosts BO by designing a promising and compact search space instead of using the entire space, and outperforms the state-of-the-arts on a wide range of benchmarks, including machine learning and deep learning tuning tasks, and neural architecture search. ","[{'version': 'v1', 'created': 'Mon, 6 Jun 2022 11:48:58 GMT'}]",2022-06-07,"[['Li', 'Yang', ''], ['Shen', 'Yu', ''], ['Jiang', 'Huaijun', ''], ['Bai', 'Tianyi', ''], ['Zhang', 'Wentao', ''], ['Zhang', 'Ce', ''], ['Cui', 'Bin', '']]","['hyperparameter optimization', 'search space design', 'bayesian optimization', 'transfer learning']" 247,1204.1672,Gabriele Fici,Gabriele Fici,A Characterization of Bispecial Sturmian Words,Accepted to MFCS 2012,"LNCS 7464, pp. 383-394, 2012",10.1007/978-3-642-32589-2_35,,cs.FL cs.CG cs.DM,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A finite Sturmian word w over the alphabet {a,b} is left special (resp. right special) if aw and bw (resp. wa and wb) are both Sturmian words. A bispecial Sturmian word is a Sturmian word that is both left and right special. We show as a main result that bispecial Sturmian words are exactly the maximal internal factors of Christoffel words, that are words coding the digital approximations of segments in the Euclidean plane. This result is an extension of the known relation between central words and primitive Christoffel words. Our characterization allows us to give an enumerative formula for bispecial Sturmian words. We also investigate the minimal forbidden words for the set of Sturmian words. ","[{'version': 'v1', 'created': 'Sat, 7 Apr 2012 18:56:05 GMT'}, {'version': 'v2', 'created': 'Wed, 11 Apr 2012 20:08:21 GMT'}, {'version': 'v3', 'created': 'Mon, 11 Jun 2012 11:47:23 GMT'}, {'version': 'v4', 'created': 'Mon, 18 Jun 2012 16:25:36 GMT'}]",2015-03-20,"[['Fici', 'Gabriele', '']]","['Sturmian words', 'Christoffel words', 'special factors', 'minimal forbidden words', 'enumerative formula']" 248,1903.10681,Ahlem Aboud,"Ahlem Aboud, Raja Fdhila and Adel M. Alimi","Dynamic Multi Objective Particle Swarm Optimization based on a New Environment Change Detection Strategy","10 pages, 5 figures, International Conference on Neural Information Processing",,10.1007/978-3-319-70093-9_27,,cs.NE,http://creativecommons.org/licenses/by-nc-sa/4.0/," The dynamic of real-world optimization problems raises new challenges to the traditional particle swarm optimization (PSO). Responding to these challenges, the dynamic optimization has received considerable attention over the past decade. This paper introduces a new dynamic multi-objective optimization based particle swarm optimization (Dynamic-MOPSO).The main idea of this paper is to solve such dynamic problem based on a new environment change detection strategy using the advantage of the particle swarm optimization. In this way, our approach has been developed not just to obtain the optimal solution, but also to have a capability to detect the environment changes. Thereby, DynamicMOPSO ensures the balance between the exploration and the exploitation in dynamic research space. Our approach is tested through the most popularized dynamic benchmark's functions to evaluate its performance as a good method. ","[{'version': 'v1', 'created': 'Mon, 25 Mar 2019 10:05:28 GMT'}]",2019-03-27,"[['Aboud', 'Ahlem', ''], ['Fdhila', 'Raja', ''], ['Alimi', 'Adel M.', '']]","['dynamic optimization', 'dynamic multi-objective problems', 'particle swarms optimization', 'dynamic environment', 'time varying parameters']" 249,1901.08100,Luis Sentis,"D. Kim, S. Jorgensen, J. Lee, J. Ahn, J. Luo, and L. Sentis","Dynamic Locomotion For Passive-Ankle Biped Robots And Humanoids Using Whole-Body Locomotion Control",,,10.1177/0278364920918014,,cs.RO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Whole-body control (WBC) is a generic task-oriented control method for feedback control of loco-manipulation behaviors in humanoid robots. The combination of WBC and model-based walking controllers has been widely utilized in various humanoid robots. However, to date, the WBC method has not been employed for unsupported passive-ankle dynamic locomotion. As such, in this paper, we devise a new WBC, dubbed whole-body locomotion controller (WBLC), that can achieve experimental dynamic walking on unsupported passive-ankle biped robots. A key aspect of WBLC is the relaxation of contact constraints such that the control commands produce reduced jerk when switching foot contacts. To achieve robust dynamic locomotion, we conduct an in-depth analysis of uncertainty for our dynamic walking algorithm called time-to-velocity-reversal (TVR) planner. The uncertainty study is fundamental as it allows us to improve the control algorithms and mechanical structure of our robot to fulfill the tolerated uncertainty. In addition, we conduct extensive experimentation for: 1) unsupported dynamic balancing (i.e. in-place stepping) with a six degree-of-freedom (DoF) biped, Mercury; 2) unsupported directional walking with Mercury; 3) walking over an irregular and slippery terrain with Mercury; and 4) in-place walking with our newly designed ten-DoF viscoelastic liquid-cooled biped, DRACO. Overall, the main contributions of this work are on: a) achieving various modalities of unsupported dynamic locomotion of passive-ankle bipeds using a WBLC controller and a TVR planner, b) conducting an uncertainty analysis to improve the mechanical structure and the controllers of Mercury, and c) devising a whole-body control strategy that reduces movement jerk during walking. ","[{'version': 'v1', 'created': 'Wed, 23 Jan 2019 19:49:10 GMT'}]",2021-04-28,"[['Kim', 'D.', ''], ['Jorgensen', 'S.', ''], ['Lee', 'J.', ''], ['Ahn', 'J.', ''], ['Luo', 'J.', ''], ['Sentis', 'L.', '']]","['Legged Robot', 'Humanoid Robots', 'Dynamics']" 250,0901.3769,Sebastien Verel,"William Beaudoin (I3S), S\'ebastien Verel (I3S), Philippe Collard (I3S), Cathy Escazut (I3S)",Deceptiveness and Neutrality - the ND family of fitness landscapes,"Genetic And Evolutionary Computation Conference, Seatle : \'Etats-Unis d'Am\'erique (2006)",,10.1145/1143997.1144091,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," When a considerable number of mutations have no effects on fitness values, the fitness landscape is said neutral. In order to study the interplay between neutrality, which exists in many real-world applications, and performances of metaheuristics, it is useful to design landscapes which make it possible to tune precisely neutral degree distribution. Even though many neutral landscape models have already been designed, none of them are general enough to create landscapes with specific neutral degree distributions. We propose three steps to design such landscapes: first using an algorithm we construct a landscape whose distribution roughly fits the target one, then we use a simulated annealing heuristic to bring closer the two distributions and finally we affect fitness values to each neutral network. Then using this new family of fitness landscapes we are able to highlight the interplay between deceptiveness and neutrality. ","[{'version': 'v1', 'created': 'Fri, 23 Jan 2009 20:15:22 GMT'}]",2009-01-26,"[['Beaudoin', 'William', '', 'I3S'], ['Verel', 'Sébastien', '', 'I3S'], ['Collard', 'Philippe', '', 'I3S'], ['Escazut', 'Cathy', '', 'I3S']]","['Fitness landscapes', 'genetic algorithms', 'search', 'benchmark']" 251,2208.03244,Manuel Rebol,"Manuel Rebol, Christian G\""utl, Krzysztof Pietroszek","Real-time Gesture Animation Generation from Speech for Virtual Human Interaction","CHI EA '21: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. arXiv admin note: text overlap with arXiv:2107.00712","In CHI EA 2021. ACM, New York, NY, USA, Article 197, 1-4",10.1145/3411763.3451554,,cs.CV,http://creativecommons.org/licenses/by/4.0/," We propose a real-time system for synthesizing gestures directly from speech. Our data-driven approach is based on Generative Adversarial Neural Networks to model the speech-gesture relationship. We utilize the large amount of speaker video data available online to train our 3D gesture model. Our model generates speaker-specific gestures by taking consecutive audio input chunks of two seconds in length. We animate the predicted gestures on a virtual avatar. We achieve a delay below three seconds between the time of audio input and gesture animation. Code and videos are available at https://github.com/mrebol/Gestures-From-Speech ","[{'version': 'v1', 'created': 'Fri, 5 Aug 2022 15:56:34 GMT'}]",2022-08-08,"[['Rebol', 'Manuel', ''], ['Gütl', 'Christian', ''], ['Pietroszek', 'Krzysztof', '']]","['Gestures', 'Animation', 'NUI']" 252,1610.08309,Edita Pelantova,"Christiane Frougny, Marta Pavelka, Edita Pelantova, Milena Svobodova","On-line algorithms for multiplication and division in real and complex numeration systems","Extended version of contribution on 23rd IEEE Symposium on Computer Arithmetic ARITH23","Discrete Mathematics & Theoretical Computer Science, Vol. 21 no. 3 , Discrete Algorithms (June 20, 2019) dmtcs:5569",10.23638/DMTCS-21-3-14,,cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A positional numeration system is given by a base and by a set of digits. The base is a real or complex number $\beta$ such that $|\beta|>1$, and the digit set $A$ is a finite set of digits including $0$. Thus a number can be seen as a finite or infinite string of digits. An on-line algorithm processes the input piece-by-piece in a serial fashion. On-line arithmetic, introduced by Trivedi and Ercegovac, is a mode of computation where operands and results flow through arithmetic units in a digit serial manner, starting with the most significant digit. In this paper, we first formulate a generalized version of the on-line algorithms for multiplication and division of Trivedi and Ercegovac for the cases that $\beta$ is any real or complex number, and digits are real or complex. We then define the so-called OL Property, and show that if $(\beta, A)$ has the OL Property, then on-line multiplication and division are feasible by the Trivedi-Ercegovac algorithms. For a real base $\beta$ and a digit set $A$ of contiguous integers, the system $(\beta, A)$ has the OL Property if $\# A > |\beta|$. For a complex base $\beta$ and symmetric digit set $A$ of contiguous integers, the system $(\beta, A)$ has the OL Property if $\# A > \beta\overline{\beta} + |\beta + \overline{\beta}|$. Provided that addition and subtraction are realizable in parallel in the system $(\beta, A)$ and that preprocessing of the denominator is possible, our on-line algorithms for multiplication and division have linear time complexity. Three examples are presented in detail: base $\beta=\frac{3+\sqrt{5}}{2}$ with digits $A=\{-1,0,1\}$; base $\beta=2i$ with digits $A = \{-2,-1, 0,1,2\}$; and base $\beta = -\frac{3}{2} + i \frac{\sqrt{3}}{2} = -1 + \omega$, where $\omega = \exp{\frac{2i\pi}{3}}$, with digits $A = \{0, \pm 1, \pm \omega, \pm \omega^2 \}$. ","[{'version': 'v1', 'created': 'Wed, 26 Oct 2016 13:05:12 GMT'}, {'version': 'v2', 'created': 'Sun, 18 Feb 2018 11:04:16 GMT'}, {'version': 'v3', 'created': 'Fri, 25 Jan 2019 10:07:38 GMT'}, {'version': 'v4', 'created': 'Mon, 20 May 2019 09:12:15 GMT'}, {'version': 'v5', 'created': 'Tue, 11 Jun 2019 16:16:23 GMT'}]",2019-11-20,"[['Frougny', 'Christiane', ''], ['Pavelka', 'Marta', ''], ['Pelantova', 'Edita', ''], ['Svobodova', 'Milena', '']]","['On-line algorithm', 'numeration system', 'multiplication', 'division', 'preprocessing']" 253,1611.05182,Susmita Bhaduri,"Susmita Bhaduri, Anirban Bhaduri, Dipak Ghosh",Detecting tala Computationally in Polyphonic Context - A Novel Approach,"It is a 20 page document with 8 figures, essentially portrays a pattern recognition novel approach to detect tala from a polyphonic song having table content and of North-Indian-Music-System(NIMS) genre","American Journal of Computer Science and Information Technology(2018)",10.21767/2349-3917.100030,,cs.SD,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In North-Indian-Music-System(NIMS),tabla is mostly used as percussive accompaniment for vocal-music in polyphonic-compositions. The human auditory system uses perceptual grouping of musical-elements and easily filters the tabla component, thereby decoding prominent rhythmic features like tala, tempo from a polyphonic composition. For Western music, lots of work have been reported for automated drum analysis of polyphonic composition. However, attempts at computational analysis of tala by separating the tabla-signal from mixed signal in NIMS have not been successful. Tabla is played with two components - right and left. The right-hand component has frequency overlap with voice and other instruments. So, tala analysis of polyphonic-composition, by accurately separating the tabla-signal from the mixture is a baffling task, therefore an area of challenge. In this work we propose a novel technique for successfully detecting tala using left-tabla signal, producing meaningful results because the left-tabla normally doesn't have frequency overlap with voice and other instruments. North-Indian-rhythm follows complex cyclic pattern, against linear approach of Western-rhythm. We have exploited this cyclic property along with stressed and non-stressed methods of playing tabla-strokes to extract a characteristic pattern from the left-tabla strokes, which, after matching with the grammar of tala-system, determines the tala and tempo of the composition. A large number of polyphonic(vocal+tabla+other-instruments) compositions has been analyzed with the methodology and the result clearly reveals the effectiveness of proposed techniques. ","[{'version': 'v1', 'created': 'Wed, 16 Nov 2016 08:15:00 GMT'}, {'version': 'v2', 'created': 'Mon, 24 Sep 2018 08:37:49 GMT'}]",2018-11-08,"[['Bhaduri', 'Susmita', ''], ['Bhaduri', 'Anirban', ''], ['Ghosh', 'Dipak', '']]","['Left-tabl¯adrum', 'T¯aladetection', 'Tempo detection', 'Polyphonic composition', 'Cyclicpattern', 'North Indian Music System']" 254,1412.5010,Jens Ma{\ss}berg,Jens Ma{\ss}berg,"The rectilinear Steiner tree problem with given topology and length restrictions",14 pages,"Computing and Combinatorics, Lecture Notes in Computer Science, Volume 9198, 2015, pp 445-456",10.1007/978-3-319-21398-9_35,,cs.DS cs.CG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We consider the problem of embedding the Steiner points of a Steiner tree with given topology into the rectilinear plane. Thereby, the length of the path between a distinguished terminal and each other terminal must not exceed given length restrictions. We want to minimize the total length of the tree. The problem can be formulated as a linear program and therefore it is solvable in polynomial time. In this paper we analyze the structure of feasible embeddings and give a combinatorial polynomial time algorithm for the problem. Our algorithm combines a dynamic programming approach and binary search and relies on the total unimodularity of a matrix appearing in a sub-problem. ","[{'version': 'v1', 'created': 'Tue, 16 Dec 2014 14:21:39 GMT'}]",2015-08-19,"[['Maßberg', 'Jens', '']]","['Steiner trees with given topology', 'rectilinear Steiner trees', 'dynamic programming', 'totally unimodular', 'shallow light Steiner trees']" 255,1609.00543,Richard Oentaryo,"Richard Jayadi Oentaryo, Arinto Murdopo, Philips Kokoh Prasetyo, Ee-Peng Lim",On Profiling Bots in Social Media,,,10.1007/978-3-319-47880-7_6,,cs.SI cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The popularity of social media platforms such as Twitter has led to the proliferation of automated bots, creating both opportunities and challenges in information dissemination, user engagements, and quality of services. Past works on profiling bots had been focused largely on malicious bots, with the assumption that these bots should be removed. In this work, however, we find many bots that are benign, and propose a new, broader categorization of bots based on their behaviors. This includes broadcast, consumption, and spam bots. To facilitate comprehensive analyses of bots and how they compare to human accounts, we develop a systematic profiling framework that includes a rich set of features and classifier bank. We conduct extensive experiments to evaluate the performances of different classifiers under varying time windows, identify the key features of bots, and infer about bots in a larger Twitter population. Our analysis encompasses more than 159K bot and human (non-bot) accounts in Twitter. The results provide interesting insights on the behavioral traits of both benign and malicious bots. ","[{'version': 'v1', 'created': 'Fri, 2 Sep 2016 10:47:28 GMT'}]",2018-05-14,"[['Oentaryo', 'Richard Jayadi', ''], ['Murdopo', 'Arinto', ''], ['Prasetyo', 'Philips Kokoh', ''], ['Lim', 'Ee-Peng', '']]","['Bot profiling', 'classification', 'feature extraction', 'social media']" 256,1106.3967,Emilio Ferrara,Emilio Ferrara and Robert Baumgartner,Intelligent Self-Repairable Web Wrappers,"12 pages, 4 figures; Proceedings of the 12th International Conference of the Italian Association for Artificial Intelligence, 2011","Lecture Notes in Computer Science, 6934:274-285, 2011",10.1007/978-3-642-23954-0_26,,cs.AI cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The amount of information available on the Web grows at an incredible high rate. Systems and procedures devised to extract these data from Web sources already exist, and different approaches and techniques have been investigated during the last years. On the one hand, reliable solutions should provide robust algorithms of Web data mining which could automatically face possible malfunctioning or failures. On the other, in literature there is a lack of solutions about the maintenance of these systems. Procedures that extract Web data may be strictly interconnected with the structure of the data source itself; thus, malfunctioning or acquisition of corrupted data could be caused, for example, by structural modifications of data sources brought by their owners. Nowadays, verification of data integrity and maintenance are mostly manually managed, in order to ensure that these systems work correctly and reliably. In this paper we propose a novel approach to create procedures able to extract data from Web sources -- the so called Web wrappers -- which can face possible malfunctioning caused by modifications of the structure of the data source, and can automatically repair themselves. ","[{'version': 'v1', 'created': 'Mon, 20 Jun 2011 17:02:40 GMT'}]",2012-02-13,"[['Ferrara', 'Emilio', ''], ['Baumgartner', 'Robert', '']]","['Web data extraction', 'wrappers', 'automatic adaptation']" 257,0909.0237,Vassilis Kostakos,Vassilis Kostakos,"Is the crowd's wisdom biased? A quantitative asessment of three online communities","17 pages, 6 tagles","Computational Science and Engineering, p. 251-255, 2009",10.1109/CSE.2009.491,,cs.HC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper presents a study of user voting on three websites: Imdb, Amazon and BookCrossings. It reports on an expert evaluation of the voting mechanisms of each website and a quantitative data analysis of users' aggregate voting behavior. The results suggest that voting follows different patterns across the websites, with higher barrier to vote introducing a more of one-off voters and attracting mostly experts. The results also show that that one-off voters tend to vote on popular items, while experts mostly vote for obscure, low-rated items. The study concludes with design suggestions to address the ""wisdom of the crowd"" bias. ","[{'version': 'v1', 'created': 'Tue, 1 Sep 2009 18:31:17 GMT'}, {'version': 'v2', 'created': 'Thu, 3 Sep 2009 13:46:03 GMT'}, {'version': 'v3', 'created': 'Wed, 16 Sep 2009 13:40:23 GMT'}, {'version': 'v4', 'created': 'Tue, 6 Oct 2009 08:43:33 GMT'}, {'version': 'v5', 'created': 'Sun, 8 Nov 2009 01:54:35 GMT'}]",2013-06-06,"[['Kostakos', 'Vassilis', '']]","['Voting', 'rating', 'quantitative analysis', 'expert evaluation']" 258,1609.07721,Diederik Aerts,"Diederik Aerts, Jonito Aerts Argu\""elles, Lester Beltran, Lyneth Beltran, Massimiliano Sassoli de Bianchi, Sandro Sozzo and Tomas Veloz",Testing Quantum Models of Conjunction Fallacy on the World Wide Web,12 pages,"International Journal of Theoretical Physics, 56, pp. 3744-3756 (2017)",10.1007/s10773-017-3288-8,,cs.AI quant-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The 'conjunction fallacy' has been extensively debated by scholars in cognitive science and, in recent times, the discussion has been enriched by the proposal of modeling the fallacy using the quantum formalism. Two major quantum approaches have been put forward: the first assumes that respondents use a two-step sequential reasoning and that the fallacy results from the presence of 'question order effects'; the second assumes that respondents evaluate the cognitive situation as a whole and that the fallacy results from the 'emergence of new meanings', as an 'effect of overextension' in the conceptual conjunction. Thus, the question arises as to determine whether and to what extent conjunction fallacies would result from 'order effects' or, instead, from 'emergence effects'. To help clarify this situation, we propose to use the World Wide Web as an 'information space' that can be interrogated both in a sequential and non-sequential way, to test these two quantum approaches. We find that 'emergence effects', and not 'order effects', should be considered the main cognitive mechanism producing the observed conjunction fallacies. ","[{'version': 'v1', 'created': 'Sun, 25 Sep 2016 09:58:14 GMT'}, {'version': 'v2', 'created': 'Fri, 2 Jun 2017 23:30:10 GMT'}]",2019-02-12,"[['Aerts', 'Diederik', ''], ['Arguëlles', 'Jonito Aerts', ''], ['Beltran', 'Lester', ''], ['Beltran', 'Lyneth', ''], ['de Bianchi', 'Massimiliano Sassoli', ''], ['Sozzo', 'Sandro', ''], ['Veloz', 'Tomas', '']]","['Quantum cognition', 'conjunction fallacy', 'emergent reasoning', 'meaning bond', 'World Wide Web']" 259,1307.3439,Y Jayanta Singh,"Y. Jayanta Singh, Shalu Gupta",Speedy Object Detection based on Shape,arXiv admin note: text overlap with arXiv:1210.7038 by other authors,"The International Journal of Multimedia & Its Applications (IJMA) Vol.5, No.3, June 2013",10.5121/ijma.2013.5302,,cs.CV,http://creativecommons.org/licenses/by/3.0/," This study is a part of design of an audio system for in-house object detection system for visually impaired, low vision personnel by birth or by an accident or due to old age. The input of the system will be scene and output as audio. Alert facility is provided based on severity levels of the objects (snake, broke glass etc) and also during difficulties. The study proposed techniques to provide speedy detection of objects based on shapes and its scale. Features are extraction to have minimum spaces using dynamic scaling. From a scene, clusters of objects are formed based on the scale and shape. Searching is performed among the clusters initially based on the shape, scale, mean cluster value and index of object(s). The minimum operation to detect the possible shape of the object is performed. In case the object does not have a likely matching shape, scale etc, then the several operations required for an object detection will not perform; instead, it will declared as a new object. In such way, this study finds a speedy way of detecting objects. ","[{'version': 'v1', 'created': 'Fri, 12 Jul 2013 12:37:06 GMT'}]",2013-07-15,"[['Singh', 'Y. Jayanta', ''], ['Gupta', 'Shalu', '']]","['Speedy object detection', 'shape', 'scale and dynamic']" 260,1902.10820,Nicolai Kraus,Gun Pinyo and Nicolai Kraus,From Cubes to Twisted Cubes via Graph Morphisms in Type Theory,"v4: 18 pages, postproceedings of TYPES'2019",,10.4230/LIPIcs.TYPES.2019.5,,cs.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Cube categories are used to encode higher-dimensional categorical structures. They have recently gained significant attention in the community of homotopy type theory and univalent foundations, where types carry the structure of such higher groupoids. Bezem, Coquand, and Huber have presented a constructive model of univalence using a specific cube category, which we call the BCH category. The higher categories encoded with the BCH category have the property that all morphisms are invertible, mirroring the fact that equality is symmetric. This might not always be desirable: the field of directed type theory considers a notion of equality that is not necessarily invertible. This motivates us to suggest a category of twisted cubes which avoids built-in invertibility. Our strategy is to first develop several alternative (but equivalent) presentations of the BCH category using morphisms between suitably defined graphs. Starting from there, a minor modification allows us to define our category of twisted cubes. We prove several first results about this category, and our work suggests that twisted cubes combine properties of cubes with properties of globes and simplices (tetrahedra). ","[{'version': 'v1', 'created': 'Wed, 27 Feb 2019 22:53:38 GMT'}, {'version': 'v2', 'created': 'Sun, 3 Mar 2019 23:16:43 GMT'}, {'version': 'v3', 'created': 'Thu, 4 Jul 2019 22:14:51 GMT'}, {'version': 'v4', 'created': 'Sun, 19 Jul 2020 19:20:21 GMT'}]",2020-07-21,"[['Pinyo', 'Gun', ''], ['Kraus', 'Nicolai', '']]","['homotopy type theory', 'cubical sets', 'directed equality', 'graph morphisms']" 261,2010.04840,Jiahao Chen,Leo de Castro and Jiahao Chen and Antigoni Polychroniadou,CryptoCredit: Securely Training Fair Models,8 pages,"Proceedings of the 1st ACM International Conference on AI in Finance (ICAIF '20), October 15-16, 2020, New York, NY, USA",10.1145/3383455.3422567,,cs.LG cs.AI cs.CR stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," When developing models for regulated decision making, sensitive features like age, race and gender cannot be used and must be obscured from model developers to prevent bias. However, the remaining features still need to be tested for correlation with sensitive features, which can only be done with the knowledge of those features. We resolve this dilemma using a fully homomorphic encryption scheme, allowing model developers to train linear regression and logistic regression models and test them for possible bias without ever revealing the sensitive features in the clear. We demonstrate how it can be applied to leave-one-out regression testing, and show using the adult income data set that our method is practical to run. ","[{'version': 'v1', 'created': 'Fri, 9 Oct 2020 23:05:37 GMT'}]",2020-10-13,"[['de Castro', 'Leo', ''], ['Chen', 'Jiahao', ''], ['Polychroniadou', 'Antigoni', '']]","['fully homomorphic encryption', 'logistic regression', 'Wald test']" 262,1307.5102,Stefano Gonella,"Stefano Gonella, Jarvis D. Haupt","Automated Defect Localization via Low Rank Plus Outlier Modeling of Propagating Wavefield Data","16 pages, 9 figures, Submitted to the IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control on August 30th 2012","IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, v. 60, n.12, pp. 2553 - 2565",10.1109/TUFFC.2013.2854,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This work proposes an agnostic inference strategy for material diagnostics, conceived within the context of laser-based non-destructive evaluation methods, which extract information about structural anomalies from the analysis of acoustic wavefields measured on the structure's surface by means of a scanning laser interferometer. The proposed approach couples spatiotemporal windowing with low rank plus outlier modeling, to identify a priori unknown deviations in the propagating wavefields caused by material inhomogeneities or defects, using virtually no knowledge of the structural and material properties of the medium. This characteristic makes the approach particularly suitable for diagnostics scenarios where the mechanical and material models are complex, unknown, or unreliable. We demonstrate our approach in a simulated environment using benchmark point and line defect localization problems based on propagating flexural waves in a thin plate. ","[{'version': 'v1', 'created': 'Fri, 19 Jul 2013 00:06:59 GMT'}]",2014-05-13,"[['Gonella', 'Stefano', ''], ['Haupt', 'Jarvis D.', '']]","['Anomaly detection', 'Low rank plus outlier models', 'Saliency', 'Non-destructive evaluation']" 263,2012.12798,Stasys Jukna,Stasys Jukna,Coin Flipping in Dynamic Programming is Almost Useless,"25 pages, 1 table","ACM Trans. on Computation Theory (2020) 26 pages, Article 17",10.1145/3397476,,cs.CC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We consider probabilistic circuits working over the real numbers, and using arbitrary semialgebraic functions of bounded description complexity as gates. In particular, such circuits can use all arithmetic operations +, -, x, /, optimization operations min and max, conditional branching (if-then-else), and many more. We show that probabilistic circuits using any of these operations as gates can be simulated by deterministic circuits with only about a quadratical blowup in size. A not much larger blow up in circuit size is also shown when derandomizing approximating circuits. The algorithmic consequence, motivating the title, is that randomness cannot substantially speed up dynamic programming algorithms. ","[{'version': 'v1', 'created': 'Wed, 23 Dec 2020 16:58:49 GMT'}]",2020-12-24,"[['Jukna', 'Stasys', '']]","['derandomization', 'dynamic programming', 'semialgebraic functions', 'sign patterns of polynomials']" 264,1207.6033,Emilio Ferrara,"Giovanni Quattrone, Licia Capra, Pasquale De Meo, Emilio Ferrara, Domenico Ursino","Effective Retrieval of Resources in Folksonomies Using a New Tag Similarity Measure","6 pages, 2 figures, CIKM 2011: 20th ACM Conference on Information and Knowledge Management","Proceedings of the 20th ACM international conference on Information and knowledge management, pp. 545-550, 2011",10.1145/2063576.2063657,,cs.IR cs.SI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Social (or folksonomic) tagging has become a very popular way to describe content within Web 2.0 websites. However, as tags are informally defined, continually changing, and ungoverned, it has often been criticised for lowering, rather than increasing, the efficiency of searching. To address this issue, a variety of approaches have been proposed that recommend users what tags to use, both when labeling and when looking for resources. These techniques work well in dense folksonomies, but they fail to do so when tag usage exhibits a power law distribution, as it often happens in real-life folksonomies. To tackle this issue, we propose an approach that induces the creation of a dense folksonomy, in a fully automatic and transparent way: when users label resources, an innovative tag similarity metric is deployed, so to enrich the chosen tag set with related tags already present in the folksonomy. The proposed metric, which represents the core of our approach, is based on the mutual reinforcement principle. Our experimental evaluation proves that the accuracy and coverage of searches guaranteed by our metric are higher than those achieved by applying classical metrics. ","[{'version': 'v1', 'created': 'Wed, 25 Jul 2012 15:46:58 GMT'}]",2012-07-26,"[['Quattrone', 'Giovanni', ''], ['Capra', 'Licia', ''], ['De Meo', 'Pasquale', ''], ['Ferrara', 'Emilio', ''], ['Ursino', 'Domenico', '']]","['Folksonomy', 'tag similarity', 'tag recommendations']" 265,1809.00211,Mateusz Trokielewicz,Mateusz Trokielewicz and Adam Czajka and Piotr Maciejewicz,Cataract influence on iris recognition performance,,"Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2014, 929020 (2014)",10.1117/12.2076040,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper presents the experimental study revealing weaker performance of the automatic iris recognition methods for cataract-affected eyes when compared to healthy eyes. There is little research on the topic, mostly incorporating scarce databases that are often deficient in images representing more than one illness. We built our own database, acquiring 1288 eye images of 37 patients of the Medical University of Warsaw. Those images represent several common ocular diseases, such as cataract, along with less ordinary conditions, such as iris pattern alterations derived from illness or eye trauma. Images were captured in near-infrared light (used in biometrics) and for selected cases also in visible light (used in ophthalmological diagnosis). Since cataract is a disorder that is most populated by samples in the database, in this paper we focus solely on this illness. To assess the extent of the performance deterioration we use three iris recognition methodologies (commercial and academic solutions) to calculate genuine match scores for healthy eyes and those influenced by cataract. Results show a significant degradation in iris recognition reliability manifesting by worsening the genuine scores in all three matchers used in this study (12% of genuine score increase for an academic matcher, up to 175% of genuine score increase obtained for an example commercial matcher). This increase in genuine scores affected the final false non-match rate in two matchers. To our best knowledge this is the only study of such kind that employs more than one iris matcher, and analyzes the iris image segmentation as a potential source of decreased reliability. ","[{'version': 'v1', 'created': 'Sat, 1 Sep 2018 15:40:47 GMT'}]",2018-09-05,"[['Trokielewicz', 'Mateusz', ''], ['Czajka', 'Adam', ''], ['Maciejewicz', 'Piotr', '']]","['biometrics', 'iris recognition', 'ophthalmic disease', 'cataract']" 266,1609.07288,Suthee Ruangwises,"Suthee Ruangwises, Toshiya Itoh",Random Popular Matchings with Incomplete Preference Lists,A shortened version of this paper has appeared at WALCOM 2018,"Journal of Graph Algorithms and Applications, 23(5): 815-835 (2019)",10.7155/jgaa.00513,,cs.DM cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Given a set $A$ of $n$ people and a set $B$ of $m \geq n$ items, with each person having a list that ranks his/her preferred items in order of preference, we want to match every person with a unique item. A matching $M$ is called popular if for any other matching $M'$, the number of people who prefer $M$ to $M'$ is not less than the number of those who prefer $M'$ to $M$. For given $n$ and $m$, consider the probability of existence of a popular matching when each person's preference list is independently and uniformly generated at random. Previously, Mahdian showed that when people's preference lists are strict (containing no ties) and complete (containing all items in $B$), if $\alpha = m/n > \alpha_*$, where $\alpha_* \approx 1.42$ is the root of equation $x^2 = e^{1/x}$, then a popular matching exists with probability $1-o(1)$; and if $\alpha < \alpha_*$, then a popular matching exists with probability $o(1)$, i.e. a phase transition occurs at $\alpha_*$. In this paper, we investigate phase transitions in the case that people's preference lists are strict but not complete. We show that in the case where every person has a preference list with length of a constant $k \geq 4$, a similar phase transition occurs at $\alpha_k$, where $\alpha_k \geq 1$ is the root of equation $x e^{-1/2x} = 1-(1-e^{-1/x})^{k-1}$. ","[{'version': 'v1', 'created': 'Fri, 23 Sep 2016 09:38:24 GMT'}, {'version': 'v2', 'created': 'Thu, 20 Apr 2017 14:53:52 GMT'}, {'version': 'v3', 'created': 'Wed, 4 Oct 2017 12:07:37 GMT'}, {'version': 'v4', 'created': 'Fri, 15 Dec 2017 15:49:07 GMT'}, {'version': 'v5', 'created': 'Thu, 5 Jul 2018 02:14:47 GMT'}, {'version': 'v6', 'created': 'Wed, 26 Sep 2018 15:23:08 GMT'}, {'version': 'v7', 'created': 'Wed, 23 Oct 2019 07:22:51 GMT'}, {'version': 'v8', 'created': 'Sat, 26 Oct 2019 09:47:37 GMT'}]",2019-10-29,"[['Ruangwises', 'Suthee', ''], ['Itoh', 'Toshiya', '']]","['popular matching', 'incomplete preference lists', 'phase transition', 'complex component']" 267,1912.05879,Johan Medrano,"Johan Medrano, Fuchun Joseph Lin","Enabling Machine Learning Across Heterogeneous Sensor Networks with Graph Autoencoders",,"Chatzigiannakis I., De Ruyter B., Mavrommati I. (eds) Ambient Intelligence. AmI 2019. Lecture Notes in Computer Science, vol 11912. Springer, Cham",10.1007/978-3-030-34255-5_11,,cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Machine Learning (ML) has been applied to enable many life-assisting appli-cations, such as abnormality detection and emdergency request for the soli-tary elderly. However, in most cases machine learning algorithms depend on the layout of the target Internet of Things (IoT) sensor network. Hence, to deploy an application across Heterogeneous Sensor Networks (HSNs), i.e. sensor networks with different sensors type or layouts, it is required to repeat the process of data collection and ML algorithm training. In this paper, we introduce a novel framework leveraging deep learning for graphs to enable using the same activity recognition system across HSNs deployed in differ-ent smart homes. Using our framework, we were able to transfer activity classifiers trained with activity labels on a source HSN to a target HSN, reaching about 75% of the baseline accuracy on the target HSN without us-ing target activity labels. Moreover, our model can quickly adapt to unseen sensor layouts, which makes it highly suitable for the gradual deployment of real-world ML-based applications. In addition, we show that our framework is resilient to suboptimal graph representations of HSNs. ","[{'version': 'v1', 'created': 'Thu, 12 Dec 2019 11:14:12 GMT'}]",2019-12-13,"[['Medrano', 'Johan', ''], ['Lin', 'Fuchun Joseph', '']]","['Graph Autoencoders', 'Heterogeneous Sensor Networks', 'Smart Homes']" 268,1711.00698,Benoit Girard,"Guillaume Viejo (ISIR), Beno\^it Girard (ISIR), Emmanuel Procyk, Mehdi Khamassi (ISIR)","Adaptive coordination of working-memory and reinforcement learning in non-human primates performing a trial-and-error problem solving task","Behavioural Brain Research, Elsevier, 2017",,10.1016/j.bbr.2017.09.030,,cs.AI q-bio.NC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Accumulating evidence suggest that human behavior in trial-and-error learning tasks based on decisions between discrete actions may involve a combination of reinforcement learning (RL) and working-memory (WM). While the understanding of brain activity at stake in this type of tasks often involve the comparison with non-human primate neurophysiological results, it is not clear whether monkeys use similar combined RL and WM processes to solve these tasks. Here we analyzed the behavior of five monkeys with computational models combining RL and WM. Our model-based analysis approach enables to not only fit trial-by-trial choices but also transient slowdowns in reaction times, indicative of WM use. We found that the behavior of the five monkeys was better explained in terms of a combination of RL and WM despite inter-individual differences. The same coordination dynamics we used in a previous study in humans best explained the behavior of some monkeys while the behavior of others showed the opposite pattern, revealing a possible different dynamics of WM process. We further analyzed different variants of the tested models to open a discussion on how the long pretraining in these tasks may have favored particular coordination dynamics between RL and WM. This points towards either inter-species differences or protocol differences which could be further tested in humans. ","[{'version': 'v1', 'created': 'Thu, 2 Nov 2017 11:53:54 GMT'}]",2019-04-30,"[['Viejo', 'Guillaume', '', 'ISIR'], ['Girard', 'Benoît', '', 'ISIR'], ['Procyk', 'Emmanuel', '', 'ISIR'], ['Khamassi', 'Mehdi', '', 'ISIR']]","['Reinforcement Learning', 'Decision-making', 'Working-Memory', 'Bayesian Inference', 'Computational Modeling', 'Model Comparison']" 269,2105.06291,Christian Bartolo Burlò,"Christian Batrolo Burl\`o, Adrian Francalanza, Alceste Scalas","On the Monitorability of Session Types, in Theory and Practice (Extended Version)",,,10.4230/LIPIcs.ECOOP.2021.22,,cs.PL,http://creativecommons.org/licenses/by/4.0/," In concurrent and distributed systems, software components are expected to communicate according to predetermined protocols and APIs - and if a component does not observe them, the system's reliability is compromised. Furthermore, isolating and fixing protocol/API errors can be very difficult. Many methods have been proposed to check the correctness of communicating systems, ranging from compile-time to run-time verification; among such methods, session types have been applied for both static type-checking, and run-time monitoring. This work takes a fresh look at the run-time verification of communicating systems using session types, in theory and in practice. On the theoretical side, we develop a novel formal model of session-monitored processes; with it, we formulate and prove new results on the monitorability of session types, connecting their run-time and static verification - in terms of soundness (i.e., whether monitors only flag ill-typed processes) and completeness (i.e., whether all ill-typed processes can be flagged by a monitor). On the practical side, we show that our monitoring theory is indeed realisable: building upon our formal model, we develop a Scala toolkit for the automatic generation of session monitors. Our executable monitors can be used to instrument black-box processes written in any programming language; we assess the viability of our approach with a series of benchmarks. ","[{'version': 'v1', 'created': 'Thu, 13 May 2021 13:36:42 GMT'}, {'version': 'v2', 'created': 'Sat, 22 May 2021 09:10:16 GMT'}]",2021-05-25,"[['Burlò', 'Christian Batrolo', ''], ['Francalanza', 'Adrian', ''], ['Scalas', 'Alceste', '']]","['Session types', 'monitorability', 'monitor correctness', 'Scala']" 270,1606.04488,Ashkan Kalantari,"Ashkan Kalantari, Mojtaba Soltanalian, Sina Maleki, Symeon Chatzinotas, and Bj\""orn Ottersten","Directional Modulation via Symbol-Level Precoding: A Way to Enhance Security","This manuscript is submitted to IEEE Journal of Selected Topics in Signal Processing",,10.1109/JSTSP.2016.2600521,,cs.IT math.IT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Wireless communication provides a wide coverage at the cost of exposing information to unintended users. As an information-theoretic paradigm, secrecy rate derives bounds for secure transmission when the channel to the eavesdropper is known. However, such bounds are shown to be restrictive in practice and may require exploitation of specialized coding schemes. In this paper, we employ the concept of directional modulation and follow a signal processing approach to enhance the security of multi-user MIMO communication systems when a multi-antenna eavesdropper is present. Enhancing the security is accomplished by increasing the symbol error rate at the eavesdropper. Unlike the information-theoretic secrecy rate paradigm, we assume that the legitimate transmitter is not aware of its channel to the eavesdropper, which is a more realistic assumption. We examine the applicability of MIMO receiving algorithms at the eavesdropper. Using the channel knowledge and the intended symbols for the users, we design security enhancing symbol-level precoders for different transmitter and eavesdropper antenna configurations. We transform each design problem to a linearly constrained quadratic program and propose two solutions, namely the iterative algorithm and one based on non-negative least squares, at each scenario for a computationally-efficient modulation. Simulation results verify the analysis and show that the designed precoders outperform the benchmark scheme in terms of both power efficiency and security enhancement. ","[{'version': 'v1', 'created': 'Tue, 14 Jun 2016 18:21:19 GMT'}, {'version': 'v2', 'created': 'Mon, 1 Aug 2016 08:53:14 GMT'}]",2016-11-17,"[['Kalantari', 'Ashkan', ''], ['Soltanalian', 'Mojtaba', ''], ['Maleki', 'Sina', ''], ['Chatzinotas', 'Symeon', ''], ['Ottersten', 'Björn', '']]","['Array processing', 'directional modulation', 'M -PSKmodulation', 'physical layer security', 'symbol-level precoding']" 271,1711.01589,Saeed Ghodsi,"Saeed Ghodsi, Hoda Mohammadzade, Erfan Korki","Simultaneous Joint and Object Trajectory Templates for Human Activity Recognition from 3-D Data",,,10.1016/j.jvcir.2018.08.001,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The availability of low-cost range sensors and the development of relatively robust algorithms for the extraction of skeleton joint locations have inspired many researchers to develop human activity recognition methods using the 3-D data. In this paper, an effective method for the recognition of human activities from the normalized joint trajectories is proposed. We represent the actions as multidimensional signals and introduce a novel method for generating action templates by averaging the samples in a ""dynamic time"" sense. Then in order to deal with the variations in the speed and style of performing actions, we warp the samples to the action templates by an efficient algorithm and employ wavelet filters to extract meaningful spatiotemporal features. The proposed method is also capable of modeling the human-object interactions, by performing the template generation and temporal warping procedure via the joint and object trajectories simultaneously. The experimental evaluation on several challenging datasets demonstrates the effectiveness of our method compared to the state-of-the-arts. ","[{'version': 'v1', 'created': 'Sun, 5 Nov 2017 13:52:55 GMT'}]",2018-08-14,"[['Ghodsi', 'Saeed', ''], ['Mohammadzade', 'Hoda', ''], ['Korki', 'Erfan', '']]","['Human Activity Recognition', 'RGB-D Sensors', 'Trajectory-basedRepresentation', 'Action Template', 'Dynamic Time Warping (DTW)', 'HumanObject Interaction']" 272,2207.12964,Guangchen Shi,"Guangchen Shi, Yirui Wu, Jun Liu, Shaohua Wan, Wenhai Wang, Tong Lu","Incremental Few-Shot Semantic Segmentation via Embedding Adaptive-Update and Hyper-class Representation",,"Proceedings of the 30th ACM International Conference on Multimedia 2022",10.1145/3503161.3548218,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Incremental few-shot semantic segmentation (IFSS) targets at incrementally expanding model's capacity to segment new class of images supervised by only a few samples. However, features learned on old classes could significantly drift, causing catastrophic forgetting. Moreover, few samples for pixel-level segmentation on new classes lead to notorious overfitting issues in each learning session. In this paper, we explicitly represent class-based knowledge for semantic segmentation as a category embedding and a hyper-class embedding, where the former describes exclusive semantical properties, and the latter expresses hyper-class knowledge as class-shared semantic properties. Aiming to solve IFSS problems, we present EHNet, i.e., Embedding adaptive-update and Hyper-class representation Network from two aspects. First, we propose an embedding adaptive-update strategy to avoid feature drift, which maintains old knowledge by hyper-class representation, and adaptively update category embeddings with a class-attention scheme to involve new classes learned in individual sessions. Second, to resist overfitting issues caused by few training samples, a hyper-class embedding is learned by clustering all category embeddings for initialization and aligned with category embedding of the new class for enhancement, where learned knowledge assists to learn new knowledge, thus alleviating performance dependence on training data scale. Significantly, these two designs provide representation capability for classes with sufficient semantics and limited biases, enabling to perform segmentation tasks requiring high semantic dependence. Experiments on PASCAL-5i and COCO datasets show that EHNet achieves new state-of-the-art performance with remarkable advantages. ","[{'version': 'v1', 'created': 'Tue, 26 Jul 2022 15:20:07 GMT'}]",2022-10-13,"[['Shi', 'Guangchen', ''], ['Wu', 'Yirui', ''], ['Liu', 'Jun', ''], ['Wan', 'Shaohua', ''], ['Wang', 'Wenhai', ''], ['Lu', 'Tong', '']]","['incremental learning', 'few-shot learning', 'semantic segmentation', 'adaptive update', 'hyper-class representation']" 273,1805.09563,Michele Scalas,"Michele Scalas, Davide Maiorca, Francesco Mercaldo, Corrado Aaron Visaggio, Fabio Martinelli and Giorgio Giacinto","On the Effectiveness of System API-Related Information for Android Ransomware Detection",,Computers & Security 86C (2019) pp. 168-182,10.1016/j.cose.2019.06.004,,cs.CR cs.LG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Ransomware constitutes a significant threat to the Android operating system. It can either lock or encrypt the target devices, and victims are forced to pay ransoms to restore their data. Hence, the prompt detection of such attacks has a priority in comparison to other malicious threats. Previous works on Android malware detection mainly focused on Machine Learning-oriented approaches that were tailored to identifying malware families, without a clear focus on ransomware. More specifically, such approaches resorted to complex information types such as permissions, user-implemented API calls, and native calls. However, this led to significant drawbacks concerning complexity, resilience against obfuscation, and explainability. To overcome these issues, in this paper, we propose and discuss learning-based detection strategies that rely on System API information. These techniques leverage the fact that ransomware attacks heavily resort to System API to perform their actions, and allow distinguishing between generic malware, ransomware and goodware. We tested three different ways of employing System API information, i.e., through packages, classes, and methods, and we compared their performances to other, more complex state-of-the-art approaches. The attained results showed that systems based on System API could detect ransomware and generic malware with very good accuracy, comparable to systems that employed more complex information. Moreover, the proposed systems could accurately detect novel samples in the wild and showed resilience against static obfuscation attempts. Finally, to guarantee early on-device detection, we developed and released on the Android platform a complete ransomware and malware detector (R-PackDroid) that employed one of the methodologies proposed in this paper. ","[{'version': 'v1', 'created': 'Thu, 24 May 2018 09:18:08 GMT'}, {'version': 'v2', 'created': 'Wed, 27 Jun 2018 10:29:25 GMT'}, {'version': 'v3', 'created': 'Thu, 17 Jan 2019 10:58:04 GMT'}, {'version': 'v4', 'created': 'Wed, 26 Jun 2019 09:46:16 GMT'}]",2019-07-03,"[['Scalas', 'Michele', ''], ['Maiorca', 'Davide', ''], ['Mercaldo', 'Francesco', ''], ['Visaggio', 'Corrado Aaron', ''], ['Martinelli', 'Fabio', ''], ['Giacinto', 'Giorgio', '']]","['Malware', 'Android', 'Ransomware', 'Machine Learning', 'Security']" 274,1212.1449,Carlos Ernesto Laciana,"Carlos E. Laciana, Santiago L. Rovere and Guillermo P. Podest\'a","Exploring associations between micro-level models of innovation diffusion and emerging macro-level adoption patterns","20 pages, 4 figures and a table of supplementary data. Accepted for publication",,10.1016/j.physa.2012.12.023,,cs.SI physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A micro-level agent-based model of innovation diffusion was developed that explicitly combines (a) an individual's perception of the advantages or relative utility derived from adoption, and (b) social influence from members of the individual's social network. The micro-model was used to simulate macro-level diffusion patterns emerging from different configurations of micro-model parameters. Micro-level simulation results matched very closely the adoption patterns predicted by the widely-used Bass macro-level model (Bass, 1969). For a portion of the domain, results from micro-simulations were consistent with aggregate-level adoption patterns reported in the literature. Induced Bass macro-level parameters and responded to changes in micro-parameters: (1) increased with the number of innovators and with the rate at which innovators are introduced; (2) increased with the probability of rewiring in small-world networks, as the characteristic path length decreases; and (3) an increase in the overall perceived utility of an innovation caused a corresponding increase in induced and values. Understanding micro to macro linkages can inform the design and assessment of marketing interventions on micro-variables - or processes related to them - to enhance adoption of future products or technologies. ","[{'version': 'v1', 'created': 'Thu, 6 Dec 2012 19:45:50 GMT'}, {'version': 'v2', 'created': 'Wed, 12 Dec 2012 11:48:37 GMT'}, {'version': 'v3', 'created': 'Wed, 2 Jan 2013 16:22:28 GMT'}]",2015-06-12,"[['Laciana', 'Carlos E.', ''], ['Rovere', 'Santiago L.', ''], ['Podestá', 'Guillermo P.', '']]","['innovation diffusion', 'Bass model', 'agent-based models', 'technology adoption']" 275,0901.3987,Mohammad Ravanbakhsh,"Mohammad Ravanbakhsh, Angela I. Barbero Diez, and Oyvind Ytrehus","Improved Delay Estimates for a Queueing Model for Random Linear Coding for Unicast","5 pages, 3 figures, accepted at the 2009 IEEE International Symposium on Information Theory",,10.1109/ISIT.2009.5205892,,cs.IT math.IT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Consider a lossy communication channel for unicast with zero-delay feedback. For this communication scenario, a simple retransmission scheme is optimum with respect to delay. An alternative approach is to use random linear coding in automatic repeat-request (ARQ) mode. We extend the work of Shrader and Ephremides, by deriving an expression for the delay of random linear coding over field of infinite size. Simulation results for various field sizes are also provided. ","[{'version': 'v1', 'created': 'Mon, 26 Jan 2009 13:32:16 GMT'}, {'version': 'v2', 'created': 'Tue, 26 May 2009 11:04:19 GMT'}]",2016-11-18,"[['Ravanbakhsh', 'Mohammad', ''], ['Diez', 'Angela I. Barbero', ''], ['Ytrehus', 'Oyvind', '']]","['Random linear coding', 'Feedback channel', 'Erasure channel', 'ARQ', 'Bulk service', 'Delay']" 276,1805.04586,Peter Kling,"Petra Berenbrink, Robert Els\""asser, Tom Friedetzky, Dominik Kaaser, Peter Kling, Tomasz Radzik",Time-space Trade-offs in Population Protocols for the Majority Problem,,,10.1007/s00446-020-00385-0,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Population protocols are a model for distributed computing that is focused on simplicity and robustness. A system of $n$ identical agents (finite state machines) performs a global task like electing a unique leader or determining the majority opinion when each agent has one of two opinions. Agents communicate in pairwise interactions with randomly assigned communication partners. Quality is measured in two ways: the number of interactions to complete the task and the number of states per agent. We present protocols for the majority problem that allow for a trade-off between these two measures. Compared to the only other trade-off result [Alistarh, Gelashvili, Vojnovic; PODC'15], we improve the number of interactions by almost a linear factor. Furthermore, our protocols can be made uniform (working correctly without any information on the population size $n$), yielding the first uniform majority protocols that stabilize in a subquadratic number of interactions. ","[{'version': 'v1', 'created': 'Fri, 11 May 2018 20:43:15 GMT'}, {'version': 'v2', 'created': 'Sat, 23 Jun 2018 06:10:21 GMT'}, {'version': 'v3', 'created': 'Fri, 17 Jul 2020 15:43:24 GMT'}]",2020-08-24,"[['Berenbrink', 'Petra', ''], ['Elsässer', 'Robert', ''], ['Friedetzky', 'Tom', ''], ['Kaaser', 'Dominik', ''], ['Kling', 'Peter', ''], ['Radzik', 'Tomasz', '']]","['distributed computing', 'majority', 'population protocols', 'stochastic processes']" 277,1208.6335,Aman Chadha Mr.,"Aman Chadha, Sushmit Mallik and Ravdeep Johar","Comparative Study and Optimization of Feature-Extraction Techniques for Content based Image Retrieval","8 pages, 16 figures, 11 tables","International Journal of Computer Applications 52(20):35-42, 2012",10.5120/8320-1959,"Volume 52, Number 20, 2012",cs.CV cs.AI cs.IR cs.LG cs.MM,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The aim of a Content-Based Image Retrieval (CBIR) system, also known as Query by Image Content (QBIC), is to help users to retrieve relevant images based on their contents. CBIR technologies provide a method to find images in large databases by using unique descriptors from a trained image. The image descriptors include texture, color, intensity and shape of the object inside an image. Several feature-extraction techniques viz., Average RGB, Color Moments, Co-occurrence, Local Color Histogram, Global Color Histogram and Geometric Moment have been critically compared in this paper. However, individually these techniques result in poor performance. So, combinations of these techniques have also been evaluated and results for the most efficient combination of techniques have been presented and optimized for each class of image query. We also propose an improvement in image retrieval performance by introducing the idea of Query modification through image cropping. It enables the user to identify a region of interest and modify the initial query to refine and personalize the image retrieval results. ","[{'version': 'v1', 'created': 'Thu, 30 Aug 2012 23:50:06 GMT'}, {'version': 'v2', 'created': 'Thu, 9 Jul 2020 01:34:05 GMT'}]",2020-07-10,"[['Chadha', 'Aman', ''], ['Mallik', 'Sushmit', ''], ['Johar', 'Ravdeep', '']]","['Feature Extraction', 'Image Similarities', 'Feature Matching', 'Image Retrieval']" 278,1610.03263,"Patrick Bl\""obaum","Patrick Bl\""obaum, Takashi Washio, Shohei Shimizu",Error Asymmetry in Causal and Anticausal Regression,,"Behaviormetrika, 2017, 10.1007/s41237-017-0022-z",10.1007/s41237-017-0022-z,,cs.AI cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," It is generally difficult to make any statements about the expected prediction error in an univariate setting without further knowledge about how the data were generated. Recent work showed that knowledge about the real underlying causal structure of a data generation process has implications for various machine learning settings. Assuming an additive noise and an independence between data generating mechanism and its input, we draw a novel connection between the intrinsic causal relationship of two variables and the expected prediction error. We formulate the theorem that the expected error of the true data generating function as prediction model is generally smaller when the effect is predicted from its cause and, on the contrary, greater when the cause is predicted from its effect. The theorem implies an asymmetry in the error depending on the prediction direction. This is further corroborated with empirical evaluations in artificial and real-world data sets. ","[{'version': 'v1', 'created': 'Tue, 11 Oct 2016 10:15:15 GMT'}, {'version': 'v2', 'created': 'Mon, 17 Apr 2017 12:25:44 GMT'}]",2017-04-18,"[['Blöbaum', 'Patrick', ''], ['Washio', 'Takashi', ''], ['Shimizu', 'Shohei', '']]","['causality', 'prediction error', 'error asymmetry', 'causal and anticausalprediction', 'inverse prediction', 'calibration']" 279,1805.08960,Yong Man Ro,"Seong Tae Kim, Hakmin Lee, Hak Gu Kim, Yong Man Ro",ICADx: Interpretable computer aided diagnosis of breast masses,"This paper was presented at SPIE Medical Imaging 2018, Houston, TX, USA",,10.1117/12.2293570,"Proc. SPIE 10575, Medical Imaging 2018: Computer-Aided Diagnosis, 1057522 (27 February 2018)",cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this study, a novel computer aided diagnosis (CADx) framework is devised to investigate interpretability for classifying breast masses. Recently, a deep learning technology has been successfully applied to medical image analysis including CADx. Existing deep learning based CADx approaches, however, have a limitation in explaining the diagnostic decision. In real clinical practice, clinical decisions could be made with reasonable explanation. So current deep learning approaches in CADx are limited in real world deployment. In this paper, we investigate interpretability in CADx with the proposed interpretable CADx (ICADx) framework. The proposed framework is devised with a generative adversarial network, which consists of interpretable diagnosis network and synthetic lesion generative network to learn the relationship between malignancy and a standardized description (BI-RADS). The lesion generative network and the interpretable diagnosis network compete in an adversarial learning so that the two networks are improved. The effectiveness of the proposed method was validated on public mammogram database. Experimental results showed that the proposed ICADx framework could provide the interpretability of mass as well as mass classification. It was mainly attributed to the fact that the proposed method was effectively trained to find the relationship between malignancy and interpretations via the adversarial learning. These results imply that the proposed ICADx framework could be a promising approach to develop the CADx system. ","[{'version': 'v1', 'created': 'Wed, 23 May 2018 04:52:06 GMT'}]",2018-05-24,"[['Kim', 'Seong Tae', ''], ['Lee', 'Hakmin', ''], ['Kim', 'Hak Gu', ''], ['Ro', 'Yong Man', '']]","['Computer-aided diagnosis', 'Interpretable AI', 'Deep learning', 'Explainable deep learning']" 280,1912.11701,Abhishek Singh,"Abhishek Kumar Singh, Manish Gupta, Vasudeva Varma",Hybrid MemNet for Extractive Summarization,"Accepted in CIKM '17 Proceedings of the 2017 ACM on Conference on Information and Knowledge Management","In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM '17). ACM, New York, NY, USA, pages 2303-2306",10.1145/3132847.3133127,,cs.CL cs.IR cs.LG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Extractive text summarization has been an extensive research problem in the field of natural language understanding. While the conventional approaches rely mostly on manually compiled features to generate the summary, few attempts have been made in developing data-driven systems for extractive summarization. To this end, we present a fully data-driven end-to-end deep network which we call as Hybrid MemNet for single document summarization task. The network learns the continuous unified representation of a document before generating its summary. It jointly captures local and global sentential information along with the notion of summary worthy sentences. Experimental results on two different corpora confirm that our model shows significant performance gains compared with the state-of-the-art baselines. ","[{'version': 'v1', 'created': 'Wed, 25 Dec 2019 17:48:09 GMT'}]",2019-12-30,"[['Singh', 'Abhishek Kumar', ''], ['Gupta', 'Manish', ''], ['Varma', 'Vasudeva', '']]","['Summarization', 'Deep Learning', 'Natural Language']" 281,1208.6391,Jalil Boukhobza,"Pierre Olivier (Lab-STICC), Jalil Boukhobza (Lab-STICC), Eric Senn (Lab-STICC)",On Benchmarking Embedded Linux Flash File Systems,"Embed With Linux, Lorient : France (2012)","ACM SIGBED Review 9(2) 43-47 9, 2 (2012) 43-47",,,cs.OS cs.PF,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Due to its attractive characteristics in terms of performance, weight and power consumption, NAND flash memory became the main non volatile memory (NVM) in embedded systems. Those NVMs also present some specific characteristics/constraints: good but asymmetric I/O performance, limited lifetime, write/erase granularity asymmetry, etc. Those peculiarities are either managed in hardware for flash disks (SSDs, SD cards, USB sticks, etc.) or in software for raw embedded flash chips. When managed in software, flash algorithms and structures are implemented in a specific flash file system (FFS). In this paper, we present a performance study of the most widely used FFSs in embedded Linux: JFFS2, UBIFS,and YAFFS. We show some very particular behaviors and large performance disparities for tested FFS operations such as mounting, copying, and searching file trees, compression, etc. ","[{'version': 'v1', 'created': 'Fri, 31 Aug 2012 06:32:38 GMT'}]",2013-12-17,"[['Olivier', 'Pierre', '', 'Lab-STICC'], ['Boukhobza', 'Jalil', '', 'Lab-STICC'], ['Senn', 'Eric', '', 'Lab-STICC']]","['NAND flash memory', 'Embedded storage', 'Flash File Systems', 'I/O Performance', 'Benchmarking']" 282,1304.6810,Guy Van den Broeck,"Daan Fierens, Guy Van den Broeck, Joris Renkens, Dimitar Shterionov, Bernd Gutmann, Ingo Thon, Gerda Janssens, Luc De Raedt","Inference and learning in probabilistic logic programs using weighted Boolean formulas",To appear in Theory and Practice of Logic Programming (TPLP),Theory and Practice of Logic Programming 15 (2015) 358-401,10.1017/S1471068414000076,,cs.AI cs.LG cs.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks such as computing the marginals given evidence and learning from (partial) interpretations have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on a conversion of the program and the queries and evidence to a weighted Boolean formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning from interpretations setting. The algorithm employs Expectation Maximization, and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state-of-the-art in probabilistic logic programming and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations. ","[{'version': 'v1', 'created': 'Thu, 25 Apr 2013 06:10:55 GMT'}]",2020-02-19,"[['Fierens', 'Daan', ''], ['Broeck', 'Guy Van den', ''], ['Renkens', 'Joris', ''], ['Shterionov', 'Dimitar', ''], ['Gutmann', 'Bernd', ''], ['Thon', 'Ingo', ''], ['Janssens', 'Gerda', ''], ['De Raedt', 'Luc', '']]","['Probabilistic logic programming', 'Probabilistic inference', 'Parameter learning']" 283,1811.09160,Michael David,"Michael David (CRAN), A. Aubry (CRAN), W. Derigent (CRAN)",Towards energy efficient buildings: how ICTs can convert advances?,,"IFAC INCOM 2018, Jun 2018, Bergamo, Italy. 51 (11), pp.758 - 763, 2018",10.1016/j.ifacol.2018.08.410,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This work is a positioning research paper for energy efficient building based on ICT solutions. Through the literature about the solutions for energy control of buildings during operational phase, a 3-layers model is proposed to integrate these solutions: first level consists in communication technologies, second level is about data modelling and third level is related to decision-making tools. For each level, key research topics and remaining problems are identified in order to achieve a concrete step forward. 1. CONTEXT AND PROBLEMATICS Through studies on ICT solutions for energy control of buildings, a 3-layers model is proposed to integrate these solutions and position a new way for energy efficiency. The building sector is the largest user of energy and CO 2 emitter in the EU, estimated at approximately 40% of the total consumption (Sharples et al., 1999). According to the International Panel on Climate Change (European Union, 2010), 30% of energy used in buildings could be reduced with net economic benefits by 2030. Such a reduction, however, is meaningless unless ""sustainability"" is considered. Because of these factors, healthy, sustainable, and energy efficient buildings have become active topics in international research; there is an urgent need for a new kind of high-technology driven and integrative research that should lead to the massive development of smart buildings and, in the medium term, smart cities. From a building lifecycle perspective, most of the energy (~80%) is consumed during the operational stage of the building (European Union, 2010) (Bilsen et al., 2013). Reducing building energy consumption may be addressed by the physical modifications which can be operated on a building like upgrading windows, heating systems or modifying thermic characteristics by insulating. Another possible path to reduce the energy consumption of a building is to use Information and Communication Technologies (ICT). According to the International Panel on Climate Change, a reduction of energy even greater than the 30% can be targeted by 2030 by considering ICT solutions. In support of this claim, some specialists believe that ICT-based solutions have the potential to enable 50-80% greenhouse gas reduction globally. In this respect, ICT innovation opens prospects for the development of a new range of new services highly available, flexible, safe, easy to integrate, and user friendly (Bilsen et al., 2013). This, in turn, should foster a sophisticated, reliable and fast communication infrastructure for the connection of various distributed elements (sensors, generators, substations...) that enables to exchange real-time data, information and knowledge needed to improve efficiency (e.g., to monitor and control energy consumption), reliability (e.g., to facilitate maintenance operations), flexibility (e.g., to integrate new rules to meet new consumer expectations), and investment returns, but also to induce a shift in consumer behaviour. ","[{'version': 'v1', 'created': 'Thu, 22 Nov 2018 13:07:24 GMT'}]",2018-11-26,"[['David', 'Michael', '', 'CRAN'], ['Aubry', 'A.', '', 'CRAN'], ['Derigent', 'W.', '', 'CRAN']]","['Energy Control', 'Data Models', 'Networks', 'Information Technology', 'Decentralized Control']" 284,1502.04033,Tobias Reitmaier,Tobias Reitmaier and Bernhard Sick,"The Responsibility Weighted Mahalanobis Kernel for Semi-Supervised Training of Support Vector Machines for Classification",,,10.1016/j.ins.2015.06.027,,cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Kernel functions in support vector machines (SVM) are needed to assess the similarity of input samples in order to classify these samples, for instance. Besides standard kernels such as Gaussian (i.e., radial basis function, RBF) or polynomial kernels, there are also specific kernels tailored to consider structure in the data for similarity assessment. In this article, we will capture structure in data by means of probabilistic mixture density models, for example Gaussian mixtures in the case of real-valued input spaces. From the distance measures that are inherently contained in these models, e.g., Mahalanobis distances in the case of Gaussian mixtures, we derive a new kernel, the responsibility weighted Mahalanobis (RWM) kernel. Basically, this kernel emphasizes the influence of model components from which any two samples that are compared are assumed to originate (that is, the ""responsible"" model components). We will see that this kernel outperforms the RBF kernel and other kernels capturing structure in data (such as the LAP kernel in Laplacian SVM) in many applications where partially labeled data are available, i.e., for semi-supervised training of SVM. Other key advantages are that the RWM kernel can easily be used with standard SVM implementations and training algorithms such as sequential minimal optimization, and heuristics known for the parametrization of RBF kernels in a C-SVM can easily be transferred to this new kernel. Properties of the RWM kernel are demonstrated with 20 benchmark data sets and an increasing percentage of labeled samples in the training data. ","[{'version': 'v1', 'created': 'Fri, 13 Feb 2015 15:48:00 GMT'}, {'version': 'v2', 'created': 'Mon, 16 Feb 2015 13:02:05 GMT'}]",2015-07-03,"[['Reitmaier', 'Tobias', ''], ['Sick', 'Bernhard', '']]","['support vector machine', 'pattern classification', 'kernel function', 'responsibility weighted Mahalanobis kernel', 'semi-supervised learning']" 285,1803.02307,Youngjun Cho,"Youngjun Cho, Andrea Bianchi, Nicolai Marquardt and Nadia Bianchi-Berthouze","RealPen: Providing Realism in Handwriting Tasks on Touch Surfaces using Auditory-Tactile Feedback","Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16)",,10.1145/2984511.2984550,,cs.HC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present RealPen, an augmented stylus for capacitive tablet screens that recreates the physical sensation of writing on paper with a pencil, ball-point pen or marker pen. The aim is to create a more engaging experience when writing on touch surfaces, such as screens of tablet computers. This is achieved by re-generating the friction-induced oscillation and sound of a real writing tool in contact with paper. To generate realistic tactile feedback, our algorithm analyses the frequency spectrum of the friction oscillation generated when writing with traditional tools, extracts principal frequencies, and uses the actuator's frequency response profile for an adjustment weighting function. We enhance the realism by providing the sound feedback aligned with the writing pressure and speed. Furthermore, we investigated the effects of superposition and fluctuation of several frequencies on human tactile perception, evaluated the performance of RealPen, and characterized users' perception and preference of each feedback type. ","[{'version': 'v1', 'created': 'Tue, 6 Mar 2018 17:17:19 GMT'}]",2018-03-07,"[['Cho', 'Youngjun', ''], ['Bianchi', 'Andrea', ''], ['Marquardt', 'Nicolai', ''], ['Bianchi-Berthouze', 'Nadia', '']]","['H.5.2 [information interfaces and presentation (e.g', 'HCI)] User Interfaces – Tactile feedback', 'Auditory feedback']" 286,1612.08811,Ahmed Mateen Mr.,"Ahmed Mateen, Muhammad Azeem, Mohammad Shafiq",AZ Model for Software Development,4 pages,"International Journal of Computer Applications Foundation of Computer Science (FCS), NY, USA Volume 151 - Number 6 Year of Publication: 2016",10.5120/ijca2016911701,,cs.SE,http://creativecommons.org/licenses/by/4.0/," Know a days Computer system become essential and it is most commonly used in every field of life. The computer saves time and use to solve complex and extensive problem quickly in an efficient way. For this purpose software programs are develop to facilitate the works for administrator, offices, banks etc. so Quality is the most important factor as it mostly defines CUSTOMER SATISFACTION which directly related to success of the project so there are many approaches (methodologies) have been developed for this purpose occasionally. The main study of this paper is to propose a new methodology for the development of the software which focuses on the quality improvement of all kind of product. This study will also discuss the features and limitation of the traditional methodologies like waterfall iterative spiral RUP and Agile and show how the new innovative methodology is better than previous one. ","[{'version': 'v1', 'created': 'Wed, 28 Dec 2016 06:47:56 GMT'}]",2016-12-30,"[['Mateen', 'Ahmed', ''], ['Azeem', 'Muhammad', ''], ['Shafiq', 'Mohammad', '']]","['Software process model', 'high quality product', 'innovative methodology', 'Traditional Development Models', 'propose Model']" 287,1601.07795,Setareh Maghsudi,Setareh Maghsudi and Ekram Hossain,"Distributed User Association in Energy Harvesting Small Cell Networks: A Probabilistic Model","27 Pages, Single-Column",,10.1109/TWC.2017.2647946,,cs.IT cs.LG math.IT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We consider a distributed downlink user association problem in a small cell network, where small cells obtain the required energy for providing wireless services to users through ambient energy harvesting. Since energy harvesting is opportunistic in nature, the amount of harvested energy is a random variable, without any a priori known characteristics. Moreover, since users arrive in the network randomly and require different wireless services, the energy consumption is a random variable as well. In this paper, we propose a probabilistic framework to mathematically model and analyze the random behavior of energy harvesting and energy consumption in dense small cell networks. Furthermore, as acquiring (even statistical) channel and network knowledge is very costly in a distributed dense network, we develop a bandit-theoretical formulation for distributed user association when no information is available at users ","[{'version': 'v1', 'created': 'Wed, 27 Jan 2016 17:14:44 GMT'}]",2017-01-09,"[['Maghsudi', 'Setareh', ''], ['Hossain', 'Ekram', '']]","['Small cell networks', 'energy harvesting', 'distributed user association', 'uncertainty', 'bandit']" 288,1512.07766,Fabrice Rouillier,"P.-V Koseleff (OURAGAN, IMJ-PRG, UPMC), D Pecker (IMJ-PRG, UPMC), Fabrice Rouillier (OURAGAN, IMJ-PRG, UPMC), C Tran (UPMC, IMJ-PRG)",Computing Chebyshev knot diagrams,,,10.1016/j.jsc.2017.04.001,hal-01232181,cs.SC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A Chebyshev curve $\mathcal{C}(a,b,c,\phi)$ has a parametrization of the form$ x(t)=T\_a(t)$; \ $y(t)=T\_b(t)$; $z(t)= T\_c(t + \phi)$, where $a,b,c$are integers, $T\_n(t)$ is the Chebyshev polynomialof degree $n$ and $\phi \in \mathbb{R}$. When $\mathcal{C}(a,b,c,\phi)$ is nonsingular,it defines a polynomial knot. We determine all possible knot diagrams when $\phi$ varies. Let $a,b,c$ be integers, $a$ is odd, $(a,b)=1$, we show that one can list all possible knots $\mathcal{C}(a,b,c,\phi)$ in$\tilde{\mathcal{O}}(n^2)$ bit operations, with $n=abc$. ","[{'version': 'v1', 'created': 'Thu, 24 Dec 2015 09:23:25 GMT'}, {'version': 'v2', 'created': 'Tue, 16 May 2017 07:36:49 GMT'}]",2017-05-17,"[['Koseleff', 'P. -V', '', 'OURAGAN, IMJ-PRG, UPMC'], ['Pecker', 'D', '', 'IMJ-PRG, UPMC'], ['Rouillier', 'Fabrice', '', 'OURAGAN, IMJ-PRG, UPMC'], ['Tran', 'C', '', 'UPMC, IMJ-PRG']]","['Zero dimensional systems', 'Chebyshev curves', 'Lissajous knots', 'polynomial knots', 'factorization of Chebyshev polynomials', 'minimal polynomial', 'Chebyshev forms']" 289,1802.08690,Chenhao Tan,Chenhao Tan and Hao Peng and Noah A. Smith,"""You are no Jack Kennedy"": On Media Selection of Highlights from Presidential Debates","10 pages, 5 figures, to appear in Proceedings of WWW 2018, data and more at https://chenhaot.com/papers/debate-quotes.html",,10.1145/3178876.3186142,,cs.SI cs.CL physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Political speeches and debates play an important role in shaping the images of politicians, and the public often relies on media outlets to select bits of political communication from a large pool of utterances. It is an important research question to understand what factors impact this selection process. To quantitatively explore the selection process, we build a three- decade dataset of presidential debate transcripts and post-debate coverage. We first examine the effect of wording and propose a binary classification framework that controls for both the speaker and the debate situation. We find that crowdworkers can only achieve an accuracy of 60% in this task, indicating that media choices are not entirely obvious. Our classifiers outperform crowdworkers on average, mainly in primary debates. We also compare important factors from crowdworkers' free-form explanations with those from data-driven methods and find interesting differences. Few crowdworkers mentioned that ""context matters"", whereas our data show that well-quoted sentences are more distinct from the previous utterance by the same speaker than less-quoted sentences. Finally, we examine the aggregate effect of media preferences towards different wordings to understand the extent of fragmentation among media outlets. By analyzing a bipartite graph built from quoting behavior in our data, we observe a decreasing trend in bipartisan coverage. ","[{'version': 'v1', 'created': 'Fri, 23 Feb 2018 19:00:01 GMT'}]",2018-02-27,"[['Tan', 'Chenhao', ''], ['Peng', 'Hao', ''], ['Smith', 'Noah A.', '']]","['media bias', 'presidential debates', 'quotations', 'wording', 'conversations']" 290,1808.08709,Damien Chablat,"Philippe Wenger (LS2N, ReV), D. Chablat (LS2N, ReV)","Kinetostatic analysis and solution classification of a class of planar tensegrity mechanisms",,"Robotica, Cambridge University Press, 2018, pp.1 - 11",10.1017/S026357471800070X,,cs.RO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Tensegrity mechanisms are composed of rigid and tensile parts that are in equilibrium. They are interesting alternative designs for some applications, such as modelling musculo-skeleton systems. Tensegrity mechanisms are more difficult to analyze than classical mechanisms as the static equilibrium conditions that must be satisfied generally result in complex equations. A class of planar one-degree-of-freedom tensegrity mechanisms with three linear springs is analyzed in detail for the sake of systematic solution classifications. The kinetostatic equations are derived and solved under several loading and geometric conditions. It is shown that these mechanisms exhibit up to six equilibrium configurations, of which one or two are stable, depending on the geometric and loading conditions. Discriminant varieties and cylindrical algebraic decomposition combined with Groebner base elimination are used to classify solutions as a function of the geometric, loading and actuator input parameters. ","[{'version': 'v1', 'created': 'Mon, 27 Aug 2018 07:16:22 GMT'}]",2018-08-28,"[['Wenger', 'Philippe', '', 'LS2N, ReV'], ['Chablat', 'D.', '', 'LS2N, ReV']]","['Tensegrity mechanism', 'kinetostatic model', 'geometric design', 'algebraic computation']" 291,1312.0910,Derek Groen,"Derek Groen, Steven Rieder and Simon Portegies Zwart","MPWide: a light-weight library for efficient message passing over wide area networks","accepted by the Journal Of Open Research Software, 13 pages, 4 figures, 1 table","Journal of Open Research Software 1(1):e9, 2013",10.5334/jors.ah,,cs.DC cs.NI,http://creativecommons.org/licenses/by/3.0/," We present MPWide, a light weight communication library which allows efficient message passing over a distributed network. MPWide has been designed to connect application running on distributed (super)computing resources, and to maximize the communication performance on wide area networks for those without administrative privileges. It can be used to provide message-passing between application, move files, and make very fast connections in client-server environments. MPWide has already been applied to enable distributed cosmological simulations across up to four supercomputers on two continents, and to couple two different bloodflow simulations to form a multiscale simulation. ","[{'version': 'v1', 'created': 'Tue, 3 Dec 2013 19:17:57 GMT'}]",2014-01-06,"[['Groen', 'Derek', ''], ['Rieder', 'Steven', ''], ['Zwart', 'Simon Portegies', '']]","['communication library', 'distributed computing', 'message passing', 'TCP', 'modelcoupling', 'communication performance', 'data transfer', 'co-allocation']" 292,1302.2718,Monika Agarwal,Monika Agarwal,Text Steganographic Approaches: A Comparison,"16 pages, 6 figures, 5 tables","Monika Agarwal, ""Text Steganographic Approaches: A Comparison"", International Journal of Network Security & Its Applications (IJNSA), Vol.5, No.1, January 2013, pp.91-106",10.5121/ijnsa.2013.5107,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper presents three novel approaches of text steganography. The first approach uses the theme of missing letter puzzle where each character of message is hidden by missing one or more letters in a word of cover. The average Jaro score was found to be 0.95 indicating closer similarity between cover and stego file. The second approach hides a message in a wordlist where ASCII value of embedded character determines length and starting letter of a word. The third approach conceals a message, without degrading cover, by using start and end letter of words of the cover. For enhancing the security of secret message, the message is scrambled using one-time pad scheme before being concealed and cipher text is then concealed in cover. We also present an empirical comparison of the proposed approaches with some of the popular text steganographic approaches and show that our approaches outperform the existing approaches. ","[{'version': 'v1', 'created': 'Tue, 12 Feb 2013 07:03:02 GMT'}, {'version': 'v2', 'created': 'Thu, 14 Feb 2013 04:37:25 GMT'}]",2013-02-15,"[['Agarwal', 'Monika', '']]","['Information Hiding', 'Steganography', 'Cryptography', 'Text Steganography']" 293,1003.1291,Alejandro Lorca,"Alejandro Lorca, Eduardo Huedo, Ignacio M. Llorente","The Grid[Way] Job Template Manager, a tool for parameter sweeping","26 pages, 1 figure,",,10.1016/j.cpc.2010.12.041,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and job template automatic indexation. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid reducing the required amount of time and effort. ","[{'version': 'v1', 'created': 'Fri, 5 Mar 2010 15:34:09 GMT'}]",2015-05-18,"[['Lorca', 'Alejandro', ''], ['Huedo', 'Eduardo', ''], ['Llorente', 'Ignacio M.', '']]","['e-science', 'parameter sweep', 'grid computing', 'middleware', 'high-throughput computing']" 294,1708.06822,Mehmet Turan,"Mehmet Turan, Yasin Almalioglu, Helder Araujo, Ender Konukoglu, Metin Sitti","Deep EndoVO: A Recurrent Convolutional Neural Network (RCNN) based Visual Odometry Approach for Endoscopic Capsule Robots",,,10.1016/j.neucom.2017.10.014,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Ingestible wireless capsule endoscopy is an emerging minimally invasive diagnostic technology for inspection of the GI tract and diagnosis of a wide range of diseases and pathologies. Medical device companies and many research groups have recently made substantial progresses in converting passive capsule endoscopes to active capsule robots, enabling more accurate, precise, and intuitive detection of the location and size of the diseased areas. Since a reliable real time pose estimation functionality is crucial for actively controlled endoscopic capsule robots, in this study, we propose a monocular visual odometry (VO) method for endoscopic capsule robot operations. Our method lies on the application of the deep Recurrent Convolutional Neural Networks (RCNNs) for the visual odometry task, where Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are used for the feature extraction and inference of dynamics across the frames, respectively. Detailed analyses and evaluations made on a real pig stomach dataset proves that our system achieves high translational and rotational accuracies for different types of endoscopic capsule robot trajectories. ","[{'version': 'v1', 'created': 'Tue, 22 Aug 2017 21:13:18 GMT'}, {'version': 'v2', 'created': 'Fri, 8 Sep 2017 13:47:53 GMT'}]",2017-11-21,"[['Turan', 'Mehmet', ''], ['Almalioglu', 'Yasin', ''], ['Araujo', 'Helder', ''], ['Konukoglu', 'Ender', ''], ['Sitti', 'Metin', '']]","['Endoscopic Capsule Robot', 'Visual Odometry', 'sequential deep']" 295,1509.06854,Bo Jiang,"Bo Jiang, Peng Chen, W.K. Chan, and Xinchao Zhang","To What Extent Is Stress Testing of Android TV Applications Automated in Industrial Environments?",17 pages,,10.1109/TR.2015.2481601,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," An Android-based smart Television (TV) must reliably run its applications in an embedded program environment under diverse hardware resource conditions. Owing to the diverse hardware components used to build numerous TV models, TV simulators are usually not high enough in fidelity to simulate various TV models, and thus are only regarded as unreliable alternatives when stress testing such applications. Therefore, even though stress testing on real TV sets is tedious, it is the de facto approach to ensure the reliability of these applications in the industry. In this paper, we study to what extent stress testing of smart TV applications can be fully automated in the industrial environments. To the best of our knowledge, no previous work has addressed this important question. We summarize the find-ings collected from 10 industrial test engineers to have tested 20 such TV applications in a real production environment. Our study shows that the industry required test automation supports on high-level GUI object controls and status checking, setup of resource conditions and the interplay between the two. With such supports, 87% of the industrial test specifications of one TV model can be fully automated and 71.4% of them were found to be fully reusable to test a subsequent TV model with major up-grades of hardware, operating system and application. It repre-sents a significant improvement with margins of 28% and 38%, respectively, compared to stress testing without such supports. ","[{'version': 'v1', 'created': 'Wed, 23 Sep 2015 06:26:16 GMT'}]",2015-09-24,"[['Jiang', 'Bo', ''], ['Chen', 'Peng', ''], ['Chan', 'W. K.', ''], ['Zhang', 'Xinchao', '']]","['Stress Testing', 'Android', 'TV', 'Reliability', 'Automation', 'Test Case Creation', 'Software Reuse']" 296,1701.08025,Magnus Andersson,"Hanqing Zhang, Tim Stangner, Krister Wiklund, Alvaro Rodriguez, Magnus Andersson","UmUTracker: A versatile MATLAB program for automated particle tracking of 2D light microscopy or 3D digital holography data",Manuscript including supplementary materials,,10.1016/j.cpc.2017.05.029,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a versatile and fast MATLAB program (UmUTracker) that automatically detects and tracks particles by analyzing video sequences acquired by either light microscopy or digital in-line holographic microscopy. Our program detects the 2D lateral positions of particles with an algorithm based on the isosceles triangle transform, and reconstructs their 3D axial positions by a fast implementation of the Rayleigh-Sommerfeld model using a radial intensity profile. To validate the accuracy and performance of our program, we first track the 2D position of polystyrene particles using bright field and digital holographic microscopy. Second, we determine the 3D particle position by analyzing synthetic and experimentally acquired holograms. Finally, to highlight the full program features, we profile the microfluidic flow in a 100 micrometer high flow chamber. This result agrees with computational fluid dynamic simulations. On a regular desktop computer UmUTracker can detect, analyze, and track multiple particles at 5 frames per second for a template size of 201 x 201 in a 1024 x 1024 image. To enhance usability and to make it easy to implement new functions we used object-oriented programming. UmUTracker is suitable for studies related to: particle dynamics, cell localization, colloids and microfluidic flow measurement. ","[{'version': 'v1', 'created': 'Fri, 27 Jan 2017 12:20:45 GMT'}, {'version': 'v2', 'created': 'Fri, 21 Apr 2017 08:53:23 GMT'}]",2017-09-13,"[['Zhang', 'Hanqing', ''], ['Stangner', 'Tim', ''], ['Wiklund', 'Krister', ''], ['Rodriguez', 'Alvaro', ''], ['Andersson', 'Magnus', '']]","['image processing', 'digital holographic microscopy', 'particle tracking velocimetry', 'microfluidics']" 297,1307.4279,Yuansheng Liu,Yuansheng Liu,"Cryptanalyzing a RGB image encryption algorithm based on DNA encoding and chaos map",,,10.1016/j.optlastec.2014.01.015,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Recently, a RGB image encryption algorithm based on DNA encoding and chaos map has been proposed. It was reported that the encryption algorithm can be broken with four pairs of chosen plain-images and the corresponding cipher-images. This paper re-evaluates the security of the encryption algorithm, and finds that the encryption algorithm can be broken efficiently with only one known plain-image. The effectiveness of the proposed known-plaintext attack is supported by both rigorous theoretical analysis and experimental results. In addition, two other security defects are also reported. ","[{'version': 'v1', 'created': 'Tue, 16 Jul 2013 14:02:07 GMT'}, {'version': 'v2', 'created': 'Thu, 2 Jan 2014 01:52:36 GMT'}]",2014-03-05,"[['Liu', 'Yuansheng', '']]","['image encryption', 'cryptanalysis', 'known-plaintext attack']" 298,1709.03915,"Wilhelmiina H\""am\""al\""ainen","Wilhelmiina H\""am\""al\""ainen and Geoffrey I. Webb","Specious rules: an efficient and effective unifying method for removing misleading and uninformative patterns in association rule mining","Note: This is a corrected version of the paper published in SDM'17. In the equation on page 4, the range of the sum has been corrected","Proceedings of SIAM International Conference on Data Mining, pp. 309-317, SIAM 2017",10.1137/1.9781611974973.35,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present theoretical analysis and a suite of tests and procedures for addressing a broad class of redundant and misleading association rules we call \emph{specious rules}. Specious dependencies, also known as \emph{spurious}, \emph{apparent}, or \emph{illusory associations}, refer to a well-known phenomenon where marginal dependencies are merely products of interactions with other variables and disappear when conditioned on those variables. The most extreme example is Yule-Simpson's paradox where two variables present positive dependence in the marginal contingency table but negative in all partial tables defined by different levels of a confounding factor. It is accepted wisdom that in data of any nontrivial dimensionality it is infeasible to control for all of the exponentially many possible confounds of this nature. In this paper, we consider the problem of specious dependencies in the context of statistical association rule mining. We define specious rules and show they offer a unifying framework which covers many types of previously proposed redundant or misleading association rules. After theoretical analysis, we introduce practical algorithms for detecting and pruning out specious association rules efficiently under many key goodness measures, including mutual information and exact hypergeometric probabilities. We demonstrate that the procedure greatly reduces the number of associations discovered, providing an elegant and effective solution to the problem of association mining discovering large numbers of misleading and redundant rules. ","[{'version': 'v1', 'created': 'Tue, 12 Sep 2017 15:39:47 GMT'}]",2017-09-13,"[['Hämäläinen', 'Wilhelmiina', ''], ['Webb', 'Geoffrey I.', '']]","['specious dependency', 'association rule', 'YuleSimpson’s paradox', 'mutual information', 'Birch’s test']" 299,1908.09505,Peter Kietzmann,"Hauke Petersen, Peter Kietzmann, Cenk G\""undo\u{g}an, Thomas C. Schmidt, Matthias W\""ahlisch",Bluetooth Mesh under the Microscope: How much ICN is Inside?,,Proceedings of ACM ICN 2019,10.1145/3357150.3357398,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Bluetooth (BT) mesh is a new mode of BT operation for low-energy devices that offers group-based publish-subscribe as a network service with additional caching capabilities. These features resemble concepts of information-centric networking (ICN), and the analogy to ICN has been repeatedly drawn in the BT community. In this paper, we compare BT mesh with ICN both conceptually and in real-world experiments. We contrast both architectures and their design decisions in detail. Experiments are performed on an IoT testbed using NDN/CCNx and BT mesh on constrained RIOT nodes. Our findings indicate significant differences both in concepts and in real-world performance. Supported by new insights, we identify synergies and sketch a design of a BT-ICN that benefits from both worlds. ","[{'version': 'v1', 'created': 'Mon, 26 Aug 2019 07:34:00 GMT'}]",2019-08-27,"[['Petersen', 'Hauke', ''], ['Kietzmann', 'Peter', ''], ['Gündoğan', 'Cenk', ''], ['Schmidt', 'Thomas C.', ''], ['Wählisch', 'Matthias', '']]","['IoT', 'ICN', 'Bluetooth', 'Constrained devices']" 300,2102.03955,Eduardo Velloso,"Eduardo Velloso, Carlos Hitoshi Morimoto","A Probabilistic Interpretation of Motion Correlation Selection Techniques",,,10.1145/3411764.3445184,,cs.HC,http://creativecommons.org/licenses/by-nc-nd/4.0/," Motion correlation interfaces are those that present targets moving in different patterns, which the user can select by matching their motion. In this paper, we re-formulate the task of target selection as a probabilistic inference problem. We demonstrate that previous interaction techniques can be modelled using a Bayesian approach and that how modelling the selection task as transmission of information can help us make explicit the assumptions behind similarity measures. We propose ways of incorporating uncertainty into the decision-making process and demonstrate how the concept of entropy can illuminate the measurement of the quality of a design. We apply these techniques in a case study and suggest guidelines for future work. ","[{'version': 'v1', 'created': 'Mon, 8 Feb 2021 00:35:59 GMT'}]",2021-02-09,"[['Velloso', 'Eduardo', ''], ['Morimoto', 'Carlos Hitoshi', '']]","['motion correlation', 'pursuits', 'computational interaction', 'probabilistic input', 'gestures', 'gaze interaction']" 301,1509.04387,Aman Chadha Mr.,"Aman Chadha, Sushmit Mallik, Ankit Chadha, Ravdeep Johar and M. Mani Roja",Dual-Layer Video Encryption using RSA Algorithm,"arXiv admin note: text overlap with arXiv:1104.0800, arXiv:1112.0836 by other authors","International Journal of Computer Applications 116(1):33-40, April 2015",10.5120/20302-2341,,cs.CR cs.MM eess.IV,http://creativecommons.org/licenses/by-nc-sa/4.0/," This paper proposes a video encryption algorithm using RSA and Pseudo Noise (PN) sequence, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by the PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. Combining these techniques, an efficient system, invulnerable to security breaches and attacks with favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio and visual degradation; has been put forth. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarities in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption which is accomplished by using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to comprehend the happenings in the video. ","[{'version': 'v1', 'created': 'Mon, 14 Sep 2015 06:52:20 GMT'}]",2020-09-07,"[['Chadha', 'Aman', ''], ['Mallik', 'Sushmit', ''], ['Chadha', 'Ankit', ''], ['Johar', 'Ravdeep', ''], ['Roja', 'M. Mani', '']]","['encryption', 'video encryption', 'RSA', 'pseudo noise']" 302,1903.01618,Hyunjae Kang,"Hyunjae Kang, Jae-wook Jang, Aziz Mohaisen and Huy Kang Kim","Detecting and Classifying Android Malware using Static Analysis along with Creator Information",International Journal of Distributed Sensor Networks,,10.1155/2015/479174,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Thousands of malicious applications targeting mobile devices, including the popular Android platform, are created every day. A large number of those applications are created by a small number of professional under-ground actors, however previous studies overlooked such information as a feature in detecting and classifying malware, and in attributing malware to creators. Guided by this insight, we propose a method to improve on the performance of Android malware detection by incorporating the creator's information as a feature and classify malicious applications into similar groups. We developed a system that implements this method in practice. Our system enables fast detection of malware by using creator information such as serial number of certificate. Additionally, it analyzes malicious be-haviors and permissions to increase detection accuracy. The system also can classify malware based on similarity scoring. Finally, we showed detection and classification performance with 98% and 90% accuracy respectively. ","[{'version': 'v1', 'created': 'Sat, 2 Mar 2019 13:26:33 GMT'}]",2019-03-06,"[['Kang', 'Hyunjae', ''], ['Jang', 'Jae-wook', ''], ['Mohaisen', 'Aziz', ''], ['Kim', 'Huy Kang', '']]","['Mobile malware', 'Android security', 'malware detection', 'malware classification', 'creator information']" 303,1907.08003,Dominik Aumayr,"Dominik Aumayr, Stefan Marr, Elisa Gonzalez Boix, Hanspeter M\""ossenb\""ock","Asynchronous Snapshots of Actor Systems for Latency-Sensitive Applications","This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 16th ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes (MPLR '19), October 21-22, 2019, Athens, Greece, https://doi.org/10.1145/3357390.3361019",,10.1145/3357390.3361019,,cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The actor model is popular for many types of server applications. Efficient snapshotting of applications is crucial in the deployment of pre-initialized applications or moving running applications to different machines, e.g for debugging purposes. A key issue is that snapshotting blocks all other operations. In modern latency-sensitive applications, stopping the application to persist its state needs to be avoided, because users may not tolerate the increased request latency. In order to minimize the impact of snapshotting on request latency, our approach persists the application's state asynchronously by capturing partial heaps, completing snapshots step by step. Additionally, our solution is transparent and supports arbitrary object graphs. We prototyped our snapshotting approach on top of the Truffle/Graal platform and evaluated it with the Savina benchmarks and the Acme Air microservice application. When performing a snapshot every thousand Acme Air requests, the number of slow requests ( 0.007% of all requests) with latency above 100ms increases by 5.43%. Our Savina microbenchmark results detail how different utilization patterns impact snapshotting cost. To the best of our knowledge, this is the first system that enables asynchronous snapshotting of actor applications, i.e. without stop-the-world synchronization, and thereby minimizes the impact on latency. We thus believe it enables new deployment and debugging options for actor systems. ","[{'version': 'v1', 'created': 'Thu, 18 Jul 2019 11:49:57 GMT'}, {'version': 'v2', 'created': 'Wed, 18 Sep 2019 08:40:30 GMT'}]",2019-09-19,"[['Aumayr', 'Dominik', ''], ['Marr', 'Stefan', ''], ['Boix', 'Elisa Gonzalez', ''], ['Mössenböck', 'Hanspeter', '']]","['Actors', 'Snapshots', 'Micro services', 'Latency']" 304,0910.4568,Ilango Sriram,Ilango Sriram,"SPECI, a simulation tool exploring cloud-scale data centres",,"Ilango Sriram, SPECI, a Simulation Tool Exploring Cloud-Scale Data Centres, In: CloudCom 2009, LNCS 5931, pp. 381-392, 2009, M.G. Jaatun, G. Zhao, and C. Rong (Eds.), Springer-Verlag Berlin Heidelberg 2009",10.1007/978-3-642-10665-1_35,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," There is a rapid increase in the size of data centres (DCs) used to provide cloud computing services. It is commonly agreed that not all properties in the middleware that manages DCs will scale linearly with the number of components. Further, ""normal failure"" complicates the assessment of the per-formance of a DC. However, unlike in other engineering domains, there are no well established tools that allow the prediction of the performance and behav-iour of future generations of DCs. SPECI, Simulation Program for Elastic Cloud Infrastructures, is a simulation tool which allows exploration of aspects of scaling as well as performance properties of future DCs. ","[{'version': 'v1', 'created': 'Fri, 23 Oct 2009 19:05:29 GMT'}]",2015-05-14,"[['Sriram', 'Ilango', '']]","['Cloud computing', 'data centre', 'middleware', 'scaling of performance', 'simulation tools']" 305,1912.01944,Saeideh Ghanbari Azar,Saeideh Ghanbari Azar and Hadi Seyedarabi,"Trajectory-Based Recognition of Dynamic Persian Sign Language Using Hidden Markov Model",,,10.1016/j.csl.2019.101053,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Sign Language Recognition (SLR) is an important step in facilitating the communication among deaf people and the rest of society. Existing Persian sign language recognition systems are mainly restricted to static signs which are not very useful in everyday communications. In this study, a dynamic Persian sign language recognition system is presented. A collection of 1200 videos were captured from 12 individuals performing 20 dynamic signs with a simple white glove. The trajectory of the hands, along with hand shape information were extracted from each video using a simple region-growing technique. These time-varying trajectories were then modeled using Hidden Markov Model (HMM) with Gaussian probability density functions as observations. The performance of the system was evaluated in different experimental strategies. Signer-independent and signer-dependent experiments were performed on the proposed system and the average accuracy of 97.48% was obtained. The experimental results demonstrated that the performance of the system is independent of the subject and it can also perform excellently even with a limited number of training data. ","[{'version': 'v1', 'created': 'Wed, 4 Dec 2019 13:08:58 GMT'}]",2019-12-05,"[['Azar', 'Saeideh Ghanbari', ''], ['Seyedarabi', 'Hadi', '']]","['Sign Language Recognition', 'Persian Sign Language', 'Trajectory', 'Hidden']" 306,1312.2342,Meiappane Aroumougame,"A.Meiappane, B. Chithra, Prasanna Venkataesan","Evaluation of Software Architecture Quality Attribute for an Internet Banking System",4 pages,,10.5120/10189-5062,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The design phase plays a vital role than all other phases in the software development. Software Architecture has to meet both the functional and non-functional quality requirements. The Evaluation of Architecture has to be performed, so that the developers are assured that their selected Architecture will reduce the cost and effort and also enhances the various quality attributes like Availability, Reusability, Performance, Modifiability and Extendibility. The success of the system depends upon the Architecture Evaluation by the essential method to the system. The overall ranking of the candidate architecture is ascertained by assigning weight to the scenario and scenario interaction. In this paper, SAAM method is taken to evaluate the two architectures from the various available method and techniques to achieve the various quality attributes by weight metric. ","[{'version': 'v1', 'created': 'Mon, 9 Dec 2013 08:48:42 GMT'}]",2015-06-18,"[['Meiappane', 'A.', ''], ['Chithra', 'B.', ''], ['Venkataesan', 'Prasanna', '']]","['Software architecture', 'Evaluation', 'quality attributes', 'weight metric']" 307,1701.07756,Siwar Jendoubi,"Siwar Jendoubi, Arnaud Martin, Ludovic Li\'etard, Boutheina Ben Yaghlane, Hend Ben Hadji","Dynamic time warping distance for message propagation classification in Twitter","10 pages, 1 figure ECSQARU 2015, Proceedings of the 13th European Conferences on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 2015",,10.1007/978-3-319-20807-7_38,,cs.AI cs.SI stat.ML,http://creativecommons.org/licenses/by-nc-sa/4.0/," Social messages classification is a research domain that has attracted the attention of many researchers in these last years. Indeed, the social message is different from ordinary text because it has some special characteristics like its shortness. Then the development of new approaches for the processing of the social message is now essential to make its classification more efficient. In this paper, we are mainly interested in the classification of social messages based on their spreading on online social networks (OSN). We proposed a new distance metric based on the Dynamic Time Warping distance and we use it with the probabilistic and the evidential k Nearest Neighbors (k-NN) classifiers to classify propagation networks (PrNets) of messages. The propagation network is a directed acyclic graph (DAG) that is used to record propagation traces of the message, the traversed links and their types. We tested the proposed metric with the chosen k-NN classifiers on real world propagation traces that were collected from Twitter social network and we got good classification accuracies. ","[{'version': 'v1', 'created': 'Thu, 26 Jan 2017 16:14:40 GMT'}]",2017-01-27,"[['Jendoubi', 'Siwar', ''], ['Martin', 'Arnaud', ''], ['Liétard', 'Ludovic', ''], ['Yaghlane', 'Boutheina Ben', ''], ['Hadji', 'Hend Ben', '']]","['Propagation network (PrNet)', 'classification', 'Dynamic Time Warping (DTW)', 'k Nearest Neighbor (k-NN)']" 308,1608.05812,Suleiman Yerima,"Suleiman Y. Yerima, Sakir Sezer, Gavin McWilliams","Analysis of Bayesian Classification based Approaches for Android Malware Detection",arXiv admin note: text overlap with arXiv:1608.00848,"IET Information Security, Volume 8, Issue 1, January 2014, pp. 25-36, Print ISSN 1751-8709, Online ISSN 1751-8717",10.1049/iet-ifs.2013.0095,,cs.CR cs.LG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Mobile malware has been growing in scale and complexity spurred by the unabated uptake of smartphones worldwide. Android is fast becoming the most popular mobile platform resulting in sharp increase in malware targeting the platform. Additionally, Android malware is evolving rapidly to evade detection by traditional signature-based scanning. Despite current detection measures in place, timely discovery of new malware is still a critical issue. This calls for novel approaches to mitigate the growing threat of zero-day Android malware. Hence, in this paper we develop and analyze proactive Machine Learning approaches based on Bayesian classification aimed at uncovering unknown Android malware via static analysis. The study, which is based on a large malware sample set of majority of the existing families, demonstrates detection capabilities with high accuracy. Empirical results and comparative analysis are presented offering useful insight towards development of effective static-analytic Bayesian classification based solutions for detecting unknown Android malware. ","[{'version': 'v1', 'created': 'Sat, 20 Aug 2016 12:10:49 GMT'}]",2016-08-23,"[['Yerima', 'Suleiman Y.', ''], ['Sezer', 'Sakir', ''], ['McWilliams', 'Gavin', '']]","['mobile security', 'Android', 'malware detection', 'data mining', 'Bayesian classification', 'static analysis', 'machine learning']" 309,1706.07786,Ismail Rusli,Ismail Rusli,"Comparison of Modified Kneser-Ney and Witten-Bell Smoothing Techniques in Statistical Language Model of Bahasa Indonesia","9 pages, 3 figures, 2nd International Conference on Information and Communication Technology (ICoICT), Bandung, 2014",,10.1109/ICoICT.2014.6914097,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Smoothing is one technique to overcome data sparsity in statistical language model. Although in its mathematical definition there is no explicit dependency upon specific natural language, different natures of natural languages result in different effects of smoothing techniques. This is true for Russian language as shown by Whittaker (1998). In this paper, We compared Modified Kneser-Ney and Witten-Bell smoothing techniques in statistical language model of Bahasa Indonesia. We used train sets of totally 22M words that we extracted from Indonesian version of Wikipedia. As far as we know, this is the largest train set used to build statistical language model for Bahasa Indonesia. The experiments with 3-gram, 5-gram, and 7-gram showed that Modified Kneser-Ney consistently outperforms Witten-Bell smoothing technique in term of perplexity values. It is interesting to note that our experiments showed 5-gram model for Modified Kneser-Ney smoothing technique outperforms that of 7-gram. Meanwhile, Witten-Bell smoothing is consistently improving over the increase of n-gram order. ","[{'version': 'v1', 'created': 'Fri, 23 Jun 2017 17:43:20 GMT'}]",2017-06-26,"[['Rusli', 'Ismail', '']]","['n-gram', 'Kneser-Ney', 'Witten-Bell', 'smoothing technique', 'statistical language model of Bahasa Indonesia']" 310,1911.08684,Kaiqun Fu,"Kaiqun Fu, Taoran Ji, Liang Zhao, Chang-Tien Lu","TITAN: A Spatiotemporal Feature Learning Framework for Traffic Incident Duration Prediction",,,10.1145/3347146.3359381,,cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Critical incident stages identification and reasonable prediction of traffic incident duration are essential in traffic incident management. In this paper, we propose a traffic incident duration prediction model that simultaneously predicts the impact of the traffic incidents and identifies the critical groups of temporal features via a multi-task learning framework. First, we formulate a sparsity optimization problem that extracts low-level temporal features based on traffic speed readings and then generalizes higher level features as phases of traffic incidents. Second, we propose novel constraints on feature similarity exploiting prior knowledge about the spatial connectivity of the road network to predict the incident duration. The proposed problem is challenging to solve due to the orthogonality constraints, non-convexity objective, and non-smoothness penalties. We develop an algorithm based on the alternating direction method of multipliers (ADMM) framework to solve the proposed formulation. Extensive experiments and comparisons to other models on real-world traffic data and traffic incident records justify the efficacy of our model. ","[{'version': 'v1', 'created': 'Wed, 20 Nov 2019 03:32:43 GMT'}]",2019-11-21,"[['Fu', 'Kaiqun', ''], ['Ji', 'Taoran', ''], ['Zhao', 'Liang', ''], ['Lu', 'Chang-Tien', '']]","['intelligent transportation systems', 'feature learning', 'incident impactanalysis']" 311,1705.07563,Yuxin Su,"Yuxin Su, Irwin King, Michael Lyu",Learning to Rank Using Localized Geometric Mean Metrics,To appear in SIGIR'17,,10.1145/3077136.3080828,,cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Many learning-to-rank (LtR) algorithms focus on query-independent model, in which query and document do not lie in the same feature space, and the rankers rely on the feature ensemble about query-document pair instead of the similarity between query instance and documents. However, existing algorithms do not consider local structures in query-document feature space, and are fragile to irrelevant noise features. In this paper, we propose a novel Riemannian metric learning algorithm to capture the local structures and develop a robust LtR algorithm. First, we design a concept called \textit{ideal candidate document} to introduce metric learning algorithm to query-independent model. Previous metric learning algorithms aiming to find an optimal metric space are only suitable for query-dependent model, in which query instance and documents belong to the same feature space and the similarity is directly computed from the metric space. Then we extend the new and extremely fast global Geometric Mean Metric Learning (GMML) algorithm to develop a localized GMML, namely L-GMML. Based on the combination of local learned metrics, we employ the popular Normalized Discounted Cumulative Gain~(NDCG) scorer and Weighted Approximate Rank Pairwise (WARP) loss to optimize the \textit{ideal candidate document} for each query candidate set. Finally, we can quickly evaluate all candidates via the similarity between the \textit{ideal candidate document} and other candidates. By leveraging the ability of metric learning algorithms to describe the complex structural information, our approach gives us a principled and efficient way to perform LtR tasks. The experiments on real-world datasets demonstrate that our proposed L-GMML algorithm outperforms the state-of-the-art metric learning to rank methods and the stylish query-independent LtR algorithms regarding accuracy and computational efficiency. ","[{'version': 'v1', 'created': 'Mon, 22 May 2017 05:46:44 GMT'}]",2017-05-23,"[['Su', 'Yuxin', ''], ['King', 'Irwin', ''], ['Lyu', 'Michael', '']]","['Learning to Rank', 'Distance Metric Learning', 'Local Metric Learning']" 312,1509.00388,Philipp Kindermann,"Franz J. Brandenburg, Walter Didimo, William S. Evans, Philipp Kindermann, Giuseppe Liotta, Fabrizio Montecchiani",Recognizing and Drawing IC-planar Graphs,,Theor. Comput. Sci. 636: 1-16 (2016),10.1016/j.tcs.2016.04.026,,cs.CG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," IC-planar graphs are those graphs that admit a drawing where no two crossed edges share an end-vertex and each edge is crossed at most once. They are a proper subfamily of the 1-planar graphs. Given an embedded IC-planar graph $G$ with $n$ vertices, we present an $O(n)$-time algorithm that computes a straight-line drawing of $G$ in quadratic area, and an $O(n^3)$-time algorithm that computes a straight-line drawing of $G$ with right-angle crossings in exponential area. Both these area requirements are worst-case optimal. We also show that it is NP-complete to test IC-planarity both in the general case and in the case in which a rotation system is fixed for the input graph. Furthermore, we describe a polynomial-time algorithm to test whether a set of matching edges can be added to a triangulated planar graph such that the resulting graph is IC-planar. ","[{'version': 'v1', 'created': 'Tue, 1 Sep 2015 16:54:16 GMT'}, {'version': 'v2', 'created': 'Mon, 18 Jul 2016 17:06:34 GMT'}]",2016-07-19,"[['Brandenburg', 'Franz J.', ''], ['Didimo', 'Walter', ''], ['Evans', 'William S.', ''], ['Kindermann', 'Philipp', ''], ['Liotta', 'Giuseppe', ''], ['Montecchiani', 'Fabrizio', '']]","['1-Planarity', 'IC-Planarity', 'Right Angle Crossings', 'Graph Drawing', 'NP-hardness']" 313,1808.01552,Leye Wang,"Leye Wang, Bin Guo, Qiang Yang",Smart City Development with Urban Transfer Learning,,"IEEE Computer ( Volume: 51, Issue: 12, Dec. 2018)",10.1109/MC.2018.2880015,,cs.AI cs.LG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Nowadays, the smart city development levels of different cities are still unbalanced. For a large number of cities which just started development, the governments will face a critical cold-start problem: 'how to develop a new smart city service with limited data?'. To address this problem, transfer learning can be leveraged to accelerate the smart city development, which we term the urban transfer learning paradigm. This article investigates the common process of urban transfer learning, aiming to provide city planners and relevant practitioners with guidelines on how to apply this novel learning paradigm. Our guidelines include common transfer strategies to take, general steps to follow, and case studies in public safety, transportation management, etc. We also summarize a few research opportunities and expect this article can attract more researchers to study urban transfer learning. ","[{'version': 'v1', 'created': 'Sun, 5 Aug 2018 02:28:27 GMT'}, {'version': 'v2', 'created': 'Sun, 21 Oct 2018 03:42:02 GMT'}]",2022-05-31,"[['Wang', 'Leye', ''], ['Guo', 'Bin', ''], ['Yang', 'Qiang', '']]","['transfer learning', 'urban computing', 'smart city']" 314,1406.3969,Siddhartha Ghosh,"Siddhartha Ghosh, Sujata Thamke and Kalyani U.R.S","Translation Of Telugu-Marathi and Vice-Versa using Rule Based Machine Translation","13 pages, Fourth International Conference on Advances in Computing and Information Technology (ACITY 2014) Delhi, India - May 2014",,10.5121/csit.2014.4501,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In todays digital world automated Machine Translation of one language to another has covered a long way to achieve different kinds of success stories. Whereas Babel Fish supports a good number of foreign languages and only Hindi from Indian languages, the Google Translator takes care of about 10 Indian languages. Though most of the Automated Machine Translation Systems are doing well but handling Indian languages needs a major care while handling the local proverbs/ idioms. Most of the Machine Translation system follows the direct translation approach while translating one Indian language to other. Our research at KMIT R&D Lab found that handling the local proverbs/idioms is not given enough attention by the earlier research work. This paper focuses on two of the majorly spoken Indian languages Marathi and Telugu, and translation between them. Handling proverbs and idioms of both the languages have been given a special care, and the research outcome shows a significant achievement in this direction. ","[{'version': 'v1', 'created': 'Mon, 16 Jun 2014 10:59:03 GMT'}]",2014-06-17,"[['Ghosh', 'Siddhartha', ''], ['Thamke', 'Sujata', ''], ['S', 'Kalyani U. R.', '']]","['Machine Translation', 'NLP', 'Parts Of Speech', 'Indian Languages']" 315,1902.09749,Andr\'es Monroy-Hern\'andez,"Taryn Bipat, Maarten Willem Bos, Rajan Vaish, Andr\'es Monroy-Hern\'andez",Analyzing the Use of Camera Glasses in the Wild,"In Proceedings of the 37th Annual ACM Conference on Human Factors in Computing Systems (CHI 2019). ACM, New York, NY, USA",,10.1145/3290605.3300651,,cs.HC cs.CY,http://creativecommons.org/licenses/by/4.0/," Camera glasses enable people to capture point-of-view videos using a common accessory, hands-free. In this paper, we investigate how, when, and why people used one such product: Spectacles. We conducted 39 semi-structured interviews and surveys with 191 owners of Spectacles. We found that the form factor elicits sustained usage behaviors, and opens opportunities for new use-cases and types of content captured. We provide a usage typology, and highlight societal and individual factors that influence the classification of behaviors. ","[{'version': 'v1', 'created': 'Tue, 26 Feb 2019 06:20:44 GMT'}]",2019-02-27,"[['Bipat', 'Taryn', ''], ['Bos', 'Maarten Willem', ''], ['Vaish', 'Rajan', ''], ['Monroy-Hernández', 'Andrés', '']]","['camera glasses', 'smart glasses', 'wearables', 'usability']" 316,1803.07613,Radhika Jagtap,"Radhika Jagtap, Matthias Jung, Wendy Elsasser, Christian Weis, Andreas Hansson, Norbert Wehn",Integrating DRAM Power-Down Modes in gem5 and Quantifying their Impact,,"In Proceedings of MEMSYS 2017, Alexandria, VA, USA, October 2, 2017, 10 pages, ACM",10.1145/3132402.3132444,,cs.AR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Across applications, DRAM is a significant contributor to the overall system power, with the DRAM access energy per bit up to three orders of magnitude higher compared to on-chip memory accesses. To improve the power efficiency, DRAM technology incorporates multiple power-down modes, each with different trade-offs between achievable power savings and performance impact due to entry and exit delay requirements. Accurate modeling of these low power modes and entry and exit control is crucial to analyze the trade-offs across controller configurations and workloads with varied memory access characteristics. To address this, we integrate the power-down modes into the DRAM controller model in the open-source simulator gem5. This is the first publicly available full-system simulator with DRAM power-down modes, providing the research community a tool for DRAM power analysis for a breadth of use cases. We validate the power-down functionality with sweep tests, which trigger defined memory access characteristics. We further evaluate the model with real HPC workloads, illustrating the value of integrating low power functionality into a full system simulator. ","[{'version': 'v1', 'created': 'Tue, 20 Mar 2018 19:22:27 GMT'}]",2018-03-22,"[['Jagtap', 'Radhika', ''], ['Jung', 'Matthias', ''], ['Elsasser', 'Wendy', ''], ['Weis', 'Christian', ''], ['Hansson', 'Andreas', ''], ['Wehn', 'Norbert', '']]","['DRAM', 'Power-Down', 'Simulation', 'gem5', 'Power']" 317,1404.3186,Martin Monperrus,"Favio Demarco, Jifeng Xuan (INRIA Lille - Nord Europe), Daniel Le Berre (CRIL), Martin Monperrus (INRIA Lille - Nord Europe)","Automatic Repair of Buggy If Conditions and Missing Preconditions with SMT","CSTVA'2014, India (2014)","6th International Workshop on Constraints in Software Testing, Verification, and Analysis (CSTVA 2014), 2014",10.1145/2593735.2593740,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite which contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. It consists of collecting data from multiple instrumented test suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result -- if there exists one -- into a source code patch. Nopol repairs object oriented code and allows the patches to contain nullness checks as well as specific method calls. ","[{'version': 'v1', 'created': 'Fri, 11 Apr 2014 18:57:52 GMT'}]",2018-07-06,"[['Demarco', 'Favio', '', 'INRIA Lille - Nord Europe'], ['Xuan', 'Jifeng', '', 'INRIA Lille - Nord Europe'], ['Berre', 'Daniel Le', '', 'CRIL'], ['Monperrus', 'Martin', '', 'INRIA Lille - Nord Europe']]","['Automatic repair', 'test suite', 'buggy if condition', 'missing precondition', 'SMT', 'angelic fix localization']" 318,1706.02889,Antonio Pertusa,"Antonio Pertusa, Antonio-Javier Gallego, Marisa Bernabeu","MirBot: A collaborative object recognition system for smartphones using convolutional neural networks","Accepted in Neurocomputing, 2018","Neurocomputing, vol 293, 2018, Pages 87-99",10.1016/j.neucom.2018.03.005,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," MirBot is a collaborative application for smartphones that allows users to perform object recognition. This app can be used to take a photograph of an object, select the region of interest and obtain the most likely class (dog, chair, etc.) by means of similarity search using features extracted from a convolutional neural network (CNN). The answers provided by the system can be validated by the user so as to improve the results for future queries. All the images are stored together with a series of metadata, thus enabling a multimodal incremental dataset labeled with synset identifiers from the WordNet ontology. This dataset grows continuously thanks to the users' feedback, and is publicly available for research. This work details the MirBot object recognition system, analyzes the statistics gathered after more than four years of usage, describes the image classification methodology, and performs an exhaustive evaluation using handcrafted features, convolutional neural codes and different transfer learning techniques. After comparing various models and transformation methods, the results show that the CNN features maintain the accuracy of MirBot constant over time, despite the increasing number of new classes. The app is freely available at the Apple and Google Play stores. ","[{'version': 'v1', 'created': 'Fri, 9 Jun 2017 10:50:43 GMT'}, {'version': 'v2', 'created': 'Tue, 13 Mar 2018 08:34:12 GMT'}, {'version': 'v3', 'created': 'Sat, 24 Mar 2018 08:30:28 GMT'}]",2020-06-05,"[['Pertusa', 'Antonio', ''], ['Gallego', 'Antonio-Javier', ''], ['Bernabeu', 'Marisa', '']]","['Object recognition', 'image datasets', 'Convolutional neural networks', 'transfer learning', 'multimodality', 'human computer interaction']" 319,2102.07195,Mohamed Alrshah,"Ali A. Elrowayati, Mohamed A. Alrshah, M.F.L. Abdullah, Rohaya Latip","HEVC Watermarking Techniques for Authentication and Copyright Applications: Challenges and Opportunities","Review article, 20 pages",,10.1109/ACCESS.2020.3004049,,cs.CR cs.MM,http://creativecommons.org/licenses/by-nc-nd/4.0/," Recently, High-Efficiency Video Coding (HEVC/H.265) has been chosen to replace previous video coding standards, such as H.263 and H.264. Despite the efficiency of HEVC, it still lacks reliable and practical functionalities to support authentication and copyright applications. In order to provide this support, several watermarking techniques have been proposed by many researchers during the last few years. However, those techniques are still suffering from many issues that need to be considered for future designs. In this paper, a Systematic Literature Review (SLR) is introduced to identify HEVC challenges and potential research directions for interested researchers and developers. The time scope of this SLR covers all research articles published during the last six years starting from January 2014 up to the end of April 2020. Forty-two articles have met the criteria of selection out of 343 articles published in this area during the mentioned time scope. A new classification has been drawn followed by an identification of the challenges of implementing HEVC watermarking techniques based on the analysis and discussion of those chosen articles. Eventually, recommendations for HEVC watermarking techniques have been listed to help researchers to improve the existing techniques or to design new efficient ones. ","[{'version': 'v1', 'created': 'Sun, 14 Feb 2021 16:56:42 GMT'}]",2021-02-16,"[['Elrowayati', 'Ali A.', ''], ['Alrshah', 'Mohamed A.', ''], ['Abdullah', 'M. F. L.', ''], ['Latip', 'Rohaya', '']]","[', and synonyms, as in Table 1', 'Initially,this search strategy produces lists of related', 'interestingarticles including many duplicated', 'redundant items']" 320,1809.00094,Miko{\l}aj Morzy,Miko{\l}aj Morzy and Tomasz Kajdanowicz,"Graph Energies of Egocentric Networks and Their Correlation with Vertex Centrality Measures",,"Entropy 2018, 20(12), 916",10.3390/e20120916,,cs.SI physics.soc-ph,http://creativecommons.org/licenses/by/4.0/," Graph energy is the energy of the matrix representation of the graph, where the energy of a matrix is the sum of singular values of the matrix. Depending on the definition of a matrix, one can contemplate graph energy, Randi\'c energy, Laplacian energy, distance energy, and many others. Although theoretical properties of various graph energies have been investigated in the past in the areas of mathematics, chemistry, physics, or graph theory, these explorations have been limited to relatively small graphs representing chemical compounds or theoretical graph classes with strictly defined properties. In this paper we investigate the usefulness of the concept of graph energy in the context of large, complex networks. We show that when graph energies are applied to local egocentric networks, the values of these energies correlate strongly with vertex centrality measures. In particular, for some generative network models graph energies tend to correlate strongly with the betweenness and the eigencentrality of vertices. As the exact computation of these centrality measures is expensive and requires global processing of a network, our research opens the possibility of devising efficient algorithms for the estimation of these centrality measures based only on local information. ","[{'version': 'v1', 'created': 'Sat, 1 Sep 2018 01:25:37 GMT'}, {'version': 'v2', 'created': 'Mon, 12 Nov 2018 20:51:41 GMT'}]",2019-02-12,"[['Morzy', 'Mikołaj', ''], ['Kajdanowicz', 'Tomasz', '']]","['Graph energy', 'Randi´c energy', 'Laplacian energy', 'egocentric network', 'vertex centralitymeasures']" 321,1603.04276,Elaheh Ghassabani,"Elaheh Ghassabani (1), Andrew Gacek (2), Michael W. Whalen (1) ((1) University of Minnesota, (2) Rockwell Collins Advanced Technology Center)",Efficient Generation of Inductive Validity Cores for Safety Properties,"appears in FSE2016: ACM Sigsoft International Symposium on the Foundations of Software Engineering, Seattle, WA, November 13-19, 2016",,10.1145/2950290.2950346,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Symbolic model checkers can construct proofs of properties over very complex models. However, the results reported by the tool when a proof succeeds do not generally provide much insight to the user. It is often useful for users to have traceability information related to the proof: which portions of the model were necessary to construct it. This traceability information can be used to diagnose a variety of modeling problems such as overconstrained axioms and underconstrained properties, and can also be used to measure completeness of a set of requirements over a model. In this paper, we present a new algorithm to efficiently compute the inductive validity core (IVC) within a model necessary for inductive proofs of safety properties for sequential systems. The algorithm is based on the UNSAT core support built into current SMT solvers and a novel encoding of the inductive problem to try to generate a minimal inductive validity core. We prove our algorithm is correct, and describe its implementation in the JKind model checker for Lustre models. We then present an experiment in which we benchmark the algorithm in terms of speed, diversity of produced cores, and minimality, with promising results. ","[{'version': 'v1', 'created': 'Mon, 14 Mar 2016 14:36:58 GMT'}, {'version': 'v2', 'created': 'Fri, 29 Jul 2016 17:36:57 GMT'}]",2016-08-01,"[['Ghassabani', 'Elaheh', ''], ['Gacek', 'Andrew', ''], ['Whalen', 'Michael W.', '']]","['Traceability', 'Requirements Completeness', 'k-Induction', 'IC3/PDR']" 322,2002.03256,Margaret Mitchell,"Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben Hutchinson, Alex Hanna, Timnit Gebru, Jamie Morgenstern",Diversity and Inclusion Metrics in Subset Selection,,"AIES 2020: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society",10.1145/3375627.3375832,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The ethical concept of fairness has recently been applied in machine learning (ML) settings to describe a wide range of constraints and objectives. When considering the relevance of ethical concepts to subset selection problems, the concepts of diversity and inclusion are additionally applicable in order to create outputs that account for social power and access differentials. We introduce metrics based on these concepts, which can be applied together, separately, and in tandem with additional fairness constraints. Results from human subject experiments lend support to the proposed criteria. Social choice methods can additionally be leveraged to aggregate and choose preferable sets, and we detail how these may be applied. ","[{'version': 'v1', 'created': 'Sun, 9 Feb 2020 00:29:40 GMT'}]",2020-02-11,"[['Mitchell', 'Margaret', ''], ['Baker', 'Dylan', ''], ['Moorosi', 'Nyalleng', ''], ['Denton', 'Emily', ''], ['Hutchinson', 'Ben', ''], ['Hanna', 'Alex', ''], ['Gebru', 'Timnit', ''], ['Morgenstern', 'Jamie', '']]","['machine learning fairness', 'subset selection', 'diversity and inclusion']" 323,1207.3932,Kishorjit Nongmeikapam Mr.,"Kishorjit Nongmeikapam, Vidya Raj RK, Oinam Imocha Singh and Sivaji Bandyopadhyay",Automatic Segmentation of Manipuri (Meiteilon) Word into Syllabic Units,"12 Pages, 5 Tables See the link http://airccse.org/journal/jcsit/0612csit11.pdf",,10.5121/ijcsit.2012.4311,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The work of automatic segmentation of a Manipuri language (or Meiteilon) word into syllabic units is demonstrated in this paper. This language is a scheduled Indian language of Tibeto-Burman origin, which is also a very highly agglutinative language. This language usages two script: a Bengali script and Meitei Mayek (Script). The present work is based on the second script. An algorithm is designed so as to identify mainly the syllables of Manipuri origin word. The result of the algorithm shows a Recall of 74.77, Precision of 91.21 and F-Score of 82.18 which is a reasonable score with the first attempt of such kind for this language. ","[{'version': 'v1', 'created': 'Tue, 17 Jul 2012 10:14:24 GMT'}]",2012-07-18,"[['Nongmeikapam', 'Kishorjit', ''], ['RK', 'Vidya Raj', ''], ['Singh', 'Oinam Imocha', ''], ['Bandyopadhyay', 'Sivaji', '']]","['Syllable', 'Syllabic Unit', 'Manipuri', 'Meitei Mayek']" 324,1604.08501,"Andreas Kl\""ockner","Andreas Kl\""ockner and Lucas C. Wilcox and T. Warburton","Array Program Transformation with Loo.py by Example: High-Order Finite Elements",,"ARRAY 2016 Proceedings of the 3rd ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming Pages 9-16",10.1145/2935323.2935325,,cs.PL cs.PF math.NA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," To concisely and effectively demonstrate the capabilities of our program transformation system Loo.py, we examine a transformation path from two real-world Fortran subroutines as found in a weather model to a single high-performance computational kernel suitable for execution on modern GPU hardware. Along the transformation path, we encounter kernel fusion, vectorization, prefetch- ing, parallelization, and algorithmic changes achieved by mechanized conversion between imperative and functional/substitution- based code, among a number more. We conclude with performance results that demonstrate the effects and support the effectiveness of the applied transformations. ","[{'version': 'v1', 'created': 'Wed, 13 Apr 2016 20:56:15 GMT'}]",2018-10-05,"[['Klöckner', 'Andreas', ''], ['Wilcox', 'Lucas C.', ''], ['Warburton', 'T.', '']]","['Code generation', 'high-level language', 'GPU', 'substitution rule', 'embedded language', 'high-performance', 'program transformation', 'OpenCL']" 325,1107.3682,Jun Wu,Jun Wu and Shigeru Shimamoto,"Context-Capture Multi-Valued Decision Fusion With Fault Tolerant Capability For Wireless Sensor Networks","13 pages, 7 figures",,10.5121/ijwmn.2011.3310,,cs.DC cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Wireless sensor networks (WSNs) are usually utilized to perform decision fusion of event detection. Current decision fusion schemes are based on binary valued decision and do not consider bursty contextcapture. However, bursty context and multi-valued data are important characteristics of WSNs. One on hand, the local decisions from sensors usually have bursty and contextual characteristics. Fusion center must capture the bursty context information from the sensors. On the other hand, in practice, many applications need to process multi-valued data, such as temperature and reflection level used for lightening prediction. To address these challenges, the Markov modulated Poisson process (MMPP) and multi-valued logic are introduced into WSNs to perform context-capture multi-valued decision fusion. The overall decision fusion is decomposed into two parts. The first part is the context-capture model for WSNs using superposition MMPP. Through this procedure, the fusion center has a higher probability to get useful local decisions from sensors. The second one is focused on multi-valued decision fusion. Fault detection can also be performed based on MVL. Once the fusion center detects the faulty nodes, all their local decisions are removed from the computation of the likelihood ratios. Finally, we evaluate the capability of context-capture and fault tolerant. The result supports the usefulness of our scheme. ","[{'version': 'v1', 'created': 'Tue, 19 Jul 2011 10:50:36 GMT'}]",2011-07-20,"[['Wu', 'Jun', ''], ['Shimamoto', 'Shigeru', '']]","['Wireless Sensor Networks', 'Decision Fusion', 'Context', 'Fault-Tolerant', 'Multi-Valued Logic']" 326,2103.06379,Taha Hassan,"Taha Hassan, Bob Edmison, Timothy Stelter, D. Scott McCrickard","Learning to Trust: Understanding Editorial Authority and Trust in Recommender Systems for Education","(UMAP '21) Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, June 21 - 25, 2021 (Utrecht, the Netherlands)","Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '21), pp. 24-32. 2021",10.1145/3450613.3456811,,cs.HC,http://creativecommons.org/licenses/by-nc-sa/4.0/," Trust in a recommendation system (RS) is often algorithmically incorporated using implicit or explicit feedback of user-perceived trustworthy social neighbors, and evaluated using user-reported trustworthiness of recommended items. However, real-life recommendation settings can feature group disparities in trust, power, and prerogatives. Our study examines a complementary view of trust which relies on the editorial power relationships and attitudes of all stakeholders in the RS application domain. We devise a simple, first-principles metric of editorial authority, i.e., user preferences for recommendation sourcing, veto power, and incorporating user feedback, such that one RS user group confers trust upon another by ceding or assigning editorial authority. In a mixed-methods study at Virginia Tech, we surveyed faculty, teaching assistants, and students about their preferences of editorial authority, and hypothesis-tested its relationship with trust in algorithms for a hypothetical `Suggested Readings' RS. We discover that higher RS editorial authority assigned to students is linked to the relative trust the course staff allocates to RS algorithm and students. We also observe that course staff favors higher control for the RS algorithm in sourcing and updating the recommendations long-term. Using content analysis, we discuss frequent staff-recommended student editorial roles and highlight their frequent rationales, such as perceived expertise, scaling the learning environment, professional curriculum needs, and learner disengagement. We argue that our analyses highlight critical user preferences to help detect editorial power asymmetry and identify RS use-cases for supporting teaching and research ","[{'version': 'v1', 'created': 'Wed, 10 Mar 2021 22:57:39 GMT'}, {'version': 'v2', 'created': 'Fri, 17 Sep 2021 08:16:35 GMT'}]",2021-09-20,"[['Hassan', 'Taha', ''], ['Edmison', 'Bob', ''], ['Stelter', 'Timothy', ''], ['McCrickard', 'D. Scott', '']]","['recommendation', 'education', 'context', 'trust', 'interpretation']" 327,1712.10213,Simon Foster,"Simon Foster, Ana Cavalcanti, Jim Woodcock, Frank Zeyda",Unifying Theories of Time with Generalised Reactive Processes,"7 pages, accepted for Information Processing Letters, 15th February 2018",,10.1016/j.ipl.2018.02.017,,cs.LO cs.FL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Hoare and He's theory of reactive processes provides a unifying foundation for the formal semantics of concurrent and reactive languages. Though highly applicable, their theory is limited to models that can express event histories as discrete sequences. In this paper, we show how their theory can be generalised by using an abstract trace algebra. We show how the algebra, notably, allows us to also consider continuous-time traces and thereby facilitate models of hybrid systems. We then use this algebra to reconstruct the theory of reactive processes in our generic setting, and prove characteristic laws for sequential and parallel processes, all of which have been mechanically verified in the Isabelle/HOL proof assistant. ","[{'version': 'v1', 'created': 'Fri, 29 Dec 2017 13:09:25 GMT'}, {'version': 'v2', 'created': 'Wed, 21 Feb 2018 11:25:48 GMT'}]",2018-04-05,"[['Foster', 'Simon', ''], ['Cavalcanti', 'Ana', ''], ['Woodcock', 'Jim', ''], ['Zeyda', 'Frank', '']]","['formal semantics', 'hybrid systems', 'process algebra', 'unifying theories', 'theorem proving']" 328,1005.0058,Pino Caballero-Gil,A. F\'uster-Sabater and P. Caballero-Gil,Linear solutions for cryptographic nonlinear sequence generators,,"Physics Letters A Vol. 369, Is. 5-6, 1 Oct. 2007, pp. 432-437",10.1016/j.physleta.2007.04.103,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This letter shows that linear Cellular Automata based on rules 90/150 generate all the solutions of linear difference equations with binary constant coefficients. Some of these solutions are pseudo-random noise sequences with application in cryptography: the sequences generated by the class of shrinking generators. Consequently, this contribution show that shrinking generators do not provide enough guarantees to be used for encryption purposes. Furthermore, the linearization is achieved through a simple algorithm about which a full description is provided. ","[{'version': 'v1', 'created': 'Sat, 1 May 2010 09:14:31 GMT'}]",2015-03-17,"[['Fúster-Sabater', 'A.', ''], ['Caballero-Gil', 'P.', '']]","['Nonlinear Science', 'Cellular Automata', 'Predictability', 'Cryptanalysis']" 329,1907.11352,Jim Buchan,Jim Buchan and Mark Pearl,Leveraging the Mob Mentality: An Experience Report on Mob Programming,6 pages. Best Paper in Industry Collaboration Track at EASE'18,"In EASE'18 Proceedings of the 22nd International Conference on Evaluation and Assessment in Software Engineering 2018 Vol. Part F137700 (pp. 199-204)",10.1145/3210459.3210482,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Mob Programming, or ""mobbing"", is a relatively new collaborative programming practice being experimented with in different organizational contexts. There are a number of claimed benefits to this way of working, but it is not clear if these are realized in practice and under what circumstances. This paper describes the experience of one team's experiences experimenting with Mob Programming over an 18-month period. The context is programming in a software product organization in the Financial Services sector. The paper details the benefits and challenges observed as well as lessons learned from these experiences. It also reports some early work on understanding others' experiences and perceptions of mobbing through a preliminary international survey of 82 practitioners of Mob Programming. The findings from the case and the survey generally align well, as well as suggesting several fruitful areas for further research into Mob Programming. Practitioners should find this useful to extract learnings to inform their own mobbing experiments and its potential impact on collaborative software development. ","[{'version': 'v1', 'created': 'Fri, 26 Jul 2019 01:09:50 GMT'}]",2019-07-29,"[['Buchan', 'Jim', ''], ['Pearl', 'Mark', '']]","['Mob programming', 'Mobbing', 'Collaborative programming']" 330,1307.6939,B\'ela Csaba,"B\'ela Csaba, Thomas A. Plick, Ali Shokoufandeh","Optimal Random Matchings, Tours, and Spanning Trees in Hierarchically Separated Trees","24 pages, to appear in TCS",,10.1016/j.tcs.2013.05.021,,cs.DM math.CO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We derive tight bounds on the expected weights of several combinatorial optimization problems for random point sets of size $n$ distributed among the leaves of a balanced hierarchically separated tree. We consider {\it monochromatic} and {\it bichromatic} versions of the minimum matching, minimum spanning tree, and traveling salesman problems. We also present tight concentration results for the monochromatic problems. ","[{'version': 'v1', 'created': 'Fri, 26 Jul 2013 07:19:33 GMT'}]",2013-07-29,"[['Csaba', 'Béla', ''], ['Plick', 'Thomas A.', ''], ['Shokoufandeh', 'Ali', '']]","['hierarchically separated tree', 'Euclidean optimization', 'metric space']" 331,1705.04832,Renata Rychtarikova,"Renata Rychtarikova, Jan Urban, Dalibor Stys","Zampa's systems theory: a comprehensive theory of measurement in dynamic systems","16 pages, 9 figures","Acta Polytechnica 58(2), 128-143, 2018",10.14311/AP.2018.58.0128,,cs.OH,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The article outlines in memoriam Prof. Pavel Zampa's concepts of system theory which enable to devise a measurement in dynamic systems independently of the particular system behaviour. From the point of view of Zampa's theory, terms like system time, system attributes, system link, system element, input, output, subsystems, and state variables are defined. In Conclusions, Zampa's theory is discussed together with another mathematical approaches of qualitative dynamics known since the 19th century. In Appendices, we present applications of Zampa's technical approach to measurement of complex dynamical (chemical and biological) systems at the Institute of Complex Systems, University of South Bohemia in Ceske Budejovice. ","[{'version': 'v1', 'created': 'Sat, 13 May 2017 14:10:47 GMT'}, {'version': 'v2', 'created': 'Tue, 12 Jun 2018 12:28:38 GMT'}]",2018-06-13,"[['Rychtarikova', 'Renata', ''], ['Urban', 'Jan', ''], ['Stys', 'Dalibor', '']]","['system theory', 'dynamic system', 'theory of measurement', 'complex systems', 'cybernetics']" 332,1606.06461,Dat Quoc Nguyen,"Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu and Mark Johnson",Neighborhood Mixture Model for Knowledge Base Completion,"V1: In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016. V2: Corrected citation to (Krompa{\ss} et al., 2015). V3: A revised version of our CoNLL 2016 paper to update latest related work",,10.18653/v1/K16-1005,,cs.CL cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Knowledge bases are useful resources for many natural language processing tasks, however, they are far from complete. In this paper, we define a novel entity representation as a mixture of its neighborhood in the knowledge base and apply this technique on TransE-a well-known embedding model for knowledge base completion. Experimental results show that the neighborhood information significantly helps to improve the results of the TransE model, leading to better performance than obtained by other state-of-the-art embedding models on three benchmark datasets for triple classification, entity prediction and relation prediction tasks. ","[{'version': 'v1', 'created': 'Tue, 21 Jun 2016 07:54:35 GMT'}, {'version': 'v2', 'created': 'Thu, 21 Jul 2016 16:08:32 GMT'}, {'version': 'v3', 'created': 'Thu, 9 Mar 2017 12:51:31 GMT'}]",2017-03-10,"[['Nguyen', 'Dat Quoc', ''], ['Sirts', 'Kairit', ''], ['Qu', 'Lizhen', ''], ['Johnson', 'Mark', '']]","['Knowledge base completion', 'embedding model', 'mixture model', 'linkprediction', 'triple classification', 'entity prediction', 'relation prediction']" 333,1812.00992,Irene C\'ordoba,"Irene C\'ordoba, Juan de Lara","Ann: A domain-specific language for the effective design and validation of Java annotations","45 pages, 14 figures, 2016 journal publication. arXiv admin note: text overlap with arXiv:1807.03566","Computer Languages, Systems and Structures, 45:164-190, 2016",10.1016/j.cl.2016.02.002,,cs.PL cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper describes a new modelling language for the effective design and validation of Java annotations. Since their inclusion in the 5th edition of Java, annotations have grown from a useful tool for the addition of meta-data to play a central role in many popular software projects. Usually they are not conceived in isolation, but in groups, with dependency and integrity constraints between them. However, the native support provided by Java for expressing this design is very limited. To overcome its deficiencies and make explicit the rich conceptual model which lies behind a set of annotations, we propose a domain-specific modelling language. The proposal has been implemented as an Eclipse plug-in, including an editor and an integrated code generator that synthesises annotation processors. The environment also integrates a model finder, able to detect unsatisfiable constraints between different annotations, and to provide examples of correct annotation usages for validation. The language has been tested using a real set of annotations from the Java Persistence API (JPA). Within this subset we have found enough rich semantics expressible with Ann and omitted nowadays by the Java language, which shows the benefits of Ann in a relevant field of application. ","[{'version': 'v1', 'created': 'Sun, 2 Dec 2018 15:53:24 GMT'}]",2019-10-02,"[['Córdoba', 'Irene', ''], ['de Lara', 'Juan', '']]","['Model Driven Engineering', 'Domain-Specific Languages', 'Codegeneration', 'Java annotations', 'Model Finders']" 334,1602.08863,Lu\'is Cruz-Filipe,Lu\'is Cruz-Filipe and Fabrizio Montesi,Choreographies in Practice,,,10.1007/978-3-319-39570-8_8,,cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Choreographic Programming is a development methodology for concurrent software that guarantees correctness by construction. The key to this paradigm is to disallow mismatched I/O operations in programs, called choreographies, and then mechanically synthesise distributed implementations in terms of standard process models via a mechanism known as EndPoint Projection (EPP). Despite the promise of choreographic programming, there is still a lack of practical evaluations that illustrate the applicability of choreographies to concrete computational problems with standard concurrent solutions. In this work, we explore the potential of choreographies by using Procedural Choreographies (PC), a model that we recently proposed, to write distributed algorithms for sorting (Quicksort), solving linear equations (Gaussian elimination), and computing Fast Fourier Transform. We discuss the lessons learned from this experiment, giving possible directions for the usage and future improvements of choreography languages. ","[{'version': 'v1', 'created': 'Mon, 29 Feb 2016 08:49:49 GMT'}]",2017-08-09,"[['Cruz-Filipe', 'Luís', ''], ['Montesi', 'Fabrizio', '']]","['Choreographies', 'Correctness by Construction', 'DistributedAlgorithms']" 335,1903.09525,Fabio Calefato,"Fabio Calefato, Filippo Lanubile, Nicole Novielli, Luigi Quaranta",EMTk -- The Emotion Mining Toolkit,"Proceedings of the 4th International Workshop on Emotion Awareness in Software Engineering (SEmotion '19), May 2019, pp. 34-37",,10.1109/SEmotion.2019.00014,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The Emotion Mining Toolkit (EMTk) is a suite of modules and datasets offering a comprehensive solution for mining sentiment and emotions from technical text contributed by developers on communication channels. The toolkit is written in Java, Python, and R, and is released under the MIT open source license. In this paper, we describe its architecture and the benchmark against the previous, standalone versions of our sentiment analysis tools. Results show large improvements in terms of speed. ","[{'version': 'v1', 'created': 'Fri, 22 Mar 2019 14:23:17 GMT'}, {'version': 'v2', 'created': 'Sun, 19 May 2019 07:13:22 GMT'}, {'version': 'v3', 'created': 'Mon, 12 Apr 2021 06:43:45 GMT'}]",2021-04-13,"[['Calefato', 'Fabio', ''], ['Lanubile', 'Filippo', ''], ['Novielli', 'Nicole', ''], ['Quaranta', 'Luigi', '']]","['sentiment analysis', 'emotion mining', 'social software']" 336,1811.01997,Ami Paz,"Keren Censor-Hillel, Ami Paz, Noam Ravid",The Sparsest Additive Spanner via Multiple Weighted BFS Trees,"Preliminary versions appeared in OPODIS 2018 conference and in TCS journal",,10.1016/j.tcs.2020.05.035,,cs.DC cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Spanners are fundamental graph structures that sparsify graphs at the cost of small stretch. In particular, in recent years, many sequential algorithms constructing additive all-pairs spanners were designed, providing very sparse small-stretch subgraphs. Remarkably, it was then shown that the known (+6)-spanner constructions are essentially the sparsest possible, that is, a larger additive stretch cannot guarantee a sparser spanner, which brought the stretch-sparsity trade-off to its limit. Distributed constructions of spanners are also abundant. However, for additive spanners, while there were algorithms constructing (+2) and (+4)-all-pairs spanners, the sparsest case of (+6)-spanners remained elusive. We remedy this by designing a new sequential algorithm for constructing a (+6)-spanner with the essentially-optimal sparsity of roughly O(n^{4/3}) edges. We then show a distributed implementation of our algorithm, answering an open problem in [Censor-Hillel et al., DISC 2016]. A main ingredient in our distributed algorithm is an efficient construction of multiple weighted BFS trees. A weighted BFS tree is a BFS tree in a weighted graph, that consists of the lightest among all shortest paths from the root to each node. We present a distributed algorithm in the CONGEST model, that constructs multiple weighted BFS trees in |S|+D-1 rounds, where S is the set of sources and D is the diameter of the network graph. ","[{'version': 'v1', 'created': 'Mon, 5 Nov 2018 19:44:50 GMT'}, {'version': 'v2', 'created': 'Tue, 2 Jun 2020 14:20:52 GMT'}]",2020-06-03,"[['Censor-Hillel', 'Keren', ''], ['Paz', 'Ami', ''], ['Ravid', 'Noam', '']]","['Distributed graph algorithms', 'congest model', 'weighted BFS trees', 'additivespanners']" 337,1112.0647,Christoph Koutschan,"Christoph Koutschan and Thotsaporn ""Aek"" Thanatipanonda",Advanced Computer Algebra for Determinants,16 pages,"Annals of Combinatorics 17(3), 509-523, 2013",10.1007/s00026-013-0183-8,,cs.SC math.CO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We prove three conjectures concerning the evaluation of determinants, which are related to the counting of plane partitions and rhombus tilings. One of them was posed by George Andrews in 1980, the other two were by Guoce Xin and Christian Krattenthaler. Our proofs employ computer algebra methods, namely, the holonomic ansatz proposed by Doron Zeilberger and variations thereof. These variations make Zeilberger's original approach even more powerful and allow for addressing a wider variety of determinants. Finally, we present, as a challenge problem, a conjecture about a closed-form evaluation of Andrews's determinant. ","[{'version': 'v1', 'created': 'Sat, 3 Dec 2011 11:26:40 GMT'}, {'version': 'v2', 'created': 'Mon, 23 Apr 2012 12:54:58 GMT'}, {'version': 'v3', 'created': 'Fri, 16 Aug 2013 11:38:25 GMT'}]",2013-08-19,"[['Koutschan', 'Christoph', ''], ['Thanatipanonda', 'Thotsaporn ""Aek""', '']]","['determinant', 'computer algebra', 'holonomic ansatz', 'rhombustiling']" 338,0810.1773,Amir Leshem,"Eitan Sayag, Amir Leshem, Nikolaos D. Sidiropoulos","Finite Word Length Effects on Transmission Rate in Zero Forcing Linear Precoding for Multichannel DSL",,,10.1109/TSP.2009.2012889,,cs.IT math.IT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Crosstalk interference is the limiting factor in transmission over copper lines. Crosstalk cancelation techniques show great potential for enabling the next leap in DSL transmission rates. An important issue when implementing crosstalk cancelation techniques in hardware is the effect of finite world length on performance. In this paper we provide an analysis of the performance of linear zero-forcing precoders, used for crosstalk compensation, in the presence of finite word length errors. We quantify analytically the trade off between precoder word length and transmission rate degradation. More specifically, we prove a simple formula for the transmission rate loss as a function of the number of bits used for precoding, the signal to noise ratio, and the standard line parameters. We demonstrate, through simulations on real lines, the accuracy of our estimates. Moreover, our results are stable in the presence of channel estimation errors. Finally, we show how to use these estimates as a design tool for DSL linear crosstalk precoders. For example, we show that for standard VDSL2 precoded systems, 14 bits representation of the precoder entries results in capacity loss below 1% for lines over 300m. ","[{'version': 'v1', 'created': 'Thu, 9 Oct 2008 22:40:14 GMT'}]",2009-11-13,"[['Sayag', 'Eitan', ''], ['Leshem', 'Amir', ''], ['Sidiropoulos', 'Nikolaos D.', '']]","['Multichannel DSL', 'vectoring', 'linear precoding', 'capacity estimates', 'quantization']" 339,1308.5811,Kyeong Soo (Joseph) Kim,"Kyeong Soo Kim, Karin Ennser, Yogesh K. Dwivedi",Clean-Slate Design of Next-Generation Optical Access,"4 pages, 3 figures",,10.1109/ICTON.2011.5970910,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We report the current status of our research on the clean-slate design of next-generation optical access (NGOA). We have been studying candidate architectures with a major focus on their elasticity to user demands, energy efficiency, and support of better Quality of Experience (QoE). One of the major challenges in this study is to establish a comparative analysis framework where we can assess the performances of candidate architectures in an objective and quantifiable way. In this paper we describe our efforts to meet this challenge: (1) the development of a new comparison framework based on integrated QoE and statistical hypothesis testing and (2) the implementation of a virtual test bed capturing important aspects from physical layer to application layer to end-user behaviour governing traffic generation. The comparison framework and the virtual test bed will provide researchers a sound basis and useful tools for comparative analysis in the clean-slate design of NGOA. ","[{'version': 'v1', 'created': 'Tue, 27 Aug 2013 09:49:33 GMT'}]",2014-03-25,"[['Kim', 'Kyeong Soo', ''], ['Ennser', 'Karin', ''], ['Dwivedi', 'Yogesh K.', '']]","['Optical access', 'clean-slate design', 'virtual test bed', 'comparison framework', 'energy efficiency']" 340,1504.01039,Jeremy Kepner,"Jeremy Kepner, David Bade, Ayd{\i}n Buluc, John Gilbert, Timothy Mattson, Henning Meyerhenke","Graphs, Matrices, and the GraphBLAS: Seven Good Reasons","10 pages; International Conference on Computational Science workshop on the Applications of Matrix Computational Methods in the Analysis of Modern Data","Procedia Computer Science Volume 51, 2015, Pages 2453-2462, International Conference On Computational Science",10.1016/j.procs.2015.05.353,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The analysis of graphs has become increasingly important to a wide range of applications. Graph analysis presents a number of unique challenges in the areas of (1) software complexity, (2) data complexity, (3) security, (4) mathematical complexity, (5) theoretical analysis, (6) serial performance, and (7) parallel performance. Implementing graph algorithms using matrix-based approaches provides a number of promising solutions to these challenges. The GraphBLAS standard (istc- bigdata.org/GraphBlas) is being developed to bring the potential of matrix based graph algorithms to the broadest possible audience. The GraphBLAS mathematically defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the GraphBLAS and describes how the GraphBLAS can be used to address many of the challenges associated with analysis of graphs. ","[{'version': 'v1', 'created': 'Sat, 4 Apr 2015 19:11:38 GMT'}]",2016-06-21,"[['Kepner', 'Jeremy', ''], ['Bade', 'David', ''], ['Buluc', 'Aydın', ''], ['Gilbert', 'John', ''], ['Mattson', 'Timothy', ''], ['Meyerhenke', 'Henning', '']]","['graphs', 'algorithms', 'matrices', 'linear algebra', 'software standards']" 341,2003.13633,Francisco \'Alvarez,"F. Mart\'inez-\'Alvarez, G. Asencio-Cort\'es, J. F. Torres, D. Guti\'errez-Avil\'es, L. Melgar-Garc\'ia, R. P\'erez-Chac\'on, C. Rubio-Escudero, J. C. Riquelme, A. Troncoso","Coronavirus Optimization Algorithm: A bioinspired metaheuristic based on the COVID-19 propagation model","30 pages, 4 figures",,10.1089/big.2020.0051,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A novel bioinspired metaheuristic is proposed in this work, simulating how the coronavirus spreads and infects healthy people. From an initial individual (the patient zero), the coronavirus infects new patients at known rates, creating new populations of infected people. Every individual can either die or infect and, afterwards, be sent to the recovered population. Relevant terms such as re-infection probability, super-spreading rate or traveling rate are introduced in the model in order to simulate as accurately as possible the coronavirus activity. The Coronavirus Optimization Algorithm has two major advantages compared to other similar strategies. First, the input parameters are already set according to the disease statistics, preventing researchers from initializing them with arbitrary values. Second, the approach has the ability of ending after several iterations, without setting this value either. Infected population initially grows at an exponential rate but after some iterations, when considering social isolation measures and the high number recovered and dead people, the number of infected people starts decreasing in subsequent iterations. Furthermore, a parallel multi-virus version is proposed in which several coronavirus strains evolve over time and explore wider search space areas in less iterations. Finally, the metaheuristic has been combined with deep learning models, in order to find optimal hyperparameters during the training phase. As application case, the problem of electricity load time series forecasting has been addressed, showing quite remarkable performance. ","[{'version': 'v1', 'created': 'Mon, 30 Mar 2020 17:10:02 GMT'}, {'version': 'v2', 'created': 'Thu, 16 Apr 2020 11:28:04 GMT'}]",2020-08-03,"[['Martínez-Álvarez', 'F.', ''], ['Asencio-Cortés', 'G.', ''], ['Torres', 'J. F.', ''], ['Gutiérrez-Avilés', 'D.', ''], ['Melgar-García', 'L.', ''], ['Pérez-Chacón', 'R.', ''], ['Rubio-Escudero', 'C.', ''], ['Riquelme', 'J. C.', ''], ['Troncoso', 'A.', '']]","['Metaheuristics', 'soft computing', 'deep learning', 'Coronavirus']" 342,1306.1773,Daniel Graziotin,"Daniel Graziotin and Pekka Abrahamsson (Free University of Bozen-Bolzano)","Making Sense out of a Jungle of JavaScript Frameworks: towards a Practitioner-friendly Comparative Analysis","5 Pages, 1 Figure. The final publication is available at link.springer.com. Link: http://link.springer.com/chapter/10.1007/978-3-642-39259-7_28. DOI: 10.1007/978-3-642-39259-7_28","Proceedings of the 14th International Conference on Product-Focused Software Process Improvement (PROFES 2013), LNCS 7983, Springer-Verlag, pp. 334-337, 2013",10.1007/978-3-642-39259-7_28,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The field of Web development is entering the HTML5 and CSS3 era and JavaScript is becoming increasingly influential. A large number of JavaScript frameworks have been recently promoted. Practitioners applying the latest technologies need to choose a suitable JavaScript framework (JSF) in order to abstract the frustrating and complicated coding steps and to provide a cross-browser compatibility. Apart from benchmark suites and recommendation from experts, there is little research helping practitioners to select the most suitable JSF to a given situation. The few proposals employ software metrics on the JSF, but practitioners are driven by different concerns when choosing a JSF. As an answer to the critical needs, this paper is a call for action. It proposes a re-search design towards a comparative analysis framework of JSF, which merges researcher needs and practitioner needs. ","[{'version': 'v1', 'created': 'Fri, 7 Jun 2013 16:53:24 GMT'}]",2013-06-10,"[['Graziotin', 'Daniel', '', 'Free University of\n Bozen-Bolzano'], ['Abrahamsson', 'Pekka', '', 'Free University of\n Bozen-Bolzano']]","['Web Development', 'JavaScript Framework', 'Comparative Analysis']" 343,1509.07326,Pascal Potvin,"Pascal Potvin, Mario Bonja, Gordon Bailey, Pierre Busnel",An IMS DSL Developed at Ericsson,"19 pages, 2 figures, 1 table, oral presentation at SDL 2013: Model-Driven Dependability Engineering conference","SDL 2013: Model-Driven Dependability Engineering Volume 7916 of the series Lecture Notes in Computer Science pp 144-162",10.1007/978-3-642-38911-5,,cs.SE cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, we present how we created a Domain Specific Language (DSL) dedicated to IP Multimedia Subsystem (IMS) at Ericsson. First, we introduce IMS and how developers are burdened by its complexity when integrating it in their application. Then we describe the principles we followed to create our new IMS DSL from its core in the Scala language to its syntax. We then present how we integrated it in two existing projects and show how it can save time for developers and how readable the syntax of the IMS DSL is. ","[{'version': 'v1', 'created': 'Thu, 24 Sep 2015 11:46:20 GMT'}]",2015-09-28,"[['Potvin', 'Pascal', ''], ['Bonja', 'Mario', ''], ['Bailey', 'Gordon', ''], ['Busnel', 'Pierre', '']]","['Domain Specific Language', 'IP Multimedia System', 'application development', 'industrial experience']" 344,1703.08698,Sajal Mukhopadhyay,"Vikash Kumar Singh, Sajal Mukhopadhyay, Aniruddh Sharma, Arpan Roy",Hiring Expert Consultants in E-Healthcare: A Two Sided Matching Approach,"33 pages, 9 figures","Trans. on Computational Collective Intelligence, vol 11120, 2018, pp. 178-199",10.1007/978-3-319-99810-7_9,,cs.GT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Very often in some censorious healthcare scenario, there may be a need to have some expert consultancies (especially by doctors) that are not available in-house to the hospital. With the advancement in technologies (such as video conferencing, smartphone, etc.), it has become reality that, for the critical medical cases in the hospitals, expert consultants (ECs) from around the world could be hired, who will serve the patients by their physical or virtual presence. Earlier, this interesting healthcare scenario of hiring the ECs (mainly doctors) from outside of the hospitals had been studied with the robust concepts of mechanism design with or without money. We have tried to model the ECs (mainly doctors) hiring problem as a two-sided matching problem. In this paper, for the first time, to the best of our knowledge, we explore the more realistic two-sided matching in our set-up, where the members of the two participating communities, namely patients and doctors are revealing the strict preference ordering over all the members of the opposite community for a stipulated amount of time. We assume that patients and doctors are strategic in nature. With the theoretical analysis, we demonstrate that the proposed mechanism that results in a stable allocation of doctors to patients is strategy-proof (or truthful) and optimal. The proposed mechanism is also validated with exhaustive experiments. ","[{'version': 'v1', 'created': 'Sat, 25 Mar 2017 14:39:40 GMT'}, {'version': 'v2', 'created': 'Tue, 19 Sep 2017 10:59:41 GMT'}]",2018-10-18,"[['Singh', 'Vikash Kumar', ''], ['Mukhopadhyay', 'Sajal', ''], ['Sharma', 'Aniruddh', ''], ['Roy', 'Arpan', '']]","['E-Healthcare', 'hiring ECs', 'DSIC', 'mechanism design', 'stable allocation']" 345,2110.01848,Xin Zhang,"Xin Zhang, Xiujun Shu, Bingwen Zhang, Jie Ren, Lizhou Zhou, Xin Chen","Cellular Network Radio Propagation Modeling with Deep Convolutional Neural Networks",,"Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August, 2020, Pages 2378",,,cs.IT cs.AI cs.LG math.IT,http://creativecommons.org/licenses/by/4.0/," Radio propagation modeling and prediction is fundamental for modern cellular network planning and optimization. Conventional radio propagation models fall into two categories. Empirical models, based on coarse statistics, are simple and computationally efficient, but are inaccurate due to oversimplification. Deterministic models, such as ray tracing based on physical laws of wave propagation, are more accurate and site specific. But they have higher computational complexity and are inflexible to utilize site information other than traditional global information system (GIS) maps. In this article we present a novel method to model radio propagation using deep convolutional neural networks and report significantly improved performance compared to conventional models. We also lay down the framework for data-driven modeling of radio propagation and enable future research to utilize rich and unconventional information of the site, e.g. satellite photos, to provide more accurate and flexible models. ","[{'version': 'v1', 'created': 'Tue, 5 Oct 2021 07:20:48 GMT'}]",2021-10-06,"[['Zhang', 'Xin', ''], ['Shu', 'Xiujun', ''], ['Zhang', 'Bingwen', ''], ['Ren', 'Jie', ''], ['Zhou', 'Lizhou', ''], ['Chen', 'Xin', '']]","['radio propagation', 'deep convolutional neural networks', 'path loss']" 346,0806.4510,Casper Thomsen,"Olav Geil, Ryutaroh Matsumoto, Casper Thomsen",On Field Size and Success Probability in Network Coding,"16 pages, 3 figures, 2 tables. Accepted for publication at International Workshop on the Arithmetic of Finite Fields, WAIFI 2008","Proceedings of the 2nd International Workshop on the Arithmetic of Finite Fields, WAIFI 2008, pp. 157-173",10.1007/978-3-540-69499-1_14,,cs.IT math.IT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Using tools from algebraic geometry and Groebner basis theory we solve two problems in network coding. First we present a method to determine the smallest field size for which linear network coding is feasible. Second we derive improved estimates on the success probability of random linear network coding. These estimates take into account which monomials occur in the support of the determinant of the product of Edmonds matrices. Therefore we finally investigate which monomials can occur in the determinant of the Edmonds matrix. ","[{'version': 'v1', 'created': 'Fri, 27 Jun 2008 13:05:32 GMT'}]",2008-09-04,"[['Geil', 'Olav', ''], ['Matsumoto', 'Ryutaroh', ''], ['Thomsen', 'Casper', '']]","['Distributed networking', 'linear network coding', 'multicast', 'network coding', 'random network coding']" 347,1807.06087,Sebastian Baltes,Sebastian Baltes and Stephan Diehl,Towards a Theory of Software Development Expertise,"14 pages, 5 figures, 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018), ACM, 2018",,10.1145/3236024.3236061,,cs.SE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Software development includes diverse tasks such as implementing new features, analyzing requirements, and fixing bugs. Being an expert in those tasks requires a certain set of skills, knowledge, and experience. Several studies investigated individual aspects of software development expertise, but what is missing is a comprehensive theory. We present a first conceptual theory of software development expertise that is grounded in data from a mixed-methods survey with 335 software developers and in literature on expertise and expert performance. Our theory currently focuses on programming, but already provides valuable insights for researchers, developers, and employers. The theory describes important properties of software development expertise and which factors foster or hinder its formation, including how developers' performance may decline over time. Moreover, our quantitative results show that developers' expertise self-assessments are context-dependent and that experience is not necessarily related to expertise. ","[{'version': 'v1', 'created': 'Mon, 16 Jul 2018 20:09:25 GMT'}, {'version': 'v2', 'created': 'Wed, 1 Aug 2018 10:46:55 GMT'}, {'version': 'v3', 'created': 'Tue, 9 Oct 2018 11:14:53 GMT'}, {'version': 'v4', 'created': 'Sun, 4 Nov 2018 02:05:32 GMT'}]",2018-11-06,"[['Baltes', 'Sebastian', ''], ['Diehl', 'Stephan', '']]","['software engineering', 'expertise', 'theory', 'psychology']" 348,1305.1396,Marcelo Fiori,"Mat\'ias Di Martino, Guzman Hern\'andez, Marcelo Fiori, Alicia Fern\'andez",A new framework for optimal classifier design,,"Pattern Recognition, Volume 46, Issue 8, August 2013, Pages 2249-2255",10.1016/j.patcog.2013.01.006,,cs.CV cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The use of alternative measures to evaluate classifier performance is gaining attention, specially for imbalanced problems. However, the use of these measures in the classifier design process is still unsolved. In this work we propose a classifier designed specifically to optimize one of these alternative measures, namely, the so-called F-measure. Nevertheless, the technique is general, and it can be used to optimize other evaluation measures. An algorithm to train the novel classifier is proposed, and the numerical scheme is tested with several databases, showing the optimality and robustness of the presented classifier. ","[{'version': 'v1', 'created': 'Tue, 7 May 2013 04:05:24 GMT'}, {'version': 'v2', 'created': 'Thu, 12 Sep 2013 16:09:55 GMT'}]",2013-09-13,"[['Di Martino', 'Matías', ''], ['Hernández', 'Guzman', ''], ['Fiori', 'Marcelo', ''], ['Fernández', 'Alicia', '']]","['Class Imbalance', 'One Class SVM', 'F-measure', 'Recall', 'Precision', 'Fraud Detection', 'Level Set Method']" 349,1607.05088,Giorgio Roffo,Giorgio Roffo,Towards Personality-Aware Recommendation,"This paper is an overview of Personality in Computational Advertising: A Benchmark, G. Roffo, ACM RecSys workshop on Emotions and Personality in Personalized Systems, (EMPIRE 2016)",,10.13140/RG.2.1.4167.0649,,cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In the last decade new ways of shopping online have increased the possibility of buying products and services more easily and faster than ever. In this new context, personality is a key determinant in the decision making of the consumer when shopping. The two main reasons are: firstly, a person's buying choices are influenced by psychological factors like impulsiveness, and secondly, some consumers may be more susceptible to making impulse purchases than others. To the best of our knowledge, the impact of personality factors on advertisements has been largely neglected at the level of recommender systems. This work proposes a highly innovative research which uses a personality perspective to determine the unique associations among the consumer's buying tendency and advert recommendations. As a matter of fact, the lack of a publicly available benchmark for computational advertising do not allow both the exploration of this intriguing research direction and the evaluation of state-of-the-art algorithms. We present the ADS Dataset, a publicly available benchmark for computational advertising enriched with Big-Five users' personality factors and 1,200 personal users' pictures. The proposed benchmark allows two main tasks: rating prediction over 300 real advertisements (i.e., Rich Media Ads, Image Ads, Text Ads) and click-through rate prediction. Moreover, this work carries out experiments, reviews various evaluation criteria used in the literature, and provides a library for each one of them within one integrated toolbox. ","[{'version': 'v1', 'created': 'Mon, 18 Jul 2016 14:08:20 GMT'}, {'version': 'v2', 'created': 'Thu, 21 Jul 2016 11:06:03 GMT'}, {'version': 'v3', 'created': 'Sat, 23 Jul 2016 09:45:57 GMT'}]",2016-07-26,"[['Roffo', 'Giorgio', '']]","['Ads Click Prediction', 'Ads Rating Prediction', 'Computational Advertising', 'Online Advertising', 'Affective Computing']" 350,1512.08047,Omar Al-Kadi,Omar Sultan Al-Kadi,"Assessment of texture measures susceptibility to noise in conventional and contrast enhanced computed tomography lung tumour images","10 pages, 9 figures","Computerized Medical Imaging and Graphics, vol.34, pp.494-503, 2010",10.1016/j.compmedimag.2009.12.011,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Noise is one of the major problems that hinder an effective texture analysis of disease in medical images, which may cause variability in the reported diagnosis. In this paper seven texture measurement methods (two wavelet, two model and three statistical based) were applied to investigate their susceptibility to subtle noise caused by acquisition and reconstruction deficiencies in computed tomography (CT) images. Features of lung tumours were extracted from two different conventional and contrast enhanced CT image data-sets under filtered and noisy conditions. When measuring the noise in the background open-air region of the analysed CT images, noise of Gaussian and Rayleigh distributions with varying mean and variance was encountered, and Fisher distance was used to differentiate between an original extracted lung tumour region of interest (ROI) with the filtered and noisy reconstructed versions. It was determined that the wavelet packet (WP) and fractal dimension measures were the least affected, while the Gaussian Markov random field, run-length and co-occurrence matrices were the most affected by noise. Depending on the selected ROI size, it was concluded that texture measures with fewer extracted features can decrease susceptibility to noise, with the WP and the Gabor filter having a stable performance in both filtered and noisy CT versions and for both data-sets. Knowing how robust each texture measure under noise presence is can assist physicians using an automated lung texture classification system in choosing the appropriate feature extraction algorithm for a more accurate diagnosis. ","[{'version': 'v1', 'created': 'Fri, 25 Dec 2015 23:00:45 GMT'}]",2015-12-29,"[['Al-Kadi', 'Omar Sultan', '']]","['texture analysis', 'feature extraction', 'CT image noise', 'contrast enhanced CT', 'lung tumour']" 351,1307.3005,Sayed Amir Hoseini,Sayed Amir Hoseini and Mohammad Reza Ashraf,"Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion Methods By Matlab",,,10.5121/ijccms.2013.2201,,cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Target tracking using observations from multiple sensors can achieve better estimation performance than a single sensor. The most famous estimation tool in target tracking is Kalman filter. There are several mathematical approaches to combine the observations of multiple sensors by use of Kalman filter. An important issue in applying a proper approach is computational complexity. In this paper, four data fusion algorithms based on Kalman filter are considered including three centralized and one decentralized methods. Using MATLAB, computational loads of these methods are compared while number of sensors increases. The results show that inverse covariance method has the best computational performance if the number of sensors is above 20. For a smaller number of sensors, other methods, especially group sensors, are more appropriate.. ","[{'version': 'v1', 'created': 'Thu, 11 Jul 2013 07:52:08 GMT'}]",2013-07-12,"[['Hoseini', 'Sayed Amir', ''], ['Ashraf', 'Mohammad Reza', '']]","['Data fusion', 'Target Tracking', 'Kalman Filter', 'Multi-sensor', 'MATLAB']" 352,1308.5952,Sergey Dolgov,S. V. Dolgov and A. P. Smirnov and E. E. Tyrtyshnikov,"Low-rank approximation in the numerical modeling of the Farley-Buneman instability in ionospheric plasma",,,10.1016/j.jcp.2014.01.029,,cs.NA math.NA physics.comp-ph physics.plasm-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We consider the numerical modeling of the Farley-Buneman instability development in the earth's ionosphere plasma. The ion behavior is governed by the kinetic Landau equation in the four-dimensional phase space, and since the finite difference discretization on a tensor product grid is used, this equation becomes the most computationally challenging part of the scheme. To relax the complexity and memory consumption, an adaptive model reduction using the low-rank separation of variables, namely the Tensor Train format, is employed. The approach was verified via the prototype MATLAB implementation. Numerical experiments demonstrate the possibility of efficient separation of space and velocity variables, resulting in the solution storage reduction by a factor of order tens. ","[{'version': 'v1', 'created': 'Tue, 27 Aug 2013 19:14:09 GMT'}]",2014-03-05,"[['Dolgov', 'S. V.', ''], ['Smirnov', 'A. P.', ''], ['Tyrtyshnikov', 'E. E.', '']]","['high–dimensional problems', 'DMRG', 'MPS', 'tensor trainformat', 'ionospheric irregularities', 'plasma waves and instabilities', 'Vlasovequation', 'hybrid methods']" 353,1101.3859,Mohammed Sqalli Dr.,"Mohammed H. Sqalli, Sadiq M. Sait, and Syed Asadullah",OSPF Weight Setting Optimization for Single Link Failures,,"International Journal of Computer Networks & Communications (IJCNC), pp:168-183, Vol. 3, No. 1, January 2011",10.5121/ijcnc.2011.3111,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In operational networks, nodes are connected via multiple links for load sharing and redundancy. This is done to make sure that a failure of a link does not disconnect or isolate some parts of the network. However, link failures have an effect on routing, as the routers find alternate paths for the traffic originally flowing through the link which has failed. This effect is severe in case of failure of a critical link in the network, such as backbone links or the links carrying higher traffic loads. When routing is done using the Open Shortest Path First (OSPF) routing protocol, the original weight selection for the normal state topology may not be as efficient for the failure state. In this paper, we investigate the single link failure issue with an objective to find a weight setting which results in efficient routing in normal and failure states. We engineer Tabu Search Iterative heuristic using two different implementation strategies to solve the OSPF weight setting problem for link failure scenarios. We evaluate these heuristics and show through experimental results that both heuristics efficiently handle weight setting for the failure state. A comparison of both strategies is also presented. ","[{'version': 'v1', 'created': 'Thu, 20 Jan 2011 09:54:07 GMT'}]",2011-01-21,"[['Sqalli', 'Mohammed H.', ''], ['Sait', 'Sadiq M.', ''], ['Asadullah', 'Syed', '']]","['Routing', 'Open Shortest Path First (OSPF)', 'OSPF Weight Setting Problem', 'Iterative Heuristics', 'Link Failure', 'Tabu Search']" 354,1908.00112,Jia-Huai You,"David Spies, Jia-Huai You, Ryan Hayward",Domain-Independent Cost-Optimal Planning in ASP,"Paper presented at the 35th International Conference on Logic Programming (ICLP 2019), Las Cruces, New Mexico, USA, 20-25 September 2019, 16 pages",Theory and Practice of Logic Programming 19 (2019) 1124-1142,10.1017/S1471068419000395,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP. ","[{'version': 'v1', 'created': 'Wed, 31 Jul 2019 21:42:24 GMT'}]",2020-02-19,"[['Spies', 'David', ''], ['You', 'Jia-Huai', ''], ['Hayward', 'Ryan', '']]","['Cost-Optimal Planning', 'Answer Set Programming', 'CORE-2 ASP Standard']" 355,1701.00400,Jerome Darmont,"Zhen He, J\'er\^ome Darmont (ERIC)",Evaluating the Dynamic Behavior of Database Applications,arXiv admin note: text overlap with arXiv:0705.1454,"Journal of Database Management, IGI Global, 2005, 16 (2), pp.21 - 45",10.4018/jdm.2005040102,,cs.DB,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper explores the effect that changing access patterns has on the performance of database management systems. Changes in access patterns play an important role in determining the efficiency of key performance optimization techniques, such as dynamic clustering, prefetching, and buffer replacement. However, all existing benchmarks or evaluation frameworks produce static access patterns in which objects are always accessed in the same order repeatedly. Hence, we have proposed the Dynamic Evaluation Framework (DEF) that simulates access pattern changes using configurable styles of change. DEF has been designed to be open and fully extensible (e.g., new access pattern change models can be added easily). In this paper, we instantiate DEF into the Dynamic object Evaluation Framework (DoEF) which is designed for object databases, i.e., object-oriented or object-relational databases such as multi-media databases or most XML databases.The capabilities of DoEF have been evaluated by simulating the execution of four different dynamic clustering algorithms. The results confirm our analysis that flexible conservative re-clustering is the key in determining a clustering algorithm's ability to adapt to changes in access pattern. These results show the effectiveness of DoEF at determining the adaptability of each dynamic clustering algorithm to changes in access pattern in a simulation environment. In a second set of experiments, we have used DoEF to compare the performance of two real-life object stores : Platypus and SHORE. DoEF has helped to reveal the poor swapping performance of Platypus. ","[{'version': 'v1', 'created': 'Mon, 2 Jan 2017 14:20:12 GMT'}]",2017-01-03,"[['He', 'Zhen', '', 'ERIC'], ['Darmont', 'Jérôme', '', 'ERIC']]","['Performance evaluation', 'Dynamic access patterns', 'Benchmarking', 'Object-oriented']" 356,1809.03100,EPTCS,"Michele Chiari (DEIB, Politecnico di Milano), Dino Mandrioli (DEIB, Politecnico di Milano), Matteo Pradella (DEIB, Politecnico di Milano, and IEIIT, Consiglio Nazionale delle Ricerche)",Temporal Logic and Model Checking for Operator Precedence Languages,"In Proceedings GandALF 2018, arXiv:1809.02416","EPTCS 277, 2018, pp. 161-175",10.4204/EPTCS.277.12,,cs.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In the last decades much research effort has been devoted to extending the success of model checking from the traditional field of finite state machines and various versions of temporal logics to suitable subclasses of context-free languages and appropriate extensions of temporal logics. To the best of our knowledge such attempts only covered structured languages, i.e. languages whose structure is immediately ""visible"" in their sentences, such as tree-languages or visibly pushdown ones. In this paper we present a new temporal logic suitable to express and automatically verify properties of operator precedence languages. This ""historical"" language family has been recently proved to enjoy fundamental algebraic and logic properties that make it suitable for model checking applications yet breaking the barrier of visible-structure languages (in fact the original motivation of its inventor Floyd was just to support efficient parsing, i.e. building the ""hidden syntax tree"" of language sentences). We prove that our logic is at least as expressive as analogous logics defined for visible pushdown languages yet covering a much more powerful family; we design a procedure that, given a formula in our logic builds an automaton recognizing the sentences satisfying the formula, whose size is at most exponential in the length of the formula. ","[{'version': 'v1', 'created': 'Mon, 10 Sep 2018 02:32:56 GMT'}]",2018-09-11,"[['Chiari', 'Michele', '', 'DEIB, Politecnico di Milano'], ['Mandrioli', 'Dino', '', 'DEIB,\n Politecnico di Milano'], ['Pradella', 'Matteo', '', 'DEIB, Politecnico di Milano, and\n IEIIT, Consiglio Nazionale delle Ricerche']]","['Operator Precedence Languages', 'Visibly Pushdown Languages', 'Input Driven Languages', 'Linear Temporal Logic', 'Model Checking']" 357,1212.1798,Nizar Rokbani,Nizar Rokbani and Adel M Alimi,"IK-PSO, PSO Inverse Kinematics Solver with Application to Biped Gait Generation","7 pages, 7 figures, ""Published with International Journal of Computer Applications (IJCA)""","International Journal of Computer applications (IJCA) 58 (22), 33-39 (2012)",10.5120/9432-3844,,cs.RO cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper describes a new approach allowing the generation of a simplified Biped gait. This approach combines a classical dynamic modeling with an inverse kinematics' solver based on particle swarm optimization, PSO. First, an inverted pendulum, IP, is used to obtain a simplified dynamic model of the robot and to compute the target position of a key point in biped locomotion, the Centre Of Mass, COM. The proposed algorithm, called IK-PSO, Inverse Kinematics PSO, returns and inverse kinematics solution corresponding to that COM respecting the joints constraints. In This paper the inertia weight PSO variant is used to generate a possible solution according to the stability based fitness function and a set of joints motions constraints. The method is applied with success to a leg motion generation. Since based on a pre-calculated COM, that satisfied the biped stability, the proposal allowed also to plan a walk with application on a small size biped robot. ","[{'version': 'v1', 'created': 'Sat, 8 Dec 2012 14:45:54 GMT'}]",2013-01-08,"[['Rokbani', 'Nizar', ''], ['Alimi', 'Adel M', '']]","['Biped robotics', 'Gait generation', 'Particle Swarm Optimization', 'Inverse kinematics. Inertia weight PSO']" 358,1610.01922,Mohamad Ivan Fanany,"Arif Budiman, Mohamad Ivan Fanany, Chan Basaruddin",Adaptive Online Sequential ELM for Concept Drift Tackling,"Hindawi Publishing. Computational Intelligence and Neuroscience Volume 2016 (2016), Article ID 8091267, 17 pages Received 29 January 2016, Accepted 17 May 2016. Special Issue on ""Advances in Neural Networks and Hybrid-Metaheuristics: Theory, Algorithms, and Novel Engineering Applications"". Academic Editor: Stefan Haufe","Computational Intelligence and Neuroscience Volume 2016 (2016), Article ID 8091267, 17 pages",10.1155/2016/8091267,8091267,cs.AI cs.LG cs.NE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A machine learning method needs to adapt to over time changes in the environment. Such changes are known as concept drift. In this paper, we propose concept drift tackling method as an enhancement of Online Sequential Extreme Learning Machine (OS-ELM) and Constructive Enhancement OS-ELM (CEOS-ELM) by adding adaptive capability for classification and regression problem. The scheme is named as adaptive OS-ELM (AOS-ELM). It is a single classifier scheme that works well to handle real drift, virtual drift, and hybrid drift. The AOS-ELM also works well for sudden drift and recurrent context change type. The scheme is a simple unified method implemented in simple lines of code. We evaluated AOS-ELM on regression and classification problem by using concept drift public data set (SEA and STAGGER) and other public data sets such as MNIST, USPS, and IDS. Experiments show that our method gives higher kappa value compared to the multiclassifier ELM ensemble. Even though AOS-ELM in practice does not need hidden nodes increase, we address some issues related to the increasing of the hidden nodes such as error condition and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect underfitting condition. ","[{'version': 'v1', 'created': 'Thu, 6 Oct 2016 16:08:52 GMT'}]",2016-10-10,"[['Budiman', 'Arif', ''], ['Fanany', 'Mohamad Ivan', ''], ['Basaruddin', 'Chan', '']]","['adaptive', 'concept drift', 'extreme learning machine', 'online sequential']" 359,1809.00223,Carlos Vega,"Carlos Vega Moreno, Eduardo Miravalls Sierra, Guillermo Juli\'an Moreno, Jorge E. L\'opez de Vergara, Eduardo Maga\~na, Javier Aracil","Evaluation of the performance challenges in automatic traffic report generation with huge data volumes",Preprint. Pre-peer reviewed version. 15 pages. 7 figures. 1 table,,10.1002/nem.2044,,cs.NI cs.PF,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we analyze the performance issues involved in the generation of auto- mated traffic reports for large IT infrastructures. Such reports allows the IT manager to proactively detect possible abnormal situations and roll out the corresponding cor- rective actions. With the ever-increasing bandwidth of current networks, the design of automated traffic report generation systems is very challenging. In a first step, the huge volumes of collected traffic are transformed into enriched flow records obtained from diverse collectors and dissectors. Then, such flow records, along with time series obtained from the raw traffic, are further processed to produce a usable report. As will be shown, the data volume in flow records is very large as well and requires careful selection of the Key Performance Indicators (KPIs) to be included in the report. In this regard, we discuss the use of high-level languages versus low- level approaches, in terms of speed and versatility. Furthermore, our design approach is targeted for rapid development in commodity hardware, which is essential to cost-effectively tackle demanding traffic analysis scenarios. ","[{'version': 'v1', 'created': 'Sat, 1 Sep 2018 17:01:04 GMT'}]",2018-09-05,"[['Moreno', 'Carlos Vega', ''], ['Sierra', 'Eduardo Miravalls', ''], ['Moreno', 'Guillermo Julián', ''], ['de Vergara', 'Jorge E. López', ''], ['Magaña', 'Eduardo', ''], ['Aracil', 'Javier', '']]","['Traffic analysis', 'Network management', 'Automatic reports']" 360,1711.03147,Clemente Rubio-Manzano,"Clemente Rubio-Manzano, Martin Pereira-Fari\~na","On the incorporation of interval-valued fuzzy sets into the Bousi-Prolog system: declarative semantics, implementation and applications",,"Interactions Between Computational Intelligence and Mathematics Studies in Computational Intelligence, vol 794. Springer 2018",10.1007/978-3-030-01632-6_1,,cs.AI cs.CL cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we analyse the benefits of incorporating interval-valued fuzzy sets into the Bousi-Prolog system. A syntax, declarative semantics and im- plementation for this extension is presented and formalised. We show, by using potential applications, that fuzzy logic programming frameworks enhanced with them can correctly work together with lexical resources and ontologies in order to improve their capabilities for knowledge representation and reasoning. ","[{'version': 'v1', 'created': 'Wed, 8 Nov 2017 20:25:43 GMT'}]",2021-01-07,"[['Rubio-Manzano', 'Clemente', ''], ['Pereira-Fariña', 'Martin', '']]","['Interval-valued fuzzy sets', 'Approximate Reasoning', 'Lexical Knowl']" 361,1412.8527,EPTCS,"Anne Preller (LIRMM, France)",From Logical to Distributional Models,"In Proceedings QPL 2013, arXiv:1412.7917","EPTCS 171, 2014, pp. 113-131",10.4204/EPTCS.171.11,,cs.LO cs.CL quant-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The paper relates two variants of semantic models for natural language, logical functional models and compositional distributional vector space models, by transferring the logic and reasoning from the logical to the distributional models. The geometrical operations of quantum logic are reformulated as algebraic operations on vectors. A map from functional models to vector space models makes it possible to compare the meaning of sentences word by word. ","[{'version': 'v1', 'created': 'Tue, 30 Dec 2014 01:43:39 GMT'}]",2014-12-31,"[['Preller', 'Anne', '', 'LIRMM, France']]","['compositional semantics for natural language', 'compact closed categories', 'quantum logic', 'logical models', 'vector']" 362,1805.00224,Ibraheem Kasim Ibraheem AL-Timeemee,"Fatin H. Ajeil, Ibraheem Kasim Ibraheem, Mouayad A. Sahib, Amjad J. Humaidi","Multi-objective path planning of an autonomous mobile robot using hybrid PSO-MFB optimization algorithm",,"Volume 89, April 2020, 106076",10.1016/j.asoc.2020.106076,,cs.RO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The main aim of this paper is to solve a path planning problem for an autonomous mobile robot in static and dynamic environments. The problem is solved by determining the collision-free path that satisfies the chosen criteria for shortest distance and path smoothness. The proposed path planning algorithm mimics the real world by adding the actual size of the mobile robot to that of the obstacles and formulating the problem as a moving point in the free-space. The proposed algorithm consists of three modules. The first module forms an optimized path by conducting a hybridized Particle Swarm Optimization-Modified Frequency Bat (PSO-MFB) algorithm that minimizes distance and follows path smoothness criteria. The second module detects any infeasible points generated by the proposed hybrid PSO-MFB Algorithm by a novel Local Search (LS) algorithm integrated with the hybrid PSO-MFB algorithm to be converted into feasible solutions. The third module features obstacle detection and avoidance (ODA), which is triggered when the mobile robot detects obstacles within its sensing region, allowing it to avoid collision with obstacles. The simulation results indicate that this method generates an optimal feasible path even in complex dynamic environments and thus overcomes the shortcomings of conventional approaches such as grid methods. Moreover, compared to recent path planning techniques, simulation results show that the proposed hybrid PSO-MFB algorithm is highly competitive in terms of path optimality. ","[{'version': 'v1', 'created': 'Tue, 1 May 2018 07:56:43 GMT'}, {'version': 'v2', 'created': 'Thu, 10 May 2018 21:42:02 GMT'}, {'version': 'v3', 'created': 'Fri, 20 Mar 2020 21:41:10 GMT'}]",2020-03-24,"[['Ajeil', 'Fatin H.', ''], ['Ibraheem', 'Ibraheem Kasim', ''], ['Sahib', 'Mouayad A.', ''], ['Humaidi', 'Amjad J.', '']]","['Autonomous Mobile Robot', 'Robot path planning', 'particle swarm optimization', 'bat algorithm', 'collision avoidance. 1']" 363,1606.05506,Sebastian Stabinger MSc,"Sebastian Stabinger, Antonio Rodriguez-Sanchez, Justus Piater",Learning Abstract Classes using Deep Learning,"To be published in the proceedings of the International Conference on Bio-inspired Information and Communications Technologies 2015",,10.4108/eai.3-12-2015.2262468,,cs.CV cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Humans are generally good at learning abstract concepts about objects and scenes (e.g.\ spatial orientation, relative sizes, etc.). Over the last years convolutional neural networks have achieved almost human performance in recognizing concrete classes (i.e.\ specific object categories). This paper tests the performance of a current CNN (GoogLeNet) on the task of differentiating between abstract classes which are trivially differentiable for humans. We trained and tested the CNN on the two abstract classes of horizontal and vertical orientation and determined how well the network is able to transfer the learned classes to other, previously unseen objects. ","[{'version': 'v1', 'created': 'Fri, 17 Jun 2016 12:51:23 GMT'}]",2016-08-01,"[['Stabinger', 'Sebastian', ''], ['Rodriguez-Sanchez', 'Antonio', ''], ['Piater', 'Justus', '']]","['Deep Learning', 'Convolutional Neural Networks', 'Visual Cortex', 'Abstract Reasoning']" 364,1903.03276,Prakash Murali,"Prakash Murali and Ali Javadi-Abhari and Frederic T. Chong and Margaret Martonosi","Formal Constraint-based Compilation for Noisy Intermediate-Scale Quantum Systems","Invited paper in Special Issue on Quantum Computer Architecture: a full-stack overview, Microprocessors and Microsystems",Microprocessors and Microsystems 2019,10.1016/j.micpro.2019.02.005,,cs.PL quant-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Noisy, intermediate-scale quantum (NISQ) systems are expected to have a few hundred qubits, minimal or no error correction, limited connectivity and limits on the number of gates that can be performed within the short coherence window of the machine. The past decade's research on quantum programming languages and compilers is directed towards large systems with thousands of qubits. For near term quantum systems, it is crucial to design tool flows which make efficient use of the hardware resources without sacrificing the ease and portability of a high-level programming environment. In this paper, we present a compiler for the Scaffold quantum programming language in which aggressive optimization specifically targets NISQ machines with hundreds of qubits. Our compiler extracts gates from a Scaffold program, and formulates a constrained optimization problem which considers both program characteristics and machine constraints. Using the Z3 SMT solver, the compiler maps program qubits to hardware qubits, schedules gates, and inserts CNOT routing operations while optimizing the overall execution time. The output of the optimization is used to produce target code in the OpenQASM language, which can be executed on existing quantum hardware such as the 16-qubit IBM machine. Using real and synthetic benchmarks, we show that it is feasible to synthesize near-optimal compiled code for current and small NISQ systems. For large programs and machine sizes, the SMT optimization approach can be used to synthesize compiled code that is guaranteed to finish within the coherence window of the machine. ","[{'version': 'v1', 'created': 'Fri, 8 Mar 2019 04:13:58 GMT'}]",2019-03-11,"[['Murali', 'Prakash', ''], ['Javadi-Abhari', 'Ali', ''], ['Chong', 'Frederic T.', ''], ['Martonosi', 'Margaret', '']]","['Quantum compilation', 'SMT optimization', 'Quantum computing']" 365,1711.05702,Xu Han,"Xu Han, Roland Kwitt, Stephen Aylward, Spyridon Bakas, Bjoern Menze, Alexander Asturias, Paul Vespa, John Van Horn, Marc Niethammer","Brain Extraction from Normal and Pathological Images: A Joint PCA/Image-Reconstruction Approach",,,10.1016/j.neuroimage.2018.04.073,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Brain extraction from images is a common pre-processing step. Many approaches exist, but they are frequently only designed to perform brain extraction from images without strong pathologies. Extracting the brain from images with strong pathologies, for example, the presence of a tumor or of a traumatic brain injury, is challenging. In such cases, tissue appearance may deviate from normal tissue and violates algorithmic assumptions for these approaches; hence, the brain may not be correctly extracted. This paper proposes a brain extraction approach which can explicitly account for pathologies by jointly modeling normal tissue and pathologies. Specifically, our model uses a three-part image decomposition: (1) normal tissue appearance is captured by principal component analysis, (2) pathologies are captured via a total variation term, and (3) non-brain tissue is captured by a sparse term. Decomposition and image registration steps are alternated to allow statistical modeling in a fixed atlas space. As a beneficial side effect, the model allows for the identification of potential pathologies and the reconstruction of a quasi-normal image in atlas space. We demonstrate the effectiveness of our method on four datasets: the IBSR and LPBA40 datasets which show normal images, the BRATS dataset containing images with brain tumors and a dataset containing clinical TBI images. We compare the performance with other popular models: ROBEX, BEaST, MASS, BET, BSE and a recently proposed deep learning approach. Our model performs better than these competing methods on all four datasets. Specifically, our model achieves the best median (97.11) and mean (96.88) Dice scores over all datasets. The two best performing competitors, ROBEX and MASS, achieve scores of 96.23/95.62 and 96.67/94.25 respectively. Hence, our approach is an effective method for high quality brain extraction on a wide variety of images. ","[{'version': 'v1', 'created': 'Wed, 15 Nov 2017 17:57:52 GMT'}, {'version': 'v2', 'created': 'Mon, 30 Apr 2018 19:53:45 GMT'}]",2018-05-10,"[['Han', 'Xu', ''], ['Kwitt', 'Roland', ''], ['Aylward', 'Stephen', ''], ['Bakas', 'Spyridon', ''], ['Menze', 'Bjoern', ''], ['Asturias', 'Alexander', ''], ['Vespa', 'Paul', ''], ['Van Horn', 'John', ''], ['Niethammer', 'Marc', '']]","['Brain Extraction', 'Image Registration', 'PCA', 'Total-Variation', 'Pathology']" 366,1610.04213,Konstantinos Chatzilygeroudis,"Konstantinos Chatzilygeroudis, Vassilis Vassiliades, Jean-Baptiste Mouret",Reset-free Trial-and-Error Learning for Robot Damage Recovery,"18 pages, 16 figures, 3 tables, 6 pseudocodes/algorithms, video at https://youtu.be/IqtyHFrb3BU, code at https://github.com/resibots/chatzilygeroudis_2018_rte",,10.1016/j.robot.2017.11.010,,cs.RO cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The high probability of hardware failures prevents many advanced robots (e.g., legged robots) from being confidently deployed in real-world situations (e.g., post-disaster rescue). Instead of attempting to diagnose the failures, robots could adapt by trial-and-error in order to be able to complete their tasks. In this situation, damage recovery can be seen as a Reinforcement Learning (RL) problem. However, the best RL algorithms for robotics require the robot and the environment to be reset to an initial state after each episode, that is, the robot is not learning autonomously. In addition, most of the RL methods for robotics do not scale well with complex robots (e.g., walking robots) and either cannot be used at all or take too long to converge to a solution (e.g., hours of learning). In this paper, we introduce a novel learning algorithm called ""Reset-free Trial-and-Error"" (RTE) that (1) breaks the complexity by pre-generating hundreds of possible behaviors with a dynamics simulator of the intact robot, and (2) allows complex robots to quickly recover from damage while completing their tasks and taking the environment into account. We evaluate our algorithm on a simulated wheeled robot, a simulated six-legged robot, and a real six-legged walking robot that are damaged in several ways (e.g., a missing leg, a shortened leg, faulty motor, etc.) and whose objective is to reach a sequence of targets in an arena. Our experiments show that the robots can recover most of their locomotion abilities in an environment with obstacles, and without any human intervention. ","[{'version': 'v1', 'created': 'Thu, 13 Oct 2016 19:39:58 GMT'}, {'version': 'v2', 'created': 'Wed, 12 Apr 2017 23:08:17 GMT'}, {'version': 'v3', 'created': 'Thu, 23 Nov 2017 10:55:03 GMT'}, {'version': 'v4', 'created': 'Tue, 12 Dec 2017 08:02:31 GMT'}]",2017-12-13,"[['Chatzilygeroudis', 'Konstantinos', ''], ['Vassiliades', 'Vassilis', ''], ['Mouret', 'Jean-Baptiste', '']]","['Robot Damage Recovery', 'Autonomous Systems', 'Robotics', 'Trial-and-Error Learning', 'ReinforcementLearning']" 367,1907.09696,Yeonjong Shin,"Yeonjong Shin, George Em Karniadakis",Trainability of ReLU networks and Data-dependent Initialization,,,10.1615/.2020034126,,cs.LG stat.ML,http://creativecommons.org/licenses/by-nc-sa/4.0/," In this paper, we study the trainability of rectified linear unit (ReLU) networks. A ReLU neuron is said to be dead if it only outputs a constant for any input. Two death states of neurons are introduced; tentative and permanent death. A network is then said to be trainable if the number of permanently dead neurons is sufficiently small for a learning task. We refer to the probability of a network being trainable as trainability. We show that a network being trainable is a necessary condition for successful training and the trainability serves as an upper bound of successful training rates. In order to quantify the trainability, we study the probability distribution of the number of active neurons at the initialization. In many applications, over-specified or over-parameterized neural networks are successfully employed and shown to be trained effectively. With the notion of trainability, we show that over-parameterization is both a necessary and a sufficient condition for minimizing the training loss. Furthermore, we propose a data-dependent initialization method in the over-parameterized setting. Numerical examples are provided to demonstrate the effectiveness of the method and our theoretical findings. ","[{'version': 'v1', 'created': 'Tue, 23 Jul 2019 05:11:32 GMT'}, {'version': 'v2', 'created': 'Tue, 31 Mar 2020 04:25:31 GMT'}]",2020-10-23,"[['Shin', 'Yeonjong', ''], ['Karniadakis', 'George Em', '']]","['ReLU networks', 'Trainability', 'Dying ReLU', 'Over-parameterization', 'Overspecification', 'Data-dependent initialization']" 368,0906.2742,Daniel Kharitonov,"Luc Ceuppens, Alan Sardella, Daniel Kharitonov","Power Saving Strategies and Technologies in Network Equipment Opportunities and Challenges, Risk and Rewards","IEEE SAINT 2008 proceedings, July 28th - Aug 1st 2008, PCFNS workshop",,10.1109/SAINT.2008.79,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Drawing from todays best-in-class solutions, we identify power-saving strategies that have succeeded in the past and look forward to new ideas and paradigms. We strongly believe that designing energy-efficient network equipment can be compared to building sports cars, task-oriented, focused and fast. However, unlike track-bound sports cars, ultra-fast and purpose-built silicon yields better energy efficiency when compared to more generic family sedan designs that mitigate go-to-market risks by being the masters of many tasks. Thus, we demonstrate that the best opportunities for power savings come via protocol simplification, best-of-breed technology, and silicon and software optimization, to achieve the least amount of processing necessary to move packets. We also look to the future of networking from a new angle, where energy efficiency and environmental concerns are viewed as fundamental design criteria and forces that need to be harnessed to continually create more powerful networking equipment. ","[{'version': 'v1', 'created': 'Sat, 13 Jun 2009 10:49:20 GMT'}]",2016-11-17,"[['Ceuppens', 'Luc', ''], ['Sardella', 'Alan', ''], ['Kharitonov', 'Daniel', '']]","['power', 'green', 'network', 'routers']" 369,1705.06824,Zhengyang Wang,"Zhengyang Wang, Shuiwang Ji","Learning Convolutional Text Representations for Visual Question Answering",Conference paper at SDM 2018. https://github.com/divelab/svae,"In proceedings of the 2018 SIAM International Conference on Data Mining (pp. 594-602). 2018",10.1137/1.9781611975321.67,,cs.LG cs.CL cs.NE stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object recognition and image classification, visual question answering raises a different need for textual representation as compared to other natural language processing tasks. In this work, we perform a detailed analysis on natural language questions in visual question answering. Based on the analysis, we propose to rely on convolutional neural networks for learning textual representations. By exploring the various properties of convolutional neural networks specialized for text data, such as width and depth, we present our ""CNN Inception + Gate"" model. We show that our model improves question representations and thus the overall accuracy of visual question answering models. We also show that the text representation requirement in visual question answering is more complicated and comprehensive than that in conventional natural language processing tasks, making it a better task to evaluate textual representation methods. Shallow models like fastText, which can obtain comparable results with deep learning models in tasks like text classification, are not suitable in visual question answering. ","[{'version': 'v1', 'created': 'Thu, 18 May 2017 22:51:44 GMT'}, {'version': 'v2', 'created': 'Wed, 18 Apr 2018 17:38:50 GMT'}]",2018-09-05,"[['Wang', 'Zhengyang', ''], ['Ji', 'Shuiwang', '']]","['Deep learning', 'visual question answering', 'convolutionalneural networks', 'text representations']" 370,1808.10367,Vahid Keshavarzzadeh,"Vahid Keshavarzzadeh, Robert M. Kirby and Akil Narayan","Parametric Topology Optimization with Multi-Resolution Finite Element Models",,,10.1002/nme.6063,,cs.NA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a methodical procedure for topology optimization under uncertainty with multi-resolution finite element models. We use our framework in a bi-fidelity setting where a coarse and a fine mesh corresponding to low- and high-resolution models are available. The inexpensive low-resolution model is used to explore the parameter space and approximate the parameterized high-resolution model and its sensitivity where parameters are considered in both structural load and stiffness. We provide error bounds for bi-fidelity finite element (FE) approximations and their sensitivities and conduct numerical studies to verify these theoretical estimates. We demonstrate our approach on benchmark compliance minimization problems where we show significant reduction in computational cost for expensive problems such as topology optimization under manufacturing variability while generating almost identical designs to those obtained with single resolution mesh. We also compute the parametric Von-Mises stress for the generated designs via our bi-fidelity FE approximation and compare them with standard Monte Carlo simulations. The implementation of our algorithm which extends the well-known 88-line topology optimization code in MATLAB is provided. ","[{'version': 'v1', 'created': 'Thu, 30 Aug 2018 15:52:08 GMT'}]",2019-04-08,"[['Keshavarzzadeh', 'Vahid', ''], ['Kirby', 'Robert M.', ''], ['Narayan', 'Akil', '']]","['Multi-Resolution Finite Elements', 'Parametric Topology Optimization', 'Bi-Fidelity Error Estimate', 'Man']" 371,1309.2183,Seyyed Reza Khaze,"Seyyed Reza Khaze, Mohammad Masdari and Sohrab Hojjatkhah","Application of Artificial Neural Networks in Estimating Participation in Elections",,"International Journal of Information Technology, Modeling and Computing (IJITMC) Vol.1, No.3,August 2013",10.5121/ijitmc.2013.1303,,cs.NE cs.CY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," It is approved that artificial neural networks can be considerable effective in anticipating and analyzing flows in which traditional methods and statics are not able to solve. in this article, by using two-layer feedforward network with tan-sigmoid transmission function in input and output layers, we can anticipate participation rate of public in kohgiloye and boyerahmad province in future presidential election of islamic republic of iran with 91% accuracy. the assessment standards of participation such as confusion matrix and roc diagrams have been approved our claims. ","[{'version': 'v1', 'created': 'Mon, 9 Sep 2013 15:03:59 GMT'}]",2013-09-10,"[['Khaze', 'Seyyed Reza', ''], ['Masdari', 'Mohammad', ''], ['Hojjatkhah', 'Sohrab', '']]","['Anticipating', 'Data Mining', 'Artificial Neural Network', 'political behaviour', 'elections']" 372,1807.10816,Ling Liang,"Ling Liang, Lei Deng, Yueling Zeng, Xing Hu, Yu Ji, Xin Ma, Guoqi Li and Yuan Xie",Crossbar-aware neural network pruning,,IEEE Access 6 (2018): 58324-58337,10.1109/ACCESS.2018.2874823,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Crossbar architecture based devices have been widely adopted in neural network accelerators by taking advantage of the high efficiency on vector-matrix multiplication (VMM) operations. However, in the case of convolutional neural networks (CNNs), the efficiency is compromised dramatically due to the large amounts of data reuse. Although some mapping methods have been designed to achieve a balance between the execution throughput and resource overhead, the resource consumption cost is still huge while maintaining the throughput. Network pruning is a promising and widely studied leverage to shrink the model size. Whereas, previous work didn`t consider the crossbar architecture and the corresponding mapping method, which cannot be directly utilized by crossbar-based neural network accelerators. Tightly combining the crossbar structure and its mapping, this paper proposes a crossbar-aware pruning framework based on a formulated L0-norm constrained optimization problem. Specifically, we design an L0-norm constrained gradient descent (LGD) with relaxant probabilistic projection (RPP) to solve this problem. Two grains of sparsity are successfully achieved: i) intuitive crossbar-grain sparsity and ii) column-grain sparsity with output recombination, based on which we further propose an input feature maps (FMs) reorder method to improve the model accuracy. We evaluate our crossbar-aware pruning framework on median-scale CIFAR10 dataset and large-scale ImageNet dataset with VGG and ResNet models. Our method is able to reduce the crossbar overhead by 44%-72% with little accuracy degradation. This work greatly saves the resource and the related energy cost, which provides a new co-design solution for mapping CNNs onto various crossbar devices with significantly higher efficiency. ","[{'version': 'v1', 'created': 'Wed, 25 Jul 2018 21:08:35 GMT'}, {'version': 'v2', 'created': 'Tue, 13 Nov 2018 23:13:32 GMT'}, {'version': 'v3', 'created': 'Thu, 6 Dec 2018 03:08:39 GMT'}]",2018-12-07,"[['Liang', 'Ling', ''], ['Deng', 'Lei', ''], ['Zeng', 'Yueling', ''], ['Hu', 'Xing', ''], ['Ji', 'Yu', ''], ['Ma', 'Xin', ''], ['Li', 'Guoqi', ''], ['Xie', 'Yuan', '']]","['Crossbar Architecture', 'Convolutional NeuralNetworks', 'Neural Network Pruning', 'Constrained OptimizationProblem']" 373,1303.0284,Tomasz Kajdanowicz,"Katarzyna Musial, Przemyslaw Kazienkol and Tomasz Kajdanowicz",Social Recommendations within the Multimedia Sharing Systems,"recommender system, multirelational social network, multimedia sharing system, social network analysis, Best Paper Award. arXiv admin note: text overlap with arXiv:1303.0093","Musial K., Kazienko P., Kajdanowicz T.: Social Recommendations within the Multimedia Sharing Systems. The First World Summit on the Knowledge Society, WSKS'08, Lecture Notes in Computer Science LNCS 5288, 2008, pp. 364-372",10.1007/978-3-540-87781-3_40,,cs.SI cs.IR physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The social recommender system that supports the creation of new relations between users in the multimedia sharing system is presented in the paper. To generate suggestions the new concept of the multirelational social network was introduced. It covers both direct as well as object-based relationships that reflect social and semantic links between users. The main goal of the new method is to create the personalized suggestions that are continuously adapted to users' needs depending on the personal weights assigned to each layer from the social network. The conducted experiments confirmed the usefulness of the proposed model. ","[{'version': 'v1', 'created': 'Fri, 1 Mar 2013 07:09:39 GMT'}]",2013-03-05,"[['Musial', 'Katarzyna', ''], ['Kazienkol', 'Przemyslaw', ''], ['Kajdanowicz', 'Tomasz', '']]","['recommender system', 'multirelational social network', 'multimedia sharing system', 'social network analysis']" 374,1803.06649,Taichi Uemura,Taichi Uemura,"Cubical Assemblies, a Univalent and Impredicative Universe and a Failure of Propositional Resizing",,,10.4230/LIPIcs.TYPES.2018.7,,cs.LO,http://creativecommons.org/licenses/by/4.0/," We construct a model of cubical type theory with a univalent and impredicative universe in a category of cubical assemblies. We show that this impredicative universe in the cubical assembly model does not satisfy a form of propositional resizing. ","[{'version': 'v1', 'created': 'Sun, 18 Mar 2018 11:58:07 GMT'}, {'version': 'v2', 'created': 'Sun, 15 Apr 2018 13:35:50 GMT'}, {'version': 'v3', 'created': 'Mon, 9 Sep 2019 08:40:32 GMT'}]",2019-11-19,"[['Uemura', 'Taichi', '']]","['Cubical type theory', 'Realizability', 'Impredicative universe', 'Univalence', 'Propositional resizing']" 375,1808.05283,Ashkan Yousefpour,"Ashkan Yousefpour, Caleb Fung, Tam Nguyen, Krishna Kadiyala, Fatemeh Jalali, Amirreza Niakanlahiji, Jian Kong, Jason P. Jue","All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey","48 pages, 7 tables, 11 figures, 450 references. The data (categories and features/objectives of the papers) of this survey are now available publicly. Accepted by Elsevier Journal of Systems Architecture",,10.1016/j.sysarc.2019.02.009,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of security-critical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing. ","[{'version': 'v1', 'created': 'Wed, 15 Aug 2018 20:56:28 GMT'}, {'version': 'v2', 'created': 'Thu, 6 Sep 2018 20:24:07 GMT'}, {'version': 'v3', 'created': 'Wed, 13 Feb 2019 23:30:17 GMT'}]",2019-02-15,"[['Yousefpour', 'Ashkan', ''], ['Fung', 'Caleb', ''], ['Nguyen', 'Tam', ''], ['Kadiyala', 'Krishna', ''], ['Jalali', 'Fatemeh', ''], ['Niakanlahiji', 'Amirreza', ''], ['Kong', 'Jian', ''], ['Jue', 'Jason P.', '']]","['Fog Computing', 'Edge Computing', 'Cloud Computing', 'Internet ofThings (IoT)', 'Cloudlet', 'Mobile Edge Computing', 'Multi-access EdgeComputing', 'Mist Computing']" 376,1802.06624,Dian Pratiwi,"Putri Kurniasih, Dian Pratiwi","Osteoarthritis Disease Detection System using Self Organizing Maps Method based on Ossa Manus X-Ray","6 pages, 12 figures, 1 table","International Journal of Computer Applications, Foundation of Computer Science (FCS), NY, USA. Volume 173 - Number 3, 2017",10.5120/ijca2017915278,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Osteoarthritis is a disease found in the world, including in Indonesia. The purpose of this study was to detect the disease Osteoarthritis using Self Organizing mapping (SOM), and to know the procedure of artificial intelligence on the methods of Self Organizing Mapping (SOM). In this system, there are several stages to preserve to detect disease Osteoarthritis using Self Organizing maps is the result of photographic images rontgen Ossa Manus normal and sick with the resolution (150 x 200 pixels) do the repair phase contrast, the Gray scale, thresholding process, Histogram of process , and do the last process, where the process of doing training (Training) and testing on images that have kept the shape data (.text). the conclusion is the result of testing by using a data image, where 42 of data have 12 Normal image data and image data 30 sick. On the results of the process of training data there are 8 X-ray image revealed normal right and 19 data x-ray image of pain expressed is correct. Then the accuracy on the process of training was 96.42%, and in the process of testing normal true image 4 obtained revealed Normal, 9 data pain stated true pain and 1 data imagery hurts stated incorrectly, then the accuracy gained from the results of testing are 92,8%. ","[{'version': 'v1', 'created': 'Mon, 19 Feb 2018 13:43:05 GMT'}]",2018-02-20,"[['Kurniasih', 'Putri', ''], ['Pratiwi', 'Dian', '']]","['Osteoarthritis', 'Ossa manus', 'Grayscale', 'Thresholding', 'Self Organizing Maps']" 377,1005.4501,Secretary Aircc Journal,"S.Sangeetha(1), V.Vaidehi(2), ((1)Angel College of Engineering, India, (2)Madras Institute of Technology, India)","Fuzzy Aided Application Layer Semantic Intrusion Detection System - FASIDS","18 Pages, IJNSA","International Journal of Network Security & Its Applications 2.2 (2010) 39-56",10.5121/ijnsa.2010.2204,,cs.CR,http://creativecommons.org/licenses/by-nc-sa/3.0/," The objective of this is to develop a Fuzzy aided Application layer Semantic Intrusion Detection System (FASIDS) which works in the application layer of the network stack. FASIDS consist of semantic IDS and Fuzzy based IDS. Rule based IDS looks for the specific pattern which is defined as malicious. A non-intrusive regular pattern can be malicious if it occurs several times with a short time interval. For detecting such malicious activities, FASIDS is proposed in this paper. At application layer, HTTP traffic's header and payload are analyzed for possible intrusion. In the proposed misuse detection module, the semantic intrusion detection system works on the basis of rules that define various application layer misuses that are found in the network. An attack identified by the IDS is based on a corresponding rule in the rule-base. An event that doesn't make a 'hit' on the rule-base is given to a Fuzzy Intrusion Detection System (FIDS) for further analysis. ","[{'version': 'v1', 'created': 'Tue, 25 May 2010 08:02:30 GMT'}]",2010-07-15,"[['Sangeetha', 'S.', ''], ['Vaidehi', 'V.', '']]","['Semantic Intrusion detection', 'Application Layer misuse detector', 'Fuzzy']" 378,1005.2277,Amparo F\'uster-Sabater,Amparo F\'uster-Sabater and Pedro Garc\'ia-Mochales,"A Simple Computational Model for Acceptance/Rejection of Binary Sequence Generators","16 pages, 0 figures","Applied Mathematical Modelling. Volume 31, Issue 8, pp. 1548-1558. August 2007.",10.1016/j.apm.2006.05.004,,cs.CR cs.DM,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A simple binary model to compute the degree of balancedness in the output sequence of LFSR-combinational generators has been developed. The computational method is based exclusively on the handling of binary strings by means of logic operations. The proposed model can serve as a deterministic alternative to existing probabilistic methods for checking balancedness in binary sequence generators. The procedure here described can be devised as a first selective criterium for acceptance/rejection of this type of generators. ","[{'version': 'v1', 'created': 'Thu, 13 May 2010 08:48:31 GMT'}]",2010-05-14,"[['Fúster-Sabater', 'Amparo', ''], ['García-Mochales', 'Pedro', '']]","['Balancedness', 'Bit-string model', 'Combinational generator', 'Design rules']" 379,1812.03174,Milica Bogicevic,"Milica Bogi\'cevi\'c, Milan Merkle","Approximate Calculation of Tukey's Depth and Median With High-dimensional Data",,"Yugoslav Journal of Operations Research 28 (2018), Number 4, 475--499",10.2298/YJOR180520022B,,cs.DS cs.CG,http://creativecommons.org/publicdomain/zero/1.0/," We present a new fast approximate algorithm for Tukey (halfspace) depth level sets and its implementation-ABCDepth. Given a $d$-dimensional data set for any $d\geq 1$, the algorithm is based on a representation of level sets as intersections of balls in $\mathbb{R}^d$. Our approach does not need calculations of projections of sample points to directions. This novel idea enables calculations of approximate level sets in very high dimensions with complexity which is linear in $d$, which provides a great advantage over all other approximate algorithms. Using different versions of this algorithm we demonstrate approximate calculations of the deepest set of points (""Tukey median"") and Tukey's depth of a sample point or out-of-sample point, all with a linear in $d$ complexity. An additional theoretical advantage of this approach is that the data points are not assumed to be in ""general position"". Examples with real and synthetic data show that the executing time of the algorithm in all mentioned versions in high dimensions is much smaller than the time of other implemented algorithms. Also, our algorithms can be used with thousands of multidimensional observations. ","[{'version': 'v1', 'created': 'Fri, 7 Dec 2018 10:27:14 GMT'}]",2018-12-11,"[['Bogićević', 'Milica', ''], ['Merkle', 'Milan', '']]","['Big data', 'multivariate medians', 'depth functions', 'computing Tukey’s depth']" 380,1801.03546,Upal Mahbub,Upal Mahbub and Sayantan Sarkar and Rama Chellappa,Segment-based Methods for Facial Attribute Detection from Partial Faces,,"IEEE Transactions on Affective Computing, 2018",10.1109/TAFFC.2018.2820048,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," State-of-the-art methods of attribute detection from faces almost always assume the presence of a full, unoccluded face. Hence, their performance degrades for partially visible and occluded faces. In this paper, we introduce SPLITFACE, a deep convolutional neural network-based method that is explicitly designed to perform attribute detection in partially occluded faces. Taking several facial segments and the full face as input, the proposed method takes a data driven approach to determine which attributes are localized in which facial segments. The unique architecture of the network allows each attribute to be predicted by multiple segments, which permits the implementation of committee machine techniques for combining local and global decisions to boost performance. With access to segment-based predictions, SPLITFACE can predict well those attributes which are localized in the visible parts of the face, without having to rely on the presence of the whole face. We use the CelebA and LFWA facial attribute datasets for standard evaluations. We also modify both datasets, to occlude the faces, so that we can evaluate the performance of attribute detection algorithms on partial faces. Our evaluation shows that SPLITFACE significantly outperforms other recent methods especially for partial faces. ","[{'version': 'v1', 'created': 'Wed, 10 Jan 2018 20:32:35 GMT'}]",2018-07-19,"[['Mahbub', 'Upal', ''], ['Sarkar', 'Sayantan', ''], ['Chellappa', 'Rama', '']]","['attribute detection', 'facial segment', 'committee machines', 'score fusion', 'local to global decision propagation;']" 381,1006.3848,Secretary Aircc Journal,"Mosin Hasan, Nilesh Prajapati and Safvan Vohara (BVM Engineering College, India)",Case Study On Social Engineering Techniques for Persuasion,7 Pages,"International journal on applications of graph theory in wireless ad hoc networks and sensor networks 2.2 (2010) 17-23",10.5121/jgraphoc.2010.2202,,cs.CR,http://creativecommons.org/licenses/by-nc-sa/3.0/," There are plenty of security software in market; each claiming the best, still we daily face problem of viruses and other malicious activities. If we know the basic working principal of such malware then we can very easily prevent most of them even without security software. Hackers and crackers are experts in psychology to manipulate people into giving them access or the information necessary to get access. This paper discusses the inner working of such attacks. Case study of Spyware is provided. In this case study, we got 100% success using social engineering techniques for deception on Linux operating system, which is considered as the most secure operating system. Few basic principal of defend, for the individual as well as for the organization, are discussed here, which will prevent most of such attack if followed. ","[{'version': 'v1', 'created': 'Sat, 19 Jun 2010 07:57:58 GMT'}]",2011-01-20,"[['Hasan', 'Mosin', '', 'BVM Engineering\n College, India'], ['Prajapati', 'Nilesh', '', 'BVM Engineering\n College, India'], ['Vohara', 'Safvan', '', 'BVM Engineering\n College, India']]","['Spyware', 'Malware', 'Social Engineering', 'Psychology']" 382,1701.03836,Khaza Anuarul Hoque,"Khaza Anuarul Hoque, Otmane Ait Mohamed, Yvon Savaria","Formal Analysis of SEU Mitigation for Early Dependability and Performability Analysis of FPGA-based Space Applications","Accepted version for publication in the Journal of Applied Science, Elsevier",,10.1016/j.jal.2017.03.001,,cs.PF cs.AR cs.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," SRAM-based FPGAs are increasingly popular in the aerospace industry due to their field programmability and low cost. However, they suffer from cosmic radiation induced Single Event Upsets (SEUs). In safety-critical applications, the dependability of the design is a prime concern since failures may have catastrophic consequences. An early analysis of the relationship between dependability metrics, performability-area trade-off, and different mitigation techniques for such applications can reduce the design effort while increasing the design confidence. This paper introduces a novel methodology based on probabilistic model checking, for the analysis of the reliability, availability, safety and performance-area tradeoffs of safety-critical systems for early design decisions. Starting from the high-level description of a system, a Markov reward model is constructed from the Control Data Flow Graph (CDFG) and a component characterization library targeting FPGAs. The proposed model and exhaustive analysis capture all the failure states (based on the fault detection coverage) and repairs possible in the system. We present quantitative results based on an FIR filter circuit to illustrate the applicability of the proposed approach and to demonstrate that a wide range of useful dependability and performability properties can be analyzed using the proposed methodology. The modeling results show the relationship between different mitigation techniques and fault detection coverage, exposing their direct impact on the design for early decisions. ","[{'version': 'v1', 'created': 'Thu, 12 Jan 2017 17:07:36 GMT'}, {'version': 'v2', 'created': 'Thu, 23 Feb 2017 17:21:29 GMT'}]",2017-03-07,"[['Hoque', 'Khaza Anuarul', ''], ['Mohamed', 'Otmane Ait', ''], ['Savaria', 'Yvon', '']]","['Probabilistic model checking', 'FPGA', 'Dependability', 'Performability', 'Markov Reward Model', 'SEU', 'CDFG']" 383,1807.05324,Heng Ding,"Heng Ding, Krisztian Balog",Generating Synthetic Data for Neural Keyword-to-Question Models,"Extended version of ICTIR'18 full paper, 11 pages",,10.1145/3234944.3234964,,cs.IR cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Search typically relies on keyword queries, but these are often semantically ambiguous. We propose to overcome this by offering users natural language questions, based on their keyword queries, to disambiguate their intent. This keyword-to-question task may be addressed using neural machine translation techniques. Neural translation models, however, require massive amounts of training data (keyword-question pairs), which is unavailable for this task. The main idea of this paper is to generate large amounts of synthetic training data from a small seed set of hand-labeled keyword-question pairs. Since natural language questions are available in large quantities, we develop models to automatically generate the corresponding keyword queries. Further, we introduce various filtering mechanisms to ensure that synthetic training data is of high quality. We demonstrate the feasibility of our approach using both automatic and manual evaluation. This is an extended version of the article published with the same title in the Proceedings of ICTIR'18. ","[{'version': 'v1', 'created': 'Sat, 14 Jul 2018 03:24:31 GMT'}]",2018-07-18,"[['Ding', 'Heng', ''], ['Balog', 'Krisztian', '']]","['Keyword-to-question', 'synthetic data generation', 'neural machinetranslation']" 384,1512.03565,Taner Cevik Dr.,"Taner Cevik, Alex Gunagwera, Nazife Cevik","A Survey of multimedia streaming in wireless sensor networks: progress, issues and design challenges",,,10.5121/ijcnc.2015.7508,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Advancements in Complementary Metal Oxide Semiconductor (CMOS) technology have enabled Wireless Sensor Networks (WSN) to gather, process and transport multimedia (MM) data as well and not just limited to handling ordinary scalar data anymore. This new generation of WSN type is called Wireless Multimedia Sensor Networks (WMSNs). Better and yet relatively cheaper sensors that are able to sense both scalar data and multimedia data with more advanced functionalities such as being able to handle rather intense computations easily have sprung up. In this paper, the applications, architectures, challenges and issues faced in the design of WMSNs are explored. Security and privacy issues, over all requirements, proposed and implemented solutions so far, some of the successful achievements and other related works in the field are also highlighted. Open research areas are pointed out and a few solution suggestions to the still persistent problems are made, which, to the best of my knowledge, so far have not been explored yet. ","[{'version': 'v1', 'created': 'Fri, 11 Dec 2015 09:51:58 GMT'}]",2015-12-14,"[['Cevik', 'Taner', ''], ['Gunagwera', 'Alex', ''], ['Cevik', 'Nazife', '']]","['Multimedia', 'Multimedia Streaming', 'Wireless Sensor Networks', 'Wireless Multimedia Sensor Networks. 1']" 385,1408.2914,Surender Kumar,"Surender Kumar, Manish Prateek, N.J. Ahuja, Bharat Bhushan",DE-LEACH: Distance and Energy Aware LEACH,"7 pages, 5 figures. available online at http://ijcaonline.org/2014",,10.5120/15384-4072,,cs.NI,http://creativecommons.org/licenses/by-nc-sa/3.0/," Wireless sensor network consists of large number of tiny sensor nodes which are usually deployed in a harsh environment. Self configuration and infrastructure less are the two fundamental properties of sensor networks. Sensor nodes are highly energy constrained devices because they are battery operated devices and due to harsh environment deployment it is impossible to change or recharge their battery. Energy conservation and prolonging the network life are two major challenges in a sensor network. Communication consumes the large portion of WSN energy. Several protocols have been proposed to realize power- efficient communication in a wireless sensor network. Cluster based routing protocols are best known for increasing energy efficiency, stability and network lifetime of WSNs. Low Energy Adaptive Clustering Hierarchy (LEACH) is an important protocol in this class. One of the disadvantages of LEACH is that it does not consider the nodes energy and distance for the election of cluster head. This paper proposes a new energy efficient clustering protocol DE-LEACH for homogeneous wireless sensor network which is an extension of LEACH. DE-LEACH elects cluster head on the basis of distance and residual energy of the nodes. Proposed protocol increases the network life, stability and throughput of sensor network and simulations result shows that DE-LEACH is better than LEACH. ","[{'version': 'v1', 'created': 'Wed, 13 Aug 2014 05:37:11 GMT'}]",2015-06-22,"[['Kumar', 'Surender', ''], ['Prateek', 'Manish', ''], ['Ahuja', 'N. J.', ''], ['Bhushan', 'Bharat', '']]","['Cluster', 'Energy Efficiency', 'Initial Energy', 'Residual Energy', 'Wireless Sensor Network']" 386,1902.06224,Michele Polese,"Michele Polese, Tommaso Zugno and Michele Zorzi",Implementation of Reference Public Safety Scenarios in ns-3,"8 pages, 9 figures, submitted to WNS3 2019",,10.1145/3321349.3321356,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," During incidents and disasters it is fundamental to provide to first responders high performance and reliable communications, in order to improve their coordination capabilities and their awareness of the surrounding environment, and to allow them to promptly transmit and receive alerts on possible dangerous situations or emergencies. The accurate evaluation of the performance of different Public Safety Communication (PSC) networking and communications technologies is therefore of paramount importance, and the characterization of the scenario in which these technologies need to operate is fundamental to obtain meaningful results. In this paper, we present the implementation of three reference PSC scenarios, which are open source and made publicly available to the research community, describing the incidents, the mobility and applications of first responders, and providing examples on how a mmWave-based Radio Access Network (RAN) can support high-traffic use cases. Moreover, we present the implementation of two novel mobility models for ns-3, which can be used to enable the simulation of realistic PSC scenarios in ns-3. ","[{'version': 'v1', 'created': 'Sun, 17 Feb 2019 08:56:58 GMT'}]",2019-11-11,"[['Polese', 'Michele', ''], ['Zugno', 'Tommaso', ''], ['Zorzi', 'Michele', '']]","['Public safety', 'ns-3', 'mmWave', 'scenarios', 'mobility models']" 387,1006.1177,Secretary Aircc Journal,"Srirangam V Addepallil, Per Andersen, George L Barnes",Efficient Resource Matching in Heterogeneous Grid Using Resource Vector,10 pages,"International Journal of Computer Science and Information Technology 2.3 (2010) 1-10",10.5121/ijcsit.2010.2301,,cs.DC,http://creativecommons.org/licenses/by-nc-sa/3.0/," In this paper, a method for efficient scheduling to obtain optimum job throughput in a distributed campus grid environment is presented; Traditional job schedulers determine job scheduling using user and job resource attributes. User attributes are related to current usage, historical usage, user priority and project access. Job resource attributes mainly comprise of soft requirements (compilers, libraries) and hard requirements like memory, storage and interconnect. A job scheduler dispatches jobs to a resource if a job's hard and soft requirements are met by a resource. In current scenario during execution of a job, if a resource becomes unavailable, schedulers are presented with limited options, namely re-queuing job or migrating job to a different resource. Both options are expensive in terms of data and compute time. These situations can be avoided, if the often ignored factor, availability time of a resource in a grid environment is considered. We propose resource rank approach, in which jobs are dispatched to a resource which has the highest rank among all resources that match the job's requirement. The results show that our approach can increase throughput of many serial / monolithic jobs. ","[{'version': 'v1', 'created': 'Mon, 7 Jun 2010 06:22:05 GMT'}]",2010-07-15,"[['Addepallil', 'Srirangam V', ''], ['Andersen', 'Per', ''], ['Barnes', 'George L', '']]","['SGE', 'LSF', 'Venus', 'ENDyne', 'Condor']" 388,2102.10101,Kunnath Ranjith,Kunnath Ranjith,"Spectral formulation of the boundary integral equation method for antiplane problems",In Press,"Mechanics of Materials, 2022",10.1016/j.mechmat.2021.104177,,cs.CE cond-mat.mtrl-sci physics.geo-ph,http://creativecommons.org/licenses/by-nc-nd/4.0/," A spectral formulation of the boundary integral equation method for antiplane problems is presented. The boundary integral equation method relates the slip and the shear stress at an interface between two half-planes. It involves evaluating a space-time convolution of the shear stress or the slip at the interface. In the spectral formulation, the convolution with respect to the spatial coordinate is performed in the spectral domain. This leads to greater numerical efficiency. Prior work on the spectral formulation of the boundary integral equation method has performed the elastodynamic convolution of the slip at the interface. In the present work, the convolution is performed of the shear stress at the interface. The spectral formulation is developed both for an interface between identical solids and for a bi-material interface. It is validated by numerically calculating the response of the interface to harmonic and to impulsive disturbances and comparing with known analytical solutions. To illustrate use of the method, dynamic slip rupture propagation with a slip-weakening friction law is simulated. ","[{'version': 'v1', 'created': 'Tue, 16 Feb 2021 12:18:12 GMT'}, {'version': 'v2', 'created': 'Wed, 3 Mar 2021 02:39:15 GMT'}, {'version': 'v3', 'created': 'Sun, 28 Nov 2021 03:47:37 GMT'}]",2021-11-30,"[['Ranjith', 'Kunnath', '']]","['Boundary integral equation method', 'elasticity', 'waves', 'slip', 'spectral', 'modal']" 389,1502.00258,Liang Lin,"Xiaodan Liang, Liang Lin, Liangliang Cao","Learning Latent Spatio-Temporal Compositional Model for Human Action Recognition","This manuscript has 10 pages with 7 figures, and a preliminary version was published in ACM MM'13",,10.1145/2502081.2502089,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Action recognition is an important problem in multimedia understanding. This paper addresses this problem by building an expressive compositional action model. We model one action instance in the video with an ensemble of spatio-temporal compositions: a number of discrete temporal anchor frames, each of which is further decomposed to a layout of deformable parts. In this way, our model can identify a Spatio-Temporal And-Or Graph (STAOG) to represent the latent structure of actions e.g. triple jumping, swinging and high jumping. The STAOG model comprises four layers: (i) a batch of leaf-nodes in bottom for detecting various action parts within video patches; (ii) the or-nodes over bottom, i.e. switch variables to activate their children leaf-nodes for structural variability; (iii) the and-nodes within an anchor frame for verifying spatial composition; and (iv) the root-node at top for aggregating scores over temporal anchor frames. Moreover, the contextual interactions are defined between leaf-nodes in both spatial and temporal domains. For model training, we develop a novel weakly supervised learning algorithm which iteratively determines the structural configuration (e.g. the production of leaf-nodes associated with the or-nodes) along with the optimization of multi-layer parameters. By fully exploiting spatio-temporal compositions and interactions, our approach handles well large intra-class action variance (e.g. different views, individual appearances, spatio-temporal structures). The experimental results on the challenging databases demonstrate superior performance of our approach over other competing methods. ","[{'version': 'v1', 'created': 'Sun, 1 Feb 2015 13:49:31 GMT'}]",2015-02-03,"[['Liang', 'Xiaodan', ''], ['Lin', 'Liang', ''], ['Cao', 'Liangliang', '']]","['Video Understanding', 'Action Recognition', 'Structural Learning', 'AndOr Graph']" 390,1406.7168,Amelia Carolina Sparavigna,Amelia Carolina Sparavigna,Co-occurrence matrices of time series applied to literary works,"Literary experiments, Time series, Co-occurrence plots, Harry Potter","ijSciences, 2014, Volume 3, Issue 7, Pages: 12-18",10.18483/ijSci.533,,cs.CY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Recently, it has been proposed to analyse the literary works, plays or novels, using graphs to display the social network of their interacting characters. In this approach, the timeline of the literary work is lost, because the storyline is projected on a planar graph. However, timelines can be used to build some time series and analyse the work by means of vectors and matrices. These series can be used to describe the presence and relevance, not only of words in the text, but also of persons and places portrayed in the drama or novel. In this framework, we discuss here an approach with co-occurrence matrices plotted over time, concerning the presence of characters in the pages of a novel. These matrices are similar to those appearing in recurrence plots. ","[{'version': 'v1', 'created': 'Fri, 27 Jun 2014 12:37:46 GMT'}]",2015-08-06,"[['Sparavigna', 'Amelia Carolina', '']]","['Literary experiments', 'Time series', 'Co-occurrence plots', 'Harry Potter']" 391,1902.08985,Marc Aubreville,"Marc Aubreville, Miguel Goncalves, Christian Knipfer, Nicolai Oetter, Helmut Neumann, Florian Stelzle, Christopher Bohr, Andreas Maier","Transferability of Deep Learning Algorithms for Malignancy Detection in Confocal Laser Endomicroscopy Images from Different Anatomical Locations of the Upper Gastrointestinal Tract","Erratum for version 1, correcting the number of CLE image sequences used in one data set",BIOSTEC 2018: Biomedical Engineering Systems and Technologies,10.1007/978-3-030-29196-9_4,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Squamous Cell Carcinoma (SCC) is the most common cancer type of the epithelium and is often detected at a late stage. Besides invasive diagnosis of SCC by means of biopsy and histo-pathologic assessment, Confocal Laser Endomicroscopy (CLE) has emerged as noninvasive method that was successfully used to diagnose SCC in vivo. For interpretation of CLE images, however, extensive training is required, which limits its applicability and use in clinical practice of the method. To aid diagnosis of SCC in a broader scope, automatic detection methods have been proposed. This work compares two methods with regard to their applicability in a transfer learning sense, i.e. training on one tissue type (from one clinical team) and applying the learnt classification system to another entity (different anatomy, different clinical team). Besides a previously proposed, patch-based method based on convolutional neural networks, a novel classification method on image level (based on a pre-trained Inception V.3 network with dedicated preprocessing and interpretation of class activation maps) is proposed and evaluated. The newly presented approach improves recognition performance, yielding accuracies of 91.63% on the first data set (oral cavity) and 92.63% on a joint data set. The generalization from oral cavity to the second data set (vocal folds) lead to similar area-under-the-ROC curve values than a direct training on the vocal folds data set, indicating good generalization. ","[{'version': 'v1', 'created': 'Sun, 24 Feb 2019 17:38:25 GMT'}, {'version': 'v2', 'created': 'Fri, 3 Jan 2020 13:38:45 GMT'}]",2020-01-06,"[['Aubreville', 'Marc', ''], ['Goncalves', 'Miguel', ''], ['Knipfer', 'Christian', ''], ['Oetter', 'Nicolai', ''], ['Neumann', 'Helmut', ''], ['Stelzle', 'Florian', ''], ['Bohr', 'Christopher', ''], ['Maier', 'Andreas', '']]","['Confocal Laser Endomicroscopy', 'Transfer Learning', 'Head', 'Neck Squamous Cell Carcinoma']" 392,1811.02965,Valdemar \v{S}v\'abensk\'y,"Martin Ukrop, Valdemar \v{S}v\'abensk\'y, Jan Nehyba",Reflective Diary for Professional Development of Novice Teachers,"ACM SIGCSE 2019 conference, 7 pages, 2 figures",,10.1145/3287324.3287448,,cs.CY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Many starting teachers of computer science have great professional skill but often lack pedagogical training. Since providing expert mentorship directly during their lessons would be quite costly, institutions usually offer separate teacher training sessions for novice instructors. However, the reflection on teaching performed with a significant delay after the taught lesson limits the possible impact on teachers. To bridge this gap, we introduced a weekly semi-structured reflective practice to supplement the teacher training sessions at our faculty. We created a paper diary that guides the starting teachers through the process of reflection. Over the course of the semester, the diary poses questions of increasing complexity while also functioning as a reference to the topics covered in teacher training. Piloting the diary on a group of 25 novice teaching assistants resulted in overwhelmingly positive responses and provided the teacher training sessions with valuable input for discussion. The diary also turned out to be applicable in a broader context: it was appreciated and used by several experienced university teachers from multiple faculties and even some high-school teachers. The diary is freely available online, including source and print versions. ","[{'version': 'v1', 'created': 'Wed, 7 Nov 2018 16:30:24 GMT'}]",2018-11-08,"[['Ukrop', 'Martin', ''], ['Švábenský', 'Valdemar', ''], ['Nehyba', 'Jan', '']]","['reflective practice', 'learning journal', 'teacher training', 'teaching assistants', 'teaching skills']" 393,0904.2061,Zhigang Cao,"Zhigang Cao, Xiaoguang Yang",Selfish Bin Covering,16 pages,"Theoretical Computer Science, 412, 2011, 7049-7058",10.1016/j.tcs.2011.09.017,,cs.GT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, we address the selfish bin covering problem, which is greatly related both to the bin covering problem, and to the weighted majority game. What we mainly concern is how much the lack of coordination harms the social welfare. Besides the standard PoA and PoS, which are based on Nash equilibrium, we also take into account the strong Nash equilibrium, and several other new equilibria. For each equilibrium, the corresponding PoA and PoS are given, and the problems of computing an arbitrary equilibrium, as well as approximating the best one, are also considered. ","[{'version': 'v1', 'created': 'Tue, 14 Apr 2009 07:17:01 GMT'}, {'version': 'v2', 'created': 'Mon, 20 Jul 2009 14:22:34 GMT'}]",2011-11-10,"[['Cao', 'Zhigang', ''], ['Yang', 'Xiaoguang', '']]","['Selfish bin covering', 'weighted majority games', 'price of anarchy', 'price of stability', 'Nash equilibrium']" 394,2301.06629,David D. Nguyen,"David D. Nguyen, Surya Nepal, Salil S. Kanhere",Diverse Multimedia Layout Generation with Multi Choice Learning,9 pages,"Proceedings of the 29th ACM International Conference on Multimedia 2021",10.1145/3474085.3475525,mfp1907,cs.CV cs.LG,http://creativecommons.org/licenses/by-nc-sa/4.0/," Designing visually appealing layouts for multimedia documents containing text, graphs and images requires a form of creative intelligence. Modelling the generation of layouts has recently gained attention due to its importance in aesthetics and communication style. In contrast to standard prediction tasks, there are a range of acceptable layouts which depend on user preferences. For example, a poster designer may prefer logos on the top-left while another prefers logos on the bottom-right. Both are correct choices yet existing machine learning models treat layouts as a single choice prediction problem. In such situations, these models would simply average over all possible choices given the same input forming a degenerate sample. In the above example, this would form an unacceptable layout with a logo in the centre. In this paper, we present an auto-regressive neural network architecture, called LayoutMCL, that uses multi-choice prediction and winner-takes-all loss to effectively stabilise layout generation. LayoutMCL avoids the averaging problem by using multiple predictors to learn a range of possible options for each layout object. This enables LayoutMCL to generate multiple and diverse layouts from a single input which is in contrast with existing approaches which yield similar layouts with minor variations. Through quantitative benchmarks on real data (magazine, document and mobile app layouts), we demonstrate that LayoutMCL reduces Fr\'echet Inception Distance (FID) by 83-98% and generates significantly more diversity in comparison to existing approaches. ","[{'version': 'v1', 'created': 'Mon, 16 Jan 2023 22:53:55 GMT'}]",2023-01-18,"[['Nguyen', 'David D.', ''], ['Nepal', 'Surya', ''], ['Kanhere', 'Salil S.', '']]","['multimedia applications', 'neural networks', 'generative models', 'creative intelligence', 'layouts', 'multi-choice learning', 'mixture models']" 395,1907.06498,Emrah Basaran,"Emrah Basaran, Muhittin Gokmen, Mustafa E. Kamasak","An Efficient Framework for Visible-Infrared Cross Modality Person Re-Identification",,,10.1016/j.image.2020.115933,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Visible-infrared cross-modality person re-identification (VI-ReId) is an essential task for video surveillance in poorly illuminated or dark environments. Despite many recent studies on person re-identification in the visible domain (ReId), there are few studies dealing specifically with VI-ReId. Besides challenges that are common for both ReId and VI-ReId such as pose/illumination variations, background clutter and occlusion, VI-ReId has additional challenges as color information is not available in infrared images. As a result, the performance of VI-ReId systems is typically lower than that of ReId systems. In this work, we propose a four-stream framework to improve VI-ReId performance. We train a separate deep convolutional neural network in each stream using different representations of input images. We expect that different and complementary features can be learned from each stream. In our framework, grayscale and infrared input images are used to train the ResNet in the first stream. In the second stream, RGB and three-channel infrared images (created by repeating the infrared channel) are used. In the remaining two streams, we use local pattern maps as input images. These maps are generated utilizing local Zernike moments transformation. Local pattern maps are obtained from grayscale and infrared images in the third stream and from RGB and three-channel infrared images in the last stream. We improve the performance of the proposed framework by employing a re-ranking algorithm for post-processing. Our results indicate that the proposed framework outperforms current state-of-the-art with a large margin by improving Rank-1/mAP by 29.79%/30.91% on SYSU-MM01 dataset, and by 9.73%/16.36% on RegDB dataset. ","[{'version': 'v1', 'created': 'Mon, 15 Jul 2019 13:32:15 GMT'}, {'version': 'v2', 'created': 'Sun, 2 Aug 2020 03:41:05 GMT'}]",2020-08-04,"[['Basaran', 'Emrah', ''], ['Gokmen', 'Muhittin', ''], ['Kamasak', 'Mustafa E.', '']]","['Person re-identification', 'cross modality person re-identification', 'local Zernike moments']" 396,1507.06689,Sarah Alice Gaggl,"Sarah A. Gaggl, Norbert Manthey, Alessandro Ronca, Johannes P. Wallner, Stefan Woltran",Improved Answer-Set Programming Encodings for Abstract Argumentation,"To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 2015",Theory and Practice of Logic Programming 15 (2015) 434-448,10.1017/S1471068415000149,,cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The design of efficient solutions for abstract argumentation problems is a crucial step towards advanced argumentation systems. One of the most prominent approaches in the literature is to use Answer-Set Programming (ASP) for this endeavor. In this paper, we present new encodings for three prominent argumentation semantics using the concept of conditional literals in disjunctions as provided by the ASP-system clingo. Our new encodings are not only more succinct than previous versions, but also outperform them on standard benchmarks. ","[{'version': 'v1', 'created': 'Thu, 23 Jul 2015 21:43:48 GMT'}, {'version': 'v2', 'created': 'Tue, 20 Oct 2015 13:54:18 GMT'}]",2020-02-19,"[['Gaggl', 'Sarah A.', ''], ['Manthey', 'Norbert', ''], ['Ronca', 'Alessandro', ''], ['Wallner', 'Johannes P.', ''], ['Woltran', 'Stefan', '']]","['Answer-Set Programming', 'Abstract Argumentation', 'Implementation', 'ASPARTIX']" 397,1304.0156,M.M.A. Hashem,"Mohammad Ashekur Rahman, Atanu Barai, Md. Asadul Islam and M.M.A Hashem","Development of a Device for Remote Monitoring of Heart Rate and Body Temperature",,"Procs. of the IEEE 2012 15th International Conference on Computer & Information Technology (ICCIT 2012), pp.411-416, Chittagong, Bangladesh, December 22-24, (2012)",10.1109/ICCITechn.2012.6509783,,cs.OH,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a new integrated, portable device to provide a convenient solution for remote monitoring heart rate at the fingertip and body temperature using Ethernet technology and widely spreading internet. Now a days, heart related disease is rising. Most of the times in these cases, patients may not realize their actual conditions and even it is a common fact that there are no doctors by their side, especially in rural areas, but now a days most of the diseases are curable if detected in time. We have tried to make a system which may give information about one's physical condition and help him or her to detect these deadly but curable diseases. The system gives information of heart rate and body temperature simultaneously acquired on the portable side in real time and transmits results to web. In this system, the condition of heart and body temperature can be monitored from remote places. Eventually, this device provides a low cost, easily accessible human health monitor solution bridging the gaps between patients and doctors. ","[{'version': 'v1', 'created': 'Sun, 31 Mar 2013 05:44:47 GMT'}]",2016-11-15,"[['Rahman', 'Mohammad Ashekur', ''], ['Barai', 'Atanu', ''], ['Islam', 'Md. Asadul', ''], ['Hashem', 'M. M. A', '']]","['Body Temperature', 'Ethernet', 'Heart Rate', 'Infrared Transmitter', 'Infrared Receiver', 'Microcontroller']" 398,1007.5127,Secretary Ijsea,"Zeeshan Ahmed (University of Wuerzburg, Germany)","Towards Performance Measurement And Metrics Based Analysis of PLA Applications","15 pages, 12 figures","International Journal of Software Engineering & Applications 1.3 (2010) 66-80",10.5121/ijsea.2010.1305,,cs.SE,http://creativecommons.org/licenses/by-nc-sa/3.0/," This article is about a measurement analysis based approach to help software practitioners in managing the additional level complexities and variabilities in software product line applications. The architecture of the proposed approach i.e. ZAC is designed and implemented to perform preprocessesed source code analysis, calculate traditional and product line metrics and visualize results in two and three dimensional diagrams. Experiments using real time data sets are performed which concluded with the results that the ZAC can be very helpful for the software practitioners in understanding the overall structure and complexity of product line applications. Moreover the obtained results prove strong positive correlation between calculated traditional and product line measures. ","[{'version': 'v1', 'created': 'Thu, 29 Jul 2010 07:08:52 GMT'}]",2010-07-30,"[['Ahmed', 'Zeeshan', '', 'University of Wuerzburg, Germany']]","['Analysis', 'Measurement', 'Software product lines', 'Variability']" 399,1511.08118,D\v{z}enan Zuki\'c,"D\v{z}enan Zuki\'c and Julien Finet and Emmanuel Wilson and Filip Banovac and Giuseppe Esposito and Kevin Cleary and Andinet Enquobahrie","SlicerPET: A workflow based software module for PET/CT guided needle biopsy",,,10.1007/s11548-015-1213-2,,cs.GR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Biopsy is commonly used to confirm cancer diagnosis when radiologically indicated. Given the ability of PET to localize malignancies in heterogeneous tumors and tumors that do not have a CT correlate, PET/CT guided biopsy may improve the diagnostic yield of biopsies. To facilitate PET/CT guided needle biopsy, we developed a workflow that allows us to bring PET image guidance into the interventional CT suite. In this abstract, we present SlicerPET, a user-friendly workflow based module developed using open source software libraries to guide needle biopsy in the interventional suite. ","[{'version': 'v1', 'created': 'Wed, 25 Nov 2015 17:02:20 GMT'}]",2015-11-26,"[['Zukić', 'Dženan', ''], ['Finet', 'Julien', ''], ['Wilson', 'Emmanuel', ''], ['Banovac', 'Filip', ''], ['Esposito', 'Giuseppe', ''], ['Cleary', 'Kevin', ''], ['Enquobahrie', 'Andinet', '']]","['Image Guided Therapy', 'PET/CT', 'PET', 'CT', 'Needle Biopsy', 'Liver Biopsy', 'Open Source Software']" 400,1509.08700,Mendes Oulamara,"Mendes Oulamara (ENS Paris), Arnaud Venet (NASA - ARC)","Abstract Interpretation with Higher-Dimensional Ellipsoids and Conic Extrapolation","Proceedings, Part I, Computer Aided Verification 27th International Conference, CAV 2015, San Francisco, CA, USA, July 18-24, 2015",,10.1007/978-3-319-21690-4_24,,cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The inference and the verification of numerical relationships among variables of a program is one of the main goals of static analysis. In this paper, we propose an Abstract Interpretation framework based on higher-dimensional ellipsoids to automatically discover symbolic quadratic invariants within loops, using loop counters as implicit parameters. In order to obtain non-trivial invariants, the diameter of the set of values taken by the numerical variables of the program has to evolve (sub-)linearly during loop iterations. These invariants are called ellipsoidal cones and can be seen as an extension of constructs used in the static analysis of digital filters. Semidefinite programming is used to both compute the numerical results of the domain operations and provide proofs (witnesses) of their correctness. ","[{'version': 'v1', 'created': 'Tue, 29 Sep 2015 11:53:10 GMT'}]",2015-09-30,"[['Oulamara', 'Mendes', '', 'ENS Paris'], ['Venet', 'Arnaud', '', 'NASA - ARC']]","['static analysis', 'semidefinite programming', 'ellipsoids', 'conicextrapolation']" 401,1509.01815,Valery Vilisov,Valery Vilisov,"Research: Analysis of Transport Model that Approximates Decision Taker's Preferences",,,10.13140/RG.2.1.5085.6166,,cs.LG cs.AI math.OC stat.AP,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Paper provides a method for solving the reverse Monge-Kantorovich transport problem (TP). It allows to accumulate positive decision-taking experience made by decision-taker in situations that can be presented in the form of TP. The initial data for the solution of the inverse TP is the information on orders, inventories and effective decisions take by decision-taker. The result of solving the inverse TP contains evaluations of the TPs payoff matrix elements. It can be used in new situations to select the solution corresponding to the preferences of the decision-taker. The method allows to gain decision-taker experience, so it can be used by others. The method allows to build the model of decision-taker preferences in a specific application area. The model can be updated regularly to ensure its relevance and adequacy to the decision-taker system of preferences. This model is adaptive to the current preferences of the decision taker. ","[{'version': 'v1', 'created': 'Sun, 6 Sep 2015 14:25:45 GMT'}]",2015-09-08,"[['Vilisov', 'Valery', '']]","['transport problem', 'reverse problem', 'decision-taking', 'decision-making', 'adaptation']" 402,1604.06158,Andrew Connor,"Matthew Martin, James Charlton and Andy M. Connor",Augmented Body: Changing Interactive Body Play,,,10.1145/2677758.2677790,,cs.HC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper investigates the player's body as a system capable of unfamiliar interactive movement achieved through digital mediation in a playful environment. Body interactions in both digital and non-digital environments can be considered as a perceptually manipulative exploration of self. This implies a player may alter how they perceive their body and its operations in order to create a new playful and original experience. This paper therefore questions how player interaction can change as their perception of their body changes using augmentative technology. ","[{'version': 'v1', 'created': 'Thu, 21 Apr 2016 02:02:13 GMT'}]",2016-04-22,"[['Martin', 'Matthew', ''], ['Charlton', 'James', ''], ['Connor', 'Andy M.', '']]","['Augmented reality', 'interaction design', 'body interaction']" 403,1709.02858,Aritra Banerjee,"Aritra Banerjee, Shrey Choudhary","Advanced Page Rank Algorithm with Semantics, In Links, Out Links and Google Analytics","6 pages, 2 figures, Published with International Journal of Computer Trends and Technology (IJCTT)","International Journal of Computer Trends and Technology(IJCTT) V50 (3):137-142, August 2017. ISSN:2231-2803. Published by Seventh Sense Research Group",10.14445/22312803/IJCTT-V50P124,,cs.SI cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we have modified the existing page ranking mechanism as an advanced Page Rank Algorithm based on Semantics Inlinks Outlinks and Google Analytics. We have used Semantics page ranking to rank pages according to the word searched and match it with the metadata of the website and provide a value of rank according to the highest priority.We have also used Google analytics to store the number of hits of a website in a particular variable and add the required percentage amount to the ranking procedure.The proposed algorithm is used to find more relevant information according to users query.So this concept is very useful to display most valuable pages on the top of the result list on the basis of user browsing behaviour which reduce the search space to a large scale. ","[{'version': 'v1', 'created': 'Thu, 7 Sep 2017 13:17:46 GMT'}]",2017-09-12,"[['Banerjee', 'Aritra', ''], ['Choudhary', 'Shrey', '']]","['PageRank', 'Semantics', 'Inlinks', 'Outlinks and Google Analytics']" 404,1907.07962,Camille Roth,"Agathe Baltzer, M\'arton Karsai, Camille Roth",Interactional and Informational Attention on Twitter,"16 pages, 6 figures","Information 2019, 10(8), 250",10.3390/info10080250,,cs.SI cs.CY physics.data-an physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Twitter may be considered as a decentralized social information processing platform whose users constantly receive their followees' information feeds, which they may in turn dispatch to their followers. This decentralization is not devoid of hierarchy and heterogeneity, both in terms of activity and attention. In particular, we appraise the distribution of attention at the collective and individual level, which exhibits the existence of attentional constraints and focus effects. We observe that most users usually concentrate their attention on a limited core of peers and topics, and discuss the relationship between interactional and informational attention processes -- all of which, we suggest, may be useful to refine influence models by enabling the consideration of differential attention likelihood depending on users, their activity levels and peers' positions. ","[{'version': 'v1', 'created': 'Thu, 18 Jul 2019 10:13:04 GMT'}]",2020-04-27,"[['Baltzer', 'Agathe', ''], ['Karsai', 'Márton', ''], ['Roth', 'Camille', '']]","['attention', 'influence', 'ego-centered networks', 'Twitter study', 'information spreading']" 405,1301.2010,Maheswara Rao Valluri,"Maheswara Rao Valluri (School of Mathematical and Computing Sciences, Fiji National University, Derrick Campus, Suva, Fiji)",Authentication Schemes Using Polynomials Over Non-Commutative Rings,"International Journal on Cryptography and Information Security (IJCIS),Vol.2, No.4, December 2012",,10.5121/ijcis.2012.2406,,cs.CR math.RA,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Authentication is a process by which an entity,which could be a person or intended computer,establishes its identity to another entity.In private and public computer networks including the Internet,authentication is commonly done through the use of logon passwords. Knowledge of the password is assumed to guarantee that the user is authentic.Internet business and many other transactions require a more stringent authentication process. The aim of this paper is to propose two authentication schemes based on general non-commutative rings. The key idea of the schemes is that for a given non-commutative ring; one can build polynomials on additive structure and takes them as underlying work structure. By doing so, one can implement authentication schemes, one of them being zero-knowledge interactive proofs of knowledge, on multiplicative structure of the ring. The security of the schemes is based on the intractability of the polynomial symmetrical decomposition problem over the given non-commutative ring. ","[{'version': 'v1', 'created': 'Thu, 10 Jan 2013 00:23:35 GMT'}]",2013-01-11,"[['Valluri', 'Maheswara Rao', '', 'School of Mathematical and Computing Sciences,\n Fiji National University, Derrick Campus, Suva, Fiji']]","['Authentication', 'Cryptography', 'Non-commutative rings', 'Polynomial rings', 'Protocols', '&Security']" 406,1602.00269,Sunil Mandhan,"Sarath P R, Sunil Mandhan, Yoshiki Niwa",Numerical Atrribute Extraction from Clinical Texts,6 Pages,,10.13140/RG.2.1.4763.3365,"Submission 42, CLEF 2015",cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper describes about information extraction system, which is an extension of the system developed by team Hitachi for ""Disease/Disorder Template filling"" task organized by ShARe/CLEF eHealth Evolution Lab 2014. In this extension module we focus on extraction of numerical attributes and values from discharge summary records and associating correct relation between attributes and values. We solve the problem in two steps. First step is extraction of numerical attributes and values, which is developed as a Named Entity Recognition (NER) model using Stanford NLP libraries. Second step is correctly associating the attributes to values, which is developed as a relation extraction module in Apache cTAKES framework. We integrated Stanford NER model as cTAKES pipeline component and used in relation extraction module. Conditional Random Field (CRF) algorithm is used for NER and Support Vector Machines (SVM) for relation extraction. For attribute value relation extraction, we observe 95% accuracy using NER alone and combined accuracy of 87% with NER and SVM. ","[{'version': 'v1', 'created': 'Sun, 31 Jan 2016 15:58:51 GMT'}]",2016-02-02,"[['R', 'Sarath P', ''], ['Mandhan', 'Sunil', ''], ['Niwa', 'Yoshiki', '']]","['NLP', 'NER', 'relation extraction', 'information extraction', 'crf', 'svm']" 407,1905.03288,Abu Sufian,"Farhana Sultana, A. Sufian and Paramartha Dutta",Advancements in Image Classification using Convolutional Neural Network,"9 pages, 15 figures, 3 Tables. Submitted to 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks(ICRCICN 2018)","2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN)",10.1109/ICRCICN.2018.8718718,,cs.CV cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Convolutional Neural Network (CNN) is the state-of-the-art for image classification task. Here we have briefly discussed different components of CNN. In this paper, We have explained different CNN architectures for image classification. Through this paper, we have shown advancements in CNN from LeNet-5 to latest SENet model. We have discussed the model description and training details of each model. We have also drawn a comparison among those models. ","[{'version': 'v1', 'created': 'Wed, 8 May 2019 18:34:19 GMT'}]",2019-05-27,"[['Sultana', 'Farhana', ''], ['Sufian', 'A.', ''], ['Dutta', 'Paramartha', '']]","['AlexNet', 'Capsnet', 'Convolutional Neural Network', 'Deep learning', 'DenseNet', 'Image classification', 'ResNet', 'SENet']" 408,1812.11586,Veronica Vilaplana,"Marc G\'orriz, Albert Aparicio, Berta Ravent\'os, Ver\'onica Vilaplana, Elisa Sayrol and Daniel L\'opez-Codina","Leishmaniasis Parasite Segmentation and Classification using Deep Learning","10th International Conference, AMDO 2018, Palma de Mallorca, Spain, July 12-13, 2018, Proceedings","Articulated Motion and Deformable Objects, Series volume 10945 , 2018, Springer International Publishing AG, part of Springer Nature",10.1007/978-3-319-94544-6,,cs.CV cs.AI cs.CY,http://creativecommons.org/licenses/by/4.0/," Leishmaniasis is considered a neglected disease that causes thousands of deaths annually in some tropical and subtropical countries. There are various techniques to diagnose leishmaniasis of which manual microscopy is considered to be the gold standard. There is a need for the development of automatic techniques that are able to detect parasites in a robust and unsupervised manner. In this paper we present a procedure for automatizing the detection process based on a deep learning approach. We train a U-net model that successfully segments leismania parasites and classifies them into promastigotes, amastigotes and adhered parasites. ","[{'version': 'v1', 'created': 'Sun, 30 Dec 2018 18:42:08 GMT'}]",2019-01-01,"[['Górriz', 'Marc', ''], ['Aparicio', 'Albert', ''], ['Raventós', 'Berta', ''], ['Vilaplana', 'Verónica', ''], ['Sayrol', 'Elisa', ''], ['López-Codina', 'Daniel', '']]","['leishmaniosi', 'deep learning', 'segmentation']" 409,1808.01477,Hacer Yalim Keles,Long Ang Lim and Hacer Yalim Keles,Learning Multi-scale Features for Foreground Segmentation,,,10.1007/s10044-019-00845-9,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Foreground segmentation algorithms aim segmenting moving objects from the background in a robust way under various challenging scenarios. Encoder-decoder type deep neural networks that are used in this domain recently perform impressive segmentation results. In this work, we propose a novel robust encoder-decoder structure neural network that can be trained end-to-end using only a few training examples. The proposed method extends the Feature Pooling Module (FPM) of FgSegNet by introducing features fusions inside this module, which is capable of extracting multi-scale features within images; resulting in a robust feature pooling against camera motion, which can alleviate the need of multi-scale inputs to the network. Our method outperforms all existing state-of-the-art methods in CDnet2014 dataset by an average overall F-Measure of 0.9847. We also evaluate the effectiveness of our method on SBI2015 and UCSD Background Subtraction datasets. The source code of the proposed method is made available at https://github.com/lim-anggun/FgSegNet_v2 . ","[{'version': 'v1', 'created': 'Sat, 4 Aug 2018 12:55:25 GMT'}]",2019-09-04,"[['Lim', 'Long Ang', ''], ['Keles', 'Hacer Yalim', '']]","['Foregroundsegmentation', 'convolutional neural networks', 'feature poolingmodule', 'background subtraction']" 410,2102.06901,Tuukka Korhonen,Tuukka Korhonen,Lower Bounds on Dynamic Programming for Maximum Weight Independent Set,"14 pages, to appear in ICALP 2021",,10.4230/LIPIcs.ICALP.2021.87,,cs.CC cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We prove lower bounds on pure dynamic programming algorithms for maximum weight independent set (MWIS). We model such algorithms as tropical circuits, i.e., circuits that compute with $\max$ and $+$ operations. For a graph $G$, an MWIS-circuit of $G$ is a tropical circuit whose inputs correspond to vertices of $G$ and which computes the weight of a maximum weight independent set of $G$ for any assignment of weights to the inputs. We show that if $G$ has treewidth $w$ and maximum degree $d$, then any MWIS-circuit of $G$ has $2^{\Omega(w/d)}$ gates and that if $G$ is planar, or more generally $H$-minor-free for any fixed graph $H$, then any MWIS-circuit of $G$ has $2^{\Omega(w)}$ gates. An MWIS-formula is an MWIS-circuit where each gate has fan-out at most one. We show that if $G$ has treedepth $t$ and maximum degree $d$, then any MWIS-formula of $G$ has $2^{\Omega(t/d)}$ gates. It follows that treewidth characterizes optimal MWIS-circuits up to polynomials for all bounded degree graphs and $H$-minor-free graphs, and treedepth characterizes optimal MWIS-formulas up to polynomials for all bounded degree graphs. ","[{'version': 'v1', 'created': 'Sat, 13 Feb 2021 11:26:43 GMT'}, {'version': 'v2', 'created': 'Fri, 30 Apr 2021 05:51:10 GMT'}]",2022-02-08,"[['Korhonen', 'Tuukka', '']]","['Maximum weight independent set', 'Treewidth', 'Tropical circuits', 'Dynamicprogramming', 'Treedepth', 'Monotone circuit complexity']" 411,1901.04077,Mohamed Shehata,"Mohamed Shehata, Reda Abo-Al-Ez, Farid Zaghlool and Mohamed Taha Abou-Kreisha",Vehicles Detection Based on Background Modeling,"4 pages, 4 figures","International Journal of Engineering Trends and Technology 66.2 (2018): 92-95",10.14445/22315381/IJETT-V66P216,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Background image subtraction algorithm is a common approach which detects moving objects in a video sequence by finding the significant difference between the video frames and the static background model. This paper presents a developed system which achieves vehicle detection by using background image subtraction algorithm based on blocks followed by deep learning data validation algorithm. The main idea is to segment the image into equal size blocks, to model the static reference background image (SRBI), by calculating the variance between each block pixels and each counterpart block pixels in the adjacent frame, the system implemented into four different methods: Absolute Difference, Image Entropy, Exclusive OR (XOR) and Discrete Cosine Transform (DCT). The experimental results showed that the DCT method has the highest vehicle detection accuracy. ","[{'version': 'v1', 'created': 'Sun, 13 Jan 2019 22:41:18 GMT'}]",2019-01-23,"[['Shehata', 'Mohamed', ''], ['Abo-Al-Ez', 'Reda', ''], ['Zaghlool', 'Farid', ''], ['Abou-Kreisha', 'Mohamed Taha', '']]","['video processing', 'object detection', 'DCT', 'image entropy']" 412,2105.05796,Tomasz Stanis{\l}awek,"Tomasz Stanis{\l}awek and Filip Grali\'nski and Anna Wr\'oblewska and Dawid Lipi\'nski and Agnieszka Kaliska and Paulina Rosalska and Bartosz Topolski and Przemys{\l}aw Biecek","Kleister: Key Information Extraction Datasets Involving Long Documents with Complex Layouts",accepted to ICDAR 2021,"International Conference on Document Analysis and Recognition ICDAR 2021",10.1007/978-3-030-86549-8_36,,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The relevance of the Key Information Extraction (KIE) task is increasingly important in natural language processing problems. But there are still only a few well-defined problems that serve as benchmarks for solutions in this area. To bridge this gap, we introduce two new datasets (Kleister NDA and Kleister Charity). They involve a mix of scanned and born-digital long formal English-language documents. In these datasets, an NLP system is expected to find or infer various types of entities by employing both textual and structural layout features. The Kleister Charity dataset consists of 2,788 annual financial reports of charity organizations, with 61,643 unique pages and 21,612 entities to extract. The Kleister NDA dataset has 540 Non-disclosure Agreements, with 3,229 unique pages and 2,160 entities to extract. We provide several state-of-the-art baseline systems from the KIE domain (Flair, BERT, RoBERTa, LayoutLM, LAMBERT), which show that our datasets pose a strong challenge to existing models. The best model achieved an 81.77% and an 83.57% F1-score on respectively the Kleister NDA and the Kleister Charity datasets. We share the datasets to encourage progress on more in-depth and complex information extraction tasks. ","[{'version': 'v1', 'created': 'Wed, 12 May 2021 17:08:01 GMT'}]",2022-11-28,"[['Stanisławek', 'Tomasz', ''], ['Graliński', 'Filip', ''], ['Wróblewska', 'Anna', ''], ['Lipiński', 'Dawid', ''], ['Kaliska', 'Agnieszka', ''], ['Rosalska', 'Paulina', ''], ['Topolski', 'Bartosz', ''], ['Biecek', 'Przemysław', '']]","['Key Information Extraction', 'Visually Rich Documents', 'NamedEntity Recognition']" 413,1609.09179,Lucas Assun\c{c}\~ao,"Lucas Assun\c{c}\~ao, Thiago F. Noronha, Andr\'ea Cynthia Santos, Rafael Andrade","A linear programming based heuristic framework for min-max regret combinatorial optimization problems with interval costs",,,10.1016/j.cor.2016.12.010,,cs.DS cs.DM math.CO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This work deals with a class of problems under interval data uncertainty, namely interval robust-hard problems, composed of interval data min-max regret generalizations of classical NP-hard combinatorial problems modeled as 0-1 integer linear programming problems. These problems are more challenging than other interval data min-max regret problems, as solely computing the cost of any feasible solution requires solving an instance of an NP-hard problem. The state-of-the-art exact algorithms in the literature are based on the generation of a possibly exponential number of cuts. As each cut separation involves the resolution of an NP-hard classical optimization problem, the size of the instances that can be solved efficiently is relatively small. To smooth this issue, we present a modeling technique for interval robust-hard problems in the context of a heuristic framework. The heuristic obtains feasible solutions by exploring dual information of a linearly relaxed model associated with the classical optimization problem counterpart. Computational experiments for interval data min-max regret versions of the restricted shortest path problem and the set covering problem show that our heuristic is able to find optimal or near-optimal solutions and also improves the primal bounds obtained by a state-of-the-art exact algorithm and a 2-approximation procedure for interval data min-max regret problems. ","[{'version': 'v1', 'created': 'Thu, 29 Sep 2016 02:32:29 GMT'}]",2016-12-21,"[['Assunção', 'Lucas', ''], ['Noronha', 'Thiago F.', ''], ['Santos', 'Andréa Cynthia', ''], ['Andrade', 'Rafael', '']]","['Robust optimization', 'Matheuristics', 'Benders’ decomposition']" 414,1807.10819,Andrew Jesson D,"Andrew Jesson, Nicolas Guizard, Sina Hamidi Ghalehjegh, Damien Goblot, Florian Soudan, Nicolas Chapados",CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance,"20th International Conference on Medical Image Computing and Computer Assisted Intervention 2017",,10.1007/978-3-319-66179-7_73,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We introduce CASED, a novel curriculum sampling algorithm that facilitates the optimization of deep learning segmentation or detection models on data sets with extreme class imbalance. We evaluate the CASED learning framework on the task of lung nodule detection in chest CT. In contrast to two-stage solutions, wherein nodule candidates are first proposed by a segmentation model and refined by a second detection stage, CASED improves the training of deep nodule segmentation models (e.g. UNet) to the point where state of the art results are achieved using only a trivial detection stage. CASED improves the optimization of deep segmentation models by allowing them to first learn how to distinguish nodules from their immediate surroundings, while continuously adding a greater proportion of difficult-to-classify global context, until uniformly sampling from the empirical data distribution. Using CASED during training yields a minimalist proposal to the lung nodule detection problem that tops the LUNA16 nodule detection benchmark with an average sensitivity score of 88.35%. Furthermore, we find that models trained using CASED are robust to nodule annotation quality by showing that comparable results can be achieved when only a point and radius for each ground truth nodule are provided during training. Finally, the CASED learning framework makes no assumptions with regard to imaging modality or segmentation target and should generalize to other medical imaging problems where class imbalance is a persistent problem. ","[{'version': 'v1', 'created': 'Fri, 27 Jul 2018 20:10:11 GMT'}]",2018-07-31,"[['Jesson', 'Andrew', ''], ['Guizard', 'Nicolas', ''], ['Ghalehjegh', 'Sina Hamidi', ''], ['Goblot', 'Damien', ''], ['Soudan', 'Florian', ''], ['Chapados', 'Nicolas', '']]","['lung cancer', 'computer aided detection', 'nodule detection', 'curriculum learning', 'data imbalance', '3D convolutional neural networks']" 415,1907.11817,Firas Alomari,"F Alomari, M Harbi",Scalable Source Code Similarity Detection in Large Code Repositories,"11 pages, 5 figures, Journal","EAI Endorsed Transactions on Scalable Information Systems: Online first, 2019",10.4108/eai.13-7-2018.159353,,cs.SE cs.IR cs.PL,http://creativecommons.org/licenses/by/4.0/," Source code similarity are increasingly used in application development to identify clones, isolate bugs, and find copy-rights violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and flexibility of modern languages makes it difficult and cost ineffective to manually inspect large code repositories. Therefore, detection is only feasible by automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on source code control flow graphs fingerprinting. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code capturing semantics as well as syntax similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation supports the validity of our approach and show its effectiveness and efficiency in comparison with other solutions. ","[{'version': 'v1', 'created': 'Fri, 26 Jul 2019 23:28:30 GMT'}]",2019-07-30,"[['Alomari', 'F', ''], ['Harbi', 'M', '']]","['clones', 'software similarity', 'Control Flow Graphs', 'Fingerprints']" 416,1309.3096,Neetu Goel,"Neetu Goel, R.B. Garg","Simulation of an Optimum Multilevel Dynamic Round Robin Scheduling Algorithm","International Journal of Computer Applications, Aug 2013",,10.5120/13263-0743,,cs.OS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," CPU scheduling has valiant effect on resource utilization as well as overall quality of the system. Round Robin algorithm performs optimally in time shared systems, but it performs more number of context switches, larger waiting time and larger response time. In order to simulate the behavior of various CPU scheduling algorithms and to improve Round Robin scheduling algorithm using dynamic time slice concept, in this paper we produce the implementation of new CPU scheduling algorithm called An Optimum Multilevel Dynamic Round Robin Scheduling (OMDRRS), which calculates intelligent time slice and warps after every round of execution. The results display the robustness of this software, especially for academic, research and experimental use, as well as proving the desirability and efficiency of the probabilistic algorithm over the other existing techniques and it is observed that this OMDRRS projects good performance as compared to the other existing CPU scheduling algorithms. ","[{'version': 'v1', 'created': 'Thu, 12 Sep 2013 10:26:10 GMT'}]",2015-06-17,"[['Goel', 'Neetu', ''], ['Garg', 'R. B.', '']]","['Operating System', 'FCFS', 'SJF', 'Dynamic Time Slice', 'Context Switch', 'Waiting time', 'Turnaround time']" 417,1207.4308,Alejandro Frery,"Maria E. Buemi, Marta Mejail, Julio Jacobo, Alejandro C. Frery and Heitor S. Ramos",Assessment of SAR Image Filtering using Adaptive Stack Filters,,"Proceedings 16th Iberoamerican Congress on Pattern Recognition (CIARP 2011), Lecture Notes in Computer Science vol. 7042, p. 89--96",10.1007/978-3-642-25085-9_10,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Stack filters are a special case of non-linear filters. They have a good performance for filtering images with different types of noise while preserving edges and details. A stack filter decomposes an input image into several binary images according to a set of thresholds. Each binary image is then filtered by a Boolean function, which characterizes the filter. Adaptive stack filters can be designed to be optimal; they are computed from a pair of images consisting of an ideal noiseless image and its noisy version. In this work we study the performance of adaptive stack filters when they are applied to Synthetic Aperture Radar (SAR) images. This is done by evaluating the quality of the filtered images through the use of suitable image quality indexes and by measuring the classification accuracy of the resulting images. ","[{'version': 'v1', 'created': 'Wed, 18 Jul 2012 09:16:07 GMT'}]",2012-07-19,"[['Buemi', 'Maria E.', ''], ['Mejail', 'Marta', ''], ['Jacobo', 'Julio', ''], ['Frery', 'Alejandro C.', ''], ['Ramos', 'Heitor S.', '']]","['Non-linear filters', 'speckle noise', 'stack filters', 'SAR image filtering']" 418,1903.03061,Tahar Kechadi M,"Damir Kahvedzic, Tahar Kechadi","DIALOG: A framework for modeling, analysis and reuse of digital forensic knowledge",,"Digital Investigation Volume 6, Supplement, September 2009, Pages S23-S33",10.1016/j.diin.2009.06.014,,cs.DL cs.AI cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper presents DIALOG (Digital Investigation Ontology); a framework for the management, reuse, and analysis of Digital Investigation knowledge. DIALOG provides a general, application independent vocabulary that can be used to describe an investigation at different levels of detail. DIALOG is defined to encapsulate all concepts of the digital forensics field and the relationships between them. In particular, we concentrate on the Windows Registry, where registry keys are modeled in terms of both their structure and function. Registry analysis software tools are modeled in a similar manner and we illustrate how the interpretation of their results can be done using the reasoning capabilities of ontology ","[{'version': 'v1', 'created': 'Thu, 21 Feb 2019 13:47:02 GMT'}]",2019-03-08,"[['Kahvedzic', 'Damir', ''], ['Kechadi', 'Tahar', '']]","['Windows', 'Registry', 'Digital', 'Investigation', 'Ontology']" 419,1808.09891,Zhan Su,"Peng Zhang, Zhan Su, Lipeng Zhang, Benyou Wang, Dawei Song",A Quantum Many-body Wave Function Inspired Language Modeling Approach,"10 pages,4 figures,CIKM","The 27th ACM International Conference on Information and Knowledge Management, October 22--26, 2018, Torino, Italy",10.1145/3269206.3271723,fp0675,cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The recently proposed quantum language model (QLM) aimed at a principled approach to modeling term dependency by applying the quantum probability theory. The latest development for a more effective QLM has adopted word embeddings as a kind of global dependency information and integrated the quantum-inspired idea in a neural network architecture. While these quantum-inspired LMs are theoretically more general and also practically effective, they have two major limitations. First, they have not taken into account the interaction among words with multiple meanings, which is common and important in understanding natural language text. Second, the integration of the quantum-inspired LM with the neural network was mainly for effective training of parameters, yet lacking a theoretical foundation accounting for such integration. To address these two issues, in this paper, we propose a Quantum Many-body Wave Function (QMWF) inspired language modeling approach. The QMWF inspired LM can adopt the tensor product to model the aforesaid interaction among words. It also enables us to reveal the inherent necessity of using Convolutional Neural Network (CNN) in QMWF language modeling. Furthermore, our approach delivers a simple algorithm to represent and match text/sentence pairs. Systematic evaluation shows the effectiveness of the proposed QMWF-LM algorithm, in comparison with the state of the art quantum-inspired LMs and a couple of CNN-based methods, on three typical Question Answering (QA) datasets. ","[{'version': 'v1', 'created': 'Tue, 28 Aug 2018 13:39:44 GMT'}, {'version': 'v2', 'created': 'Thu, 30 Aug 2018 02:34:18 GMT'}, {'version': 'v3', 'created': 'Mon, 3 Sep 2018 14:23:37 GMT'}]",2018-09-05,"[['Zhang', 'Peng', ''], ['Su', 'Zhan', ''], ['Zhang', 'Lipeng', ''], ['Wang', 'Benyou', ''], ['Song', 'Dawei', '']]","['Language modeling', 'quantum many-body wave function', 'convolutional neural network']" 420,0811.4170,Alain Barrat,"Alain Barrat, Ciro Cattuto, Vittoria Colizza, Jean-Francois Pinton, Wouter Van den Broeck, Alessandro Vespignani","High resolution dynamical mapping of social interactions with active RFID",,PLoS ONE 5(7): e11596 (2010),10.1371/journal.pone.0011596,,cs.CY cs.HC physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper we present an experimental framework to gather data on face-to-face social interactions between individuals, with a high spatial and temporal resolution. We use active Radio Frequency Identification (RFID) devices that assess contacts with one another by exchanging low-power radio packets. When individuals wear the beacons as a badge, a persistent radio contact between the RFID devices can be used as a proxy for a social interaction between individuals. We present the results of a pilot study recently performed during a conference, and a subsequent preliminary data analysis, that provides an assessment of our method and highlights its versatility and applicability in many areas concerned with human dynamics. ","[{'version': 'v1', 'created': 'Tue, 25 Nov 2008 20:54:34 GMT'}, {'version': 'v2', 'created': 'Tue, 25 Nov 2008 21:01:28 GMT'}]",2010-08-18,"[['Barrat', 'Alain', ''], ['Cattuto', 'Ciro', ''], ['Colizza', 'Vittoria', ''], ['Pinton', 'Jean-Francois', ''], ['Broeck', 'Wouter Van den', ''], ['Vespignani', 'Alessandro', '']]","['RFID', 'sensor networks', 'human dynamics', 'social network analysis', 'epidemiology']" 421,1408.4245,Dmitry Ustalov,Dmitry Ustalov,Towards crowdsourcing and cooperation in linguistic resources,"11 pages, 2 figures, accepted to RuSSIR 2014, the final publication is available at link.springer.com",,10.1007/978-3-319-25485-2_14,,cs.SI cs.CL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Linguistic resources can be populated with data through the use of such approaches as crowdsourcing and gamification when motivated people are involved. However, current crowdsourcing genre taxonomies lack the concept of cooperation, which is the principal element of modern video games and may potentially drive the annotators' interest. This survey on crowdsourcing taxonomies and cooperation in linguistic resources provides recommendations on using cooperation in existent genres of crowdsourcing and an evidence of the efficiency of cooperation using a popular Russian linguistic resource created through crowdsourcing as an example. ","[{'version': 'v1', 'created': 'Tue, 19 Aug 2014 08:32:49 GMT'}, {'version': 'v2', 'created': 'Mon, 24 Apr 2017 12:23:03 GMT'}]",2017-04-25,"[['Ustalov', 'Dmitry', '']]","['games with a purpose', 'mechanized labor', 'wisdom of thecrowd', 'gamification', 'crowdsourcing', 'cooperation', 'linguistic resources']" 422,1502.06703,Smitha M.L.,"B.H. Shekar, Smitha M.L., P. Shivakumara","Discrete Wavelet Transform and Gradient Difference based approach for text localization in videos","Fifth International Conference on Signals and Image Processing, IEEE, DOI 10.1109/ICSIP.2014.50, pp. 280-284, held at BNMIT, Bangalore in January 2014",,10.1109/ICSIP.2014.50,,cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The text detection and localization is important for video analysis and understanding. The scene text in video contains semantic information and thus can contribute significantly to video retrieval and understanding. However, most of the approaches detect scene text in still images or single video frame. Videos differ from images in temporal redundancy. This paper proposes a novel hybrid method to robustly localize the texts in natural scene images and videos based on fusion of discrete wavelet transform and gradient difference. A set of rules and geometric properties have been devised to localize the actual text regions. Then, morphological operation is performed to generate the text regions and finally the connected component analysis is employed to localize the text in a video frame. The experimental results obtained on publicly available standard ICDAR 2003 and Hua dataset illustrate that the proposed method can accurately detect and localize texts of various sizes, fonts and colors. The experimentation on huge collection of video databases reveal the suitability of the proposed method to video databases. ","[{'version': 'v1', 'created': 'Tue, 24 Feb 2015 07:46:34 GMT'}]",2015-02-25,"[['Shekar', 'B. H.', ''], ['L.', 'Smitha M.', ''], ['Shivakumara', 'P.', '']]","['Shot detection', 'Key Frame Extraction', 'DiscreteWavelet Transform', 'Gradient Difference', 'Text Localization']" 423,1806.00917,Roger Paredes,"R. Paredes, L. Duenas-Osorio, K.S. Meel, M.Y. Vardi",Principled Network Reliability Approximation: A Counting-Based Approach,,,10.1016/j.ress.2019.04.025,,cs.DS,http://creativecommons.org/licenses/by/4.0/," As engineered systems expand, become more interdependent, and operate in real-time, reliability assessment is indispensable to support investment and decision making. However, network reliability problems are known to be #P-complete, a computational complexity class largely believed to be intractable. The computational intractability of network reliability motivates our quest for reliable approximations. Based on their theoretical foundations, available methods can be grouped as follows: (i) exact or bounds, (ii) guarantee-less sampling, and (iii) probably approximately correct (PAC). Group (i) is well regarded due to its useful byproducts, but it does not scale in practice. Group (ii) scales well and verifies desirable properties, such as the bounded relative error, but it lacks error guarantees. Group (iii) is of great interest when precision and scalability are required, as it harbors computationally feasible approximation schemes with PAC-guarantees. We give a comprehensive review of classical methods before introducing modern techniques and our developments. We introduce K-RelNet, an extended counting-based estimation method that delivers PAC-guarantees for the K-terminal reliability problem. Then, we test methods' performance using various benchmark systems. We highlight the range of application of algorithms and provide the foundation for future resilience engineering as it increasingly necessitates methods for uncertainty quantification in complex systems. ","[{'version': 'v1', 'created': 'Mon, 4 Jun 2018 01:43:31 GMT'}, {'version': 'v2', 'created': 'Wed, 1 May 2019 22:36:00 GMT'}]",2019-05-03,"[['Paredes', 'R.', ''], ['Duenas-Osorio', 'L.', ''], ['Meel', 'K. S.', ''], ['Vardi', 'M. Y.', '']]","['network reliability', 'FPRAS', 'PAC', 'relative variance', 'uncertainty', 'model counting', 'satisfiability']" 424,0905.3640,Mattheos Protopapas,"Mattheos K. Protopapas, Elias B. Kosmatopoulos, Francesco Battaglia","Coevolutionary Genetic Algorithms for Establishing Nash Equilibrium in Symmetric Cournot Games","18 pages, 4 figures","Advances in Decision Sciences, vol. 2010, Article ID 573107",10.1155/2010/573107,,cs.GT cs.LG,http://creativecommons.org/licenses/by/3.0/," We use co-evolutionary genetic algorithms to model the players' learning process in several Cournot models, and evaluate them in terms of their convergence to the Nash Equilibrium. The ""social-learning"" versions of the two co-evolutionary algorithms we introduce, establish Nash Equilibrium in those models, in contrast to the ""individual learning"" versions which, as we see here, do not imply the convergence of the players' strategies to the Nash outcome. When players use ""canonical co-evolutionary genetic algorithms"" as learning algorithms, the process of the game is an ergodic Markov Chain, and therefore we analyze simulation results using both the relevant methodology and more general statistical tests, to find that in the ""social"" case, states leading to NE play are highly frequent at the stationary distribution of the chain, in contrast to the ""individual learning"" case, when NE is not reached at all in our simulations; to find that the expected Hamming distance of the states at the limiting distribution from the ""NE state"" is significantly smaller in the ""social"" than in the ""individual learning case""; to estimate the expected time that the ""social"" algorithms need to get to the ""NE state"" and verify their robustness and finally to show that a large fraction of the games played are indeed at the Nash Equilibrium. ","[{'version': 'v1', 'created': 'Fri, 22 May 2009 19:07:21 GMT'}]",2010-05-13,"[['Protopapas', 'Mattheos K.', ''], ['Kosmatopoulos', 'Elias B.', ''], ['Battaglia', 'Francesco', '']]","['Genetic Algorithms', 'Cournot oligopoly', 'Evolutionary GameTheory', 'Nash Equilibrium']" 425,1705.06681,Luisa Herrmann,"Zolt\'an F\""ul\""op and Luisa Herrmann and Heiko Vogler",Weighted Regular Tree Grammars with Storage,added errata,"Discrete Mathematics & Theoretical Computer Science, Vol. 20 no. 1, Automata, Logic and Semantics (July 3, 2018) dmtcs:4660",10.23638/DMTCS-20-1-26,,cs.FL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We introduce weighted regular tree grammars with storage as combination of (a) regular tree grammars with storage and (b) weighted tree automata over multioperator monoids. Each weighted regular tree grammar with storage generates a weighted tree language, which is a mapping from the set of trees to the multioperator monoid. We prove that, for multioperator monoids canonically associated to particular strong bi-monoids, the support of the generated weighted tree languages can be generated by (unweighted) regular tree grammars with storage. We characterize the class of all generated weighted tree languages by the composition of three basic concepts. Moreover, we prove results on the elimination of chain rules and of finite storage types, and we characterize weighted regular tree grammars with storage by a new weighted MSO-logic. ","[{'version': 'v1', 'created': 'Thu, 18 May 2017 16:34:49 GMT'}, {'version': 'v2', 'created': 'Sat, 20 May 2017 03:47:37 GMT'}, {'version': 'v3', 'created': 'Tue, 23 May 2017 07:36:22 GMT'}, {'version': 'v4', 'created': 'Fri, 8 Jun 2018 16:28:28 GMT'}, {'version': 'v5', 'created': 'Mon, 2 Jul 2018 16:36:22 GMT'}, {'version': 'v6', 'created': 'Thu, 2 Jul 2020 19:39:52 GMT'}]",2020-07-06,"[['Fülöp', 'Zoltán', ''], ['Herrmann', 'Luisa', ''], ['Vogler', 'Heiko', '']]","['regular tree grammars', 'weighted tree automata', 'multioperator monoids', 'storage types', 'weighted MSOlogic']" 426,1208.5556,Alireza Nemaney Pour,"Alireza Nemaney Pour, Raheleh Kholghi, Soheil Behnam Roudsari","Minimizing the Time of Spam Mail Detection by Relocating Filtering System to the Sender Mail Server","10 pages, 7 figures","International Journal of Network Security & Its Applications (IJNSA), Vol.4, No.2, March 2012",10.5121/ijnsa.2012.4204,,cs.IR cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Unsolicited Bulk Emails (also known as Spam) are undesirable emails sent to massive number of users. Spam emails consume the network resources and cause lots of security uncertainties. As we studied, the location where the spam filter operates in is an important parameter to preserve network resources. Although there are many different methods to block spam emails, most of program developers only intend to block spam emails from being delivered to their clients. In this paper, we will introduce a new and efficient approach to prevent spam emails from being transferred. The result shows that if we focus on developing a filtering method for spams emails in the sender mail server rather than the receiver mail server, we can detect the spam emails in the shortest time consequently to avoid wasting network resources. ","[{'version': 'v1', 'created': 'Tue, 28 Aug 2012 04:33:29 GMT'}]",2012-08-29,"[['Pour', 'Alireza Nemaney', ''], ['Kholghi', 'Raheleh', ''], ['Roudsari', 'Soheil Behnam', '']]","['Anti-spams', 'Receiver mail server', 'Sender mail server', 'Spam Email']" 427,2102.03339,Thibaut Verron,Maria Francis and Thibaut Verron,"On Two Signature Variants Of Buchberger's Algorithm Over Principal Ideal Domains","9 pages, 0 figures, accepted at ISSAC'21",,10.1145/3452143.3465522,,cs.SC math.AC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Signature-based algorithms have brought large improvements in the performances of Gr\""obner bases algorithms for polynomial systems over fields. Furthermore, they yield additional data which can be used, for example, to compute the module of syzygies of an ideal or to compute coefficients in terms of the input generators. In this paper, we examine two variants of Buchberger's algorithm to compute Gr\""obner bases over principal ideal domains, with the addition of signatures. The first one is adapted from Kandri-Rody and Kapur's algorithm, whereas the second one uses the ideas developed in the algorithms by L. Pan (1989) and D. Lichtblau (2012). The differences in constructions between the algorithms entail differences in the operations which are compatible with the signatures, and in the criteria which can be used to discard elements. We prove that both algorithms are correct and discuss their relative performances in a prototype implementation in Magma. ","[{'version': 'v1', 'created': 'Fri, 5 Feb 2021 18:40:51 GMT'}, {'version': 'v2', 'created': 'Tue, 25 May 2021 10:53:50 GMT'}]",2021-05-26,"[['Francis', 'Maria', ''], ['Verron', 'Thibaut', '']]","['Algorithms', 'Gröbner bases', 'Signature-based algorithms', 'Polynomials over rings', 'Principal Ideal Domains']" 428,2105.10878,Guandong Xu,"Hamad Zogan, Imran Razzak, Shoaib Jameel, Guandong Xu","DepressionNet: A Novel Summarization Boosted Deep Framework for Depression Detection on Social Media",,,10.1145/3404835.3462938,,cs.LG cs.CL cs.SI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Twitter is currently a popular online social media platform which allows users to share their user-generated content. This publicly-generated user data is also crucial to healthcare technologies because the discovered patterns would hugely benefit them in several ways. One of the applications is in automatically discovering mental health problems, e.g., depression. Previous studies to automatically detect a depressed user on online social media have largely relied upon the user behaviour and their linguistic patterns including user's social interactions. The downside is that these models are trained on several irrelevant content which might not be crucial towards detecting a depressed user. Besides, these content have a negative impact on the overall efficiency and effectiveness of the model. To overcome the shortcomings in the existing automatic depression detection methods, we propose a novel computational framework for automatic depression detection that initially selects relevant content through a hybrid extractive and abstractive summarization strategy on the sequence of all user tweets leading to a more fine-grained and relevant content. The content then goes to our novel deep learning framework comprising of a unified learning machinery comprising of Convolutional Neural Network (CNN) coupled with attention-enhanced Gated Recurrent Units (GRU) models leading to better empirical performance than existing strong baselines. ","[{'version': 'v1', 'created': 'Sun, 23 May 2021 08:05:53 GMT'}]",2021-05-25,"[['Zogan', 'Hamad', ''], ['Razzak', 'Imran', ''], ['Jameel', 'Shoaib', ''], ['Xu', 'Guandong', '']]","['depression detection', 'social network', 'deep learning', 'machine learning', 'text summarization']" 429,1803.04566,Nicholas Waytowich,"Nicholas R. Waytowich, Vernon Lawhern, Javier O. Garcia, Jennifer Cummings, Josef Faller, Paul Sajda, Jean M. Vettel","Compact Convolutional Neural Networks for Classification of Asynchronous Steady-state Visual Evoked Potentials",Accepted for publication at the Journal of Neural Engineering,,10.1088/1741-2552/aae5d8,,cs.LG q-bio.NC stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from the parietal and occipital regions of the brain that are evoked from flickering visual stimuli. SSVEPs are robust signals measurable in the electroencephalogram (EEG) and are commonly used in brain-computer interfaces (BCIs). However, methods for high-accuracy decoding of SSVEPs usually require hand-crafted approaches that leverage domain-specific knowledge of the stimulus signals, such as specific temporal frequencies in the visual stimuli and their relative spatial arrangement. When this knowledge is unavailable, such as when SSVEP signals are acquired asynchronously, such approaches tend to fail. In this paper, we show how a compact convolutional neural network (Compact-CNN), which only requires raw EEG signals for automatic feature extraction, can be used to decode signals from a 12-class SSVEP dataset without the need for any domain-specific knowledge or calibration data. We report across subject mean accuracy of approximately 80% (chance being 8.3%) and show this is substantially better than current state-of-the-art hand-crafted approaches using canonical correlation analysis (CCA) and Combined-CCA. Furthermore, we analyze our Compact-CNN to examine the underlying feature representation, discovering that the deep learner extracts additional phase and amplitude related features associated with the structure of the dataset. We discuss how our Compact-CNN shows promise for BCI applications that allow users to freely gaze/attend to any stimulus at any time (e.g., asynchronous BCI) as well as provides a method for analyzing SSVEP signals in a way that might augment our understanding about the basic processing in the visual cortex. ","[{'version': 'v1', 'created': 'Mon, 12 Mar 2018 23:03:44 GMT'}, {'version': 'v2', 'created': 'Tue, 9 Oct 2018 16:53:26 GMT'}]",2018-10-10,"[['Waytowich', 'Nicholas R.', ''], ['Lawhern', 'Vernon', ''], ['Garcia', 'Javier O.', ''], ['Cummings', 'Jennifer', ''], ['Faller', 'Josef', ''], ['Sajda', 'Paul', ''], ['Vettel', 'Jean M.', '']]","['Brain-Computer Interface', 'EEG', 'Deep Learning', 'Convolutional Neural Network', 'Steady-state visual evoked potentials']" 430,0711.0840,Kees Middelburg,"J. A. Bergstra, C. A. Middelburg",A thread calculus with molecular dynamics,"47 pages; examples and results added, phrasing improved, references replaced","Information and Computation, 208(7):817-844, 2010",10.1016/j.ic.2010.01.004,,cs.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a theory of threads, interleaving of threads, and interaction between threads and services with features of molecular dynamics, a model of computation that bears on computations in which dynamic data structures are involved. Threads can interact with services of which the states consist of structured data objects and computations take place by means of actions which may change the structure of the data objects. The features introduced include restriction of the scope of names used in threads to refer to data objects. Because that feature makes it troublesome to provide a model based on structural operational semantics and bisimulation, we construct a projective limit model for the theory. ","[{'version': 'v1', 'created': 'Tue, 6 Nov 2007 11:25:20 GMT'}, {'version': 'v2', 'created': 'Tue, 18 Nov 2008 09:29:03 GMT'}]",2010-05-18,"[['Bergstra', 'J. A.', ''], ['Middelburg', 'C. A.', '']]","['thread calculus', 'thread algebra', 'molecular dynamics', 'restriction', 'projective limit model']" 431,1702.01805,Renato J Cintra,"F. M. Bayer, R. J. Cintra, A. Edirisuriya, A. Madanayake","A Digital Hardware Fast Algorithm and FPGA-based Prototype for a Novel 16-point Approximate DCT for Image Compression Applications","17 pages, 6 figures, 6 tables","Measurement Science and Technology, Volume 23, Number 11, 2012",10.1088/0957-0233/23/11/114010,,cs.MM cs.AR cs.DS cs.IT math.IT stat.ME,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The discrete cosine transform (DCT) is the key step in many image and video coding standards. The 8-point DCT is an important special case, possessing several low-complexity approximations widely investigated. However, 16-point DCT transform has energy compaction advantages. In this sense, this paper presents a new 16-point DCT approximation with null multiplicative complexity. The proposed transform matrix is orthogonal and contains only zeros and ones. The proposed transform outperforms the well-know Walsh-Hadamard transform and the current state-of-the-art 16-point approximation. A fast algorithm for the proposed transform is also introduced. This fast algorithm is experimentally validated using hardware implementations that are physically realized and verified on a 40 nm CMOS Xilinx Virtex-6 XC6VLX240T FPGA chip for a maximum clock rate of 342 MHz. Rapid prototypes on FPGA for 8-bit input word size shows significant improvement in compressed image quality by up to 1-2 dB at the cost of only eight adders compared to the state-of-art 16-point DCT approximation algorithm in the literature [S. Bouguezel, M. O. Ahmad, and M. N. S. Swamy. A novel transform for image compression. In {\em Proceedings of the 53rd IEEE International Midwest Symposium on Circuits and Systems (MWSCAS)}, 2010]. ","[{'version': 'v1', 'created': 'Mon, 6 Feb 2017 22:00:34 GMT'}]",2017-02-08,"[['Bayer', 'F. M.', ''], ['Cintra', 'R. J.', ''], ['Edirisuriya', 'A.', ''], ['Madanayake', 'A.', '']]","['DCT Approximation', 'Fast algorithms', 'FPGA']" 432,1912.02260,Jessica Thompson,"Jessica A.F. Thompson, Yoshua Bengio, Marc Schoenwiesner","The effect of task and training on intermediate representations in convolutional neural networks revealed with modified RV similarity analysis","4 pages, 4 figures, Conference on Cognitive Computational Neuroscience 2019",,10.32470/CCN.2019.1300-0,,cs.LG stat.ML,http://creativecommons.org/licenses/by/4.0/," Centered Kernel Alignment (CKA) was recently proposed as a similarity metric for comparing activation patterns in deep networks. Here we experiment with the modified RV-coefficient (RV2), which has very similar properties as CKA while being less sensitive to dataset size. We compare the representations of networks that received varying amounts of training on different layers: a standard trained network (all parameters updated at every step), a freeze trained network (layers gradually frozen during training), random networks (only some layers trained), and a completely untrained network. We found that RV2 was able to recover expected similarity patterns and provide interpretable similarity matrices that suggested hypotheses about how representations are affected by different training recipes. We propose that the superior performance achieved by freeze training can be attributed to representational differences in the penultimate layer. Our comparisons of random networks suggest that the inputs and targets serve as anchors on the representations in the lowest and highest layers. ","[{'version': 'v1', 'created': 'Wed, 4 Dec 2019 21:43:57 GMT'}]",2019-12-06,"[['Thompson', 'Jessica A. F.', ''], ['Bengio', 'Yoshua', ''], ['Schoenwiesner', 'Marc', '']]","['similarity analysis', 'random features', 'CNNs', 'freezetraining', 'RV coefficient']" 433,1107.5556,Flavio Cruz,Flavio Cruz and Ricardo Rocha,"Efficient Instance Retrieval of Subgoals for Subsumptive Tabled Evaluation of Logic Programs","Theory and Practice of Logic Programming, 27th Int'l. Conference on Logic Programming (ICLP 2011) Special Issue, volume 11, issue 4-5","Theory and Practice of Logic Programming, Volume 11, Special Issue 4-5, July 2011, pp 697-712 Published Cambridge University Press 2011",10.1017/S1471068411000251,,cs.PL,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Tabled evaluation is an implementation technique that solves some problems of traditional Prolog systems in dealing with recursion and redundant computations. Most tabling engines determine if a tabled subgoal will produce or consume answers by using variant checks. A more refined method, named call subsumption, considers that a subgoal A will consume from a subgoal B if A is subsumed by (an instance of) B, thus allowing greater answer reuse. We recently developed an extension, called Retroactive Call Subsumption, that improves upon call subsumption by supporting bidirectional sharing of answers between subsumed/subsuming subgoals. In this paper, we present both an algorithm and an extension to the table space data structures to efficiently implement instance retrieval of subgoals for subsumptive tabled evaluation of logic programs. Experiments results using the YapTab tabling system show that our implementation performs quite well on some complex benchmarks and is robust enough to handle a large number of subgoals without performance degradation. ","[{'version': 'v1', 'created': 'Wed, 27 Jul 2011 18:31:13 GMT'}]",2011-07-29,"[['Cruz', 'Flavio', ''], ['Rocha', 'Ricardo', '']]","['Tabled Evaluation', 'Call Subsumption', 'Implementation']" 434,1901.09082,Arjun Pakrashi,"Arjun Pakrashi, Bidyut B. Chaudhuri","A Kalman filtering induced heuristic optimization based partitional data clustering",,,10.1016/j.ins.2016.07.057,,cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Clustering algorithms have regained momentum with recent popularity of data mining and knowledge discovery approaches. To obtain good clustering in reasonable amount of time, various meta-heuristic approaches and their hybridization, sometimes with K-Means technique, have been employed. A Kalman Filtering based heuristic approach called Heuristic Kalman Algorithm (HKA) has been proposed a few years ago, which may be used for optimizing an objective function in data/feature space. In this paper at first HKA is employed in partitional data clustering. Then an improved approach named HKA-K is proposed, which combines the benefits of global exploration of HKA and the fast convergence of K-Means method. Implemented and tested on several datasets from UCI machine learning repository, the results obtained by HKA-K were compared with other hybrid meta-heuristic clustering approaches. It is shown that HKA-K is atleast as good as and often better than the other compared algorithms. ","[{'version': 'v1', 'created': 'Fri, 25 Jan 2019 21:09:35 GMT'}]",2019-01-29,"[['Pakrashi', 'Arjun', ''], ['Chaudhuri', 'Bidyut B.', '']]","['Clustering', 'K-Means', 'Optimization', 'Metaheuristic Optimization', 'Heuristics']" 435,1408.0101,Sandeep Kumar,"Sandeep Kumar, Vivek Kumar Sharma, Rajani Kumari",Memetic Search in Differential Evolution Algorithm,,,10.5120/15582-4406,,cs.NE,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Differential Evolution (DE) is a renowned optimization stratagem that can easily solve nonlinear and comprehensive problems. DE is a well known and uncomplicated population based probabilistic approach for comprehensive optimization. It has apparently outperformed a number of Evolutionary Algorithms and further search heuristics in the vein of Particle Swarm Optimization at what time of testing over both yardstick and actual world problems. Nevertheless, DE, like other probabilistic optimization algorithms, from time to time exhibits precipitate convergence and stagnates at suboptimal position. In order to stay away from stagnation behavior while maintaining an excellent convergence speed, an innovative search strategy is introduced, named memetic search in DE. In the planned strategy, positions update equation customized as per a memetic search stratagem. In this strategy a better solution participates more times in the position modernize procedure. The position update equation is inspired from the memetic search in artificial bee colony algorithm. The proposed strategy is named as Memetic Search in Differential Evolution (MSDE). To prove efficiency and efficacy of MSDE, it is tested over 8 benchmark optimization problems and three real world optimization problems. A comparative analysis has also been carried out among proposed MSDE and original DE. Results show that the anticipated algorithm go one better than the basic DE and its recent deviations in a good number of the experiments. ","[{'version': 'v1', 'created': 'Fri, 1 Aug 2014 08:45:14 GMT'}]",2015-06-22,"[['Kumar', 'Sandeep', ''], ['Sharma', 'Vivek Kumar', ''], ['Kumari', 'Rajani', '']]","['Differential Evolution', 'Swarm intelligence', 'Evolutionary computation', 'Memetic algorithm']" 436,1408.5951,Ashish Hota,"Ashish R. Hota, Siddharth Garg, Shreyas Sundaram",Fragility of the Commons under Prospect-Theoretic Risk Attitudes,"Accepted for publication in Games and Economic Behavior, 2016",,10.1016/j.geb.2016.06.003,,cs.GT q-fin.EC,http://creativecommons.org/licenses/by-nc-sa/4.0/," We study a common-pool resource game where the resource experiences failure with a probability that grows with the aggregate investment in the resource. To capture decision making under such uncertainty, we model each player's risk preference according to the value function from prospect theory. We show the existence and uniqueness of a pure Nash equilibrium when the players have heterogeneous risk preferences and under certain assumptions on the rate of return and failure probability of the resource. Greater competition, vis-a-vis the number of players, increases the failure probability at the Nash equilibrium; we quantify this effect by obtaining bounds on the ratio of the failure probability at the Nash equilibrium to the failure probability under investment by a single user. We further show that heterogeneity in attitudes towards loss aversion leads to higher failure probability of the resource at the equilibrium. ","[{'version': 'v1', 'created': 'Tue, 26 Aug 2014 00:22:36 GMT'}, {'version': 'v2', 'created': 'Mon, 22 Dec 2014 08:55:18 GMT'}, {'version': 'v3', 'created': 'Tue, 30 Jun 2015 17:42:21 GMT'}, {'version': 'v4', 'created': 'Fri, 20 May 2016 16:19:10 GMT'}, {'version': 'v5', 'created': 'Thu, 30 Jun 2016 22:20:32 GMT'}]",2016-07-04,"[['Hota', 'Ashish R.', ''], ['Garg', 'Siddharth', ''], ['Sundaram', 'Shreyas', '']]","['Tragedy of the commons', 'Common-pool resource', 'Resource dilemma', 'Risk heterogeneity', 'Loss aversion', 'Prospect theory', 'Inefficiency of equilibria']" 437,1907.11093,Pengyi Zhangpy,"Pengyi Zhang, Yunxin Zhong, Xiaoqiong Li","SlimYOLOv3: Narrower, Faster and Better for Real-Time UAV Applications",,,10.1109/ICCVW.2019.00011,,cs.CV,http://creativecommons.org/licenses/by-nc-sa/4.0/," Drones or general Unmanned Aerial Vehicles (UAVs), endowed with computer vision function by on-board cameras and embedded systems, have become popular in a wide range of applications. However, real-time scene parsing through object detection running on a UAV platform is very challenging, due to limited memory and computing power of embedded devices. To deal with these challenges, in this paper we propose to learn efficient deep object detectors through channel pruning of convolutional layers. To this end, we enforce channel-level sparsity of convolutional layers by imposing L1 regularization on channel scaling factors and prune less informative feature channels to obtain ""slim"" object detectors. Based on such approach, we present SlimYOLOv3 with fewer trainable parameters and floating point operations (FLOPs) in comparison of original YOLOv3 (Joseph Redmon et al., 2018) as a promising solution for real-time object detection on UAVs. We evaluate SlimYOLOv3 on VisDrone2018-Det benchmark dataset; compelling results are achieved by SlimYOLOv3 in comparison of unpruned counterpart, including ~90.8% decrease of FLOPs, ~92.0% decline of parameter size, running ~2 times faster and comparable detection accuracy as YOLOv3. Experimental results with different pruning ratios consistently verify that proposed SlimYOLOv3 with narrower structure are more efficient, faster and better than YOLOv3, and thus are more suitable for real-time object detection on UAVs. Our codes are made publicly available at https://github.com/PengyiZhang/SlimYOLOv3. ","[{'version': 'v1', 'created': 'Thu, 25 Jul 2019 14:22:43 GMT'}]",2020-05-04,"[['Zhang', 'Pengyi', ''], ['Zhong', 'Yunxin', ''], ['Li', 'Xiaoqiong', '']]","['SlimYOLOv3', 'object detection', 'drone', 'channel pruning', 'sparsity training']" 438,1406.3506,Hadi Fanaee-T,Hadi Fanaee-T and Jo\~ao Gama,Eigenspace Method for Spatiotemporal Hotspot Detection,To appear in Expert Systems Journal,,10.1111/exsy.12088,,cs.AI stat.AP,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Hotspot detection aims at identifying subgroups in the observations that are unexpected, with respect to the some baseline information. For instance, in disease surveillance, the purpose is to detect sub-regions in spatiotemporal space, where the count of reported diseases (e.g. Cancer) is higher than expected, with respect to the population. The state-of-the-art method for this kind of problem is the Space-Time Scan Statistics (STScan), which exhaustively search the whole space through a sliding window looking for significant spatiotemporal clusters. STScan makes some restrictive assumptions about the distribution of data, the shape of the hotspots and the quality of data, which can be unrealistic for some nontraditional data sources. A novel methodology called EigenSpot is proposed where instead of an exhaustive search over the space, tracks the changes in a space-time correlation structure. Not only does the new approach presents much more computational efficiency, but also makes no assumption about the data distribution, hotspot shape or the data quality. The principal idea is that with the joint combination of abnormal elements in the principal spatial and the temporal singular vectors, the location of hotspots in the spatiotemporal space can be approximated. A comprehensive experimental evaluation, both on simulated and real data sets reveals the effectiveness of the proposed method. ","[{'version': 'v1', 'created': 'Fri, 13 Jun 2014 11:26:13 GMT'}]",2014-09-23,"[['Fanaee-T', 'Hadi', ''], ['Gama', 'João', '']]","['Hotspot Detection', 'Spatiotemporal Data', 'Eigenspace', 'SVD', 'Outbreak Detection']" 439,1810.11388,Muhammad Burhan Hafez,"Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter","Deep Intrinsically Motivated Continuous Actor-Critic for Efficient Robotic Visuomotor Skill Learning",,"Paladyn, Journal of Behavioral Robotics, Volume 10, Issue 1, Pages 14-29, 2019",10.1515/pjbr-2019-0005,,cs.LG cs.AI cs.RO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input. Our neural architecture is composed of a critic and an actor network. Both networks receive the hidden representation of a deep convolutional autoencoder which is trained to reconstruct the visual input, while the centre-most hidden representation is also optimized to estimate the state value. Separately, an ensemble of predictive world models generates, based on its learning progress, an intrinsic reward signal which is combined with the extrinsic reward to guide the exploration of the actor-critic learner. Our approach is more data-efficient and inherently more stable than the existing actor-critic methods for continuous control from pixel data. We evaluate our algorithm for the task of learning robotic reaching and grasping skills on a realistic physics simulator and on a humanoid robot. The results show that the control policies learned with our approach can achieve better performance than the compared state-of-the-art and baseline algorithms in both dense-reward and challenging sparse-reward settings. ","[{'version': 'v1', 'created': 'Fri, 26 Oct 2018 15:32:32 GMT'}, {'version': 'v2', 'created': 'Mon, 18 Feb 2019 10:54:46 GMT'}]",2019-02-19,"[['Hafez', 'Muhammad Burhan', ''], ['Weber', 'Cornelius', ''], ['Kerzel', 'Matthias', ''], ['Wermter', 'Stefan', '']]","['deep reinforcement learning', 'actor-critic', 'continuous control', 'efficient exploration', 'neuro-robotics']" 440,1301.5887,Tamara Kolda,"Tamara G. Kolda and Ali Pinar and Todd Plantenga and C. Seshadhri and Christine Task",Counting Triangles in Massive Graphs with MapReduce,,"SIAM Journal on Scientific Computing, Vol. 36, No. 5, pp. S44-S77, October 2014",10.1137/13090729X,,cs.SI cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Graphs and networks are used to model interactions in a variety of contexts. There is a growing need to quickly assess the characteristics of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle-based and give a measure of the connectedness of mutual friends. This is often summarized in terms of clustering coefficients, which measure the likelihood that two neighbors of a node are themselves connected. Computing these measures exactly for large-scale networks is prohibitively expensive in both memory and time. However, a recent wedge sampling algorithm has proved successful in efficiently and accurately estimating clustering coefficients. In this paper, we describe how to implement this approach in MapReduce to deal with massive graphs. We show results on publicly-available networks, the largest of which is 132M nodes and 4.7B edges, as well as artificially generated networks (using the Graph500 benchmark), the largest of which has 240M nodes and 8.5B edges. We can estimate the clustering coefficient by degree bin (e.g., we use exponential binning) and the number of triangles per bin, as well as the global clustering coefficient and total number of triangles, in an average of 0.33 seconds per million edges plus overhead (approximately 225 seconds total for our configuration). The technique can also be used to study triangle statistics such as the ratio of the highest and lowest degree, and we highlight differences between social and non-social networks. To the best of our knowledge, these are the largest triangle-based graph computations published to date. ","[{'version': 'v1', 'created': 'Thu, 24 Jan 2013 20:32:25 GMT'}, {'version': 'v2', 'created': 'Wed, 11 Sep 2013 21:45:16 GMT'}, {'version': 'v3', 'created': 'Mon, 9 Dec 2013 20:37:01 GMT'}]",2014-12-02,"[['Kolda', 'Tamara G.', ''], ['Pinar', 'Ali', ''], ['Plantenga', 'Todd', ''], ['Seshadhri', 'C.', ''], ['Task', 'Christine', '']]","['triangle counting', 'clustering coefficient', 'triangle characteristics', 'largescale networks', 'MapReduce']" 441,1903.08454,Lynsay Shepherd,"Sam Scholefield, Lynsay A. Shepherd",Gamification Techniques for Raising Cyber Security Awareness,"14 pages. Human-Computer International 2019, HCII 2019, Orlando, United States (2019), Springer",,10.1007/978-3-030-22351-9_13,,cs.HC cs.CR,http://creativecommons.org/licenses/by-nc-sa/4.0/," Due to the prevalence of online services in modern society, such as internet banking and social media, it is important for users to have an understanding of basic security measures in order to keep themselves safe online. However, users often do not know how to make their online interactions secure, which demonstrates an educational need in this area. Gamification has grown in popularity in recent years and has been used to teach people about a range of subjects. This paper presents an exploratory study investigating the use of gamification techniques to educate average users about password security, with the aim of raising overall security awareness. To explore the impact of such techniques, a role-playing quiz application (RPG) was developed for the Android platform to educate users about password security. Results gained from the work highlighted that users enjoyed learning via the use of the password application, and felt they benefitted from the inclusion of gamification techniques. Future work seeks to expand the prototype into a full solution, covering a range of security awareness issues. ","[{'version': 'v1', 'created': 'Wed, 20 Mar 2019 11:45:26 GMT'}, {'version': 'v2', 'created': 'Thu, 21 Mar 2019 00:57:45 GMT'}]",2019-07-24,"[['Scholefield', 'Sam', ''], ['Shepherd', 'Lynsay A.', '']]","['Gamification', 'games-based learning', 'security awareness', 'usable security', 'human-centered cyber security']" 442,1201.5217,Mohammad Tarek Al-Muallim M.Sc.,"M. T. Al-Muallim, R. El-Kouatly",Unsupervised Classification Using Immune Algorithm,,,10.5120/677-952,,cs.LG cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Unsupervised classification algorithm based on clonal selection principle named Unsupervised Clonal Selection Classification (UCSC) is proposed in this paper. The new proposed algorithm is data driven and self-adaptive, it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed UCSC algorithm is more reliable and has high classification precision comparing to traditional classification methods such as K-means. ","[{'version': 'v1', 'created': 'Wed, 25 Jan 2012 09:44:06 GMT'}]",2012-01-26,"[['Al-Muallim', 'M. T.', ''], ['El-Kouatly', 'R.', '']]","['Artificial Immune Systems', 'Clonal Selection Algorithms', 'Clustering', 'K-means Algorithm']" 443,1703.01975,Ruben Mayer,"Ruben Mayer, Harshit Gupta, Enrique Saurez, Umakishore Ramachandran","The Fog Makes Sense: Enabling Social Sensing Services With Limited Internet Connectivity","Ruben Mayer, Harshit Gupta, Enrique Saurez, and Umakishore Ramachandran. 2017. The Fog Makes Sense: Enabling Social Sensing Services With Limited Internet Connectivity. In Proceedings of The 2nd International Workshop on Social Sensing, Pittsburgh, PA, USA, April 21 2017 (SocialSens'17), 6 pages",,10.1145/3055601.3055614,,cs.DC,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Social sensing services use humans as sensor carriers, sensor operators and sensors themselves in order to provide situation-awareness to applications. This promises to provide a multitude of benefits to the users, for example in the management of natural disasters or in community empowerment. However, current social sensing services depend on Internet connectivity since the services are deployed on central Cloud platforms. In many circumstances, Internet connectivity is constrained, for instance when a natural disaster causes Internet outages or when people do not have Internet access due to economical reasons. In this paper, we propose the emerging Fog Computing infrastructure to become a key-enabler of social sensing services in situations of constrained Internet connectivity. To this end, we develop a generic architecture and API of Fog-enabled social sensing services. We exemplify the usage of the proposed social sensing architecture on a number of concrete use cases from two different scenarios. ","[{'version': 'v1', 'created': 'Mon, 6 Mar 2017 17:02:14 GMT'}]",2017-03-07,"[['Mayer', 'Ruben', ''], ['Gupta', 'Harshit', ''], ['Saurez', 'Enrique', ''], ['Ramachandran', 'Umakishore', '']]","['Social Sensing', 'Fog Computing', 'Situation Awareness']" 444,1705.01661,Li Yi,"Li Yi, Leonidas Guibas, Aaron Hertzmann, Vladimir G. Kim, Hao Su, Ersin Yumer","Learning Hierarchical Shape Segmentation and Labeling from Online Repositories",,,10.1145/3072959.3073652,,cs.GR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We propose a method for converting geometric shapes into hierarchically segmented parts with part labels. Our key idea is to train category-specific models from the scene graphs and part names that accompany 3D shapes in public repositories. These freely-available annotations represent an enormous, untapped source of information on geometry. However, because the models and corresponding scene graphs are created by a wide range of modelers with different levels of expertise, modeling tools, and objectives, these models have very inconsistent segmentations and hierarchies with sparse and noisy textual tags. Our method involves two analysis steps. First, we perform a joint optimization to simultaneously cluster and label parts in the database while also inferring a canonical tag dictionary and part hierarchy. We then use this labeled data to train a method for hierarchical segmentation and labeling of new 3D shapes. We demonstrate that our method can mine complex information, detecting hierarchies in man-made objects and their constituent parts, obtaining finer scale details than existing alternatives. We also show that, by performing domain transfer using a few supervised examples, our technique outperforms fully-supervised techniques that require hundreds of manually-labeled models. ","[{'version': 'v1', 'created': 'Thu, 4 May 2017 00:11:16 GMT'}]",2017-05-05,"[['Yi', 'Li', ''], ['Guibas', 'Leonidas', ''], ['Hertzmann', 'Aaron', ''], ['Kim', 'Vladimir G.', ''], ['Su', 'Hao', ''], ['Yumer', 'Ersin', '']]","['hierarchical shape structure', 'shape labeling', 'learning', 'Siamese networks']" 445,2204.13844,Wenjie Wang,"Wenjie Wang, Fuli Feng, Liqiang Nie, Tat-Seng Chua",User-controllable Recommendation Against Filter Bubbles,Accepted by SIGIR 2022,"Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022)",10.1145/3477495.3532075,,cs.IR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Recommender systems usually face the issue of filter bubbles: overrecommending homogeneous items based on user features and historical interactions. Filter bubbles will grow along the feedback loop and inadvertently narrow user interests. Existing work usually mitigates filter bubbles by incorporating objectives apart from accuracy such as diversity and fairness. However, they typically sacrifice accuracy, hurting model fidelity and user experience. Worse still, users have to passively accept the recommendation strategy and influence the system in an inefficient manner with high latency, e.g., keeping providing feedback (e.g., like and dislike) until the system recognizes the user intention. This work proposes a new recommender prototype called UserControllable Recommender System (UCRS), which enables users to actively control the mitigation of filter bubbles. Functionally, 1) UCRS can alert users if they are deeply stuck in filter bubbles. 2) UCRS supports four kinds of control commands for users to mitigate the bubbles at different granularities. 3) UCRS can respond to the controls and adjust the recommendations on the fly. The key to adjusting lies in blocking the effect of out-of-date user representations on recommendations, which contains historical information inconsistent with the control commands. As such, we develop a causality-enhanced User-Controllable Inference (UCI) framework, which can quickly revise the recommendations based on user controls in the inference stage and utilize counterfactual inference to mitigate the effect of out-of-date user representations. Experiments on three datasets validate that the UCI framework can effectively recommend more desired items based on user controls, showing promising performance w.r.t. both accuracy and diversity. ","[{'version': 'v1', 'created': 'Fri, 29 Apr 2022 01:46:56 GMT'}]",2022-05-02,"[['Wang', 'Wenjie', ''], ['Feng', 'Fuli', ''], ['Nie', 'Liqiang', ''], ['Chua', 'Tat-Seng', '']]","['User-controllable Recommender Systems', 'Counterfactual Inference', 'Filter Bubbles', 'Causal Recommendation']" 446,1807.10731,John Ashburner PhD,"John Ashburner, Mikael Brudfors, Kevin Bronik, Yael Balbastre","An Algorithm for Learning Shape and Appearance Models without Annotations","61 pages, 16 figures (some downsampled by a factor of 4), submitted to MedIA",,10.1016/j.media.2019.04.008,,cs.CV,http://creativecommons.org/licenses/by/4.0/," This paper presents a framework for automatically learning shape and appearance models for medical (and certain other) images. It is based on the idea that having a more accurate shape and appearance model leads to more accurate image registration, which in turn leads to a more accurate shape and appearance model. This leads naturally to an iterative scheme, which is based on a probabilistic generative model that is fit using Gauss-Newton updates within an EM-like framework. It was developed with the aim of enabling distributed privacy-preserving analysis of brain image data, such that shared information (shape and appearance basis functions) may be passed across sites, whereas latent variables that encode individual images remain secure within each site. These latent variables are proposed as features for privacy-preserving data mining applications. The approach is demonstrated qualitatively on the KDEF dataset of 2D face images, showing that it can align images that traditionally require shape and appearance models trained using manually annotated data (manually defined landmarks etc.). It is applied to MNIST dataset of handwritten digits to show its potential for machine learning applications, particularly when training data is limited. The model is able to handle ``missing data'', which allows it to be cross-validated according to how well it can predict left-out voxels. The suitability of the derived features for classifying individuals into patient groups was assessed by applying it to a dataset of over 1,900 segmented T1-weighted MR images, which included images from the COBRE and ABIDE datasets. ","[{'version': 'v1', 'created': 'Fri, 27 Jul 2018 16:59:22 GMT'}]",2019-05-29,"[['Ashburner', 'John', ''], ['Brudfors', 'Mikael', ''], ['Bronik', 'Kevin', ''], ['Balbastre', 'Yael', '']]","['Machine Learning', 'Latent Variables', 'Diffeomorphisms', 'Geodesic']" 447,1005.5613,Secretary Aircc Journal,"Murtaza Ali Khan (Royal University for Women, Bahrain)","An Automated Algorithm for Approximation of Temporal Video Data Using Linear B'EZIER Fitting","14 Pages, IJMA 2010","International journal of Multimedia & Its Applications 2.2 (2010) 81-94",10.5121/ijma.2010.2207,,cs.MM,http://creativecommons.org/licenses/by-nc-sa/3.0/," This paper presents an efficient method for approximation of temporal video data using linear Bezier fitting. For a given sequence of frames, the proposed method estimates the intensity variations of each pixel in temporal dimension using linear Bezier fitting in Euclidean space. Fitting of each segment ensures upper bound of specified mean squared error. Break and fit criteria is employed to minimize the number of segments required to fit the data. The proposed method is well suitable for lossy compression of temporal video data and automates the fitting process of each pixel. Experimental results show that the proposed method yields good results both in terms of objective and subjective quality measurement parameters without causing any blocking artifacts. ","[{'version': 'v1', 'created': 'Mon, 31 May 2010 08:11:59 GMT'}]",2010-07-15,"[['Khan', 'Murtaza Ali', '', 'Royal University for Women, Bahrain']]","['Video data', 'Compression', 'Linear Bezier', 'Fitting']" 448,1606.07506,Biljana Risteska Stojkoska Dr,"Biljana Stojkoska, Danco Davcev and Andrea Kulakov","Cluster-based MDS Algorithm for Nodes Localization in Wireless Sensor Networks with Irregular Topologies",6 pages. arXiv admin note: text overlap with arXiv:1606.07389,"Proceedings of the 5th international conference on Soft computing as transdisciplinary science and technology, ISBN: 978-1-60558-046-3, ACM, 2008. pp. 384-389",10.1145/1456223.1456302,,cs.DC cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Nodes localization in Wireless Sensor Networks (WSN) has arisen as a very challenging problem in the research community. Most of the applications for WSN are not useful without a priori known nodes positions. One solution to the problem is by adding GPS receivers to each node. Since this is an expensive approach and inapplicable for indoor environments, we need to find an alternative intelligent mechanism for determining nodes location. In this paper, we propose our cluster-based approach of multidimensional scaling (MDS) technique. Our initial experiments show that our algorithm outperforms MDS-MAP[8], particularly for irregular topologies in terms of accuracy. ","[{'version': 'v1', 'created': 'Thu, 23 Jun 2016 23:03:22 GMT'}]",2016-06-27,"[['Stojkoska', 'Biljana', ''], ['Davcev', 'Danco', ''], ['Kulakov', 'Andrea', '']]","['nodes localization', 'wireless sensor networks', 'multidimensional scaling']" 449,1701.04616,Vincenzo De Florio,"Vincenzo De Florio, Mohamed Bakhouya, D. Eloudghiri, Chris Blondia",Towards a Smarter organization for a Self-servicing Society,"Final version of a paper published in the Proceedings of International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI'16), special track on Emergent Technologies for Ambient Assisted Living (ETAAL)",,10.1145/3019943.3019980,,cs.CY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Traditional social organizations such as those for the management of healthcare are the result of designs that matched well with an operational context considerably different from the one we are experiencing today. The new context reveals all the fragility of our societies. In this paper, a platform is introduced by combining social-oriented communities and complex-event processing concepts: SELFSERV. Its aim is to complement the ""old recipes"" with smarter forms of social organization based on the self-service paradigm and by exploring culture-specific aspects and technological challenges. ","[{'version': 'v1', 'created': 'Tue, 17 Jan 2017 10:52:19 GMT'}]",2017-01-18,"[['De Florio', 'Vincenzo', ''], ['Bakhouya', 'Mohamed', ''], ['Eloudghiri', 'D.', ''], ['Blondia', 'Chris', '']]","['Self-service paradigm', 'complex event processing', 'service-oriented communities', 'health services', 'service-dominant logic']" 450,2301.06777,Marjan Celikik,"Marjan Celikik, Jacek Wasilewski, Ana Peleteiro Ramallo","Reusable Self-Attention Recommender Systems in Fashion Industry Applications",,"Sixteenth ACM Conference on Recommender Systems (RecSys '22), September 18--23, 2022, Seattle, WA, USA",10.1145/3523227.3547377,,cs.IR cs.LG,http://creativecommons.org/licenses/by/4.0/," A large number of empirical studies on applying self-attention models in the domain of recommender systems are based on offline evaluation and metrics computed on standardized datasets. Moreover, many of them do not consider side information such as item and customer metadata although deep-learning recommenders live up to their full potential only when numerous features of heterogeneous type are included. Also, normally the model is used only for a single use case. Due to these shortcomings, even if relevant, previous works are not always representative of their actual effectiveness in real-world industry applications. In this talk, we contribute to bridging this gap by presenting live experimental results demonstrating improvements in user retention of up to 30\%. Moreover, we share our learnings and challenges from building a re-usable and configurable recommender system for various applications from the fashion industry. In particular, we focus on fashion inspiration use-cases, such as outfit ranking, outfit recommendation and real-time personalized outfit generation. ","[{'version': 'v1', 'created': 'Tue, 17 Jan 2023 10:00:17 GMT'}]",2023-01-18,"[['Celikik', 'Marjan', ''], ['Wasilewski', 'Jacek', ''], ['Ramallo', 'Ana Peleteiro', '']]","['Recommendation Systems', 'Transformers', 'Fashion Industry']" 451,1207.4448,Sebastien Verel,"Bilel Derbel (LIFL, INRIA Lille - Nord Europe), S\'ebastien Verel (INRIA Lille - Nord Europe)",DAMS: Distributed Adaptive Metaheuristic Selection,,"Genetic And Evolutionary Computation Conference, Dublin : Ireland (2011)",10.1145/2001576.2001839,,cs.NE cs.AI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present a distributed generic algorithm called DAMS dedicated to adaptive optimization in distributed environments. Given a set of metaheuristic, the goal of DAMS is to coordinate their local execution on distributed nodes in order to optimize the global performance of the distributed system. DAMS is based on three-layer architecture allowing node to decide distributively what local information to communicate, and what metaheuristic to apply while the optimization process is in progress. The adaptive features of DAMS are first addressed in a very general setting. A specific DAMS called SBM is then described and analyzed from both a parallel and an adaptive point of view. SBM is a simple, yet efficient, adaptive distributed algorithm using an exploitation component allowing nodes to select the metaheuristic with the best locally observed performance, and an exploration component allowing nodes to detect the metaheuristic with the actual best performance. The efficiency of BSM-DAMS is demonstrated through experimentations and comparisons with other adaptive strategies (sequential and distributed). ","[{'version': 'v1', 'created': 'Wed, 18 Jul 2012 19:06:37 GMT'}]",2012-07-19,"[['Derbel', 'Bilel', '', 'LIFL, INRIA Lille - Nord Europe'], ['Verel', 'Sébastien', '', 'INRIA Lille - Nord Europe']]","['metaheurististics', 'distributed algorithms', 'adaptative algorithms', 'parameter control']" 452,1304.7373,Gunjan Kumar,"Gunjan Kumar, Saswata Shannigrahi",NP-Hardness of Speed Scaling with a Sleep State,"12 pages, 5 figures",,10.1016/j.tcs.2015.06.012,,cs.DS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," A modern processor can dynamically set it's speed while it's active, and can make a transition to sleep state when required. When the processor is operating at a speed $s$, the energy consumed per unit time is given by a convex power function $P(s)$ having the property that $P(0) > 0$ and $P""(s) > 0$ for all values of $s$. Moreover, $C > 0$ units of energy is required to make a transition from the sleep state to the active state. The jobs are specified by their arrival time, deadline and the processing volume. We consider a scheduling problem, called speed scaling with sleep state, where each job has to be scheduled within their arrival time and deadline, and the goal is to minimize the total energy consumption required to process these jobs. Albers et. al. proved the NP-hardness of this problem by reducing an instance of an NP-hard partition problem to an instance of this scheduling problem. The instance of this scheduling problem consists of the arrival time, the deadline and the processing volume for each of the jobs, in addition to $P$ and $C$. Since $P$ and $C$ depend on the instance of the partition problem, this proof of the NP-hardness of the speed scaling with sleep state problem doesn't remain valid when $P$ and $C$ are fixed. In this paper, we prove that the speed scaling with sleep state problem remains NP-hard for any fixed positive number $C$ and convex $P$ satisfying $P(0) > 0$ and $P""(s) > 0$ for all values of $s$. ","[{'version': 'v1', 'created': 'Sat, 27 Apr 2013 14:18:42 GMT'}]",2019-12-03,"[['Kumar', 'Gunjan', ''], ['Shannigrahi', 'Saswata', '']]","['Energy efficient algorithm', 'scheduling algorithm', 'NP-hardness']" 453,1812.00040,Daniel Severin Dr.,"Mauro Lucci, Graciela Nasini, Daniel Sever\'in",A Branch and Price Algorithm for List Coloring Problem,,"Electronic Notes in Theoretical Computer Science 346 (2019) 613-624",10.1016/j.entcs.2019.08.054,,cs.DM,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Coloring problems in graphs have been used to model a wide range of real applications. In particular, the List Coloring Problem generalizes the well-known Graph Coloring Problem for which many exact algorithms have been developed. In this work, we present a Branch-and-Price algorithm for the weighted version of the List Coloring Problem, based on the one developed by Mehrotra and Trick (1996) for the Graph Coloring Problem. This version considers non-negative weights associated to each color and it is required to assign a color to each vertex from predetermined lists in such a way the sum of weights of the assigned colors is minimum. Computational experiments show the good performance of our approach, being able to comfortably solve instances whose graphs have up to seventy vertices. These experiences also bring out that the hardness of the instances of the List Coloring Problem does not seem to depend only on quantitative parameters such as the size of the graph, its density, and the size of list of colors, but also on the distribution of colors present in the lists. ","[{'version': 'v1', 'created': 'Fri, 30 Nov 2018 20:18:42 GMT'}]",2020-02-28,"[['Lucci', 'Mauro', ''], ['Nasini', 'Graciela', ''], ['Severín', 'Daniel', '']]","['List Coloring', 'Branch and Price', 'Weighted Problem']" 454,1312.2859,Chanabasayya Vastrad M,Doreswamy and Chanabasayya .M. Vastrad,"A Robust Missing Value Imputation Method MifImpute For Incomplete Molecular Descriptor Data And Comparative Analysis With Other Missing Value Imputation Methods","arXiv admin note: text overlap with arXiv:1105.0828 by other authors without attribution","Published International Journal on Computational Sciences & Applications (IJCSA) Vol.3, No4, August 2013",10.5121/ijcsa.2013.3406,,cs.CE,http://creativecommons.org/licenses/by-nc-sa/3.0/," Missing data imputation is an important research topic in data mining. Large-scale Molecular descriptor data may contains missing values (MVs). However, some methods for downstream analyses, including some prediction tools, require a complete descriptor data matrix. We propose and evaluate an iterative imputation method MiFoImpute based on a random forest. By averaging over many unpruned regression trees, random forest intrinsically constitutes a multiple imputation scheme. Using the NRMSE and NMAE estimates of random forest, we are able to estimate the imputation error. Evaluation is performed on two molecular descriptor datasets generated from a diverse selection of pharmaceutical fields with artificially introduced missing values ranging from 10% to 30%. The experimental result demonstrates that missing values has a great impact on the effectiveness of imputation techniques and our method MiFoImpute is more robust to missing value than the other ten imputation methods used as benchmark. Additionally, MiFoImpute exhibits attractive computational efficiency and can cope with high-dimensional data. ","[{'version': 'v1', 'created': 'Tue, 10 Dec 2013 16:24:28 GMT'}]",2013-12-13,"[['Doreswamy', '', ''], ['Vastrad', 'Chanabasayya . M.', '']]","['Random Forest', 'normalized root mean squared error', 'normalized mean absolute error', 'missing values 1']" 455,1903.00922,Benedikt Ahrens,"Benedikt Ahrens, Andr\'e Hirschowitz, Ambroise Lafont, Marco Maggesi",Modular specification of monads through higher-order presentations,17 pages,"Formal Structures for Computation and Deduction (FSCD) 2019, LIPIcs Vol. 131, pp. 6:1-6:19",10.4230/LIPIcs.FSCD.2019.6,,cs.LO math.LO,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," In their work on second-order equational logic, Fiore and Hur have studied presentations of simply typed languages by generating binding constructions and equations among them. To each pair consisting of a binding signature and a set of equations, they associate a category of `models', and they give a monadicity result which implies that this category has an initial object, which is the language presented by the pair. In the present work, we propose, for the untyped setting, a variant of their approach where monads and modules over them are the central notions. More precisely, we study, for monads over sets, presentations by generating (`higher-order') operations and equations among them. We consider a notion of 2-signature which allows to specify a monad with a family of binding operations subject to a family of equations, as is the case for the paradigmatic example of the lambda calculus, specified by its two standard constructions (application and abstraction) subject to $\beta$- and $\eta$-equalities. Such a 2-signature is hence a pair $(\Sigma,E)$ of a binding signature $\Sigma$ and a family $E$ of equations for $\Sigma$. This notion of 2-signature has been introduced earlier by Ahrens in a slightly different context. We associate, to each 2-signature $(\Sigma,E)$, a category of `models of $(\Sigma,E)$; and we say that a 2-signature is `effective' if this category has an initial object; the monad underlying this (essentially unique) object is the `monad specified by the 2-signature'. Not every 2-signature is effective; we identify a class of 2-signatures, which we call `algebraic', that are effective. Importantly, our 2-signatures together with their models enjoy `modularity': when we glue (algebraic) 2-signatures together, their initial models are glued accordingly. We provide a computer formalization for our main results. ","[{'version': 'v1', 'created': 'Sun, 3 Mar 2019 15:00:36 GMT'}]",2019-07-16,"[['Ahrens', 'Benedikt', ''], ['Hirschowitz', 'André', ''], ['Lafont', 'Ambroise', ''], ['Maggesi', 'Marco', '']]","['free monads', 'presentation of monads', 'initial semantics', 'signatures', 'syntax', 'monadic substitution', 'computer-checked proofs']" 456,1702.02223,Zongwei Zhou,"Hongkai Wang, Zongwei Zhou, Yingci Li, Zhonghua Chen, Peiou Lu, Wenzhi Wang, Wanyu Liu and Lijuan Yu","Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images",,,10.1186/s13550-017-0260-9,,cs.CV physics.med-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The present study shows that the performance of CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, CNN does not make use of the import diagnostic features, which have been proved more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into CNN is a promising direction for future research. ","[{'version': 'v1', 'created': 'Tue, 7 Feb 2017 23:12:45 GMT'}]",2017-02-09,"[['Wang', 'Hongkai', ''], ['Zhou', 'Zongwei', ''], ['Li', 'Yingci', ''], ['Chen', 'Zhonghua', ''], ['Lu', 'Peiou', ''], ['Wang', 'Wenzhi', ''], ['Liu', 'Wanyu', ''], ['Yu', 'Lijuan', '']]","['Computer-aided diagnosis', 'Non-small cell lung cancer', 'Positron-emission tomography', 'Machine learning', 'Deep learning']" 457,1111.3127,Argimiro Arratia,Argimiro Arratia and Alejandra Caba\~na,Tracing the temporal evolution of clusters in a financial stock market,"22 pages, 3 figures (submitted for publication)",,10.1007/s10614-012-9327-x,,cs.CE math.ST q-fin.ST stat.TH,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We propose a methodology for clustering financial time series of stocks' returns, and a graphical set-up to quantify and visualise the evolution of these clusters through time. The proposed graphical representation allows for the application of well known algorithms for solving classical combinatorial graph problems, which can be interpreted as problems relevant to portfolio design and investment strategies. We illustrate this graph representation of the evolution of clusters in time and its use on real data from the Madrid Stock Exchange market. ","[{'version': 'v1', 'created': 'Mon, 14 Nov 2011 08:04:16 GMT'}]",2015-03-19,"[['Arratia', 'Argimiro', ''], ['Cabaña', 'Alejandra', '']]","['financial time series', 'raw–data clustering', 'graph combinatorics']" 458,1712.09619,Abdolah Sepahvand,"Mohammadreza Razzazi, Abdolah Sepahvand",Finding Two Disjoint Simple Paths on Two Sets of Points is NP-Complete,,scientiairanica.sharif.edu/article_4116.html 2017,10.24200/SCI.2017.4116,,cs.CC cs.CG,http://creativecommons.org/publicdomain/zero/1.0/," Finding two disjoint simple paths on two given sets of points is a geometric problem introduced by Jeff Erickson. This problem has various applications in computational geometry, like robot motion planning, generating polygon etc. We will present a reduction from planar Hamiltonian path to this problem, and prove that it is NP-Complete. To the best of our knowledge, no study has considered its complexity up until now. We also present a reduction from planar Hamiltonian path problem to the problem of finding a path on given points in the presence of arbitrary obstacles and prove that it is NP-Complete too. Also, we present a heuristic algorithm with time complexity of O(n4) to solve this problem. The proposed algorithm first calculates the convex hull for each of the entry points and then produces two simple paths on the two entry point sets ","[{'version': 'v1', 'created': 'Wed, 27 Dec 2017 16:36:50 GMT'}]",2017-12-29,"[['Razzazi', 'Mohammadreza', ''], ['Sepahvand', 'Abdolah', '']]","['Hamiltonian path', 'NP-complete', 'planar graph', 'simple path']" 459,1908.02877,Aaron Reite,"Aaron Reite, Scott Kangas, Zackery Steck, Steven Goley, Jonathan Von Stroh, and Steven Forsyth",Unsupervised Feature Learning in Remote Sensing,,,10.1117/12.2529791,,cs.CV cs.LG eess.IV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The need for labeled data is among the most common and well-known practical obstacles to deploying deep learning algorithms to solve real-world problems. The current generation of learning algorithms requires a large volume of data labeled according to a static and pre-defined schema. Conversely, humans can quickly learn generalizations based on large quantities of unlabeled data, and turn these generalizations into classifications using spontaneous labels, often including labels not seen before. We apply a state-of-the-art unsupervised learning algorithm to the noisy and extremely imbalanced xView data set to train a feature extractor that adapts to several tasks: visual similarity search that performs well on both common and rare classes; identifying outliers within a labeled data set; and learning a natural class hierarchy automatically. ","[{'version': 'v1', 'created': 'Wed, 7 Aug 2019 23:48:49 GMT'}]",2019-09-24,"[['Reite', 'Aaron', ''], ['Kangas', 'Scott', ''], ['Steck', 'Zackery', ''], ['Goley', 'Steven', ''], ['Von Stroh', 'Jonathan', ''], ['Forsyth', 'Steven', '']]","['remote sensing', 'unsupervised learning', 'deep learning', 'classification', 'similarity search', 'anomalydetection', 'hierarchy discovery']" 460,1005.0055,Pino Caballero-Gil,"Pino Caballero-Gil, Amparo F\'uster-Sabater",On the Design of Cryptographic Primitives,,"Acta Applicandae Mathematicae. Volume 93, Numbers 1-3, pp. 279-297. Sept 2006. Springer.",10.1007/s10440-006-9044-3,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The main objective of this work is twofold. On the one hand, it gives a brief overview of the area of two-party cryptographic protocols. On the other hand, it proposes new schemes and guidelines for improving the practice of robust protocol design. In order to achieve such a double goal, a tour through the descriptions of the two main cryptographic primitives is carried out. Within this survey, some of the most representative algorithms based on the Theory of Finite Fields are provided and new general schemes and specific algorithms based on Graph Theory are proposed. ","[{'version': 'v1', 'created': 'Sat, 1 May 2010 08:13:55 GMT'}]",2015-03-17,"[['Caballero-Gil', 'Pino', ''], ['Fúster-Sabater', 'Amparo', '']]","['Cryptography', 'Secure communications', 'Finite Fields', 'Discrete Mathematics']" 461,1907.01985,Keyan Ding,"Keyan Ding, Kede Ma, Shiqi Wang",Intrinsic Image Popularity Assessment,Accepted by ACM Multimedia 2019,"Proceedings of the 27th ACM International Conference on Multimedia, 2019",10.1145/3343031.3351007,,cs.MM cs.CV,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," The goal of research in automatic image popularity assessment (IPA) is to develop computational models that can accurately predict the potential of a social image to go viral on the Internet. Here, we aim to single out the contribution of visual content to image popularity, i.e., intrinsic image popularity. Specifically, we first describe a probabilistic method to generate massive popularity-discriminable image pairs, based on which the first large-scale image database for intrinsic IPA (I$^2$PA) is established. We then develop computational models for I$^2$PA based on deep neural networks, optimizing for ranking consistency with millions of popularity-discriminable image pairs. Experiments on Instagram and other social platforms demonstrate that the optimized model performs favorably against existing methods, exhibits reasonable generalizability on different databases, and even surpasses human-level performance on Instagram. In addition, we conduct a psychophysical experiment to analyze various aspects of human behavior in I$^2$PA. ","[{'version': 'v1', 'created': 'Wed, 3 Jul 2019 15:15:21 GMT'}, {'version': 'v2', 'created': 'Thu, 4 Jul 2019 15:38:50 GMT'}]",2021-01-25,"[['Ding', 'Keyan', ''], ['Ma', 'Kede', ''], ['Wang', 'Shiqi', '']]","['Intrinsic image popularity', 'learning-to-rank', 'deep neural networks', 'human behavior analysis']" 462,1705.00717,Reza Farahbakhsh,"Reza Farahbakhsh, Angel Cuevas, Antonio M. Ortiz, Xiao Han, Noel Crespi",How far is Facebook from me? Facebook network infrastructure analysis,"Published in: IEEE Communications Magazine (Volume: 53, Issue: 9, September 2015)",IEEE Communications Magazine 53.9 (2015): 134-142,10.1109/MCOM.2015.7263357,,cs.NI,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Facebook is today the most popular social network with more than one billion subscribers worldwide. To provide good quality of service (e.g., low access delay) to their clients, FB relies on Akamai, which provides a worldwide content distribution network with a large number of edge servers that are much closer to FB subscribers. In this article we aim to depict a global picture of the current FB network infrastructure deployment taking into account both native FB servers and Akamai nodes. Toward this end, we have performed a measurement-based analysis during a period of two weeks using 463 Planet- Lab nodes distributed across 41 countries. Based on the obtained data we compare the average access delay that nodes in different countries experience accessing both native FB servers and Akamai nodes. In addition, we obtain a wide view of the deployment of Akamai nodes serving FB users worldwide. Finally, we analyze the geographical coverage of those nodes, and demonstrate that in most of the cases Akamai nodes located in a particular country service not only local FB subscribers, but also FB users located in nearby countries. ","[{'version': 'v1', 'created': 'Mon, 1 May 2017 21:15:15 GMT'}]",2017-05-03,"[['Farahbakhsh', 'Reza', ''], ['Cuevas', 'Angel', ''], ['Ortiz', 'Antonio M.', ''], ['Han', 'Xiao', ''], ['Crespi', 'Noel', '']]","['Facebook', 'Akamai', 'CDN', 'Geolocation', 'AccessDelay']" 463,1707.08494,Alessandro Vittorio Papadopoulos,"Daniele Ioli, Alessandro Falsone, Alessandro Vittorio Papadopoulos, Maria Prandini","A compositional modeling framework for the optimal energy management of a district network","65 pages, 19 figures",,10.1016/j.jprocont.2017.10.005,,cs.SY,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," This paper proposes a compositional modeling framework for the optimal energy management of a district network. The focus is on cooling of buildings, which can possibly share resources to the purpose of reducing maintenance costs and using devices at their maximal efficiency. Components of the network are described in terms of energy fluxes and combined via energy balance equations. Disturbances are accounted for as well through their contribution in terms of energy. Different district configurations can be built, and the dimension and complexity of the resulting model will depend on the number and type of components and on the adopted disturbance description. Control inputs are available to efficiently operate and coordinate the district components, thus enabling energy management strategies to minimize the electrical energy costs or track some consumption profile agreed with the main grid operator. ","[{'version': 'v1', 'created': 'Wed, 26 Jul 2017 15:25:25 GMT'}]",2020-05-12,"[['Ioli', 'Daniele', ''], ['Falsone', 'Alessandro', ''], ['Papadopoulos', 'Alessandro Vittorio', ''], ['Prandini', 'Maria', '']]","['Smart grid modeling', 'Compositional systems', 'Energy']" 464,1901.07299,Faiq Khalid,"Faiq Khalid, Syed Rafay Hasan, Osman Hasan, Muhammad Shafique","SIMCom: Statistical Sniffing of Inter-Module Communications for Run-time Hardware Trojan Detection",,"Elsevier Microprocessors and Microsystems, 2020, pp. 103-122",10.1016/j.micpro.2020.103122,,cs.CR cs.LG stat.ML,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Timely detection of Hardware Trojans (HTs) has become a major challenge for secure integrated circuits. We present a run-time methodology for HT detection that employs a multi-parameter statistical traffic modeling of the communication channel in a given System-on-Chip (SoC), named as SIMCom. The main idea is to model the communication using multiple side-channel information like the Hurst exponent, the standard deviation of the injection distribution, and the hop distribution jointly to accurately identify HT-based online anomalies (that affects the communication without affecting the protocols or control signals). At design time, our methodology employs a ""property specification language"" to define and embed assertions in the RTL, specifying the correct communication behavior of a given SoC. At run-time, it monitors the anomalies in the communication behavior by checking the execution patterns against these assertions. For illustration, we evaluate SIMCom for three SoCs, i.e., SoC1 ( four single-core MC8051 and UART modules), SoC2 (four single-core MC8051, AES, ethernet, memctrl, BasicRSA, RS232 modules), and SoC3 (four single-core LEON3 connected with each other and AES, ethernet, memctrl, BasicRSA, RS23s modules microcontrollers). The experimental results show that with the combined analysis of multiple statistical parameters, SIMCom is able to detect all the benchmark Trojans (available on trust-hub) with less than 1% area and power overhead. ","[{'version': 'v1', 'created': 'Sun, 4 Nov 2018 22:21:45 GMT'}, {'version': 'v2', 'created': 'Thu, 14 May 2020 07:36:54 GMT'}, {'version': 'v3', 'created': 'Sat, 23 May 2020 20:07:16 GMT'}]",2020-05-26,"[['Khalid', 'Faiq', ''], ['Hasan', 'Syed Rafay', ''], ['Hasan', 'Osman', ''], ['Shafique', 'Muhammad', '']]","['Hardware Trojans', 'Statistical Modeling', 'Communication', 'microcontrollers', 'internet-of-thing', 'IoT', 'Hurst Exponent']" 465,1208.1918,Mahdi Aiash,"Mahdi Aiash, Glenford Mapp and Aboubaker Lasebae","A Survey on Authentication and Key Agreement Protocols in Heterogeneous Networks",,"International Journal of Network Security & Its Applications (IJNSA), Vol.4, No.4, July 2012, 199-214",10.5121/ijnsa.2012.4413,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Unlike current closed systems such as 2nd and 3rd generations where the core network is controlled by a sole network operator, multiple network operators will coexist and manage the core network in Next Generation Networks (NGNs). This open architecture and the collaboration between different network operators will support ubiquitous connectivity and thus enhances users' experience. However, this brings to the fore certain security issues which must be addressed, the most important of which is the initial Authentication and Key Agreement (AKA) to identify and authorize mobile nodes on these various networks. This paper looks at how existing research efforts the HOKEY WG, Mobile Ethernet and 3GPP frameworks respond to this new environment and provide security mechanisms. The analysis shows that most of the research had realized the openness of the core network and tried to deal with it using different methods. These methods will be extensively analysed in order to highlight their strengths and weaknesses. ","[{'version': 'v1', 'created': 'Thu, 9 Aug 2012 14:27:27 GMT'}]",2012-08-10,"[['Aiash', 'Mahdi', ''], ['Mapp', 'Glenford', ''], ['Lasebae', 'Aboubaker', '']]","['Authentication and Key Agreement Protocols', 'Casper/FDR', 'Next Generation Networks', 'Heterogeneous Networks']" 466,1806.09997,Pierre Denis Mr.,Pierre Denis,Probabilistic Inference Using Generators - The Statues Algorithm,"50 pages, incl. 3 appendices (v2: typos and minor corrections, added appendix C with proof of correctness)",,10.1007/978-3-030-52246-9_10,,cs.AI cs.MS,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We present here a new probabilistic inference algorithm that gives exact results in the domain of discrete probability distributions. This algorithm, named the Statues algorithm, calculates the marginal probability distribution on probabilistic models defined as direct acyclic graphs. These models are made up of well-defined primitives that allow to express, in particular, joint probability distributions, Bayesian networks, discrete Markov chains, conditioning and probabilistic arithmetic. The Statues algorithm relies on a variable binding mechanism based on the generator construct, a special form of coroutine; being related to the enumeration algorithm, this new algorithm brings important improvements in terms of efficiency, which makes it valuable in regard to other exact marginalization algorithms. After introduction of several definitions, primitives and compositional rules, we present in details the Statues algorithm. Then, we briefly discuss the interest of this algorithm compared to others and we present possible extensions. Finally, we introduce Lea and MicroLea, two Python libraries implementing the Statues algorithm, along with several use cases. A proof of the correctness of the algorithm is provided in appendix. ","[{'version': 'v1', 'created': 'Sun, 24 Jun 2018 23:00:29 GMT'}, {'version': 'v2', 'created': 'Thu, 2 Aug 2018 07:19:02 GMT'}]",2020-07-08,"[['Denis', 'Pierre', '']]","['probabilistic inference', 'probabilistic arithmetic', 'discrete probability distribution', 'probabilistic model', 'Bayesian network', 'marginalization', 'generator']" 467,1111.2301,Morgan Barbier,"Daniel Augot (INRIA Saclay - Ile de France, LIX), Morgan Barbier (INRIA Saclay - Ile de France, LIX), Caroline Fontaine (Lab-STICC)",Ensuring message embedding in wet paper steganography,IMACC 2011 (2011),IMACC 2011 7089 (2011) 244-258,10.1007/978-3-642-25516-8_15,,cs.CR,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," Syndrome coding has been proposed by Crandall in 1998 as a method to stealthily embed a message in a cover-medium through the use of bounded decoding. In 2005, Fridrich et al. introduced wet paper codes to improve the undetectability of the embedding by nabling the sender to lock some components of the cover-data, according to the nature of the cover-medium and the message. Unfortunately, almost all existing methods solving the bounded decoding syndrome problem with or without locked components have a non-zero probability to fail. In this paper, we introduce a randomized syndrome coding, which guarantees the embedding success with probability one. We analyze the parameters of this new scheme in the case of perfect codes. ","[{'version': 'v1', 'created': 'Wed, 9 Nov 2011 18:29:16 GMT'}]",2011-12-16,"[['Augot', 'Daniel', '', 'INRIA Saclay - Ile de France, LIX'], ['Barbier', 'Morgan', '', 'INRIA Saclay - Ile de France, LIX'], ['Fontaine', 'Caroline', '', 'Lab-STICC']]","['steganography', 'syndrome coding problem', 'wet paper codes']" 468,1710.11395,J\'er\^ome Kunegis,"J\'er\^ome Kunegis, Andreas Lommatzsch, Christian Bauckhage",The Slashdot Zoo: Mining a Social Network with Negative Edges,"10 pages, color, accepted at WWW 2009",Proc. WWW 2009,10.1145/1526709.1526809,,cs.SI physics.soc-ph,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We analyse the corpus of user relationships of the Slashdot technology news site. The data was collected from the Slashdot Zoo feature where users of the website can tag other users as friends and foes, providing positive and negative endorsements. We adapt social network analysis techniques to the problem of negative edge weights. In particular, we consider signed variants of global network characteristics such as the clustering coefficient, node-level characteristics such as centrality and popularity measures, and link-level characteristics such as distances and similarity measures. We evaluate these measures on the task of identifying unpopular users, as well as on the task of predicting the sign of links and show that the network exhibits multiplicative transitivity which allows algebraic methods based on matrix multiplication to be used. We compare our methods to traditional methods which are only suitable for positively weighted edges. ","[{'version': 'v1', 'created': 'Tue, 31 Oct 2017 10:04:05 GMT'}]",2017-11-01,"[['Kunegis', 'Jérôme', ''], ['Lommatzsch', 'Andreas', ''], ['Bauckhage', 'Christian', '']]","['Social network', 'Slashdot Zoo', 'negative edge', 'link prediction']" 469,2105.08671,Neel Kanwal,"Jiahui Geng, Neel Kanwal, Martin Gilje Jaatun, Chunming Rong","DID-eFed: Facilitating Federated Learning as a Service with Decentralized Identities",Paper accepted in EASE2021,,10.1145/3463274.3463352,,cs.CR cs.DC cs.LG,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We have entered the era of big data, and it is considered to be the ""fuel"" for the flourishing of artificial intelligence applications. The enactment of the EU General Data Protection Regulation (GDPR) raises concerns about individuals' privacy in big data. Federated learning (FL) emerges as a functional solution that can help build high-performance models shared among multiple parties while still complying with user privacy and data confidentiality requirements. Although FL has been intensively studied and used in real applications, there is still limited research related to its prospects and applications as a FLaaS (Federated Learning as a Service) to interested 3rd parties. In this paper, we present a FLaaS system: DID-eFed, where FL is facilitated by decentralized identities (DID) and a smart contract. DID enables a more flexible and credible decentralized access management in our system, while the smart contract offers a frictionless and less error-prone process. We describe particularly the scenario where our DID-eFed enables the FLaaS among hospitals and research institutions. ","[{'version': 'v1', 'created': 'Tue, 18 May 2021 16:55:34 GMT'}, {'version': 'v2', 'created': 'Wed, 19 May 2021 07:44:07 GMT'}]",2022-01-25,"[['Geng', 'Jiahui', ''], ['Kanwal', 'Neel', ''], ['Jaatun', 'Martin Gilje', ''], ['Rong', 'Chunming', '']]","['decentralized identity', 'blockchain', 'federated learning', 'FLaaS']" 470,1507.02955,Michael Walter,Christian Ikenmeyer and Ketan D. Mulmuley and Michael Walter,On vanishing of Kronecker coefficients,"43 pages, 1 figure",,10.1007/s00037-017-0158-y,,cs.CC math.RT,http://arxiv.org/licenses/nonexclusive-distrib/1.0/," We show that the problem of deciding positivity of Kronecker coefficients is NP-hard. Previously, this problem was conjectured to be in P, just as for the Littlewood-Richardson coefficients. Our result establishes in a formal way that Kronecker coefficients are more difficult than Littlewood-Richardson coefficients, unless P=NP. We also show that there exists a #P-formula for a particular subclass of Kronecker coefficients whose positivity is NP-hard to decide. This is an evidence that, despite the hardness of the positivity problem, there may well exist a positive combinatorial formula for the Kronecker coefficients. Finding such a formula is a major open problem in representation theory and algebraic combinatorics. Finally, we consider the existence of the partition triples $(\lambda, \mu, \pi)$ such that the Kronecker coefficient $k^\lambda_{\mu, \pi} = 0$ but the Kronecker coefficient $k^{l \lambda}_{l \mu, l \pi} > 0$ for some integer $l>1$. Such ""holes"" are of great interest as they witness the failure of the saturation property for the Kronecker coefficients, which is still poorly understood. Using insight from computational complexity theory, we turn our hardness proof into a positive result: We show that not only do there exist many such triples, but they can also be found efficiently. Specifically, we show that, for any $0<\epsilon\leq1$, there exists $0