Conference paper Open Access

Artificial Neural Networks: the missing link between curiosity and accuracy

Franchini, Giorgia; Zanni, Luca; Burgio, Paolo

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="" xmlns="" xsi:schemaLocation="">
  <identifier identifierType="URL"></identifier>
    <creators>
      <creator>
        <creatorName>Franchini, Giorgia</creatorName>
        <affiliation>University of Modena and Reggio Emilia</affiliation>
      </creator>
      <creator>
        <creatorName>Zanni, Luca</creatorName>
        <affiliation>University of Modena and Reggio Emilia</affiliation>
      </creator>
      <creator>
        <creatorName>Burgio, Paolo</creatorName>
        <affiliation>University of Modena and Reggio Emilia</affiliation>
      </creator>
    </creators>
    <titles>
      <title>Artificial Neural Networks: the missing link between curiosity and accuracy</title>
    </titles>
    <subjects>
      <subject>artificial neural network</subject>
      <subject>stochastic gradient</subject>
      <subject>mini-batch size increasing</subject>
    </subjects>
    <dates>
      <date dateType="Issued">2019-08-19</date>
    </dates>
    <resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
    <alternateIdentifiers>
      <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
    </alternateIdentifiers>
    <relatedIdentifiers>
      <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1007/978-3-030-16660-1_100</relatedIdentifier>
    </relatedIdentifiers>
    <rightsList>
      <rights rightsURI="">Creative Commons Attribution 4.0 International</rights>
      <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
    </rightsList>
    <descriptions>
      <description descriptionType="Abstract">&lt;p&gt;Artificial Neural Networks, as the name itself suggests, are biologically inspired algorithms designed to simulate the way in which the human brain processes information. Like neurons, which consist of a cell nucleus that receives input from other neurons through a web of input terminals, an Artificial Neural Network includes hundreds of single units, artificial neurons or processing elements, connected by coefficients (weights) and organized in layers. The power of neural computation comes from connecting neurons in a network: an Artificial Neural Network can process many pieces of information at the same time. What is not fully understood is the most efficient way to train an Artificial Neural Network, and in particular which mini-batch size maximizes accuracy while minimizing training time. The idea developed in this study has its roots in the biological world, which inspired the creation of Artificial Neural Networks in the first place.&lt;/p&gt;

&lt;p&gt;Humans have altered the face of the world through extraordinary adaptive and technological advances: those changes were made possible by our cognitive structure, particularly the ability to reason and build causal models of external events. This dynamism is made possible by a high degree of curiosity. In the biological world, and especially in human beings, curiosity arises from the constant search for knowledge and information: behaviours that support this information-sampling mechanism range from the very small (an initial mini-batch size) to the very elaborate and sustained (an increasing mini-batch size).&lt;/p&gt;

&lt;p&gt;The goal of this project is to train an Artificial Neural Network by increasing the mini-batch size dynamically, in an adaptive manner (driven by a validation set); our hypothesis is that this training method will be more efficient (in terms of time and costs) than the ones implemented so far.&lt;/p&gt;</description>
    </descriptions>
    <fundingReferences>
      <fundingReference>
        <funderName>European Commission</funderName>
        <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
        <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/780622/">780622</awardNumber>
        <awardTitle>Edge and CLoud Computation: A Highly Distributed Software Architecture for Big Data AnalyticS</awardTitle>
      </fundingReference>
    </fundingReferences>
</resource>
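The abstract's training scheme — start with a small mini-batch and grow it adaptively when the validation set shows no further progress — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the dataset (synthetic linear regression), the model, the learning rate, and the plateau-triggered doubling rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative stand-in for a real dataset).
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=1000)

X_train, y_train = X[:800], y[:800]
X_val, y_val = X[800:], y[800:]          # validation set drives batch growth

def val_loss(w):
    """Mean squared error on the held-out validation set."""
    return float(np.mean((X_val @ w - y_val) ** 2))

w = np.zeros(5)
batch_size = 8                            # small initial mini-batch
lr = 0.05
best = val_loss(w)
history = [batch_size]                    # record how the batch size evolves

for epoch in range(50):
    idx = rng.permutation(len(X_train))
    for start in range(0, len(X_train), batch_size):
        b = idx[start:start + batch_size]
        # Stochastic gradient of the mini-batch MSE.
        grad = 2 * X_train[b].T @ (X_train[b] @ w - y_train[b]) / len(b)
        w -= lr * grad
    current = val_loss(w)
    if current > best * 0.999:            # validation loss plateaued:
        batch_size = min(batch_size * 2, len(X_train))  # grow the mini-batch
    best = min(best, current)
    history.append(batch_size)
```

Early epochs run many noisy small-batch updates (cheap exploration); once validation loss stalls, doubling the mini-batch reduces gradient noise for the later, finer-grained phase of training, which is the intuition the abstract describes.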