Report Open Access
Testing the Plasticity of Reinforcement Learning Based Systems

Tonella, Paolo; Biagiola, Matteo (Università della Svizzera italiana)

Published: 2022-02-09
DOI: 10.5281/zenodo.6355271 (version of 10.5281/zenodo.6026648)
License: Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/legalcode)
Grant: Self-assessment Oracles for Anticipatory Testing (787703)
File: TR-Precrime-2022-01.pdf (3.0 MB), https://zenodo.org/record/6355271/files/TR-Precrime-2022-01.pdf

Abstract:

The data set available for pre-release training of a machine learning based system is often not representative of all the execution contexts that the system will encounter in the field. Reinforcement Learning (RL) is a prominent approach among those that support continual learning, i.e., learning that continues in the field during the post-release phase. No study has so far investigated methods to test the plasticity of RL based systems, i.e., their capability to adapt to an execution context that may deviate from the training one.

We propose an approach to test the plasticity of RL based systems. The output of our approach is a quantification of the system's adaptation and anti-regression capabilities, obtained by computing its adaptation frontier in a changed environment. We visualize this frontier as an adaptation/anti-regression heatmap in two dimensions, or as a clustered projection when more than two dimensions are involved. In this way, we give developers information on the amount of change that the continual learning component of the system can accommodate, which is key to deciding whether online, in-the-field learning can be safely enabled.
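To make the procedure sketched in the abstract concrete, below is a minimal, hypothetical illustration of a two-dimensional adaptation/anti-regression heatmap. The `evaluate` stub, the change parameters `a` and `b`, and the 0.5 frontier threshold are all illustrative assumptions, not the report's implementation; in the actual approach, this step would let the continually learning RL agent train in the changed environment and measure its success there (adaptation) and back in the original environment (anti-regression).

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for the expensive step of the approach: let the
# RL agent learn continually in an environment changed along two
# dimensions (a, b), then return its success rate in the changed
# environment (adaptation) and back in the original one (anti-regression).
# The logistic curves below merely simulate plausible scores.
def evaluate(a, b):
    change = np.hypot(a, b)  # magnitude of the environment change
    adaptation = 1.0 / (1.0 + np.exp(5.0 * (change - 1.0)))
    anti_regression = 1.0 / (1.0 + np.exp(5.0 * (change - 1.5)))
    return adaptation, anti_regression

# Sweep both change dimensions over a grid.
a_vals = np.linspace(0.0, 2.0, 41)
b_vals = np.linspace(0.0, 2.0, 41)
score = np.zeros((len(b_vals), len(a_vals)))
for i, b in enumerate(b_vals):
    for j, a in enumerate(a_vals):
        adaptation, anti_regression = evaluate(a, b)
        # Combine conservatively: a point counts as safe only if the
        # agent both adapts and does not regress.
        score[i, j] = min(adaptation, anti_regression)

# Heatmap of the combined score; the adaptation frontier is drawn as the
# contour where the score crosses the (assumed) 0.5 threshold.
fig, ax = plt.subplots()
im = ax.imshow(score, origin="lower", cmap="RdYlGn", vmin=0.0, vmax=1.0,
               extent=[a_vals[0], a_vals[-1], b_vals[0], b_vals[-1]])
ax.contour(a_vals, b_vals, score, levels=[0.5], colors="black")
ax.set_xlabel("environment change along dimension a")
ax.set_ylabel("environment change along dimension b")
fig.colorbar(im, label="min(adaptation, anti-regression)")
plt.show()
```

For more than two change dimensions, the abstract indicates that the swept points are instead projected into a clustered two-dimensional view rather than a single heatmap.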
| | All versions | This version |
|---|---|---|
| Views | 89 | 57 |
| Downloads | 141 | 54 |
| Data volume | 427.8 MB | 164.1 MB |
| Unique views | 77 | 50 |
| Unique downloads | 130 | 47 |