Dataset | Open Access

Juan J. Bosch; Ferdinand Fuhrmann; Perfecto Herrera
<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nmm##2200000uu#4500</leader>
  <controlfield tag="005">20200124192601.0</controlfield>
  <controlfield tag="001">1290750</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">08/10/2012</subfield>
    <subfield code="g">ISMIR 2012</subfield>
    <subfield code="a">13th International Society for Music Information Retrieval Conference</subfield>
    <subfield code="c">Porto, Portugal</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain</subfield>
    <subfield code="a">Ferdinand Fuhrmann</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain</subfield>
    <subfield code="0">(orcid)0000-0003-2799-7675</subfield>
    <subfield code="a">Perfecto Herrera</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">2282264831</subfield>
    <subfield code="z">md5:5a2e65520dcedada565dff2050bb2a56</subfield>
    <subfield code="u">https://zenodo.org/record/1290750/files/IRMAS-TestingData-Part1.zip</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">3387310839</subfield>
    <subfield code="z">md5:afb0c8ea92f34ee653693106be95c895</subfield>
    <subfield code="u">https://zenodo.org/record/1290750/files/IRMAS-TestingData-Part2.zip</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">2132712716</subfield>
    <subfield code="z">md5:9b3fb2d0c89cdc98037121c25bd5b556</subfield>
    <subfield code="u">https://zenodo.org/record/1290750/files/IRMAS-TestingData-Part3.zip</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">3181049879</subfield>
    <subfield code="z">md5:4fd9f5ed5a18d8e2687e6360b5f60afe</subfield>
    <subfield code="u">https://zenodo.org/record/1290750/files/IRMAS-TrainingData.zip</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2014-09-08</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire_data</subfield>
    <subfield code="p">user-mdm-dtic-upf</subfield>
    <subfield code="p">user-mtgupf</subfield>
    <subfield code="o">oai:zenodo.org:1290750</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain</subfield>
    <subfield code="0">(orcid)0000-0003-4221-3517</subfield>
    <subfield code="a">Juan J. Bosch</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">IRMAS: a dataset for instrument recognition in musical audio signals</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-mdm-dtic-upf</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-mtgupf</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution Non Commercial Share Alike 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a"><p>This dataset includes musical audio excerpts with annotations of the predominant instrument(s) present. It was used for the evaluation in the following article:</p>
      <blockquote><p>Bosch, J. J., Janer, J., Fuhrmann, F., &amp; Herrera, P. &ldquo;<a href="http://ismir2012.ismir.net/event/papers/559_ISMIR_2012.pdf">A Comparison of Sound Segregation Techniques for Predominant Instrument Recognition in Musical Audio Signals</a>&rdquo;, in Proc. ISMIR (pp. 559-564), 2012</p></blockquote>
      <p><strong>Please acknowledge IRMAS in academic research</strong></p>
      <p>IRMAS is intended to be used for training and testing methods for the automatic recognition of predominant instruments in musical audio. The instruments considered are: cello, clarinet, flute, acoustic guitar, electric guitar, organ, piano, saxophone, trumpet, violin, and human singing voice. This dataset is derived from the one compiled by Ferdinand Fuhrmann in his <a href="http://www.dtic.upf.edu/~ffuhrmann/PhD/">PhD thesis</a>, with the differences that we provide the audio in stereo, the annotations in the testing dataset are limited to specific pitched instruments, and the number and length of excerpts differ.</p>
      <p><strong>Using this dataset</strong></p>
      <p>When IRMAS is used for academic research, we would highly appreciate it if scientific publications of works partly based on the IRMAS dataset cite the above publication.</p>
      <p>We are interested in knowing if you find our datasets useful! If you use our dataset, please email us at <a href="mailto:mtg-info@upf.edu">mtg-info@upf.edu</a> and tell us about your research.</p>
      <p><a href="https://www.upf.edu/web/mtg/irmas">https://www.upf.edu/web/mtg/irmas</a></p></subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.1290749</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.1290750</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">dataset</subfield>
  </datafield>
</record>
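The record above publishes an MD5 checksum (subfield `z`) alongside each archive's download URL (subfield `u`) and size in bytes (subfield `s`). Since the archives are multi-gigabyte, it is worth verifying downloads before use. A minimal sketch, assuming the files have already been downloaded to the current directory; the file names and checksums are copied from the metadata, while the helper functions themselves are illustrative and not part of any official IRMAS tooling:

```python
import hashlib

# MD5 checksums from the 856 datafields of the Zenodo record.
IRMAS_FILES = {
    "IRMAS-TrainingData.zip": "4fd9f5ed5a18d8e2687e6360b5f60afe",
    "IRMAS-TestingData-Part1.zip": "5a2e65520dcedada565dff2050bb2a56",
    "IRMAS-TestingData-Part2.zip": "afb0c8ea92f34ee653693106be95c895",
    "IRMAS-TestingData-Part3.zip": "9b3fb2d0c89cdc98037121c25bd5b556",
}

def md5_of(path, chunk_size=1 << 20):
    """Compute a file's MD5 in 1 MiB chunks so large archives never
    need to be held in memory at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, name):
    """Return True if the file at `path` matches the published
    checksum for archive `name`."""
    return md5_of(path) == IRMAS_FILES[name]
```

Streaming the file through the hash (rather than reading it whole) keeps memory use constant regardless of archive size, which matters here given that the training archive alone is roughly 3 GB.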
| | All versions | This version |
|---|---|---|
| Views | 10,069 | 10,064 |
| Downloads | 24,777 | 24,776 |
| Data volume | 68.5 TB | 68.5 TB |
| Unique views | 8,860 | 8,855 |
| Unique downloads | 7,097 | 7,096 |