Conference paper (Open Access)

Unsupervised Detection of Cancerous Regions in Histology Imagery using Image-to-Image Translation

Stepec, D.; Skocaj, D.


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.5158712</identifier>
  <creators>
    <creator>
      <creatorName>Stepec, D.</creatorName>
      <givenName>D.</givenName>
      <familyName>Stepec</familyName>
      <affiliation>University of Ljubljana, Faculty of Computer and Information Science, Vecna pot 113, 1000 Ljubljana, Slovenia</affiliation>
    </creator>
    <creator>
      <creatorName>Skocaj, D.</creatorName>
      <givenName>D.</givenName>
      <familyName>Skocaj</familyName>
      <affiliation>University of Ljubljana, Faculty of Computer and Information Science, Vecna pot 113, 1000 Ljubljana, Slovenia</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Unsupervised Detection of Cancerous Regions in Histology Imagery using Image-to-Image Translation</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2021</publicationYear>
  <dates>
    <date dateType="Issued">2021-04-29</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5158712</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.5158711</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/ipc</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Detection of visual anomalies refers to the problem of finding patterns in imaging data that do not conform to the expected visual appearance, and it is a widely studied problem across different domains. Due to the nature of anomaly occurrences and their underlying generating processes, anomalies are hard to characterize and labeled data is hard to obtain. Obtaining labeled data is especially difficult in biomedical applications, where only trained domain experts can provide labels, and where anomalies exhibit large diversity and complexity. Recently presented approaches for unsupervised detection of visual anomalies omit the need for labeled data and demonstrate promising results in domains where anomalous samples significantly deviate from the normal appearance. Despite these promising results, the performance of such approaches still lags behind that of supervised approaches and does not provide a one-size-fits-all solution. In this work, we present an image-to-image translation-based framework that significantly surpasses the performance of existing unsupervised methods and approaches the performance of supervised methods in the challenging domain of cancerous region detection in histology imagery.&lt;/p&gt;</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/826121/">826121</awardNumber>
      <awardTitle>individualizedPaediatricCure: Cloud-based virtual-patient models for precision paediatric oncology</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
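
The export above can also be consumed programmatically. Below is a minimal Python sketch using the standard library's xml.etree.ElementTree to pull the key fields out of the kernel-4 record; the local filename datacite.xml is an assumption, while the namespace URI and field names are taken verbatim from the export.

# Minimal sketch: extract key fields from the DataCite kernel-4 export above.
# Assumes the XML has been saved locally as "datacite.xml" (hypothetical filename).
import xml.etree.ElementTree as ET

NS = {"dc": "http://datacite.org/schema/kernel-4"}

root = ET.parse("datacite.xml").getroot()

doi = root.findtext("dc:identifier", namespaces=NS)
title = root.findtext("dc:titles/dc:title", namespaces=NS)
year = root.findtext("dc:publicationYear", namespaces=NS)
creators = [
    c.findtext("dc:creatorName", namespaces=NS)
    for c in root.findall("dc:creators/dc:creator", namespaces=NS)
]
abstract = root.findtext(
    "dc:descriptions/dc:description[@descriptionType='Abstract']", namespaces=NS
)

# e.g. "Stepec, D.; Skocaj, D. (2021). Unsupervised Detection of ... DOI: 10.5281/zenodo.5158712"
print(f"{'; '.join(creators)} ({year}). {title}. DOI: {doi}")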
Statistics
                   All versions   This version
Views              18             18
Downloads          11             11
Data volume        95.5 MB        95.5 MB
Unique views       18             18
Unique downloads   11             11
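
A note on the method itself: the abstract describes it only at a high level. As an illustrative reading only, and not the authors' published implementation, image-to-image translation-based anomaly detection is commonly realized by translating an input patch into its expected normal appearance and scoring the residual between the input and its translation. In the sketch below, the generator is a stand-in placeholder and the decision threshold is hypothetical.

# Illustrative sketch of residual-based anomaly scoring with an
# image-to-image translation model. The generator below is a stand-in;
# the paper's actual architecture and training are NOT reproduced here.
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Placeholder generator mapping a histology patch to its
    expected 'normal' (healthy-looking) reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def anomaly_map(generator: nn.Module, patch: torch.Tensor) -> torch.Tensor:
    """Per-pixel anomaly score: deviation between the input patch and
    its translation to the normal-appearance domain."""
    with torch.no_grad():
        reconstruction = generator(patch)
    return (patch - reconstruction).abs().mean(dim=1)  # average over RGB channels

# Usage on a random 256x256 RGB patch (values in [0, 1]):
g = TinyTranslator().eval()
patch = torch.rand(1, 3, 256, 256)
scores = anomaly_map(g, patch)  # shape: (1, 256, 256)
mask = scores > 0.5             # hypothetical threshold for "cancerous region"
print(scores.shape, mask.float().mean().item())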
