Project deliverable Open Access

D3.1 Visual Analysis for Real Sensing

Marios Krestenitis; Konstantinos Ioannidis

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.7220100</identifier>
  <creators>
    <creator>
      <creatorName>Marios Krestenitis</creatorName>
    </creator>
    <creator>
      <creatorName>Konstantinos Ioannidis</creatorName>
    </creator>
  </creators>
  <titles>
    <title>D3.1 Visual Analysis for Real Sensing</title>
  </titles>
  <subjects>
    <subject>Digital Twin</subject>
    <subject>3D Representation</subject>
    <subject>Semantic Segmentation</subject>
    <subject>Object Detection</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2022-10-18</date>
  </dates>
  <resourceType resourceTypeGeneral="Text">Project deliverable</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.7220099</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;The main purpose of this document is to report all the algorithms that have been deployed for extracting features of the constructions, which comprise the baseline for the higher-level implementations.&lt;br&gt;
The report is divided into three distinct sections. First, it presents the developments made with respect to the 3D representation pipeline that were deployed and applied to demo sites #1, #4, #6 and #7, which included bridges and industrial buildings. These include tools for Structure from Motion and dense 3D point cloud generation on images captured in ASHVIN demo sites. Furthermore, a single-image 3D depth prediction pipeline is presented.&lt;br&gt;
Secondly, the approach and implementation carried out to develop an AI-based defect detection service with pixel segmentation is presented. The aim was to detect and pixel-segment different types of defects that are present in realistic inspection scenarios at demonstration site #3, which included airport operational areas. Convolutional neural network architectures were trained and validated.&lt;br&gt;
Finally, the report presents the results of the training and implementation of a state-of-the-art object detection algorithm to detect objects at construction sites for monitoring the construction progress. The implemented model, based on the YOLOv5 detector, was applied to images obtained from demo site #4 (construction of an industrial building).&lt;/p&gt;</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/100010661</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/958161/">958161</awardNumber>
      <awardTitle>Assistants for Healthy, Safe, and Productive Virtual Construction Design, Operation &amp; Maintenance using a Digital Twin</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
                  All versions   This version
Views                       30             30
Downloads                   23             23
Data volume           151.9 MB       151.9 MB
Unique views                28             28
Unique downloads            23             23
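The DataCite XML export above can be read programmatically. The following is a minimal sketch, using only Python's standard library, of pulling the DOI, title, and subject keywords out of a record like this one; the embedded `DATACITE_XML` string is an abbreviated, hypothetical copy of the record, and the namespace-stripping helper is an assumption made so the code works whether or not the export declares the DataCite namespace.

```python
import xml.etree.ElementTree as ET

# Abbreviated, hypothetical copy of the DataCite record shown above.
DATACITE_XML = """<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
  <identifier identifierType="DOI">10.5281/zenodo.7220100</identifier>
  <titles>
    <title>D3.1 Visual Analysis for Real Sensing</title>
  </titles>
  <subjects>
    <subject>Digital Twin</subject>
    <subject>Semantic Segmentation</subject>
  </subjects>
</resource>"""

def local(tag):
    """Strip any XML namespace prefix, keeping only the local element name."""
    return tag.rsplit('}', 1)[-1]

def extract_metadata(xml_text):
    """Collect the DOI, title, and subjects from a DataCite XML string."""
    root = ET.fromstring(xml_text)
    meta = {'subjects': []}
    for elem in root.iter():
        name = local(elem.tag)
        if name == 'identifier':
            meta['doi'] = (elem.text or '').strip()
        elif name == 'title':
            meta['title'] = (elem.text or '').strip()
        elif name == 'subject':
            meta['subjects'].append((elem.text or '').strip())
    return meta

meta = extract_metadata(DATACITE_XML)
print(meta['doi'])  # 10.5281/zenodo.7220100
```

Matching on local element names rather than fully qualified tags keeps the sketch tolerant of exports that omit or change the `xmlns` declaration, at the cost of ignoring namespace distinctions a strict consumer might care about.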