Geospatial Big Data and Cloud Computing
Description
Geospatial data is getting bigger and more difficult to analyse. Satellites, drones, vehicles, social networks, mobile devices, cameras, etc. generate vast amounts of (open) geospatial data. Numerous methods and (open-source) applications have been developed to enable the discovery, delivery, analysis, and visualization of geospatial data. However, large and complex geospatial data sets are difficult to handle with conventional systems and methods. Data processing and analysis tasks are time-consuming, and sometimes not feasible at all, when performed on laptops or local workstations.
Solutions require expert know-how and infrastructure. Local and regional studies with medium-sized data can be completed faster through parallel computing on a workstation. Machine learning and AI studies with medium-sized data require specialized processing units (e.g., GPUs or TPUs) because of their computational complexity. National, continental, and global studies with big data require distributed computing on a computing cluster because of the computational complexity and/or the sheer volume of the data.
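As a minimal sketch of the first tier (workstation-level parallelism), the snippet below spreads a per-tile computation across the cores of a single machine using only Python's standard library; the tile file names and the per-tile function are hypothetical placeholders, not part of the lecture material.

# Sketch: parallel processing of geospatial tiles on a multi-core workstation.
# The tile list and the per-tile computation are hypothetical placeholders.
from concurrent.futures import ProcessPoolExecutor

def process_tile(path):
    # Stand-in for a real per-tile analysis (e.g., computing a vegetation
    # index over a raster tile); here it just returns the path length.
    return path, len(path)

tiles = [f"tile_{i}.tif" for i in range(100)]  # hypothetical tile files

if __name__ == "__main__":
    # Each tile is handled by a separate worker process, so independent
    # tiles are processed in parallel across all available CPU cores.
    with ProcessPoolExecutor() as pool:
        for path, result in pool.map(process_tile, tiles):
            print(path, result)

Because the tiles are independent, the speed-up scales roughly with the number of cores, which is exactly what makes this tier adequate for local and regional studies but insufficient for continental or global ones.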
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Computing is moving to the cloud, and so is geocomputing. Developments in infrastructure, both hardware and software, have given a major boost to data processing and analysis capabilities. Scalable and affordable computing is available both through open-source systems that enable computing clusters on commodity hardware and through proprietary cloud-based data storage and computing services. However, choosing the right solution(s) for a given kind of geospatial data and analysis need is challenging, and adopting these solutions usually requires a transition in modus operandi.
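To give a flavour of that transition, the sketch below uses the open-source Dask library, where the same chunked computation runs locally by default and on a distributed cluster simply by pointing the client at a remote scheduler; the array shape, chunk size, and scheduler address are illustrative assumptions, not taken from the lecture.

# Sketch: one chunked array computation that scales from a laptop to a
# cluster. Shapes, chunk sizes, and the scheduler address are assumptions.
import dask.array as da
from dask.distributed import Client

# With no arguments, Client() starts a local scheduler and workers;
# passing an address such as Client("tcp://scheduler:8786") would run
# the same code on a remote cluster instead.
client = Client()

# A large "raster" split into chunks that are processed in parallel,
# without ever materializing the full array in memory.
raster = da.random.random((20_000, 20_000), chunks=(2_000, 2_000))
mean_value = raster.mean().compute()  # evaluated lazily, chunk by chunk
print(mean_value)

client.close()

The point of the sketch is that the analysis code itself does not change between the laptop and the cluster; only the scheduler the client connects to does, which is one concrete form the change in modus operandi takes.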
Files
20230223-IDEAMAP-Sudan-Public-Lecture-Cloud-Computing-Big-Data.pdf (2.7 MB, md5:03612a86dee89b347d53fa5725cb42c4)