Adapting our Traditional Geochemistry Workflows to Cope with a Data Tsunami
Description
Geochemical datasets used in exploration and orebody modelling are growing at a rapid rate, yet the industry has commonly still been interpreting them using manual, somewhat subjective and outdated processes. Rio Tinto Exploration experienced its first tsunami of data through the discovery and delineation of the Winu Cu-Au deposit, in part because the vast majority of RTX samples now carry >200 channels of information (geochemistry, spectral, petrophysical, geological metadata). The desire to process these data in compressed timeframes, with increased value realisation, and with a workforce of varied backgrounds and experience, forced changes in data handling and interpretation practices. Examples of step changes in interpretation methods include automated chemostratigraphy/lithology prediction and supervised labelling of core imagery to consistently identify textures of interest. These products give Rio Tinto an objective deposit-scale geological and sulphide-vein classification model, which has had an impact on the resource modelling process and our understanding of the orebody. Close collaboration with service providers has been critical in achieving these objectives. Can the broader geochemical and data science community, coupled with increased internal capability in Rio Tinto, provide further assistance on the pathway to generating deeper, objective, real-time insights from our geochemical data?
Files
AEGC_2023_ID081.pdf (2.8 MB)
md5:ee2c0dd643173368efe46ab16e674c29