The future of mineral resource estimation: expert Q&A
On 24-25 May, AusIMM is hosting the inaugural Mineral Resource Estimation Conference, which aims to showcase international excellence and leading best practice in resource estimation.
In the lead up to the conference, we spoke to Conference Chair Rene Sterk FAusIMM(CP) and Committee Member Scott Dunham FAusIMM to find out more about the event and this critical area of professional practice.
Why is now the right time to be talking about mineral resource estimation?
Scott Dunham (SD): A lot of industries are seeing fairly rapid technological change, and resource estimation is no exception. The question is: is the effect going to be for better or worse? The conference will explore a range of issues: it will review the things we’ve always done that may be changing, and things we’ve always done that may not have been the best, and consider how we can take advantage of new technologies to improve mineral resource estimation going forward.
Rene Sterk (RS): I don’t think we’ve ever really brought mineral resource estimators together in one room. It’s so important because mineral resource estimation is really that point on the map where you get the first look at what you have, and where a lot of very important decisions are made. The conference presents a great opportunity to discuss what we’re doing as a community, and see if there are better ways of doing things.
What are the key challenges being faced by mineral resource estimators currently, and how will the conference address these challenges?
SD: We’re seeing demand for new and different types of minerals, as well as a raft of new technologies. For many of these new minerals, we don’t have estimation examples and we don’t yet have widespread knowledge. Alongside the disruption we’re seeing around new technologies, the primary products that feed into those technologies are also being disrupted. So how can we be sure that the practices we’ve used in the past are still fit for purpose going forward? With the spread of commodities growing, the conference will address this industry challenge.
RS: This is what conferences are for: a place where the world’s best practitioners can come together to discuss these challenges, and where new entrants to the industry can learn and participate in the discussions.
What themes are covered in the accepted abstracts for the conference?
RS: We have a lot of papers on drilling optimisation, that is, the economics of planning resource drilling. There’s also a theme that’s been developing over a number of years now: the quantification of geological uncertainty. And of course, we have software and solutions as a theme, because we do need these techniques in our toolkit.
SD: I think that one of the things I’ve noticed with the abstracts is that some people are doing fairly traditional types of estimates, whereas others are trying novel and new approaches to estimation. But one of the things that sits under most of those papers is a willingness to show what they’re doing and how they’re doing it, and to share that information with the rest of the industry, which I think is great. People can then start to learn from the experience of others. I think the practical implementation of this stuff is critical to the industry going forward, and that’s what this conference should help people with.
As part of the conference, the Parker Challenge is calling on mineral resource estimators to create a classified model from the same base dataset. What is the conference committee hoping to achieve by hosting this challenge?
SD: One of the things that plagues the resource estimation discipline is that we all work independently, each on individual deposits, to produce our estimates. And we all understand that it is difficult to determine how an estimate will perform.
One of the things that’s never been measured is: if you gave that same data to a wide group of people and asked them to come up with their estimate, how different would each one be? How much of what’s involved in estimation is the noise between people, as opposed to the noise of the data?
We assume a resource estimate produces a single number, and that if I gave the same data to another estimator they’d come up with exactly the same number; yet we also know that’s not right. So what is the variance going to be? Plus or minus 10 per cent? 20 per cent? 100 per cent? The Parker Challenge gives us the opportunity to measure this variance, which we need to understand when it comes to classification, to risk, and to just about every part of what we do. So I’m really excited by the Parker Challenge. I think it’s a great opportunity to take a big step forward for the industry.
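The kind of estimator-to-estimator spread described here is often summarised as a relative standard deviation. A minimal sketch, using entirely invented numbers (no real Parker Challenge submissions exist in this article):

```python
# Hypothetical sketch: quantifying the spread between independent
# resource estimates of the same deposit. All values are invented
# for illustration only.
import statistics

# Hypothetical contained-metal estimates (kt) from five estimators
# who were each given the same base dataset.
estimates = [112.0, 98.0, 131.0, 104.0, 120.0]

mean = statistics.mean(estimates)
stdev = statistics.stdev(estimates)
spread_pct = 100 * stdev / mean  # coefficient of variation, in per cent

print(f"mean estimate: {mean:.1f} kt")
print(f"estimator-to-estimator spread: +/- {spread_pct:.0f}%")
```

A spread of this kind, computed across many real submissions, is one way the "plus or minus 10, 20 or 100 per cent" question could be answered empirically.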
RS: As Scott mentioned, every deposit is different, and every deposit gets estimated by a different person, so it’s incredibly difficult to look at any of these variables and find any standardisation. It will be interesting to see the spread, and to see how different people interpret the geology differently. Then there is the actual treatment of the input data: what are different people doing with the quantification of risk and with classification? The results will be fascinating! The submissions are looking very strong at the moment, so we’re looking forward to presenting it all on stage in the final session on Day 2.
I’d also like to thank Rio Tinto for their generous support of this challenge and supplying the base dataset.
In years to come, what do you expect will be the industry benefits of the Parker Challenge?
RS: What excites me about the industry right now is that more unity is starting to develop. Secrecy about data no longer drives all decision making; we’re sharing more between us as practitioners. If we continue this with different deposits over the years to come, and then reconcile our models against production data or improved models, there will be a shift in how we view models and reconciliation. It will improve how we do things, and should have a profound impact if there is continuity in this process.
SD: If you relate this to the AI industry, a lot of the big leaps forward happened around challenges, for example when a dataset was made publicly available and people were challenged to produce the best image classification they could. Things would progress at a fairly steady rate, then somebody would have a new idea and it would catapult the field forward. I suspect we have the same type of opportunity with the Parker Challenge, where we will see these step changes occurring.
Hopefully the Parker Challenge becomes an annual or a biannual event where we estimate a new deposit and build on our learnings.