The aim of this work is to develop an iterative, continuously improving algorithm for the selection of optimal coal blends based on cross-disciplinary geometallurgical (geomet) data.
Earlier, such models were developed as linear blending models, but these are not suitable in today's scenario because coal and geomet properties do not follow a perfectly linear relationship. Coal blending refers to the process of mixing or combining different coal grades, mined from different seam locations, to achieve the desired quality attributes for salable products. A minimal sketch of the classical linear blending assumption that this work argues against is given below.
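The following sketch shows the mass-weighted (linear) blending rule in Python; the seam qualities and blend fractions are illustrative placeholders, not values from the study.

```python
import numpy as np

# Illustrative qualities for three hypothetical seams (not from the study):
# columns are ash (%), moisture (%), calorific value (kcal/kg).
seam_qualities = np.array([
    [18.0, 8.0, 5200.0],
    [24.0, 10.0, 4700.0],
    [30.0, 12.0, 4100.0],
])
blend_fractions = np.array([0.5, 0.3, 0.2])  # mass fractions, must sum to 1

# Classical linear blending: each quality of the blend is assumed to be the
# mass-weighted average of the component qualities.
blend = blend_fractions @ seam_qualities
print(dict(zip(["ash", "moisture", "cv"], blend.round(1))))
```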
Geostatistics and conditional simulation (which provides multiple spatial grade-variation outcomes for each seam) were applied to measure and quantify the spatial variation of coal qualities (grade), namely ash, moisture, and calorific value, for each coal seam. Dummy data were prepared for classifying products, with the product class as the target variable and these coal qualities as the input features of the model.
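As a rough illustration of how multiple, equally probable realizations of a spatially correlated quality can be generated, the sketch below draws Gaussian realizations of ash along a 1-D transect using an exponential covariance model and a Cholesky factorization. A production workflow would instead use conditional simulation (e.g. sequential Gaussian simulation) honoring the drill-hole data; the geometry, covariance parameters, and ash statistics here are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1-D transect of 100 blocks at 10 m spacing (illustrative geometry).
x = np.arange(100) * 10.0
h = np.abs(x[:, None] - x[None, :])           # pairwise lag distances

# Exponential covariance; sill and range are assumed, not fitted values.
sill, range_m = 1.0, 150.0
C = sill * np.exp(-3.0 * h / range_m)

# The Cholesky factor turns uncorrelated normals into spatially correlated ones.
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))

n_real = 50                                    # number of realizations
z = L @ rng.standard_normal((len(x), n_real))  # standard-normal field values

# Back-transform to an assumed ash distribution (mean 22%, sd 4%).
ash_realizations = 22.0 + 4.0 * z
print(ash_realizations.shape)                  # (100, 50): blocks x realizations
```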
Mine optimization was performed based on the geostatistics and conditional simulation, which resulted in improved classification of coal grade versus waste at the roof and floor of each seam, improving reserve estimation.
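One common way simulation output supports grade/waste classification is to label a block as coal only when a sufficient proportion of realizations meet the quality cutoff. The sketch below applies this idea to a stand-in array of ash realizations (such as the output of the simulation sketched above); the cutoff and probability threshold are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for simulated ash realizations: 100 blocks x 50 realizations.
ash_realizations = 22.0 + 4.0 * rng.standard_normal((100, 50))

ash_cutoff = 25.0        # hypothetical cutoff: above this, a block is waste
prob_threshold = 0.8     # required confidence across realizations

p_coal = (ash_realizations < ash_cutoff).mean(axis=1)  # per-block probability
is_coal = p_coal >= prob_threshold
print(f"{is_coal.sum()} of {is_coal.size} blocks classified as coal")
```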
Plant optimization: geometallurgical variables are not necessarily linear or additive, and therefore require carefully designed geostatistical and simulation scenarios to model the non-linear variation in coal qualities, mineralogy, and textures, and to classify coal versus waste correctly.
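As a toy illustration of non-additivity, some quality attributes blend closer to a harmonic (reciprocal-weighted) mean than to an arithmetic one. The sketch below compares the two rules for an assumed grindability-like attribute; the specific blending rule and values are illustrative, not the paper's model.

```python
import numpy as np

# Hypothetical grindability-like quality for two coals, with mass fractions.
quality = np.array([45.0, 70.0])
fractions = np.array([0.6, 0.4])

linear_blend = fractions @ quality                    # additive assumption
harmonic_blend = 1.0 / (fractions @ (1.0 / quality))  # one non-additive rule

print(f"linear: {linear_blend:.1f}, harmonic: {harmonic_blend:.1f}")
# The two rules disagree (55.0 vs ~52.5), so a model assuming additivity
# would misestimate the blended quality; hence the need for non-linear models.
```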
XGBoost (Extreme Gradient Boosting) belongs to the family of boosting algorithms and uses the gradient boosting machine (GBM) framework at its core. To develop XGBoost models for predicting product responses, 80% of the records in the dataset were randomly assigned to the training set, and the remaining data were used for the model's testing phase. XGBoost parameters fall into four main categories: general, booster, learning-task, and command-line parameters. General parameters govern the overall functioning of the XGBoost model and include booster, verbosity, and nthread. For the booster parameter, gbtree, gblinear, or dart can be selected. The verbosity of printed messages can be 0 (silent), 1 (warning), 2 (info), or 3 (debug). After the tuning process (a trial-and-error procedure), the optimal model settings were selected and applied for training. The findings indicated that the XGBoost model provided significantly higher accuracy for predicting the classification responses than the other models.
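A minimal sketch of the described workflow, assuming the xgboost and scikit-learn Python packages: the feature columns, dummy class labels, and hyperparameter values below are placeholders, since the tuned settings are not reported here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Placeholder dataset: ash (%), moisture (%), calorific value (kcal/kg)
# as input features, and a product class (0/1/2) as the target variable.
X = np.column_stack([
    rng.uniform(10, 35, 1000),      # ash
    rng.uniform(5, 15, 1000),       # moisture
    rng.uniform(3500, 6500, 1000),  # calorific value
])
y = rng.integers(0, 3, 1000)        # dummy product classes

# 80/20 random split, as described in the text.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Hyperparameter values are illustrative, not the tuned ones from the study.
model = XGBClassifier(
    booster="gbtree",   # booster parameter: gbtree, gblinear, or dart
    verbosity=1,        # 0 silent, 1 warning, 2 info, 3 debug
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```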