
Feature Selection 2: Shadow Variable Search

'The idea is to add permuted copies of the original features to the data set. These permuted copies are called shadow variables or pseudovariables, and the permutation breaks any relationship with the target variable, making them useless for prediction. The subsequent search is similar to the sequential forward selection algorithm, where one new feature is added in each iteration. This new feature is selected as the one that improves the performance of the model the most. This selection is computationally expensive, as one model has to be trained for each feature not yet included. The difference between shadow variable search and sequential forward selection is that the former uses the selection of a shadow variable as the termination criterion. Selecting a shadow variable means that the best improvement is achieved by adding a feature that is unrelated to the target variable. Consequently, the variables not yet selected are most likely also correlated with the target variable only by chance. Therefore, only the previously selected features have a true influence on the target variable.'
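As a minimal illustration of the core idea (plain base R, independent of mlr3): permuting a feature preserves its marginal distribution but destroys its association with the target, which is exactly why a shadow variable is useless for prediction.

```r
set.seed(1)

# A feature that is strongly related to the target ...
x = rnorm(1000)
y = 2 * x + rnorm(1000, sd = 0.1)

# ... and its shadow copy: the same values in random order.
x_shadow = sample(x)

# The permutation keeps the marginal distribution (same mean, same sd) ...
all.equal(mean(x), mean(x_shadow))  # TRUE
all.equal(sd(x), sd(x_shadow))      # TRUE

# ... but breaks the relationship with the target.
cor(x, y)         # close to 1
cor(x_shadow, y)  # close to 0
```

If the search's best candidate in some iteration is a variable like `x_shadow`, any further gain is attributable to chance, so the algorithm stops.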

  library(mlr3verse)

  # The optimizer has no control parameters.
  optimizer = fs("shadow_variable_search")
  task = tsk("pima")

  # The data set contains missing values.
  task$missings()

  # Impute the missing values; a logistic regression serves as the learner.
  learner = po("imputehist") %>>% lrn("classif.log_reg", predict_type = "prob")

  instance = fsi(
    task = task,
    learner = learner,
    resampling = rsmp("cv", folds = 3),
    measures = msr("classif.auc"),
    terminator = trm("none") # The shadow variable search algorithm terminates by itself.
  )

  optimizer$optimize(instance)

 

  library(data.table)
  library(ggplot2)
  library(mlr3misc)
  library(viridisLite)

  # Keep the best-scoring configuration of each batch (one bar per iteration).
  data = as.data.table(instance$archive)[order(-classif.auc), head(.SD, 1), by = batch_nr][order(batch_nr)]
  data[, features := map_chr(features, str_collapse)]
  data[, batch_nr := as.factor(batch_nr)]

  ggplot(data, aes(x = batch_nr, y = classif.auc)) +
    geom_bar(
      stat = "identity",
      width = 0.5,
      fill = viridis(1, begin = 0.5),
      alpha = 0.8) +
    geom_text(
      data = data,
      mapping = aes(x = batch_nr, y = 0, label = features),
      hjust = 0,
      nudge_y = 0.05,
      color = "white",
      size = 5
    ) +
    coord_flip() +
    xlab("Iteration") +
    theme_minimal()

Optimization path of the feature selection: the feature glucose was selected first, followed by age, mass, and pedigree in the subsequent iterations. Then a shadow variable was selected and the feature selection terminated.

  task$select(instance$result_feature_set)
  learner$train(task)

  # The trained model can now be used to predict new, external data.
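To actually call predict on external data, one option is to wrap the pipeline in a GraphLearner via `as_learner()`, which exposes the standard Learner API including `predict_newdata()`. A sketch, assuming the same imputation pipeline and logistic regression learner as above (`glearner` and `new_data` are illustrative names, and `new_data` here is just the first rows of the pima features standing in for external data):

```r
library(mlr3verse)

task = tsk("pima")

# Wrap the graph so it behaves like an ordinary Learner.
glearner = as_learner(po("imputehist") %>>% lrn("classif.log_reg", predict_type = "prob"))
glearner$train(task)

# Stand-in for new, external data: a data.table with the task's feature columns.
new_data = task$data(cols = task$feature_names)[1:5]
predictions = glearner$predict_newdata(new_data)
predictions
```

The imputation step is part of the trained pipeline, so missing values in the new data are handled the same way as during the feature selection.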