<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>From Must to May: Enabling Test-Time Feature Imputation and Interventions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Evan Rex</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mateo Espinosa Zarlenga</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrei Margeloiu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mateja Jamnik</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Technology, University of Cambridge</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Interpretable machine learning models can be improved by correcting mispredicted intermediate steps via test-time interventions on their intermediate predictions. Methods that jointly learn to impute missing features and predict a downstream task can benefit from such interventions. However, determining which features to prioritise for intervention remains a challenge. To address this, we propose F-Act, a novel method employing feature selection to adaptively manage feature availability during test-time. Our approach achieves this by combining in-model imputation and test-time interventions on intermediate predictions to avoid the need for model retraining. Furthermore, F-Act can recommend which features to prioritise when collecting data, a key property when optimising performance in resource-limited environments. Our empirical analysis shows F-Act performs competitively or better than previous baselines in inference tasks with missing features when incorporating feature collection recommendations. Additionally, we show F-Act can incorporate missing feature values through test-time interventions, improving predictive performance without retraining across tasks.</p>
      </abstract>
      <kwd-group>
        <kwd>Test-time interventions</kwd>
        <kwd>Missing value imputation</kwd>
        <kwd>Feature selection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Machine learning models for tabular datasets typically expect a complete feature set during
both training and inference. However, in practice, features are often missing during inference
due to the high cost and difficulty of obtaining the complete feature set for some samples (e.g.,
gene expression counts) [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Such test-time feature unavailability necessitates models that
can make accurate predictions with incomplete feature sets. Additionally, since acquiring new
features may be prohibitively expensive, it is crucial for these models to offer recommendations
on which missing feature values to collect to maximise their impact on the model’s accuracy.
      </p>
      <p>
        Current strategies addressing limited feature availability typically involve either: (i) imputing,
or predicting, missing features at test-time [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], or (ii) selecting a minimal feature subset on
which the model is retrained [3, 4]. Although these methods are practical, they have clear
limitations in scenarios with variable feature availability: Feature selection identifies critical
features but cannot adapt to changes in feature availability, while imputation provides flexibility
but lacks guidance for users on prioritising features.
      </p>
      <p>In this paper, we address this gap by introducing F-Act (Feature-wise Active adaptation), a
method that combines feature selection with imputation to enable adaptation to variable feature
availability without retraining, all while maintaining high predictive accuracy. F-Act achieves
this by, first, imputing missing features at test-time and, second, enabling new features to be
incorporated through test-time interventions, where F-Act’s intermediate prediction space is
modified to incorporate the presence of a new feature. Technically, F-Act employs differentiable
mask sampling and feature reconstruction to learn to optimally operate from an incomplete set
of features. This design enables F-Act to advise on the order features should be collected to
maximise their impact, permitting deployment in resource-constrained settings. Using real and
synthetic datasets, we evaluate F-Act and find that it matches benchmarks’ performance in
imputation, feature selection, and prediction, providing recommendations that enhance model
performance through adaptive feature incorporation.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Work</title>
      <p>Imputation and Feature Selection Our work incorporates both feature imputation and
feature selection to address limited test-time feature availability. As such, our work is placed at
the intersection of these two research subfields. Previous works in feature imputation can be
divided into auxiliary model-based approaches and joint learning approaches. In this context,
auxiliary-model-based approaches pair prediction models with separate imputation methods
[5, 6, 7] while joint learning approaches integrate both prediction and imputation in an
end-to-end model [8, 9]. Nevertheless, we emphasise that previously proposed imputation
techniques lack feature prioritisation for collection, leading to uncertainty over what features
one should prioritise when deploying the model in a setup with varying feature availability.
This is a key gap we aim to address with this paper.</p>
      <p>Feature selection techniques [10, 11, 12, 13], in contrast, deal with potential feature
unavailability (or redundancy) by learning to select a subset of features from which a task can be
accurately solved. These approaches commonly achieve this by learning a feature importance
ranking that can then inform which subset of features one should select. A shortcoming of these
approaches, however, is their inflexibility to a varying set of input features, as, once features
have been selected, they require a fixed subset of features to train the downstream model [14].
As such, in this work we combine feature imputation with feature selection to enable easy
adaptability from a core set of initially selected features. We note that combining feature
selection and imputation has been previously explored [15, 16, 17, 3]. However, performing feature
selection with joint learning for inference and imputation in a single end-to-end architecture is
novel. This is worth exploring, as Bertsimas et al. [8] and Le Morvan et al. [9] note that joint
learning for imputation and inference can yield improved results.</p>
      <p>Relation to Active Feature Acquisition Active Feature Acquisition (AFA) involves learning
a policy for collecting new features at test-time such that a model’s accuracy is maximised
after observing a small set of features [18, 19, 20, 21]. As we are interested in providing feature
collection recommendations, our work is highly related to AFA. Nevertheless, we highlight that
we distinguish ourselves from traditional AFA approaches in two key ways. First, we provide
feature collection recommendations at a global level rather than at a local, per-sample level.
Second, we learn to both select a subset of features and impute missing features in an end-to-end
fashion, enabling missing features to be predicted at test-time.</p>
      <p>[Figure 1: F-Act architecture, comprising the Mask module, the Reconstruction module, test-time interventions, and the Prediction module.]</p>
      <sec id="sec-2-1">
        <title>Relation to Human Interpretable Artificial Intelligence</title>
        <p>Human Interpretable Artificial Intelligence (HI-AI) refers to AI systems designed to ensure their decisions and workings are
understandable and transparent to humans. Our work is related to HI-AI methods as it enables
(1) reconstruction of missing features through test-time imputations, providing insights into
a model’s understanding of how missing features relate to provided ones, (2) construction of
feature importance rankings through its feature collection recommendations, and (3) test-time
interventions, where users can provide previously missing features by intervening on F-Act’s
intermediate predictions using these features’ values. As such, our work is related to previous
interpretable imputation techniques [22, 7] and methods in the concept-based explainable AI
literature [23, 24, 25, 26] that provide test-time feedback to models via human-aligned concepts.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Feature-wise Active adaptation</title>
      <p>We present a joint learning framework for inference and missing value imputation that offers
users insights into which missing feature values should be prioritised for collection and
intervention. Formally, our goal is to learn a predictor f, parameterised by θ, which can operate on
any subset of features 𝒮 ⊆ ℱ. Concurrently, the predictor f should also suggest which missing
feature values to collect for intervention at test-time, prioritised by their importance to improve
the predictor’s performance. We achieve this by introducing F-Act (Figure 1), a method for the
joint learning of feature selection, missing value imputation, and prediction. Our architecture
comprises three modules: (i) a Mask module that facilitates feature selection, (ii) a Reconstruction
module for feature imputation, and (iii) a Prediction module for making predictions.</p>
      <p>
        The Mask module serves three objectives: (i) global feature selection to eliminate irrelevant
features, (ii) learning feature importance rankings to provide recommendations for feature
collection, and (iii) simulating a missing-feature scenario to train the Reconstruction module.
We achieve all this functionality through hierarchical masking, first employing a soft mask
msoft ∈ [0, 1] for feature selection and then a hard mask mhard ∈ {0, 1} to simulate missing
features. The hard mask is sampled from a Gumbel-Softmax distribution [27] with a learnable
probability πhard.
      </p>
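      <p>As an illustrative sketch of this hierarchical masking (the function names, logit parameterisation, and shapes are our assumptions, not F-Act’s released implementation), a NumPy version might look like:</p>
      <p>
```python
import numpy as np

def gumbel_softmax_keep(logits, temperature, rng):
    """Relaxed per-feature 'keep' probability sampled from a
    Gumbel-Softmax over (keep, drop) logits of shape (n_features, 2)."""
    gumbel = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return (y / y.sum(axis=-1, keepdims=True))[:, 0]

def hierarchical_mask(x, soft_logits, hard_logits, temperature, rng=None):
    """First a soft mask in [0, 1] for feature selection, then a hard
    mask simulating missing features (relaxed while training,
    thresholded deterministically when temperature == 0)."""
    m_soft = 1.0 / (1.0 + np.exp(-soft_logits))        # sigmoid
    if temperature > 0:                                # train-time sample
        m_hard = gumbel_softmax_keep(hard_logits, temperature, rng)
    else:                                              # test-time threshold
        m_hard = (hard_logits[:, 0] > hard_logits[:, 1]).astype(float)
    return m_hard * (m_soft * x)

x = np.array([1.0, 2.0, 3.0])
soft_logits = np.array([10.0, 10.0, -10.0])            # 3rd feature deselected
hard_logits = np.array([[5.0, 0.0], [0.0, 5.0], [5.0, 0.0]])  # 2nd "missing"
masked = hierarchical_mask(x, soft_logits, hard_logits, temperature=0.0)
```
      </p>
      <p>The temperature-zero branch corresponds to the deterministic thresholding used at inference, where the hard mask collapses to a fixed binary feature selection.</p>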
      <p>To enable predictions from any truncated feature space and facilitate test-time interventions,
we use the Reconstruction Module to reconstruct features from the truncated feature space
x̃ generated by the hard mask. The Reconstruction Module outputs “reconstructed” samples,
containing values for all features, even those missing from the original input.
The reconstructed samples are then processed by the Prediction Module, which maps from the
complete feature space to make the predictions ŷ. This setup ensures the model can make
predictions even in the presence of missing features.</p>
      <p>To provide feature collection recommendations, we define a greedy intervention policy that
corrects reconstructed data based on feature selection probabilities from the mask module. This
is the same approach used by feature importance-based selection methods [11, 4, 13, 12].</p>
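      <p>The greedy policy can be sketched as follows (a hypothetical helper, not F-Act’s actual interface): features are ranked globally by their learned selection probability, and the highest-ranked missing features are recommended first.</p>
      <p>
```python
import numpy as np

def recommend_features(keep_probs, observed, budget):
    """Recommend which missing features to collect next: rank all
    currently-missing features by their learned selection probability
    and return the top `budget` indices."""
    keep_probs = np.asarray(keep_probs, dtype=float)
    missing = [i for i in range(len(keep_probs)) if i not in set(observed)]
    return sorted(missing, key=lambda i: keep_probs[i], reverse=True)[:budget]

probs = [0.9, 0.2, 0.7, 0.95, 0.1]        # learned mask probabilities
recs = recommend_features(probs, observed=[3], budget=2)   # -> [0, 2]
```
      </p>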
      <p>In order to jointly learn to perform feature selection, missing feature imputation, and
downstream task prediction, we train our model using a composite loss function
ℒ := ℒP + λS ℒS + λR ℒR, where λS and λR are hyperparameters controlling how much we value
feature selection (i.e., ℒS) and feature reconstruction (i.e., ℒR) over task accuracy (i.e., ℒP).</p>
      <p>To encourage our model to perform a sparse feature selection, we follow previous works [28, 4]
and let ℒS be the ℓ1 norm of the soft and hard learnable mask probabilities:
ℒS := ∑_{i=1}^{|ℱ|} (πsoft,i + πhard,i)</p>
      <p>In contrast, to encourage accurate imputation of masked features, we include a reconstruction
loss term ℒR that minimises the ℓ2 norm of the difference between reconstructed/imputed
feature values and ground truth feature values:
ℒR := (1 / |ℱ ∖ 𝒮|) ∑_{i∈ℱ∖𝒮} (x̃ᵢ − r(mhard ⊙ x̃; θr)ᵢ)²
where 𝒮 is the set of features selected by the Mask module. This loss encourages our model to
learn to select a set of core features from which other, dependent features, may be easily imputed.</p>
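      <p>Putting the terms together, a toy NumPy rendering of the composite loss ℒ = ℒP + λS ℒS + λR ℒR (the helper names, signatures, and coefficient values here are illustrative assumptions) could be:</p>
      <p>
```python
import numpy as np

def l_selection(p_soft, p_hard):
    """l1 sparsity penalty on the soft and hard mask probabilities."""
    return float(np.sum(p_soft) + np.sum(p_hard))

def l_reconstruction(x_true, x_recon, selected):
    """Squared error averaged over the non-selected (dependent) features."""
    dep = [i for i in range(len(x_true)) if i not in set(selected)]
    return float(np.mean([(x_true[i] - x_recon[i]) ** 2 for i in dep]))

def composite_loss(l_pred, p_soft, p_hard, x_true, x_recon, selected,
                   lam_s=0.1, lam_r=1.0):
    """L := L_P + lam_s * L_S + lam_r * L_R."""
    return (l_pred
            + lam_s * l_selection(p_soft, p_hard)
            + lam_r * l_reconstruction(x_true, x_recon, selected))

loss = composite_loss(l_pred=0.0,
                      p_soft=[0.5, 0.5], p_hard=[0.5, 0.5],
                      x_true=[1.0, 2.0, 3.0], x_recon=[1.0, 2.0, 4.0],
                      selected=[0, 1])
```
      </p>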
      <p>Finally, to enable our model to predict downstream tasks both in and outside the presence of
potential feature interventions, we follow the work in [25] and define our prediction loss, ℒP, as
ℒP := ℒpred(f(x; θ, 0), X, Y) + γ^μmax ℒpred(f(x; θ, μmax), X, Y)
Here, f(x; θ, μ) represents the output of the task predictor for sample x when the top-μ
dependent features are intervened on and μmax is the maximum number of interventions one
may perform (i.e., the number of dependent features |ℱ ∖ 𝒮|). We clarify that, in this context,
an intervention for the i-th feature involves setting the i-th entry of the reconstructed features
to its ground-truth value x̃ᵢ. This loss, therefore, encourages the model to minimise a task-specific
loss (e.g., cross-entropy) before and after interventions, with higher penalties incurred when a
mistake is made after a higher number of features have been intervened on at train time
(controlled by a hyperparameter γ &gt; 1).</p>
      <p>Inference By thresholding mask probabilities, we can identify core necessary features and
recommend which features to prioritise for collection. During inference, F-Act imputes missing
features dynamically and allows the re-incorporation of non-selected features in the form of
test-time interventions. In practice, given an incomplete sample at test-time, we replace the
missing values with 0. To perform feature selection, imputation and prediction, we apply the
mask, reconstruction and prediction modules in order. A change from the training procedure is
that, at test-time, the Gumbel-Softmax function’s temperature is set to 0, making its output
deterministic. This is equivalent to thresholding the hard mask probabilities at 0.5. Test-time
interventions are performed by replacing the reconstructions of the hard-masked features with
their true values, as shown in Figure 1.</p>
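      <p>The inference procedure above can be sketched end-to-end as follows, with `reconstruct` and `predict` as stand-ins for F-Act’s Reconstruction and Prediction modules (the toy callables and signature are our assumptions):</p>
      <p>
```python
import numpy as np

def infer_with_interventions(n_features, observed, reconstruct, predict):
    """Zero-fill missing features, reconstruct a full feature vector,
    then intervene by overwriting reconstructions with observed values."""
    x = np.zeros(n_features)
    for i, v in observed.items():          # fill in the observed features
        x[i] = v
    x_recon = reconstruct(x)               # impute every feature
    for i, v in observed.items():          # test-time intervention
        x_recon[i] = v
    return predict(x_recon)

# Toy modules: reconstruction adds 1 to every entry, prediction sums.
out = infer_with_interventions(
    n_features=3,
    observed={0: 5.0, 2: 2.0},
    reconstruct=lambda x: x + 1.0,
    predict=lambda x: float(np.sum(x)),
)
```
      </p>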
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <p>
        Datasets and benchmark methods We consider various real-world datasets commonly
referenced in feature selection literature. These include image datasets (COIL20 and USPS), a
voice audio dataset (Isolet) sourced from [29], a synthetic dataset (Madelon) from [30], genomic
datasets (PBMC [31] and Mice Protein [32]), and a financial dataset (Finance) from [
        <xref ref-type="bibr" rid="ref3">33</xref>
        ].
      </p>
      <p>
        Beyond predictive accuracy, we assess F-Act’s capabilities in selecting important features
and recommending which features to collect at test-time to enhance performance. To this
end, we consider several feature selection methods, including LASSO [11], Random Forest [13],
Concrete Autoencoders (CAE) [
        <xref ref-type="bibr" rid="ref4">34</xref>
        ], XGBoost [12], and SEFS [4]. All methods except CAE rank
the features by their importance, which allows us to evaluate F-Act’s ability to recommend
features for collection. We train each feature selection method on the prediction task, using a
simple MLP as a baseline for comparison.
      </p>
      <p>For missing data imputation, we evaluate three methods: Mean, Iterative Chained Equations
(ICE) [6], and MissForest [7]. Additionally, we assess the performance of all combinations of
downstream models and imputation methods.</p>
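      <p>As a reference point, the Mean baseline simply replaces each missing entry with its column mean; a minimal NumPy version (our sketch, not the evaluated implementation):</p>
      <p>
```python
import numpy as np

def mean_impute(X):
    """Replace NaN entries with the corresponding column mean."""
    X = np.array(X, dtype=float)
    col_means = np.nanmean(X, axis=0)      # per-column mean, ignoring NaNs
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]        # fill each NaN with its column mean
    return X

filled = mean_impute([[1.0, np.nan], [3.0, 4.0]])
```
      </p>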
      <p>We train F-Act by minimising the loss ℒ. We pre-train the reconstruction module following
[4], with tasks that include reconstructing input vectors and estimating gate vectors. Following
this, we tune the intervention number μ to minimise the prediction loss on the validation dataset.
For further implementation details, please refer to Appendix A.</p>
      <p>[Figure 2: (a) Missing Completely at Random (MCAR); (b) Collecting features by the model’s feature ranking.]</p>
      <p>Predictive Accuracy Table 1 illustrates the predictive accuracy of F-Act compared to other
benchmark methods. F-Act demonstrates competitive performance, consistently ranking in the
top three across datasets and outperforming all baselines in three cases. Overall, F-Act ranks
the best across datasets. However, besides making predictions, F-Act offers two additional
functionalities without any re-training: imputing missing data at test-time and recommending
which feature values to collect. We next explore these capabilities.</p>
      <p>Test-time Imputation First, we evaluate F-Act’s imputation capabilities when features are
missing completely at random (MCAR). To simulate this, we randomly remove features at
test-time (without considering the potential to collect these feature values). At low levels of
missing features, Figure 2a shows that F-Act is generally outperformed by Random Forest
and MLP, and at higher levels of missing values, it is outperformed by Lasso and MLP. These
mixed results suggest F-Act achieves relatively average performance when one cannot utilise
its learned feature ranking.</p>
      <p>Second, we consider prioritising test-time feature collection based on the model’s learned
feature ranking. Figure 2b shows that when this ranking is used, F-Act outperforms all other
methods with only a few features collected.</p>
      <p>Test-Time Interventions As a feature selection method, F-Act uses a threshold to separate
core from non-core features. Unlike standard approaches, F-Act can incorporate non-core
features during inference. Figure 3 illustrates that F-Act’s performance improves with test-time
interventions, sometimes even surpassing the best-performing method on that dataset. For
more results, please see Appendix B.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper introduces F-Act, a method combining feature selection and missing data imputation
to enable the model to operate when there are missing features at test-time. More importantly,
F-Act provides recommendations on which features one should prioritise collecting at
test-time to improve the model’s performance. Our empirical analysis shows F-Act performs
competitively or better than previous baselines in inference tasks with missing features when
incorporating feature collection recommendations. Additionally, we show how F-Act can
incorporate missing feature values at test-time through test-time interventions, improving
performance without retraining and boosting F1 scores across datasets. This work highlights
the benefit of designing methods that learn, in an end-to-end fashion, to adapt to different
feature availability while providing feature collection recommendations.</p>
    </sec>
    <sec id="sec-references">
      <title>References</title>
      <p>[3] J. Cai, L. Fan, X. Xu, X. Wu, Unsupervised and supervised feature selection for incomplete
data via l2,1-norm and reconstruction error minimization, Applied Sciences 12 (2022)
8752.
[4] C. Lee, F. Imrie, M. van der Schaar, Self-supervision enhanced feature selection with
correlated gates, in: International Conference on Learning Representations, 2022.
[5] D. Jarrett, B. C. Cebere, T. Liu, A. Curth, M. van der Schaar, HyperImpute: Generalized
iterative imputation with automatic model selection, in: K. Chaudhuri, S. Jegelka, L. Song,
C. Szepesvari, G. Niu, S. Sabato (Eds.), Proceedings of the 39th International Conference
on Machine Learning, volume 162 of Proceedings of Machine Learning Research, PMLR,
2022, pp. 9916–9937. URL: https://proceedings.mlr.press/v162/jarrett22a.html.
[6] S. van Buuren, Multiple imputation of discrete and continuous data by fully conditional
specification, Statistical Methods in Medical Research 16 (2007) 219–242.
[7] D. J. Stekhoven, P. Bühlmann, Missforest–non-parametric missing value imputation for
mixed-type data, Bioinformatics 28 (2012) 112–118.
[8] D. Bertsimas, A. Delarue, J. Pauphilet, Beyond impute-then-regress: Adapting prediction to
missing data, ArXiv preprint abs/2104.03158 (2021). URL: https://arxiv.org/abs/2104.03158.
[9] M. Le Morvan, J. Josse, T. Moreau, E. Scornet, G. Varoquaux, NeuMiss networks:
differentiable programming for supervised learning with missing values, Advances in Neural
Information Processing Systems 33 (2020) 5980–5990.
[10] V. Bolón-Canedo, N. Sánchez-Maroño, A. Alonso-Betanzos, A review of feature selection
methods on synthetic data, Knowledge and information systems 34 (2013) 483–519.
[11] R. Tibshirani, Regression shrinkage and selection via the lasso, Journal of the Royal
Statistical Society, Series B 58 (1996) 267–288.
[12] T. Chen, C. Guestrin, Xgboost: A scalable tree boosting system, in: B. Krishnapuram,
M. Shah, A. J. Smola, C. C. Aggarwal, D. Shen, R. Rastogi (Eds.), Proceedings of the 22nd
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San
Francisco, CA, USA, August 13-17, 2016, ACM, 2016, pp. 785–794. URL: https://doi.org/10.
1145/2939672.2939785. doi:10.1145/2939672.2939785.
[13] L. Breiman, Random forests, Machine Learning 45 (2001) 5–32.
[14] I. C. Covert, W. Qiu, M. Lu, N. Y. Kim, N. J. White, S.-I. Lee, Learning to maximize
mutual information for dynamic feature selection, in: A. Krause, E. Brunskill, K. Cho,
B. Engelhardt, S. Sabato, J. Scarlett (Eds.), Proceedings of the 40th International Conference
on Machine Learning, volume 202 of Proceedings of Machine Learning Research, PMLR,
2023, pp. 6424–6447. URL: https://proceedings.mlr.press/v202/covert23a.html.
[15] A. M. Sefidian, N. Daneshpour, Missing value imputation using a novel grey-based fuzzy
c-means, mutual information based feature selection, and regression model, Expert Systems
with Applications 115 (2019) 68–94. URL: https://www.sciencedirect.com/science/article/
pii/S0957417418304822. doi:https://doi.org/10.1016/j.eswa.2018.07.057.
[16] G. Doquire, M. Verleysen, Feature selection with missing data using mutual information
estimators, Neurocomputing 90 (2012) 3–11.
[17] P. Meesad, K. Hengpraprohm, Combination of knn-based feature selection and knn-based
missing-value imputation of microarray data, in: 2008 3rd International Conference on
Innovative Computing Information and Control, 2008, pp. 341–341. doi:10.1109/ICICIC.
2008.635.
[18] M. Saar-Tsechansky, P. Melville, F. Provost, Active feature-value acquisition, Management
Science 55 (2009) 664–684.
[19] Y. Li, J. Oliva, Active feature acquisition with generative surrogate models, in: M. Meila,
T. Zhang (Eds.), Proceedings of the 38th International Conference on Machine Learning,
volume 139 of Proceedings of Machine Learning Research, PMLR, 2021, pp. 6450–6459. URL:
https://proceedings.mlr.press/v139/li21p.html.
[20] H. Shim, S. J. Hwang, E. Yang, Joint active feature acquisition and classification with
variable-size set encoding, in: S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman,
N. Cesa-Bianchi, R. Garnett (Eds.), Advances in Neural Information Processing Systems
31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018,
December 3-8, 2018, Montréal, Canada, 2018, pp. 1375–1385. URL: https://proceedings.
neurips.cc/paper/2018/hash/e5841df2166dd424a57127423d276bbe-Abstract.html.
[21] Y. Li, J. Oliva, Active feature acquisition with generative surrogate models, in: International
Conference on Machine Learning, PMLR, 2021, pp. 6450–6459.
[22] M. J. Azur, E. A. Stuart, C. Frangakis, P. J. Leaf, Multiple imputation by chained equations:
what is it and how does it work?, International journal of methods in psychiatric research
20 (2011) 40–49.
[23] P. W. Koh, T. Nguyen, Y. S. Tang, S. Mussmann, E. Pierson, B. Kim, P. Liang, Concept
bottleneck models, in: Proceedings of the 37th International Conference on Machine
Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine
Learning Research, PMLR, 2020, pp. 5338–5348. URL: http://proceedings.mlr.press/v119/
koh20a.html.
[24] M. Espinosa Zarlenga, P. Barbiero, G. Ciravegna, G. Marra, F. Giannini, M. Diligenti,
Z. Shams, F. Precioso, S. Melacci, A. Weller, et al., Concept embedding models: Beyond the
accuracy-explainability trade-off, Advances in Neural Information Processing Systems 35
(2022) 21400–21413.
[25] M. Espinosa Zarlenga, K. Collins, K. Dvijotham, A. Weller, Z. Shams, M. Jamnik, Learning
to receive help: Intervention-aware concept embedding models, Advances in Neural
Information Processing Systems 36 (2024).
[26] R. Marcinkevičs, S. Laguna, M. Vandenhirtz, J. E. Vogt, Beyond concept bottleneck models:
How to make black boxes intervenable?, arXiv preprint arXiv:2401.13544 (2024).
[27] E. Jang, S. Gu, B. Poole, Categorical reparameterization with gumbel-softmax, arXiv
preprint arXiv:1611.01144 (2016).
[28] A. Margeloiu, N. Simidjievski, P. Lio, M. Jamnik, Weight predictor network with feature
selection for small sample tabular biomedical data, in: Proceedings of the AAAI Conference
on Artificial Intelligence, volume 37, 2023, pp. 9081–9089.
[29] J. Li, K. Cheng, S. Wang, F. Morstatter, R. P. Trevino, J. Tang, H. Liu, Feature selection: A
data perspective, ACM Computing Surveys (CSUR) 50 (2018) 94.
[30] I. Guyon, Madelon, UCI Machine Learning Repository, 2008. DOI:
https://doi.org/10.24432/C5602H.
[31] A. Gayoso, Z. Steier, R. Lopez, J. Regier, K. L. Nazor, A. Streets, N. Yosef, Joint probabilistic
modeling of paired transcriptome and proteome measurements in single cells, Biorxiv
(2020) 2020–05.
[32] C. Higuera, K. Gardiner, K. Cios, Mice protein expression, UCI Machine Learning
Repository.</p>
    </sec>
    <sec id="sec-6">
      <title>A. Reproducibility</title>
      <sec id="sec-6-1">
        <title>A.1. Datasets</title>
        <p>Our code is made available at https://github.com/evanrex/feature-wise-active-adaptation.</p>
      </sec>
      <sec id="sec-6-2">
        <title>A.3. Training Protocol</title>
        <p>We present the pre-training and training algorithms for our approach in Algorithm 1 and Algorithm 2.</p>
      </sec>
      <sec id="sec-6-3">
        <title>A.4. Training and Evaluation Methodology</title>
        <p>We divided the datasets into three subsets: training, validation, and testing, using a 60:20:20
split. We use the validation set to select the model’s hyperparameters. We evaluate and report the
class-weighted F1 score on the test set. The results are averaged across random seeds during both
hyperparameter tuning and final evaluation phases. We pre-train our model as per Algorithm 1
and train our model as per Algorithm 2.</p>
        <p>[Algorithm 1: Pre-training F-Act.]</p>
      </sec>
      <sec id="sec-6-4">
        <title>A.5. Hyper-parameter Tuning</title>
        <p>Random Forest. For the Random Forest model, we conducted a hyper-parameter sweep on
the max_depth parameter. The values considered for max_depth were {3, 5, 7}. This tuning
was performed to control the complexity of the individual trees in the forest, with a goal of
balancing the bias-variance trade-off.</p>
        <p>Lasso. In our implementation of the Lasso model, we performed hyper-parameter tuning
on two key parameters: l1_ratio and C. The l1_ratio was varied over {0, 0.25, 0.5, 0.75, 1},
allowing us to explore the impact of the ElasticNet mixing parameter which adjusts the balance
between L1 and L2 penalties. The C parameter, which controls the inverse of regularisation
strength, was swept over {10, 100, 1000}, providing a wide range of regularisation effects.</p>
        <p>XGBoost. For the XGBoost model, we focused our hyper-parameter sweep on the eta
(learning rate) and max_depth. The eta values considered were {0.1, 0.3, 0.5}, providing a
spectrum of learning rates to control the step size during optimisation. For max_depth, the
values were {3, 6, 9}, allowing us to examine different depths for the trees to manage the model’s
complexity and prevent overfitting.</p>
        <p>Neural Network Based Models. For the neural network-based models, which include
MLP, SEFS, CAE, Supervised CAE, and F-Act, we conducted a hyper-parameter sweep. Key
parameters included learning rate, lr, ({1e-3, 3e-4, 1e-4}), number of hidden layers ({1, 2, 4}), and
dropout rates ({0, 0.2}). Additionally, for the Concrete Autoencoder models (CAE, Supervised
CAE), we swept the neurons_ratio over {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} to explore
various proportions of neurons in the encoder and decoder layers. This extensive tuning process
was aimed at optimising each method’s architecture and regularisation techniques to enhance
model performance.</p>
        <p>Algorithm 2: Training F-Act
Require: Dataset (X, Y), mini-batch size mb, loss coefficients (λS, λR), Gumbel-Softmax
temperature parameter τ, maximum intervention μmax, learning rate η
Ensure: Trained model parameters (πsoft, πhard, θr, θp)
1: Initialise (πsoft, πhard, θr, θp) ◁ Initialise parameter weights randomly
2: repeat
3: for i = 1 to mb do ◁ For each sample in the training set
4: (x, y) ∼ (X, Y) ◁ Sample a data point
Mask the data:
5: msoft ← Sigmoid(πsoft) ◁ Compute soft mask
6: mhard ← GumbelSoftmax(πhard, τ) ◁ Compute hard mask
7: x̃ ← msoft ⊙ x ◁ Apply soft mask to input data
8: x̃ ← mhard ⊙ x̃ ◁ Apply hard mask to soft-masked data
Reconstruct the data, apply interventions, and make prediction:
9: x̄ ← r(x̃; θr) ◁ Reconstruct the hard-masked features
10: ŷ ← f(x̄; θp) ◁ Make prediction without intervention
11: x̄′ ← Intervene(x̄, x̃; πhard, μmax) ◁ Apply interventions on top features
12: ŷ′ ← f(x̄′; θp) ◁ Make prediction with full intervention
13: end for
14: θ ← θ − η ∇θ ∑_{i=1}^{mb} ℒ(ŷ, ŷ′, y) ◁ Update parameters using gradient descent
15: until convergence</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>B. Further Experiments and Discussion</title>
      <sec id="sec-7-1">
        <title>B.1. Test-time Imputation</title>
      </sec>
      <sec id="sec-7-2">
        <title>B.1.1. Further Discussion</title>
        <p>
          In Figure 2b, we observe the interesting phenomenon that at low levels of missing data, ICE
imputation enables the tree-based models to achieve considerably improved performance.
Concurrently, ICE negatively affects the Neural Network and Lasso models. Here, we discuss
that phenomenon. Note that, as per the structure of the experiment, the missing features at
those levels are relatively lower-ranked features which have less impact on predictions in the
case of tree-based models. One possible explanation for the increase in performance of the
tree-based models is that they were initially negatively affected by over-fitting to noise in the
lower-ranked features. As a result, the models might benefit from the removal of these features
in the test set. This then explains why replacing the lower-ranked features with expected values
conditioned by the other observed features of that data point (as per the ICE imputation strategy
[22]) would reduce their noisy impact. To explain the poor performance of ICE with the Neural
Network-based and Lasso models, we note that those models are known to be more sensitive
to scale and domain shift [
          <xref ref-type="bibr" rid="ref5">35</xref>
          ]. We also note that ICE is known to sufer from misspecification
        </p>
        <sec id="sec-7-2-1">
          <title>Validation</title>
          <p>F-Act (selected only)
0.9929
0.9264
0.8461
0.9681
0.5992
0.7287
0.9815</p>
        </sec>
        <sec id="sec-7-2-2">
          <title>Test</title>
          <p>F-Act (selected only)
0.9860
0.9197
0.8317
0.9683
0.6029
0.7225
0.9877</p>
        </sec>
        <sec id="sec-7-2-3">
          <title>Test</title>
          <p>
            F-Act
0.9884
0.9286
0.8987
0.9683
0.5981
0.7290
0.9846
[
            <xref ref-type="bibr" rid="ref6">36</xref>
            ], where imputed values are “implausible”, falling out of the domain.
          </p>
        </sec>
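        <p>The ICE strategy discussed above can be sketched as follows (our own illustrative Python, not the implementation used in our experiments; the univariate linear model and the toy data are assumptions): each missing value is replaced by its expected value conditioned on the observed features of that data point, with the conditional model fitted on the complete rows.</p>

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def ice_impute(rows):
    """Replace a missing (None) second feature with its conditional expectation
    given the first feature, using a model fitted on the complete rows."""
    complete = [(x1, x2) for x1, x2 in rows if x2 is not None]
    a, b = fit_linear([x1 for x1, _ in complete], [x2 for _, x2 in complete])
    return [(x1, x2 if x2 is not None else a * x1 + b) for x1, x2 in rows]

# Toy data: the last row is missing its second feature.
rows = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.0), (4.0, None)]
imputed = ice_impute(rows)  # the missing value becomes roughly 7.9
```

        <p>Note that a misspecified conditional model can still produce implausible imputations that fall outside the feature's domain, which is the failure mode discussed above.</p>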
      </sec>
      <sec id="sec-7-3">
        <title>B.2. Optimal feature availability</title>
        <p>The adaptive nature of our model enables a more finely tuned optimal feature selection
recommendation than the standard “selected” vs “non-selected” feature dichotomy. Rather than
being derived from what is typically an arbitrarily set threshold, the number of selected
features can be tuned without re-training. By contrast, to implement feature selection with
methods such as Lasso, it is standard to re-train the model on the reduced feature set. In this
section, we hypothesise that this functionality enables improved performance, due to the
ability to more finely tune the number of selected features.</p>
        <p>The “optimal feature selection” is found through post-training hyperparameter tuning of
the number of interventions. That is, we evaluate the model at varying degrees of test-time
interventions on the validation set, and then set this as the number of features used by the
model. In Table 3, we present the results of this approach, comparing F-Act, which uses
post-training hyperparameter tuning of the number of interventions, to F-Act “selected only”,
which uses only the selected features. We find that on most datasets, this tuning yields an
improvement over the “selected only” variant of the model. Exceptions include the finance and
mice protein datasets, where F-Act “selected only” outperforms F-Act by at most 0.005, which
is within the standard deviation of our model’s F1 score for those datasets. However, on other
datasets, such as PBMC, the improvement of F-Act over F-Act “selected only” is notably greater
than twice the standard deviation. Overall, we find that the difference in average F1 score
falls within the standard deviation of the model, indicating that the potential gains of this
mechanism are limited. However, the substantial gains on the PBMC dataset, together with the
small computational cost of its implementation, indicate that the mechanism is worth exploring
when deploying F-Act.</p>
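        <p>The tuning procedure described above can be sketched as follows (our own illustrative Python; the evaluation callback and the toy scores are hypothetical stand-ins, not our implementation): the trained model is evaluated on the validation set at each intervention budget, and the best-scoring budget is kept, with no re-training involved.</p>

```python
def tune_num_interventions(evaluate_f1, max_interventions):
    """Return the intervention budget with the highest validation F1 score."""
    # Evaluate the already-trained model once per candidate budget.
    scores = {k: evaluate_f1(k) for k in range(max_interventions + 1)}
    return max(scores, key=scores.get)

# Toy stand-in for validation F1 at each budget: the score peaks at 3 interventions.
toy_scores = [0.60, 0.72, 0.80, 0.83, 0.82, 0.81]
best_k = tune_num_interventions(lambda k: toy_scores[k], 5)
```

        <p>Because this procedure only re-evaluates the trained model, its computational cost is small compared with the re-training required by methods such as Lasso.</p>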
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Moorthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Ismail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. W.</given-names>
            <surname>Howe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Mohamad</surname>
          </string-name>
          ,
          <article-title>An evaluation of machine learning algorithms for missing values imputation</article-title>
          ,
          <source>International Journal of Innovative Technology and Exploring Engineering</source>
          <volume>8</volume>
          (
          <year>2019</year>
          )
          <fpage>415</fpage>
          -
          <lpage>420</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S. D.</given-names>
            <surname>Grosse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Gudgeon</surname>
          </string-name>
          ,
          <article-title>Cost or price of sequencing? implications for economic evaluations in genomic medicine</article-title>
          ,
          <source>Genetics in Medicine</source>
          <volume>23</volume>
          (
          <year>2021</year>
          )
          <fpage>1833</fpage>
          -
          <lpage>1835</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>N.</given-names>
            <surname>Carbone</surname>
          </string-name>
          ,
          <article-title>200+ financial indicators of US stocks (2014-2018)</article-title>
          ,
          <year>2020</year>
          . URL: https://www.kaggle.com/datasets/cnic92/200-financial-indicators-of-us-stocks-20142018.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Balın</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Abid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>Concrete autoencoders: Differentiable feature selection and reconstruction</article-title>
          , in: K. Chaudhuri, R. Salakhutdinov (Eds.),
          <source>Proceedings of the 36th International Conference on Machine Learning</source>
          , volume
          <volume>97</volume>
          <source>of Proceedings of Machine Learning Research, PMLR</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>444</fpage>
          -
          <lpage>453</lpage>
          . URL: https://proceedings.mlr.press/v97/balin19a.html.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>R.</given-names>
            <surname>Salakhutdinov</surname>
          </string-name>
          ,
          <article-title>Deep learning</article-title>
          , in:
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Macskassy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Perlich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ghani</surname>
          </string-name>
          (Eds.),
          <source>The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, New York, NY, USA, August 24-27, 2014</source>
          , ACM,
          <year>2014</year>
          , p.
          <fpage>1973</fpage>
          . URL: https://doi.org/10.1145/2623330.2630809. doi:10.1145/2623330.2630809
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>I. R.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Royston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Wood</surname>
          </string-name>
          ,
          <article-title>Multiple imputation using chained equations: issues and guidance for practice</article-title>
          ,
          <source>Statistics in Medicine</source>
          <volume>30</volume>
          (
          <year>2011</year>
          )
          <fpage>377</fpage>
          -
          <lpage>399</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>